life: the disorder

From Salon: Life: The disorder. Quote:

So the same way we recognized that adult disorders can be applied to children, we are now, with ADD, noting that those of childhood can be applied to adults. It makes it hard not to imagine a future in which the smallest hardships (trouble studying, stress over a breakup, or perhaps a desire to prevent such nuisances) lead seamlessly to a fully medicated existence starting well before the onset of adulthood.
Yep. It's not 1984 but Brave New World that everyone should have been thinking about. Huxley was right on the money on that one.

Categories: science
Posted by diego on November 25, 2005 at 10:31 AM

IEEE Internet Computing: Social Networking

Last month's IEEE Internet Computing focuses on Social Networks and Social Networking. Some interesting articles there, but more importantly there's a good list of resources & references at the end. Very cool.

Categories: science
Posted by diego on November 24, 2005 at 7:42 AM

lord kelvin's predictions

Re: yesterday's April Fools' post. Lord Kelvin also said "I can state flatly that heavier than air flying machines are impossible," noted that "In science there is only physics; all the rest is stamp collecting," and that "Landing and moving around on the moon offer so many serious problems for human beings that it may take science another 200 years to lick them." (Notice the "lick them" reference, along with the "stamp collecting" reference in the earlier quote: clearly related.) Kelvin also disregarded atomic theory and radioactivity, among other nuggets. His quote about "nothing new to be discovered in physics" dates from the end of the 19th century.

Maybe predicting the future just wasn't his thing. But I read that his biographer put it as follows: "he spent the first half of his career being right and the second half being wrong."

Categories: science
Posted by diego on April 2, 2005 at 8:02 AM

...but a new research center opens: CTVR

This is something I forgot to mention recently, and it's as good an antidote as any to the news regarding MLE: on the upside for Ireland (and Europe), the Centre for Telecommunications Value-Chain-Driven Research (CTVR) started operating recently. It is run as a partnership between various Irish institutions, including TCD, and Bell Labs, and directed by Prof. Donal O'Mahony (my thesis supervisor at Trinity).

Don't let the Centre's unwieldy name fool you: they're going to be working on some pretty cool stuff, including cutting-edge optical networking, ad hoc networks, and more. Plus, they're looking for people.

Categories: science
Posted by diego on January 21, 2005 at 2:25 AM

media lab europe closes...

Failing to summon sleep, I started catching up on news/blogs from the last few days (travel affects reading as well as writing, I've found). And eventually I found this news item of the "WHAT?!?" type: Media Lab Europe is shutting down. That's really too bad. Here are some comments on the closing from David Reitter, who worked there until recently.

Categories: science
Posted by diego on January 21, 2005 at 2:12 AM

a manifold kind of day

I'm about to go out for a bit, but before that, here's what I've been talking about for days now: the manifold weblog (about my thesis work). The PDF of the dissertation is posted there (finally!), along with two new posts that explain some more things, as well as the older posts on the topic, plus a presentation and some code in the last one. Keep in mind that this is research work. Hope it's interesting! And that it doesn't instantaneously induce heavy sleep. :)

the fun side of supersymmetry

I mean: "squarks", "gravitinos", "photinos", "gluinos", "selectrons" and even "winos" (no alcohol involved there, just the superpartner, or shadow partner, of the W+- boson). Maybe nobody's sure of what String Theory is, exactly, but the name variations are certainly entertaining. Used to be, you just needed a copy of Joyce's Finnegans Wake to name a particle (Murray Gell-Mann took "quark" it from the sentence "Three quarks for Muster Mark" in that book).

Anyway, from today's New York Times: String Theory, At 20, Explains It All (or not):

"String theory, the Italian physicist Dr. Daniele Amati once said, was a piece of 21st-century physics that had fallen by accident into the 20th century.

And, so the joke went, would require 22nd-century mathematics to solve."

Albert Einstein: "God does not play dice with the universe."

Stephen Hawking: "Not only does God play dice, but... he sometimes throws them where they cannot be seen."

Niels Bohr: "How wonderful that we have met with a paradox. Now we have some hope of making progress."

Exactly.

Categories: science
Posted by diego on December 7, 2004 at 10:52 AM

manifold, the 30,000 ft. view

As a follow-up to my thesis abstract, I wanted to add a sort of introductory post to explain a couple of things in more detail. People have asked for the PDF of the thesis, which I haven't published yet, for a simple reason: everything is ready, everything's approved, and I have four copies nicely bound (two to submit to TCD) but... there's a signature missing somewhere in one of the documents, and they're trying to fix that. Bureaucracy. Yikes. Hopefully that will be fixed by next week. When that is done, right after I've submitted it, I'll post it here (or, more likely, I'll create a site for it... I want to maintain some coherence in the posts, and here it gets mixed up with everything else).

Anyway, I was saying. Here's a short intro.

Resource Location, Resource Discovery

In essence, Resource Location creates a level of indirection, and therefore a decoupling, between a resource (which can be a person, a machine, a software service or agent, etc.) and its location. This decoupling can then be used for various things: mapping human-readable names to machine names, obtaining related information, autoconfiguration, supporting mobility, load balancing, etc.

Resource Discovery, on the other hand, facilitates searching for resources that match certain characteristics, making it possible to then perform a location request or to use the resulting data set directly.

The canonical example of Resource Location is DNS, while Resource Discovery is what we do with search engines. Sometimes, Resource Discovery will involve a Location step afterwards. Web search is an example of this as well. Other times, discovery on its own will give you what you need, particularly if the result of the query contains enough metadata and what you're looking for is related information.
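
To make the distinction concrete, here's a minimal Python sketch (my illustration for this post, not code from the thesis): location resolves a known, unique name through a level of indirection, while discovery filters a set of resources by attributes. The printer records are made up for the example.

import socket

# Resource Location: a known, unique name is mapped, via a level of
# indirection, to its current network location. DNS is the canonical case.
def locate(hostname):
    return socket.gethostbyname(hostname)

# Resource Discovery: find resources matching certain characteristics;
# the result is a set of zero or more matches, which may then feed a
# location step or be used directly.
def discover(resources, **criteria):
    return [r for r in resources
            if all(r.get(k) == v for k, v in criteria.items())]

printers = [
    {"name": "hp-2f", "type": "printer", "duplex": True},
    {"name": "lj-old", "type": "printer", "duplex": False},
]

print(locate("example.com"))                            # one exact answer
print(discover(printers, type="printer", duplex=True))  # zero or more matches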

RLD (Resource Location and Discovery) always involves search, but the lines seemed a bit blurry to me. When was something one and not the other? What defines it? My answer was to look at usage patterns.

It's all about the user

It's the user's needs that determine what will be used, and how. The user isn't necessarily a person: more often than not, RLD happens between systems, at the lower levels of applications. So, I settled on usage patterns along two main categories: the locality of the search (local/global), and whether the search was exact or inexact. I use the term "search" as an abstract action, the action of locating something. "Finding a book I might like to read", "Finding my copy of Neuromancer among my books", and "Finding reviews of a book on the web" are all examples of search as I'm using it here.

Local/Global, defining at a high level the "depth" that the search will have. This means, for the current search action, the context of the user in relation to what they are trying to find.

Exact/Inexact, defining the "fuzziness" of the search. Inexact searches will generally return one or more matches; Exact searches identify a single, unique item or set.

These categories combined define four main types of RLD.

Examples: DNS is Global/Exact. Google is Global/Inexact. Looking up my own printer on the network is Local/Exact. Looking up any available printer on the network is Local/Inexact.
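
For the programmatically inclined, here's the taxonomy written down as data (my own illustration, not from the thesis), with the four examples above classified:

from enum import Enum
from typing import NamedTuple

class Scope(Enum):
    LOCAL = "local"
    GLOBAL = "global"

class Precision(Enum):
    EXACT = "exact"      # identifies a single, unique item or set
    INEXACT = "inexact"  # returns zero or more fuzzy matches

class RLDQuery(NamedTuple):
    scope: Scope
    precision: Precision
    description: str

examples = [
    RLDQuery(Scope.GLOBAL, Precision.EXACT,   "DNS lookup of a hostname"),
    RLDQuery(Scope.GLOBAL, Precision.INEXACT, "a Google search"),
    RLDQuery(Scope.LOCAL,  Precision.EXACT,   "looking up my own printer"),
    RLDQuery(Scope.LOCAL,  Precision.INEXACT, "any available printer"),
]

for q in examples:
    print(f"{q.scope.value}/{q.precision.value}: {q.description}")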

Now, none of these concepts will come as a shock to anybody. But writing them down and clearly identifying them was useful: it defined what I was after, served as a way to categorize when a system did one but not the other, and marked the limits of what I was trying to achieve.

The Manifold Algorithms

With the usage patterns in hand, I looked at how to solve one or more of the problems, considering that my goal was to have something where absolutely no servers of any kind would be involved.

Local RLD is comparatively simple, since the size of the search space is going to be limited, and I had already looked at that part of the problem with my Nom system for ad hoc wireless networks. Looking at the state of the art, one thing that was clear was that every one of the systems currently existing or proposed for global RLD depends on infrastructure of some kind. In some of them the infrastructure is self-organizing to a large degree, one of the best examples being the Internet Indirection Infrastructure (i3). So I set out to design an algorithm that would work at global scales with guaranteed upper time bounds; it later turned out to be an overlay network algorithm (one based, in the end, on a hypercube virtual topology), as opposed to the broadcast type that Nom was. For a bit more on overlays vs. broadcast networks, check out my IEEE article on the topic.
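
The dissertation has the actual algorithms, but as a rough sketch of why a hypercube virtual topology gives guaranteed upper bounds, here's the textbook bit-fixing routing scheme for a k-dimensional hypercube (my illustration, emphatically not Manifold-g itself): node IDs are k-bit strings, each node's overlay neighbors are the k IDs differing in exactly one bit, and greedily fixing the differing bits reaches any node in at most k = log2(N) hops.

# Toy bit-fixing routing on a k-dimensional hypercube overlay (illustration
# only). Each hop flips one bit in which the current node and the target
# still differ, so any route takes at most k hops for N = 2**k nodes.
def route(source, target, k):
    path = [source]
    current = source
    for bit in range(k):
        mask = 1 << bit
        if (current ^ target) & mask:  # this bit still differs...
            current ^= mask            # ...hop to the neighbor that fixes it
            path.append(current)
    assert current == target
    return path

# 16 nodes (k=4): at most 4 hops between any pair.
print([format(n, "04b") for n in route(0b0011, 0b1100, 4)])
# prints ['0011', '0010', '0000', '0100', '1100']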

Then the question was whether to use one or the other, and it occurred to me that there was no reason I couldn't use both. It is possible to embed a multicast tree in an overlay and thus use a single network, but there are other advantages to the broadcast algorithm that are pretty important in completely "disconnected" environments such as wireless ad hoc networks.
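
For the embedding idea, here's the matching classic (again just an illustration of the general technique, not the Manifold algorithm): a broadcast tree laid over the same hypercube, where a node that received the message via dimension d forwards it only along dimensions lower than d. Every one of the 2^k nodes receives the message exactly once, within k steps, with no duplicate deliveries.

# Toy broadcast tree embedded in the hypercube overlay (illustration only).
# The pair (node, dims) means: this node may still forward along
# dimensions 0..dims-1. Forwarding only "downward" guarantees that each
# node is reached by exactly one path.
def broadcast(source, k):
    delivered = {source}
    frontier = [(source, k)]
    while frontier:
        node, dims = frontier.pop()
        for d in range(dims):
            neighbor = node ^ (1 << d)
            delivered.add(neighbor)
            frontier.append((neighbor, d))
    return delivered

assert broadcast(0b0101, 4) == set(range(16))  # all 16 nodes reached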

So Nom became the local component, Manifold-b, and the second algorithm became Manifold-g.

So that's about it for the intro. I know that the algorithms are pretty crucial but I want to take some time to explain them properly, and their implications, so I'll leave that for later.

As usual, comments welcome!

Categories: science, soft.dev, technology
Posted by diego on December 3, 2004 at 12:41 PM

the drive to discover

[image: earthrising.jpg]

In the latest issue of Wired magazine, James Cameron has a great article, The Drive to Discover. Reminds me of this post I wrote a little over a year ago.

Also in the latest Wired, lots of other great exploration-related articles, such as The New Space Race by Bruce Sterling and Taming the Red Planet by Kim Stanley Robinson, author of the Mars trilogy (Red Mars, Green Mars, and Blue Mars).

Categories: science
Posted by diego on November 30, 2004 at 6:34 AM

two wired articles

Before I forget: two good articles in the latest Wired. First, The Crusade Against Evolution (pretty much a self-explanatory title, no?) and The Incredible Shrinking Man on 'godfather of nanotech' K. Eric Drexler.

PS: actually, three. Is that a pilot in your pocket?: "Somewhere in Florida, 25,000 disembodied rat neurons are thinking about flying an F-22."

Categories: science
Posted by diego on October 24, 2004 at 12:35 PM

molecular medicine

In this week's Economist: Molecular medicine. An interesting article on some changes that are coming in the way we understand and treat disease (at least where there's enough money).

Too bad there isn't some molecular thingamagic to relieve me from this wretched cold/flu right now. Aside from that old medicine called "sleep and rest"...

Categories: science
Posted by diego on October 19, 2004 at 8:41 PM

the pendulum and the eclipse

Fascinating article in this week's Economist on an apparent gravitational anomaly (first observed on a pendulum during a solar eclipse):

“ASSUME nothing” is a good motto in science. Even the humble pendulum may spring a surprise on you. In 1954 Maurice Allais, a French economist who would go on to win, in 1988, the Nobel prize in his subject, decided to observe and record the movements of a pendulum over a period of 30 days. Coincidentally, one of his observations took place during a solar eclipse. When the moon passed in front of the sun, the pendulum unexpectedly started moving a bit faster than it should have done.

Since that first observation, the “Allais effect”, as it is now called, has confounded physicists. If the effect is real, it could indicate a hitherto unperceived flaw in General Relativity—the current explanation of how gravity works.

True or not true, it's still interesting. :) And here's a recent paper on the topic.

Categories: science
Posted by diego on August 20, 2004 at 3:47 PM

DNA in XML

I spent a bit of time last night going through our DNA information as mapped by the Human Genome Project. Aside from learning some things, two things caught my eye. The first is that the genome has "builds", and as of today it stands at "build 34 version 3", which sounds like a meld of technology and our humanity in interesting if subtle ways (did they ever have a beta? Will we have genomic procedures only compatible with certain builds? Okay, that's in jest, but you know what I mean).

The other was that they've got XML dialects for the genetic information: witness this sequence, which is part of our first chromosome. For some reason I can't quite explain, I also find this fascinating. They have different XML dialects which show more information too, with names like TinySeq XML, GBSeq XML, and just "XML" (is this one the standard?), all with DTDs. I imagine they had fights similar to those in other fields (such as syndication) over which tags to use, formats, and such--I wonder if there's a mailing list where we can find geneticists and molecular biologists arguing over which tag is best...
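
Out of curiosity, here's how one might pull fields out of one of these records with nothing but Python's standard library. Fair warning: the element names below (TSeq, TSeq_defline, TSeq_length, TSeq_sequence) are from my recollection of the TinySeq DTD, so treat them as an assumption to check against the actual DTD, and the sample record is made up.

# A minimal sketch of reading a TinySeq-style record. Element names are
# assumed (see caveat above); the inline sample keeps this self-contained.
import xml.etree.ElementTree as ET

sample = """<TSeqSet>
  <TSeq>
    <TSeq_defline>Homo sapiens chromosome 1 (made-up fragment)</TSeq_defline>
    <TSeq_length>12</TSeq_length>
    <TSeq_sequence>ACGTACGTACGT</TSeq_sequence>
  </TSeq>
</TSeqSet>"""

root = ET.fromstring(sample)
for seq in root.iter("TSeq"):
    defline = seq.findtext("TSeq_defline")
    length = int(seq.findtext("TSeq_length"))
    bases = seq.findtext("TSeq_sequence")
    gc = (bases.count("G") + bases.count("C")) / length
    print(f"{defline}: {length} bases, GC content {gc:.0%}")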

Categories: science
Posted by diego on June 25, 2004 at 12:09 PM

Riemann proved?

[via Slashdot] Purdue University has put out a press release noting that one of its mathematicians, Louis de Branges de Bourcia, claims to have proven the Riemann Hypothesis. Here's the proof, and its defense (note the word "Apology" in the title, used in the formal dictionary sense: "a formal justification or defense"). Hopefully I'll make time to read it all after the next release (more "Sunday entertainment"); in the meantime I've skimmed it, and while of course I have no idea whether he has proven the hypothesis or not, it certainly looks good. The Apology is excellent (as far as the writing is concerned), with a brief history as intro. I guess we'll have to wait for a more formal peer review...

Categories: science
Posted by diego on June 10, 2004 at 9:06 AM

paleoclimatology

I spent a bit of time on Saturday reading Abrupt Climate Change: Paleo Perspective from the paleoclimatology branch of the NCDC, part of the National Oceanic & Atmospheric Administration. A good summary of paleoclimatology and the climate change record.

Categories: science
Posted by diego on June 7, 2004 at 11:02 AM

bill joy: on technology, markets, and other things

An excellent article on Bill Joy from the New York Times Magazine. Sadly, a couple of comments I've seen on this piece seem to focus on how Joy considered whether to join Google or not, which is literally one sentence in it, and discount the rest. And it's the rest that is really interesting, of course. Too bad he's not going to publish those books anytime soon...

ps: tangentially related: this, and this.

Categories: science
Posted by diego on June 7, 2004 at 10:58 AM

from analog to digital

News.com: Using high-energy physics to preserve old records. Quote:

Haber and Berkeley Lab colleague Vitaliy Fadeyev are working on a breakthrough way of digitizing and archiving old recordings, such as wax cylinders and traditional flat records, that are too far gone for a standard stylus. If successful, the pair may be able to help archivists at The Library of Congress and elsewhere rescue swaths of recorded musical and audio history that are today in danger of being lost.
I always wonder about the fragility of the digital medium (i.e., sans technology, a CD is simply a nice shiny coaster, and recovering information from digital media when their platforms and formats are long gone is as hard, if not harder), but recovering really old analog information is also important. Reminds me of The Long Now as well.

Categories: science
Posted by diego on May 13, 2004 at 3:54 PM

an internet of ends

Yesterday I gave a talk at Netsoc at TCD titled "An Internet of Ends". Here's the PDF of the slides. There are many ideas in there that finally jelled in the last few days, ideas that had been buzzing around my head for quite some time but that I hadn't been able to connect or express in a single thread until now. I thought it would be a good idea to start expanding on them here, using this post as a kick-off point.

Yes, more later. And as always, questions and comments most welcome!

Categories: science, soft.dev, technology
Posted by diego on April 29, 2004 at 9:21 PM

exactly

"The physicists say that I am a mathematician, and the mathematicians say that I am a physicist."

Albert Einstein

Source: this article from The Guardian about a recently discovered diary that casts some light on Einstein's last years.

Incidentally, The Einstein Archives contains digital versions of his writings, and there are a couple of others over at the always-excellent Project Gutenberg.

Categories: science
Posted by diego on April 26, 2004 at 7:48 PM

iapetus

In this week's Economist, The Dark Side of the Moon:

IN THE science-fiction classic “2001”, a spacecraft is dispatched to examine Iapetus, Saturn's third-largest moon, because of an anomalous signal sent from Earth's moon to Iapetus. In the book, as in reality, there is something else odd about Iapetus: unlike any other object in the solar system, one-half of its surface is ten times darker than the other. Arthur Clarke speculated that it was a signal from an alien civilisation. Astronomers naturally tend to doubt that explanation but have had difficulty coming up with a better one.

[...]

It turns out that the two sides look rather similar to the radar, and about the same amount of signal bounced off either side. This [...] means that whatever the dark material is, it is either electrically non-absorbing, or placed in a very thin layer, only a few centimetres thick. Though astronomers are fairly certain that Iapetus is mostly made of water, they are unsure of what else is there. The radar results point towards ammonia as a likely component, because it absorbs radar signals. But it may lie just below the surface.

The radar data still leave much to be explained. But Cassini, an unmanned American spaceprobe, is due to arrive at Saturn on July 1st. Its four-year mission is planned to include a close approach to Iapetus. So even if the scheduled date of 2001 has now passed, a definite answer should come soon.

Interesting. But let's not mention that HAL 9000 is still fiction. And that we don't have a base on the moon. And that the space station we do have in orbit around the Earth is a glorified shoebox, instead of the magnificent ring in 2001. And...

Okay, I'll stop complaining.

On a lighter note, this reminds me that I've been thinking of re-reading Rendezvous with Rama (of which a movie has also been long-rumored, btw). That book and Rama II are among my SF favorites of all time.

Categories: science
Posted by diego on April 23, 2004 at 12:19 PM

weather chaos: an analysis of its strategic implications

Related to my post a few weeks ago on 'weather chaos', I was just reading a Pentagon report on the strategic implications of such a change (SF Chronicle article here). Here's a link to the full report (PDF, about 1 MB). Quote from the summary:

There is substantial evidence to indicate that significant global warming will occur during the 21st century. Because changes have been gradual so far, and are projected to be similarly gradual in the future, the effects of global warming have the potential to be manageable for most nations. Recent research, however, suggests that there is a possibility that this gradual global warming could lead to a relatively abrupt slowing of the ocean’s thermohaline conveyor, which could lead to harsher winter weather conditions, sharply reduced soil moisture, and more intense winds in certain regions that currently provide a significant fraction of the world’s food production. With inadequate preparation, the result could be a significant drop in the human carrying capacity of the Earth’s environment.

[...]

The report explores how such an abrupt climate change scenario could potentially de-stabilize the geo-political environment, leading to skirmishes, battles, and even war due to resource constraints such as:

  1. Food shortages due to decreases in net global agricultural production
  2. Decreased availability and quality of fresh water in key regions due to shifted precipitation patterns, causing more frequent floods and droughts
  3. Disrupted access to energy supplies due to extensive sea ice and storminess
As global and local carrying capacities are reduced, tensions could mount around the world, leading to two fundamental strategies: defensive and offensive. Nations with the resources to do so may build virtual fortresses around their countries, preserving resources for themselves. Less fortunate nations, especially those with ancient enmities with their neighbors, may initiate struggles for access to food, clean water, or energy. Unlikely alliances could be formed as defense priorities shift and the goal is resources for survival rather than religion, ideology, or national honor.
Harsh, yes, but a good objective analysis as far as I can see. Must read. The only criticism I can think of is that the report seems to completely downplay internal strife within the US, an omission that seems odd given that coastal communities would be seriously disrupted, creating internal migration patterns and subsequent pressures on society (not counting that the SF Bay Area and the New York area are responsible for huge amounts of the economic output of the US). They predict that the political integrity of the EU would be in doubt under these conditions, but similar (though milder) results could be expected within the US. Maybe they consider that part of the "internally manageable" stuff... I'm not sure.

I'm working non-stop, but that doesn't mean I can't take a break for a moment and read depressing stuff like this. Right.

Categories: geopolitics, science
Posted by diego on February 29, 2004 at 2:48 PM

the lack of an ethics conversation in computer science

In April 2000 Bill Joy published in Wired an excellent article titled Why the future doesn't need us. In the article he argued that, for once, maybe we should stop for a moment and think, because the technologies that are emerging now (molecular nanotechnology, genetic engineering, etc.) present both a promise and a distinctive threat to the human species: things like near immortality on one hand, and complete destruction on the other. I'd like to quote a few paragraphs at relative length, with an eye toward what I want to discuss, so bear with me a little:

["Unabomber" Theodore] Kaczynski's dystopian vision describes unintended consequences, a well-known problem with the design and use of technology, and one that is clearly related to Murphy's law - "Anything that can go wrong, will." (Actually, this is Finagle's law, which in itself shows that Finagle was right.) Our overuse of antibiotics has led to what may be the biggest such problem so far: the emergence of antibiotic-resistant and much more dangerous bacteria. Similar things happened when attempts to eliminate malarial mosquitoes using DDT caused them to acquire DDT resistance; malarial parasites likewise acquired multi-drug-resistant genes.

The cause of many such surprises seems clear: The systems involved are complex, involving interaction among and feedback between many parts. Any changes to such a system will cascade in ways that are difficult to predict; this is especially true when human actions are involved.

[...]

What was different in the 20th century? Certainly, the technologies underlying the weapons of mass destruction (WMD) - nuclear, biological, and chemical (NBC) - were powerful, and the weapons an enormous threat. But building nuclear weapons required, at least for a time, access to both rare - indeed, effectively unavailable - raw materials and highly protected information; biological and chemical weapons programs also tended to require large-scale activities.

The 21st-century technologies - genetics, nanotechnology, and robotics (GNR) - are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.

Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication.

I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.

[...]

Nothing about the way I got involved with computers suggested to me that I was going to be facing these kinds of issues.

(My emphasis.) What Joy (who I personally consider among the greatest people in the history of computing) describes in that last sentence is striking not because of what it implies, but because we don't hear it often enough.

When we hear the word "ethics" together with "computers" we immediately think about issues like copyright, file trading, and the like. While at Drexel as an undergrad I took a "computer ethics" class where indeed the main topics of discussion were copying, copyright law, the "hacker ethos", etc. The class was fantastic, but there was something missing, and it took me a good while to figure out what it was.

What was missing was a discussion of the most fundamental ethical questions of all for a given discipline, particularly one like ours where "yesterday" means an hour ago and last year is barely last month. We try to run faster and faster, trying to "catch up" and "stay ahead of the curve" (and any number of other cliches). But we never, ever ask ourselves: should we do this at all?

In other words: what about the consequences?

Let's take a detour through history. Pull back in time: it is June, 1942. Nuclear weapons, discussed theoretically for some time, are rumored to be under development in Nazi Germany (the rumors started around 1939--but of course, back then most people didn't quite realize the viciousness of the Nazis). The US government, urged by some of the most brilliant scientists in history (including Einstein), started the Manhattan Project at Los Alamos to develop its own nuclear weapon, a fission device, or A-Bomb. (Fusion devices, also known as H-bombs, which use a fission reaction as the starting point and are orders of magnitude more powerful, would come later, based on the breakthroughs of the Manhattan Project.)

But then, after the first successful test at the Trinity test site on July 16, 1945, something happened. The scientists, who up until that point had been so preoccupied with the technological questions that they had forgotten to think about the philosophical ones, realized what they had built. Oppenheimer, the scientific leader of the project, famously said:

I remembered the line from the Hindu scripture, the Bhagavad-Gita: Vishnu is trying to persuade the Prince that he should do his duty and to impress him he takes on his multi-armed form and says, "Now I am become Death, the destroyer of worlds."
While Kenneth Bainbridge, in charge of the test, later recalled telling Oppenheimer at that moment:
"Now we are all sons of bitches."
Following the test, the scientists got together and tried to stop the bomb from ever being used. To which Truman said (I'm paraphrasing):
"What did they think they were building it for? We can't uninvent it."
Which was, of course, quite true.

"All of this sanctimonious preaching is all well and good" (I hear you think) "But what the hell does this have to do with computer science?".

Well. :)

When Bill Joy's piece came out, there was a lot of discussion on the topic. Many reacted viscerally, attacking Joy as a doomsayer, a Cassandra, and so on. Eventually the topic sort of died down. Not much happened. September 11 and then the war in Iraq, surprisingly, did nothing to revive it. Technology was called upon in aid of the military, spying, anti-terrorism efforts, and so on. The larger question, of whether we should stop to think for a moment before rushing to create things that "we can't uninvent", has been largely set aside. Joy was essentially trying to jump-start the discussion that should have happened before the Manhattan Project was started. True, given the Nazi threat, it might have been done anyway. But the more important point to make is that if the Manhattan Project had never started, nuclear weapons might not exist today.

Uh?

After WW2 Europe was in tatters, and Germany in particular was completely destroyed. There were only two powers left, only two that had the resources, the know-how, and the incentive to create nuclear weapons. So if the US had not developed them, it would be reasonable to ask: what about the Soviets?

As has been documented in books like The Sword and the Shield (based on KGB files), the Soviet Union, while powerful and full of brilliant scientists, could not have brought its own nuclear effort to fruition but for two reasons: 1) the Americans had nuclear weapons, and 2) they stole the most crucial parts of the technology from the Americans. The Soviet Union was well informed, through spies and "conscientious objectors", of the advances in the US nuclear effort. Key elements, such as the spherical implosion device, were copied verbatim. And even so, it took the Soviet Union four more years (until its first test on August 29, 1949) to duplicate the technology.

Is it obvious then, that, had the Manhattan project never existed, nuclear weapons wouldn't have been developed? Of course not. But it is clear that the nature of the Cold War might have been radically altered (if there was to be a Cold War at all), and at a minimum nuclear weapons wouldn't have existed for several more years.

Now, historical revisionism is not my thing: what happened, happened. But we can learn from it. Had there been a meaningful discussion on nuclear power before the Manhattan Project, even if it had been completed, maybe we would have come up with ways to avert the nuclear arms race that followed. Maybe protective measures that took time, and trial, and error, to work out would have been in place earlier.

Maybe not. But at least it wouldn't have been for lack of trying.

"Fine. But why do you talk about computer science?" Someone might say. "What about, say, bioengineering?". Take cloning, for example, a field similarly ripe with both peril and promise. An ongoing discussion exists, even among lawmakers. Maybe the answer we'll get to at the end will be wrong. Maybe we'll bungle it anyway. But it's a good bet that whatever happens, we'll be walking into it with our eyes wide open. It will be our choice, not an unforseen consequence that is almost forced upon us.

The difference between CS and everything else is that we seem to be blissfully unaware of the consequences of what we're doing. Consider for a second: of all the weapon systems that exist today, of all the increasingly sophisticated missiles and bombs, of all the combat airplanes designed since the early 80's, which would have been possible without computers?

The answer: Zero. Zilch. None.

Airplanes like the B-2 bomber or the F-117, in fact, cannot fly at all without computers: they're too unstable for humans to handle. Reagan's SDI (aka "Star Wars"), credited by some with bringing about the fall of the Soviet Union, was a perfect example of the influence of computers (unworkable at the time, true, but a perfect example nevertheless).

During the war in Iraq last year, as I watched the (conveniently) sanitized nightscope visuals of bombs falling on Baghdad and other places in Iraq, I couldn't help but think, constantly, of the number of programs and microchips and PCI buses that were making it possible. Forget about whether the war was right or wrong. What matters is that, for ill or good, it is the technology we built, and continue to build every day, that enables these capabilities for both defense and destruction.

So what's our share of the responsibility in this? If we are to believe the deafening silence on the matter, absolutely none.

This responsibility appears obvious when something goes wrong (like in this case, or in any of the other occasions when bugs have caused crashes, accidents, or equipment failures), but it is always there.

It could be argued that after the military-industrial complex (as Eisenhower aptly described it) took over, driven by market forces, which are inherently non-ethical (note: non-ethical, not un-ethical), we lost all hope of having any say in this. But is that the truth? Isn't it about people in the end?

And this is relevant today. Take cameras in cell phones. Wow, cool stuff, we said. But now that we've got 50 million of the little critters out there, suddenly people are screaming: the vanishing of privacy! aiee! Well, why didn't we think of it before? How many people were involved at the early stages of this development? A few, as with anything. And how many thought about the consequences? How many tried to anticipate and maybe even somehow circumvent some of the problems we're facing today?

Wanna bet on that number?

Now, to make it absolutely clear: I'm not saying we should all just stow our keyboards away and start farming or something of the sort. I'm all too aware that this sounds too preachy and gloomy, but I put myself squarely with the rest. I am no better, or worse, and I mean that.

All I'm saying is that, when we make a choice to go forward, we should be aware of what we know, and what we don't know. We should have thought about the risks. We should be thinking about ways to minimize them. We should pause for a moment and, in Einstein's terms, perform a small gedankenexperiment: what are the consequences of what I'm doing? Do the benefits outweigh the risks? What would happen if anyone could build this? How hard is it to build? What would others do with it? And so on.

We should be discussing this topic in our universities, for starters. Talking about copyright is useful, but there are larger things at stake, the RIAA's pronouncements notwithstanding.

This is all the more necessary because we're reaching a point where technologies are increasingly dealing with self-replicating systems that are even more difficult to understand, not to mention control (computer viruses, anyone?), as Joy so clearly put it in his article.

We should be having a meaningful, ongoing conversation about what we do and why. Yes, market forces are all well and good, but in the end it comes down to people. And it's people, us, that should be thinking about these issues before we do things, not after.

These are difficult questions, with no clear-cut answers. Sometimes the questions themselves aren't even clear. But we should try, at least.

Because, when there's an oncoming train and you're tied to the tracks, closing your eyes and humming to yourself doesn't really do anything to get you out of there.

Categories: science, soft.dev, technology
Posted by diego on February 23, 2004 at 10:27 PM

gender and computer science

Jon had a good post a couple of days ago titled gender, personality, and social software, based on a column of his at InfoWorld: is social software just another men's group? He makes some interesting points in both.

There is a part of that thread that I wanted to comment on (but didn't get around to until now for some reason!), and it's the question of how much computer science is "gender biased." Towards men, of course.

Having spent a good part of the last 10 years in academic institutions one way or another, in various countries and continents (as well as in companies of all sizes), here are my impressions. This is, of course, just what I've observed.

That there are few women in computer science is obviously true. Surveys or not, you can see it and feel it. That said, I have noted that something similar happens in other disciplines, such as civil engineering. In the basic sciences, there are more men than women in Physics for example, but the difference is not as marked. In Chemistry, or Biology, the differences largely disappear.

One thing I can say, from experience, is this: among my groups of students, both here in Ireland and in the US, an interesting thing happened: even though there are fewer women (much fewer) than men, the number of women that are very good is roughly similar to the number of men that are very good. (Hacker types, the "take no showers or bathroom breaks until I finish coding this M-Tree algorithm in Scheme, just for the fun of it" kind, have invariably been men in my experience, and generally there has been at most one of those per class; but I have no doubt that there are women like this, I just haven't met them. :)) Note that I'm referring specifically to computer hacking (in the good sense) here--I know women with the same attitude toward their work, just not computer hacking :).

To elaborate a bit on the point of the last paragraph: if, say, you have a CS class of 40 people, maybe 5 at most would be women. But of those five women, two would be very good. And there would be maybe three, at most four, good computer-scientists-in-the-making on the boys' side.

My conclusion after all this time is that there's a difference of quality over quantity. In some weird way, talent for computer science seems to me to be constant regardless of gender (maybe this is the case for everything?). There might be more men doing development, sure, but there are also more that are not very good at it (or do it for the wrong reasons, such as money, or parents' pressure, or just "because"; in my opinion, if you don't really like doing something, you shouldn't be doing it, period).

The other thing I've noticed in recent years is that, as software (and hardware) have become more oriented towards art and social and real-world interactions (the stuff done at the Media Lab is a good example), I've seen more women on that side of the fence. In fact, in some of these areas women dominate the landscape.

Now, I don't want to get carried away speculating on the reasons for this split, since that would almost certainly be hand-waving of the nth order. I will say, however, that I think sexism has to be partly a factor here. (I despise sexism: for example, I enjoy James Bond movies, but the blatant misogyny in them gives me the creeps. And, btw, if you want to know how serious I am in using the word "despise" here, you can read this to see what I think about semantics in our world today.) But I'm sure there are other factors, and history plays a part too. Consider that we're still using UIs and sometimes tools with very clear roots twenty, sometimes thirty or even forty (!) years back (e.g., LISP, or COBOL, or mainframes). Back then gender-based prejudices were even worse, and it's reasonable to assume that we're still carrying that burden in indirect fashion.

So maybe it's not a surprise that now that we're working on technologies that had their start ten or fifteen years ago, women are getting more into the field? Maybe. I sure hope so.

What do others think? Women's opinions are especially welcome. And if any of this sounds ridiculous (I'm under no illusion that what I've said here is completely accurate), please feel free to whack me in the head. I'm taking painkillers for a horrible pain in the neck I have, so it won't hurt too much. :-)

Categories: science, soft.dev
Posted by diego on February 19, 2004 at 11:01 AM

the russian space program wakes up

From CNN: Russia to build new spacecraft:

The new craft will be able to carry at least six cosmonauts and have a reusable crew section, Russian Aerospace Agency director Yuri Koptev said at a news conference. Soyuz carries three cosmonauts and isn't reusable.

The spacecraft, designed by the RKK Energiya company, will have a takeoff weight of 12-14 metric tons (13-15 tons) -- about twice as much as the Soyuz, which was developed in the late 1960s.

Energiya has also proposed developing a new booster rocket based on its Soyuz booster to carry the new spacecraft to orbit.

Koptev wouldn't say how long it could take to build the spacecraft or how much it would cost, but said that Energiya had done a lot of work on the new vehicle already.

I'll forget about the political implications for the moment (a new cold-war style space race?) and just be happy that things seem to be moving again in this area. I'll say one thing though: I'm sure that this and this had a little something to do with it. :)

Categories: science
Posted by diego on February 17, 2004 at 9:47 PM

digital music and subculture

An interesting paper by Sean Ebare on:

[...] a new approach for the study of online music sharing communities, drawing from popular music studies and cyberethnography. I describe how issues familiar to popular music scholars — identity and difference, subculture and genre hybridity, and the political economy of technology and music production and consumption — find homologues in the dynamics of online communication, centering around issues of anonymity and trust, identity experimentation, and online communication as a form of "productive consumption." Subculture is viewed as an entry point into the analysis of online media sharing, in light of the user–driven, interactive experience of online culture. An understanding of the "user–driven" dynamics of music audience subcultures is an invaluable tool in not only forecasting the future of online music consumption patterns, but in understanding other online social dynamics as well.
While the paper is focused on music, it contains interesting ideas for the area of sharing in general.

Some comments: anonymity, in my opinion, is a big factor in the (exploitable, see also here) power-law behavior of virtual communities (alluded to but not explicitly mentioned in the paper with the "citizen/leech" concept, among other things), and it also affects group dynamics, even, possibly, producer/consumer dynamics. The fact that these networks are anonymous is almost implicit in the paper; I find it interesting that it is often taken as an axiom. Additionally, the perceived "safety" (also mentioned in the paper) given by anonymity is mostly a mirage: in most cases the only thing you are achieving is partial hiding (networks like Freenet are a different matter in this sense), and the possibility that the content might be manipulated (remember this?) or used as a trojan for something else (read: ads, viruses...) is very real and yet barely considered. These networks create a parallel universe that requires people to engage in behavior like that described in the paper, since your "identity" has to be created from the ground up. Forget music: even types of content sharing that are not in dispute are generally of this type. So what are we missing?

I think that advances in this sense will be seen in the mixing of meatspace trust/knowledge relationships with the ability to share/utilize the network, and in fact feed back into it. Cyberspace and meatspace complementing each other, not moving in parallel universes.

Categories: science
Posted by diego on February 11, 2004 at 10:35 AM

a new kind of science - online

[via Danny] Stephen Wolfram's A New Kind of Science is online here. Good seeing it like that, but it's better to get the book (slightly expensive, but worth it). I got it when it came out in 2002. Must-read. :)

Categories: science
Posted by diego on February 9, 2004 at 12:24 PM

on nanotech

An article in the Washington Post on nanotechnology. A good read, even if (as usual) compressing topics like these into a few pages invariably creates some oversimplifications. Reminded me of this too.

Categories: science
Posted by diego on February 2, 2004 at 12:15 AM

weather chaos

I've been thinking about writing this for a few days now, maybe more than a week, and for some reason I never get around to it. I start writing down a primer for chaotic dynamical systems and then I think that it will be too much and not very interesting for most people ... but anyway, here are some thoughts.

The recent harsh cold weather in the Northern Hemisphere has led some global-warming naysayers to use it as "proof" that global warming is not happening.

Right.

The problem is that global warming will not simply make the Earth "warmer." By raising the temperature, several things happen. For example, glaciers start to melt (which is happening now at an alarming rate) and the cold water not only raises sea levels but affects warm currents that are vital to preserving elements of global climate (a big factor is reducing the salinity, and thus the density, of the water; another is that the increased temperatures affect wind patterns, which also play a role in determining ocean currents). This could, for example, slow down or shut down key oceanic streams such as the Gulf Stream. The imbalance created by the higher temperatures and changes in ocean currents would create extreme weather patterns all over the globe: superstorms in some parts, droughts in others, floods, and so on.

Global warming is, then, a misnomer. In our MTV-3-seconds-a-news-clip culture we probably need a new phrase to describe what global warming does. "Weather chaos" is pretty accurate, but it doesn't have the 'zing', I think.

Additionally, these changes reinforce each other: bigger storms pour even more rain on the oceans, which affects the water even further, just as the winds start behaving in more violent and uncommon patterns. The increased cloud cover brought on by increased temperatures also feeds the greenhouse effect, trapping heat and increasing the temperatures even more, making the climate even more unstable. In the end, we might not see it coming. Dynamical systems have a way of shifting direction dramatically and without warning.
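
Since I spared you the chaos primer, here's its two-line replacement: the standard logistic-map demonstration of sensitive dependence on initial conditions (a toy example, emphatically not a climate model). Two trajectories that start a millionth apart agree for a while, then diverge completely:

# The logistic map x -> r*x*(1-x) at r=4 is chaotic: a difference of
# 0.000001 in the starting point grows until the trajectories bear no
# relation to each other -- "shifting without warning" in miniature.
r = 4.0
a, b = 0.300000, 0.300001
for step in range(1, 41):
    a, b = r * a * (1 - a), r * b * (1 - b)
    if step % 10 == 0:
        print(f"step {step:2d}: {a:.6f} vs {b:.6f}")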

None of this is 100% certain of course. But what ever is? The real question is not "are we sure this could happen?" but "What can we do to stop it from happening if possible?" Assuming it happens, once it starts there will be no turning back, no quick fixes. The weather will be out of control, and we can kiss our precious little all-singing all-dancing civilization goodbye.

All of this is a long prelude to saying that I simply don't understand what the problem is in getting some action behind trying to curtail emissions, etc. I can't understand at all why some people argue that trying to cut back on CO2 emissions would "hit the bottom line". What happens when there's no bottom line to hit anymore? Why, why, why is it that western societies, so conscious of "health care", seem to worry so little when a disaster of massive proportions may be not too far ahead? Only because it's 20, 30, 100 years into the future? We can all hope that these scenarios are indeed wrong. But the evidence is piling up to the contrary. And if they happen to be right... what then? Will we just limit ourselves to pouting and moaning about it? This is potentially catastrophic as few things are, and not taking it more seriously seems to me a massive folly.

What comes to mind is a paragraph from Richard Preston's The Hot Zone:

In a sense, the earth is mounting an immune response against the human species.
In Preston's book it is related to the emergence of new viral threats, but it might as well apply in this case.

Now, as it happens, the other day when I saw The Return of the King I also saw the trailer for The Day After Tomorrow, which is basically, from what I can see, about this scenario. One good thing about the movie is that it might increase awareness; the bad thing is that people might say "oh, it's just a movie" and, since it's from the director of Independence Day, reach the conclusion that this is about as likely as destroying an invading alien race by infecting the aliens' mother ship using an Apple Powerbook with software written in about a day. What is also weird is that the first thing you hear (in the trailer at least) is that "Meteorologists are at a loss to explain what is causing this weather". This is complete crap of course. We will know. We will know that the delicate balance of the Earth's atmosphere has been broken, and that a new balance has to be found, and it will most likely come in the form of another Ice Age.

Maybe for the first time in history we have primitive knowledge that enables us to see this problem coming and take action (and yes, that same knowledge creates technology that is likely, in fact, to be exacerbating something that would have happened naturally anyway, or even creating it artificially). But instead of taking action we are sitting comfortably arguing about it. Even in a strict analysis "by the numbers", it seems to me that the potential cost (investment in cleaner technologies, paying for more serious research on the topic, intergovernmental cooperation on the matter) would be insignificant compared to the consequences.

Sorry if this seems a bit depressing. I'm just venting a little. I do hope that we'll turn around. Humans. You never know what we're going to do next!

Categories: science
Posted by diego on January 26, 2004 at 11:42 PM

fly me to the moon...

[image: moon_earth.gif]

Finally the long-rumored announcement from the Bush administration happened yesterday, and the New York Times has both an article and analysis (more coverage from CNN, the Washington Post 1, 2, 3, and space.com). At first I was excited, since as I've expressed before I wholeheartedly support spaceflight. True spacefaring ability is among the short list of things mankind should strive to achieve in this century. (Along with tending to some... err... tiny problems we still seem to have when taking care of our home planet.)

The plan is (apparently) to phase out what's left of the Shuttle fleet (STS, or Space Transportation System). There are three Shuttles left: Discovery, Atlantis, and Endeavour. (An early model of the orbiter, the Enterprise, only performed test flights.) Additionally, NASA space science programs will be downsized, including cancellation of further servicing of the Hubble Space Telescope. The STS phase-out would be complete by 2010 (which would also be the "date of completion" of the International Space Station), and the new transportation vehicle would be ready by 2014.

And herein lies the first problem with this plan. Are we seriously saying that the US will stay out of space for four years? I find this very hard to believe, considering that the Chinese are certain to have made some progress by then on their own goal of landing on the moon. (And let's not forget Russia...).

After the new vehicle launches, a lunar base would be established, "at most" by 2020, and subsequently used as an additional research, development, and launch platform for a manned mission to Mars.

This "schedule" seems to me slow, and with many of its targets are so far off that (as the NYTimes analysis makes clear), easy to derail. Not to mention that the announcement provided basically no new funding for the program ($1 billion, plus the money that would come from phasing out the STS fleet).

A big factor in this seems to be "safety". For example, the NY Times analysis mentions that the shuttles have been "prone to catastrophic failure". This statement appears to imply that other space vehicles have not been prone to catastrophic failure. Mmm. Let me see. The Shuttle has flown over a hundred missions (STS-107 was the last flight of the ill-fated Columbia) with exactly two catastrophic failures. In contrast, the Apollo program flew fewer than 15 manned missions (with a few more unmanned) and had two massive failures, the first in Apollo 1 (which killed the crew during a test) and the second with Apollo 13, which barely made it back to Earth. The number of Soviet failures in the same period is difficult to know with a high degree of confidence, but no one thinks it was a walk in the park. The Soviet Union, after all, never managed to put a man on the moon, and Soviet technology, though constantly a bit behind the times, was never that bad.

This reminds me of one of Steve Buscemi's lines in Armageddon: "You know, Harry, we're sitting on 4 million pounds of fuel, one nuclear weapon, and a thing that has 270,000 moving parts built by the lowest bidder. Makes you feel good, doesn't it?"

Setting aside the nuclear weapon for the moment (Flying to Mars and beyond may well involve some sort of nuclear- or even antimatter-powered spacecraft), this is one of those "funny 'cause it's true" jokes.

What I'm saying is: I don't get it. Can't they get astronauts to fly? What's the problem? If they can't find anyone, sign me up! But of course, they can get astronauts to fly. They would fly, under whatever circumstances and whatever risks. But this whole obsession with safety is something that has been growing and growing in the Western world, with the US "leading the way" and Europe in particular in the same boat. Apparently, people are just not supposed to die anymore.

And what about the technology? Does it really take more than 10 years to create a new moon crew transport vehicle? Of course not. Our science and technology have advanced by leaps and bounds since the 70s, particularly computer technology, which is crucial to this whole endeavor. As the Washington Post notes:

Bush has outlined a tortoise-like pace, dictated by severe budget constraints, that allows a full decade just to develop a vehicle that would, once again, deliver people to the moon -- something Apollo engineers accomplished, starting from scratch, in about eight years.
The problem is not technology, it's political will, and funding. In fact, this new project is a mirror of something that was proposed fifteen years ago, which went nowhere, as one of the articles from the NYT describes:
In 1989, in a speech honoring the 20th anniversary of the initial lunar landing, the first President Bush proposed that the nation establish a base on the Moon and send an expedition to Mars to begin "the permanent settlement of space." He set the Mars goal for 2019 but the effort soon fizzled when the cost estimates hit $400 billion.
In today's western culture (though it's really happening all over the world), with our instant-satisfaction, one-click-shopping, celebrity-obsessed, 24-hours-of-irrelevant-news media, it's hard to think that popular support will hold steady over the course of the 15-25 years required for this project.

I must say, though, without cynicism, that I hope I'm wrong. I really, really hope that the US can stick with it. It's the one country that has the knowhow and the resources (and, at times, the spirit) necessary to pull it off. And for all the criticisms, it has maintained a continuing space program, to its credit. Does anyone think that the International Space Station would be anything but a blueprint by now if it wasn't for the time, money, and energy (however misdirected) that the US has spent on it?

And, by the way, why does the US have to do this by itself? The Chinese are moving forward, but if they keep at it there will be questions as to how much international aid they need, as this article from the Economist notes. And where's Japan, where's Russia? More importantly, where's the EU? There's been lots of talk about the potential world power the EU could become. But instead of talking about worthy goals, like using the European Space Agency for a daring multinational space exploration program, we keep discussing agricultural subsidies and whether one country has more votes than another. It's not, of course, that those are not important issues, but there is zero attention, money, or "political capital" put forward for anything other than those things. I mean: Germany, France, the UK, and all the other great countries. Come on! Europe has to stop running scared from its past of internecine warfare and truly look forward to the future. The US can't be left alone holding the bag with this.

I suddenly think of part of a Sagan quote I posted some time ago:

Spaceflight, therefore, is subversive. If they are fortunate enough to find themselves in Earth orbit, most people, after a little meditation, have similar thoughts. The nations that had instituted spaceflight had done so largely for nationalistic reasons; it was a small irony that almost everyone who entered space received a startling glimpse of a transnational perspective, of the Earth as one world.
We are not that far away. We can only hope that we, as a society, can for once look just a little beyond our noses and truly make it happen.

Categories: science
Posted by diego on January 15, 2004 at 1:45 PM

quote of the day

The most exciting phrase to hear in science, the one that heralds new discoveries, is not "Eureka!" but "That's funny ..."

Isaac Asimov

Categories: science
Posted by diego on January 9, 2004 at 12:29 PM

fewer PhDs -- in general or just in the US?

Well, one more thing I found notable. According to this article, the number of new science PhDs per year is still falling in the US. I wonder if this means that they are falling in general, or simply that many of the people who would have gotten them in the US are choosing to do them in their home countries or elsewhere. Considering that more than half the engineering PhDs in the US are earned by foreign-born students (as the article notes), even a small dip in that number (due to less travel, tighter immigration laws, whatever) would definitely be noticeable.

Categories: science
Posted by diego on December 5, 2003 at 1:47 PM

brian eno on the long now

A fantastic article by Brian Eno on The Long Now project:

Can we grasp this sense of ourselves as existing in time, part of the beautiful continuum of life? Can we become inspired by the prospect of contributing to the future? Can we shame ourselves into thinking that we really do owe those who follow us some sort of consideration – just as the people of the nineteenth century shamed themselves out of slavery? Can we extend our empathy to the lives beyond ours?

I think we can.

Yep. I think so too.

Categories: science
Posted by diego on November 18, 2003 at 4:17 PM

lunar eclipse

There's a lunar eclipse tonight (full Sun/Earth/Moon alignment at 1 a.m. Zulu time). But it's cloudy out there tonight, and I doubt it will clear up in time. Looks like Dublin (and by extension, me) is going to miss it. Too bad.

Categories: science
Posted by diego on November 8, 2003 at 5:52 PM

space!

China succeeds in sending a man into space, the third country to do so. Yeah! A Chinese manned mission to the moon in a few years has been rumored for a while now. Will this wake up ESA? (Sending satellites or unmanned probes only gets you so far, you know). And how about NASA? Am I the only one left on Earth who wants a manned mission to Mars like, yesterday? Politicians (theoretically reflecting what their constituents think) say that it's too expensive, too dangerous. People might die, you know. Does it matter that the astronauts would gladly give their lives for the chance of succeeding? (Heck, I'm not an astronaut, but I would too). No. They have to be protected from themselves, it seems. Oh, and money is a problem? A mission to Mars could cost 20, 30 billion. I mean, that's too expensive, right? Right... How much is the war in Iraq going to cost? Isn't the US paying 4 billion a month for it already?

And, sending toy cars that run Java and take measurements and pictures of rocks is to me as interesting as reading a Microsoft press release. Sure, there is some information, but it's all distant and sanitized and in the end it doesn't do anything for you.

If Europe, Russia, the US, Japan and China get together, 30 billion is a drop in the bucket. The US could do the propulsion and vehicle design and construction, Europe and Russia could deal with science and probe design, and China and Japan could design the Mars habitat for the astronauts (No, I didn't choose randomly who was doing what).

Every time I watch a space launch of anything, the hairs on the back of my neck stand up. I am lifted up with that rocket; I imagine the unseen vistas of alien landscapes. I feel inspired. But it's inspiration for our potential and what we have achieved in the past, rather than for the reality of a mission. The Apollo missions barely scratched the surface. We have gotten too used to CGI and to thinking that a walking carpet (as Princess Leia put it) like Chewbacca is actually an alien lifeform. Movies and books are great, but there's no replacement for the real thing.

I want to feel inspired by space exploration again. Don't you?

Categories: science
Posted by diego on October 15, 2003 at 9:56 AM

sunday entertainment: the Riemann Hypothesis

Working on some stuff on n-dimensional topologies, I remembered the Riemann Hypothesis. I'm not sure when I first read about it... but it keeps popping up in my head (simply because it's a challenge--not that I have any delusions that I can actually solve it! :)). So as a way of relaxing a bit from work, here's some background on it. Not that it's useful, but it should be at least entertaining.

A couple of years ago, a $1,000,000 prize was created for whoever came up with a proof. What I didn't know was that there are actually seven "Millennium Prize Problems", of which the Riemann Hypothesis is one. Here is the list of all seven problems. One million per proof. Not bad!

Going back to the Riemann Hypothesis. Proving it would be interesting for number theory and the distribution of prime numbers, but it would have no direct practical applications whatsoever. If it did, I can see the headlines (maybe for The Onion?):

UN Security Council Declares Understanding the Distribution of Prime Numbers is Priority One - Troops To Be Deployed in the Real Region of C with R > 1 - President declares "we will hunt down these prime number folks and we will characterize them. If you ask me, there's something evil about a number being divisible only by itself and one."
Joking aside, the new mathematical techniques that usually have to be invented to solve these open problems do have applicability. But I digress.

The Hypothesis is generally considered one of the most important unproven hypotheses in mathematics, and I've read somewhere that some people think it is even more important than Fermat's famous Last Theorem. To put things in context, Fermat's Last Theorem stated that given:

x^n + y^n = z^n

there are no integer solutions for n > 2 and x,y,z != 0. The statement of Fermat's Last Theorem is relatively simple and self-contained. The Riemann Hypothesis is not. Consider, from the prize page:
The Riemann hypothesis is that all nontrivial zeros of the Riemann zeta function have a real part of 1/2
Sure, it reads easier than Fermat's theorem, but it gets away with that by putting all the complexity in the definition of the Riemann zeta function.

What I find fascinating is how you actually get to it. After some reading, it would appear to go like this: Riemann was trying to derive a formula that would calculate the number of primes lower than a given boundary number n. In doing this, he started to look at an infinite series based on a complex number, s. That series is defined by the Riemann zeta function:

ζ(s) = 1/1^s + 1/2^s + 1/3^s + ... = Σ (n = 1 to ∞) 1/n^s

which converges whenever the real part of s is greater than 1 (elsewhere the function is defined by analytic continuation). Okay, given that function, the Riemann Hypothesis is saying that all the nontrivial zeros exist only at values of s that have a real component of 1/2.
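
Just to make the convergence point concrete, here's a quick Python sketch (mine, not from any of the pages linked here) that sums the series directly for a couple of values of s with real part greater than 1:

def zeta_partial(s, terms=100000):
    # Naive partial sums of the zeta series; valid only where Re(s) > 1.
    # On the critical line Re(s) = 1/2 the series itself diverges; there
    # the function is defined by analytic continuation, so this won't work.
    return sum(1 / n ** s for n in range(1, terms + 1))

print(zeta_partial(2))       # ~1.6449, the classic pi^2/6
print(zeta_partial(2 + 3j))  # the same series works for complex s

(The fact that the naive sum breaks down exactly where the interesting zeros live is part of what makes the whole thing subtle.)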

And this is one of the things that I find great about mathematics: you start pulling on threads and in the end you are left with just one or two hypotheses. If you prove those, everything else falls into place, domino-style. And what you're actually proving often appears to have no relationship at all with what you were originally interested in!

For more, there's this cool page on MathWorld with a lot of interesting information on the Hypothesis and some of the attempts to prove it so far.

As I said, maybe not useful, but at least entertaining. :)

Categories: science
Posted by diego on September 28, 2003 at 3:56 PM

what went wrong with columbia

An in-depth article from the Washington Post on the causes of the Columbia disaster back in January. Lots of lessons in it, in particular concerning communication between groups. And isn't it amazing that the Shuttle lasted as long as it did? The Shuttle is clearly a great piece of engineering, its many problems notwithstanding.

Categories: science
Posted by diego on August 24, 2003 at 1:13 PM

chaos, complexity and power cuts

Last Thursday, as I heard about the power cuts in the Eastern US, it got me thinking about Complexity and Chaos. Not that the thought was original, of course, but I was still a bit surprised to find an article written along those lines, published the very next day. Cool.

Categories: science
Posted by diego on August 17, 2003 at 10:36 AM

Manhattan offline

Just saw this on CNN.com as a news alert (there's a short story posted now), and went to check it on the TV: there's a massive power outage on the east coast of the US, affecting, apparently, Manhattan, Boston, Cleveland, Detroit, and other cities, as well as some Canadian cities such as Ottawa and Toronto. At the moment, in Manhattan there are no public transportation services: no buses, subways or trains. Most of Downtown Manhattan, including Wall Street, has shut down. Many airports closed. Massive traffic jams.

What a mess.

I was just looking at the TV images of people walking across the FDR bridge (obviously, vehicle traffic has slowed to a standstill) and it's kind of a surreal scene. No cars, just a sea of people, in, out, and on the bridge. Apparently the power cut started as a hardware problem at a transformer at a ConEd plant in NYC, and then it started to spread (no indication whatsoever that sabotage was involved, even though the news people keep bringing it up). At the moment it appears to have upset the balance of the whole US power grid. Amazing how this kind of thing can happen. Just one component, in one power plant, in one city, and you get this result. Apparently the problem is still spreading through the network as the "domino effect" takes hold (reminds me of the failure that brought down the long-distance AT&T network on January 15, 1990, as documented by Bruce Sterling in his excellent book The Hacker Crackdown--although in this case it's a different type of overload that is spreading, the effect is exactly the same: a station goes down, which breeds more overload on the stations that are still up, which then shut down, which...). Hopefully it's being contained, though.
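
Since I brought up the domino effect: here's a toy model of it in Python (mine, with completely made-up numbers, just to illustrate the mechanism) showing how shedding a failed station's load onto the survivors can tip them over one by one:

def cascade(loads, capacity, first_failure):
    # Trip one station, shed its load onto the rest, repeat as needed.
    up = {station for station in loads if station != first_failure}
    shed = loads[first_failure]
    while shed > 0 and up:
        per_station = shed / len(up)
        shed = 0.0
        for station in list(up):
            loads[station] += per_station
            if loads[station] > capacity:  # this one overloads and trips too
                up.remove(station)
                shed += loads[station]
    return up  # whatever is still standing when the cascade stops

# Four stations running at 80% of capacity; losing one takes down the rest.
print(cascade({"A": 0.8, "B": 0.8, "C": 0.8, "D": 0.8}, 1.0, "A"))  # set()

Run it with the stations at 60% instead and the grid absorbs the hit; what's interesting is how sharp the transition between the two outcomes is.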

We tend to think of network effects and related ideas in terms of concepts, particularly concepts related to computer science (these ideas are big in the social sciences, group psychology, and economics too, though). In case anyone had any doubt, this shows that, more and more, everything will be affected by them. The science of Complexity (and the related topic of Chaos) will only grow in importance in the years to come.

Update: this article from the New York Times dismisses the "transformer theory" along with others. So even two days later it's still unclear what caused the problem--they seem to agree it originated in the Midwest, though. The Economist has more information, as well as a historical perspective.

And Bruno said, in a comment, that reports claiming Wall Street had shut down were wrong. There were lots of similar corrections in the newspapers yesterday and today as well. Another one of the many inaccuracies in the initial reporting, along with the transformer story, Boston being affected (it wasn't--not that much, at least), and so on.

Categories: science
Posted by diego on August 14, 2003 at 9:21 PM

a robot that walks on water

Speaking of cool: a short article on a robotic water strider:

Water striders are insects that can perform amazing feats of dexterity on water. They perch on the surfaces of ponds and slow streams as if on solid ground. And, as their name suggests, they can skim blithely across the water's surface, just like surfers on a wave.

Not to be outdone by these humble creatures, John Bush and his colleagues at the Massachusetts Institute of Technology have [created] a mechanical device that can do the same.

Categories: science
Posted by diego on August 11, 2003 at 5:03 PM

what time is it?

Who would have thought that a standard would be needed for timekeeping? Every day a surprise.

Categories: science
Posted by diego on August 3, 2003 at 3:12 PM

a new particle

I didn't see anything about this last week in other places (or maybe it hasn't been widely reported?). Anyway: a new particle, the pentaquark, has been found. Quote:

James Joyce would have been delighted. Quarks, one of the basic building-blocks of matter, were named in the 1960s after a line from his novel “Finnegans Wake”—three quarks for Muster Mark!—because they were then thought to come in three types (the number is now known to be six). Protons and neutrons, however, do consist of three quarks each. And physicists have now discovered a particle that is made of five quarks—a bit of a promotion for Muster Mark.

Categories: science
Posted by diego on July 9, 2003 at 12:30 AM

the price of technology

Every once in a while news resurfaces about the seemingly never-ending conflict in Congo. Mostly it's European media (The Economist in particular keeps up with the subject), but in this case it's a Salon article on the war and the lack of engagement of the international community:

[...] The statistics of the war there are staggering: More than 3.3 million lives lost in five years, and more civilian deaths in one week than in the Iraqi war to date, according to a recent report by Watchlist, an international coalition of nongovernmental organizations focused on children in armed conflict. It is the deadliest conflict since World War II, and although a South African peace plan has been discussed, few are optimistic it will work.
And what fuels this war? Why does nobody press for intervention? Well, money, of course. The money made from large deposits of non-renewable resources, such as diamonds, or rare minerals used in high-tech devices. All our wonderful technologies built on the blood of innocents. Cell phones, computers, you name it.

This is not a tirade or anything, by the way, but it's something I've been thinking about for a while in different contexts, and I'll probably come back to this in the near future. But to begin with...

What I've been thinking is that in the tech world we are missing an element of responsibility: we haven't accepted the fact that we are creating tools that both exacerbate problems (environmental, economic, wars, and so on) and are also used for unsavory ends. (While it's an oft-repeated mantra that the US military is the best in the world, what I don't hear often is that this superiority is entirely due to technology, not numbers. Even the North Korean army has at least half a million more men than the US Army.)

Just as the scientists of the Manhattan Project realized what they'd done and then called for controls, we should begin to take a step back and consider the results of our relentless drive for the next cool thing. A while ago Bill Joy made a similar argument in a Wired article called "Why the Future Doesn't Need Us" (a must read), but he was referring to nanotechnology and its potential future perils. I tend to agree with the counterpoint presented by Jaron Lanier in One Half of a Manifesto. His counterpoint, also worth reading in full, might be expressed (only half quizzically) as: "We can't get our machines to stop crashing, much less are they going to take over the world," or, as he himself puts it in the article, that "cyber-armageddonists have confused ideal computers with real computers, which behave differently."

But while I tend to agree with Lanier, I also think that Joy's argument has an important kernel of truth in it (and he might still end up being right about nanotech), which is this: we have yet to take responsibility for any of the technologies we create. While we demand limits to, say, research on genetics, we have no problem with people who design ever-faster supercomputers. But if you ask me, when you consider the immediate end result, in which advanced technology is immediately (and perhaps inevitably) used for military purposes, or creates an imbalance in society that later leads to suffering, then the supercomputer designer should have as many constraints as the geneticist, if not more.

Idealistic? Sure. In fact, I'm also a hamster in the tech wheel, running madly. I enjoy creating new things, and using cool gadgets. But somehow we need to start considering how to deal with this, to start assigning value to creation, and to stop creating just because Moore's law sounds like a dandy way to live. Otherwise, pretty soon we'll realize that the high-tech industry in general, and Computer Science in particular, has created its own Manhattan Project to look back on and be terrified at, having stayed complicitly silent about it.

Categories: science
Posted by diego on July 6, 2003 at 1:51 AM

the great leap

[via Karlin] An article from the guardian on the change in human behavior that resulted in our dominant status today:

60,000 years ago [was] the low point. Then there were as few as 2,000 humans in existence. The worst time in the history of our species; one we nearly didn't survive.

Categories: science
Posted by diego on July 4, 2003 at 8:59 AM

mars express on its way

I guess that, given fast-food culture, calling an interplanetary mission 'mars express' was bound to happen at some point... (but doesn't it matter that it will take more than six months to get there?) The mission left for Mars yesterday, featuring both an orbiter and a lander. It will be interesting!

Categories: science
Posted by diego on June 3, 2003 at 2:23 AM

the standard Kilogram is... losing weight

According to this New York Times article:

The kilogram is defined by a platinum-iridium cylinder, cast in England in 1889. No one knows why it is shedding weight, at least in comparison with other reference weights, but the change has spurred an international search for a more stable definition.

[...]

The kilogram is the only one of the seven base units of measurement that still retains its 19th-century definition. Over the years, scientists have redefined units like the meter (first based on the earth's circumference) and the second (conceived as a fraction of a day). The meter is now the distance light travels in one-299,792,458th of a second, and a second is the time it takes for a cesium atom to vibrate 9,192,631,770 times. Each can be measured with remarkable precision, and, equally important, can be reproduced anywhere.

[...]

The kilogram was conceived to be the mass of a liter of water, but accurately measuring a liter of water proved to be very difficult. Instead, an English goldsmith was hired to make a platinum-iridium cylinder that would be used to define the kilogram.

[...]

To update the kilogram, Germany is working with scientists from countries including Australia, Italy and Japan to produce a perfectly round one-kilogram silicon crystal. The idea is that by knowing exactly what atoms are in the crystal, how far apart they are and the size of the ball, the number of atoms in the ball can be calculated. That number then becomes the definition of a kilogram.

[...]

An intriguing characteristic of this smooth ball is that there is no way to tell whether it is spinning or at rest. Only if a grain of dust lands on the surface is there something for the eye to track.
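
Out of curiosity, the atom-counting idea is easy to sketch. Here's a back-of-the-envelope Python version (my numbers, approximate, for illustration only; the real project needs precision many orders of magnitude beyond this):

from math import pi

a = 5.431e-10        # silicon lattice constant, meters (diamond cubic cell)
atoms_per_cell = 8   # atoms in one cubic unit cell of crystalline silicon
density = 2329.0     # kg/m^3 for crystalline silicon

volume = 1.0 / density                        # volume of a 1 kg sphere, m^3
radius = (3 * volume / (4 * pi)) ** (1 / 3)   # about 4.7 cm
atoms = atoms_per_cell * volume / a ** 3      # about 2.1e25 atoms

print(f"radius ~ {radius * 100:.1f} cm, atoms ~ {atoms:.3e}")

Knowing the atom count that precisely would effectively pin down Avogadro's number, which is what makes the sphere a candidate definition.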

Categories: science
Posted by diego on May 27, 2003 at 11:12 PM

the shape of protons

This article from the New York Times talks about recent delvings into the "shape" of protons:

Ask four physicists a seemingly simple question — Is a proton round? — and these might be their responses:

Yes.

No.

The first two answers are both correct.

What do you mean by "round"?

Heh. Us humans and our tendency to put things into arbitrary categories such as "round". Quantum mechanics showed us a while ago that that doesn't really work (although, of course, categorization is immensely useful in many circumstances--it's just that our interpretation doesn't necessarily say anything about the underlying reality).

Categories: science
Posted by diego on May 6, 2003 at 6:16 PM

tracking SARS

From The New York Times: an excellent article on SARS and its lessons for how diseases jump species and spread in the modern world. Reminds me of The Coming Plague.

Categories: science
Posted by diego on April 28, 2003 at 12:38 AM

a long day's journey... into the night

I found this a few days ago... a short article on a professor of history at Virginia Tech who is writing a book on how our perception of the night (in terms of how it affects our activities), and what we do with it, has changed over history:

Normally a morning person who thinks best before noon, Ekirch spends a lot of time these days thinking about night, particularly night as experienced by people before the coming of artificial light. "Along with changes in diet, dress, and forms of communication -- all nearly as different as night and day -- variations occurred at night in popular mores, including attitudes toward magic, sexual relations, social authority, and the nocturnal landscape," he says. Nighttime back then was "a rich and complex universe in which persons passed nearly half of their lives -- a shadowy world ... of blanket fairs, night freaks, and curtain lectures, sun-suckers, moon cursers, and night-kings," Ekirch says.
I find this fascinating. Many times I do my best work at night, but I've been able to work well at any time. I also enjoy early afternoon, especially in the summer, and early-early morning. I wonder: I know many programmers who also do a lot of work late at night. If working a "nightshift" was almost unheard of, say, in the 17th century, I don't think it's a coincidence that the very things we are creating at these late hours are what... enable us to work on them. Like electricity. And so on.

The concept of "what's normal" is something that I usually talk about with friends or family, since my hours are really strange. For example, the "weekend," such as it is, has no meaning for me. Of course, I am affected, since the rest of society does care. But for me, personally, a day is a day is a day.

I suddenly remembered I've talked about this before. Instead of repeating myself, I'll just let that entry do it for me. :-)

Categories: science
Posted by diego on April 28, 2003 at 12:34 AM

50 years of DNA

Yesterday was the 10th anniversary of the first release of Mosaic. In two days it's the 50th anniversary of the discovery of the structure of DNA. (And, totally unrelated, it's also William Shakespeare's birthday!).

Categories: science
Posted by diego on April 23, 2003 at 7:14 PM

science doesn't need hype

[via Philip Greenspun]: A wired article: The Lab That Fell to Earth. Quote:

The house that Negroponte built is dealing with a nasty postboom hangover. Corporate donations once accounted for 95 percent of the Lab's budget, with much of the booty coming from thriving sectors like telecom. Now the struggling companies of the world are, needless to say, no longer as liberal with their loot. The Lab's techno-optimism and demo-centric approach to R&D has fallen out of favor. Like many private-sector startups, it has responded with belt-tightening, layoffs, and lots of rhetoric about alternative funding. One look at the vacant lot next door, though, and it's obvious the crisis isn't over.

Even worse, the financial shortfall is dredging up long-festering issues. When times were flush, no one rocked the boat. Now the hard-science groups are bucking for independence, claiming that the Lab's art-meets-technology focus is passé. Students complain that egocentric professors are undermining the Lab's interdisciplinary spirit. And the Lab's reputation as a scientific lightweight - "all icing and no cake," as Negroponte sums up the rap - never seems to die. Designing props for the wacky Flying Karamazov Brothers juggling troupe isn't exactly what the Nobel committee is looking for.

The Media Lab, I think, has suffered more than anything from its own hype. The vision its people described, Negroponte in particular, has always been cool, and in many cases right on the mark.

But for whom?

What I mean is, who would use the technology? I knew a person who got an advance copy of Being Digital in 1996, and I borrowed it. Of course, I read it avidly. But even as I read what I dreamed of, even as I found myself inspired by it, in a hidden corner of my mind I was thinking: This is all very good, but who is going to be able to afford it? Who can benefit from the holographic screens and from having your refrigerator do the shopping for you? Certainly not most of the people in Africa, or Asia, or Latin America, or, hell, anyone who doesn't live in a few choice cities, mostly in the Western world, and their surrounding suburbs. From the article:

Much of that reputation stems from Negroponte's punditry, especially the predictions that peppered his bestseller Being Digital. He was right about quite a bit - the untethering of data, the genesis of the digital video recorder. But there were also the outré prophecies about pill-sized computers that will diagnose illnesses and Barbie dolls that will go online to order new dresses. Crowd-pleasing stuff, but easy targets once the luster wore off technology's star. Britain's The Register now adds snarky quote marks to the phrase "technology expert" when reporting on Negroponte's latest flight of fancy.

To be fair, other Lab alums and personnel took even more outlandish stands. In 1997, Danny Hillis, a Lab graduate and founder of Thinking Machines, was touted in a Los Angeles Times Q&A as a biotechnology guru. Among his predictions: Telephones would be farmed, cabbages manufactured, and trees modified to produce kerosene. The performance earned Hillis the Technoquack of the Month award from the hype-busting Crypt newsletter and bolstered the Lab's reputation for goofiness.

Now, I don't mean that science shouldn't be pursued because it's not going to provide tangible benefits today, or tomorrow, or in ten years, or in the next century. Particle physics, just to take one example, is not really "useful" in that sense. But breakthroughs in science, and especially fundamental science, always end up trickling down to people, even to poor people, sometimes with far-reaching consequences.

But Negroponte's vision, and the Media Lab hype in general, always had this implication that this is going to change the world now, and for everyone. Wide-ranging descriptions of how "everyone would do their shopping" (for example) in the future were, and in some cases still are, the norm. In part I imagine it was a sign of the times: if someone could think that selling pet food over the Internet was a world-changing paradigm, then certainly the Media Lab was entitled to think big, and with more reason. But then again, I wish they hadn't bought so deeply into the self-promotion, and the hype.

Doing science, fundamental science, even things that other people consider useless, freaky, or strange, or ridiculous, is just fine.

Hyping it, pretending to know the future and telling everyone that you're the next hot item is not.

At least not if you're doing science. There's a name for that. It's called Marketing.

On top of that, the Media Lab's funding model is heavily tilted towards corporate money, which makes things hard when recessions, or near-recessions, hit.

You know, like now. As the following paragraph from the article explains:

Unlike other university research entities, the Media Lab has relied almost solely on corporate money: Currently 125 sponsors each kick in a minimum of $200,000 annually, entitling them to license any Lab invention royalty-free and consult with the faculty at whim. In the halcyon days, that was good enough. Company executives happily camped out in the Lab on the off chance that a professor's random brainstorm might have the whiff of IPO about it. Now that corporate excess is out of vogue, sponsorship is a tougher sell. Stung by the telecom sector's demise, a Lucent or a Nortel is now loathe to sponsor the quest to build a "conversational humanoid," a current project in the Gesture & Narrative Language group. "Those companies are fucking dead," says one especially blunt Lab professor. "Where do we get the money from now? I don't know."
The funding model probably needs to be revised a bit, no? Recessions are not really new...

What's happening is also part of the way it goes: behind the hype, the naysayers follow. The truth is usually somewhere in between. If the hype stops, so does the counter-hype. And then the Media Lab could prosper again.

Categories: science
Posted by diego on April 18, 2003 at 11:56 PM

new flash memory from motorola

Motorola will show prototypes of Flash memory chips built with a new kind of nanocrystal, allowing the same storage density in half the area. Cool.

Categories: science
Posted by diego on March 31, 2003 at 2:33 PM

SARS fears spread

All but forgotten in the midst of the war coverage, SARS seems to be spreading, slowly but surely. The CDC has warned that it seems to spread through contact and repeated exposure, and might even be airborne. The death rate related to the disease has remained around 4%, but since there's no cure, patients can only be given supportive treatment, in many cases requiring mechanical respirators. It seems to me that if this hits an area with insufficient resources it could create a real health crisis. Even more so because they still aren't sure how it spreads: every time there's an outbreak they have to quarantine everyone in an area, and this means hospitals will be the hardest hit, for obvious reasons. Today a hospital in Canada had to shut down and place everyone inside under quarantine.

As if humans weren't creating enough problems, now this...

Categories: science
Posted by diego on March 30, 2003 at 6:26 PM

modern plagues

Last year I read The Coming Plague by Laurie Garrett and it scared the hell out of me. Today I saw this WHO travel advisory that reminded me of it. Not being an alarmist, I took it in stride, but it made me wonder about the kinds of new diseases that we'll see in the next few years, and how fast they will spread. We definitely need some new thinking on how to approach disease and health care in general.

Categories: science
Posted by diego on March 15, 2003 at 5:47 PM

selling gene pools

A slightly unsettling (maybe even creepy?) Salon article:

The newest resources "discovered" in Estonia are the genes of its 1.4 million citizens. The country's government and a Silicon Valley start-up called EGeen International are treating the Estonian gene pool as a commodity to be exploited for medical research and profit.

EGeen owns the exclusive commercial rights to data from the Estonian Gene Bank Project. In March the bank will begin a full-scale effort to collect blood samples and medical histories that will help scientists understand Estonians from the inside out.

Selling exclusive access to their gene pool? That sounds quite ridiculous. Setting aside the ethical implications, we could just question the issue of "gene ownership". Aren't my genes my genes? The mix between biotech and "free markets" is certainly creating some strange creatures. And just wait for nanotechnology to be a real force...

Categories: science
Posted by diego on March 10, 2003 at 8:08 PM

mobile mesh == ad hoc networks

Now, this really, really pissed me off.

Russ pointed to an article/press release on Mitsubishi, about a technology they "developed". Here is an excerpt:

What Mitsubishi has developed, is the prototype of a relay-type mobile communications technology, called Mobile Telecommunications Radio and Relay Network (MOTERAN). The basic patent has been already granted in Europe and Japan and has been applied for in major countries around the world. Unlike conventional mobile communications, MOTERAN allows each terminal to act as a relay point communicating with other terminals without the requirement for infrastructure, such as base stations or switches. This could be known as peer to peer networking.
Russ's comments are good. My problem has nothing to do with his post or anything he said.

What pissed me off is the article itself, and Mitsubishi's pretense that this is new, or innovative, or whatever. In fact, what Mitsubishi describes there is an ad hoc network. (Disclaimer: part of my PhD thesis has to do with ad hoc networks.) Mitsubishi's work is derivative (a friendly term for "outright ripoff") and they should acknowledge it, but of course they don't, going as far as using buzzword-laden terms to deflect attention and get media interest. On top of that, they got a patent on it! I wonder what the patent says. Here is the original press release from Mitsubishi, which says that "the technology on which their development is based was invented in Germany in 1996." Really? Here is a link from CiteSeer for a paper that described DSR (the Dynamic Source Routing protocol), one of the best-known dynamic self-organizing protocols for ad hoc networks. And the paper is... from 1996. And it wasn't even the first paper on the topic (see below).

Ad Hoc Networks have been under development for several years, both in universities and corporations such as Ericsson (as part of research efforts and commercial efforts as well). There's an ACM Conference, MobiHoc (which has existed since the year 2000), that deals specifically with the topic of ad hoc networks. Ad Hoc routing protocols have been under heavy R&D since the early 90s. The idea that any one person or company can get a patent on something as generic as what is described in the "article" is laughable. Sure, they might patent some work based on it, maybe even some particular algorithm (although my understanding is that actual algorithms can't be patented--only copyrighted, and that what you can patent is the process described by the algorithm if anything. I might be off-base with that). But patenting the concept? David Johnson, one of the researchers who created DSR, has a page with previous publications on the topic of ad hoc mobile networks that date back to 1994, which proves that the research was ongoing well before that date.

In fact, hey, why talk about "pie-in-the-sky" research at all? The IETF has a working group called Mobile Ad Hoc Networks (MANET) which has been working for years on standardizing protocols for dynamic, self-organizing routing. The earliest posts for IETF drafts date back to late 1997!
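
Since I mentioned DSR: for anyone curious about what "dynamic source routing" actually involves, here's a minimal sketch (mine, and heavily simplified; real DSR floods route requests over the radio, caches routes, and handles link breakage) of route discovery on a toy topology:

from collections import deque

def discover_route(neighbors, src, dst):
    # Simulate flooding a ROUTE REQUEST: each copy of the "packet" carries
    # the route it has traversed so far; the first to reach dst wins and
    # would be carried back to src in a ROUTE REPLY.
    seen = {src}
    queue = deque([[src]])
    while queue:
        route = queue.popleft()
        node = route[-1]
        if node == dst:
            return route
        for hop in neighbors[node]:
            if hop not in seen:  # real DSR drops duplicate request IDs
                seen.add(hop)
                queue.append(route + [hop])
    return None

# Five terminals in a line, no base stations: A reaches E by peer relaying.
topology = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
            "D": ["C", "E"], "E": ["D"]}
print(discover_route(topology, "A", "E"))  # ['A', 'B', 'C', 'D', 'E']

The whole point is that the route lives in the packet, not in any base station or switch, which is exactly the "infrastructure-free" property Mitsubishi is presenting as new.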

Okay, okay, maybe I'm overreacting. Maybe they never claimed to have invented the whole field. But they make it sound like it. It's disgusting when a company does that. It's even worse when trade publications repeat their news releases like parrots, confusing everybody, without even checking the facts.

Categories: science, technology
Posted by diego on March 2, 2003 at 11:30 PM

no cure for stupidity

From this New Scientist piece: "Stupidity should be cured, says DNA discoverer":

"If you are really stupid, I would call that a disease," says Watson, now president of the Cold Spring Harbour Laboratory, New York. "The lower 10 per cent who really have difficulty, even in elementary school, what's the cause of it? A lot of people would like to say, 'Well, poverty, things like that.' It probably isn't. So I'd like to get rid of that, to help the lower 10 per cent."

Watson, no stranger to controversy, also suggests that genes influencing beauty could also be engineered. "People say it would be terrible if we made all girls pretty. I think it would be great."

Can you say "Eugenics"?

Of course there must be a genetic component to "stupidity" (whatever that is--I'm sure that many people Watson would consider "stupid" live happy, productive lives). But then the world is an imperfect place. Once you "fix" that "lower 10%" you get a new 10% at the bottom. Why not "fix" that too? And, as Watson so eloquently puts it, let's "make all girls pretty" in the process. I wonder, pretty according to what measure? Would he like a society of Barbies and Kens? (oh, sorry, no Kens. He didn't say that men should be "pretty".) Is Barbie "pretty"? Watson seems to think that "prettiness" as well as a number of other traits can be objectively defined, and then imposed on society at large. Hitler would be proud.

The other day I was reading an op-ed in the Washington Post that said 6 out of 10 ten-year-olds in the Washington, D.C. area can't read. But hey, no problem, Watson would say... these guys are lost, but we can "fix the next batch," right? Just let me tweak this little gene here and everything will be just fine...

I think that since Watson dismisses environmental factors such as poverty and education and "things like that" out of hand, he should watch Gattaca to see a plausible endgame for his ideas. But "things like that" would never happen, right? After all, humans are so great at dealing with this kind of power.

Or maybe there's a gene for intolerance that we can "fix" as well?

Categories: science
Posted by diego on March 2, 2003 at 3:39 PM

high security?

A Wired article on the security problems of Los Alamos National Laboratory:

There are no armed guards to knock out. No sensors to deactivate. No surveillance cameras to cripple. To sneak into Los Alamos National Laboratory, the world's most important nuclear research facility, all you do is step over a few strands of rusted, calf-high barbed wire.

I should know. On Saturday morning, I slipped into and out of a top-secret area of the lab while guards sat, unaware, less than a hundred yards away.

A few weeks ago, The Economist ran a related article on Los Alamos with the headline "Next stop for Blix? - Even America has a hard time keeping track of its arms programmes", and said:

IT BUILDS weapons of mass destruction. And it cannot account for dozens of computers and hundreds of thousands of dollars' worth of other equipment. Were the goings-on that have lately been exposed at Los Alamos National Laboratory in New Mexico to be uncovered in Iraq, the United Nations weapons inspectors would pounce on them with a furious cry.

Los Alamos, where the first atomic bombs were built, is in as bad a crisis as it has known since the end of the cold war. This time the concern is not about its fundamental job; indeed, the realisation that the world has not been made safe by the collapse of communism, and that there are still explosive dangers out there, has put a spring back in the step of nuclear-weapons designers. The current trouble is about that familiar old villain, simple mismanagement.

Categories: geopolitics, science, technology
Posted by diego on March 1, 2003 at 9:25 AM

superstrings

From the journal Nature (requires login), a paper on how new gravity measurements constrain string theory forces:

[...] we report a search for gravitational-strength forces using planar oscillators separated by a gap of 108 µm. No new forces are observed, ruling out a substantial portion of the previously allowed parameter space for the strange and gluon moduli forces, and setting a new upper limit on the range of the string dilaton and radion forces.
Here's the summary in plain English from Scientific American:
The first measurement of the gravitational constant came more than 100 years later, but testing gravity over very short distances has proved difficult. Now scientists have examined the gravitational attraction between two objects just a tenth of a millimeter apart--the smallest gap yet for such trials. The findings, published today in the journal Nature, set upper limits for some of the forces predicted by string theory.

Categories: science
Posted by diego on February 28, 2003 at 11:48 AM
