
the lack of an ethics conversation in computer science

In April 2000 Bill Joy published in Wired an excellent article titled Why the future doesn't need us. In it he argued that, for once, maybe we should stop for a moment and think, because the technologies emerging now (molecular nanotechnology, genetic engineering, etc.) present both a promise and a distinctive threat to the human species: things like near immortality on one hand, and complete destruction on the other. I'd like to quote a few paragraphs at relative length, with an eye on what I want to discuss, so bear with me a little:

["Unabomber" Theodore] Kaczynski's dystopian vision describes unintended consequences, a well-known problem with the design and use of technology, and one that is clearly related to Murphy's law - "Anything that can go wrong, will." (Actually, this is Finagle's law, which in itself shows that Finagle was right.) Our overuse of antibiotics has led to what may be the biggest such problem so far: the emergence of antibiotic-resistant and much more dangerous bacteria. Similar things happened when attempts to eliminate malarial mosquitoes using DDT caused them to acquire DDT resistance; malarial parasites likewise acquired multi-drug-resistant genes.

The cause of many such surprises seems clear: The systems involved are complex, involving interaction among and feedback between many parts. Any changes to such a system will cascade in ways that are difficult to predict; this is especially true when human actions are involved.


What was different in the 20th century? Certainly, the technologies underlying the weapons of mass destruction (WMD) - nuclear, biological, and chemical (NBC) - were powerful, and the weapons an enormous threat. But building nuclear weapons required, at least for a time, access to both rare - indeed, effectively unavailable - raw materials and highly protected information; biological and chemical weapons programs also tended to require large-scale activities.

The 21st-century technologies - genetics, nanotechnology, and robotics (GNR) - are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.

Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication.

I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.


Nothing about the way I got involved with computers suggested to me that I was going to be facing these kinds of issues.

(My emphasis). What Joy (whom I personally consider among the greatest people in the history of computing) describes in that last sentence is striking not because of what it implies, but because we don't hear it often enough.

When we hear the word "ethics" together with "computers" we immediately think about issues like copyright, file trading, and the like. While at Drexel as an undergrad I took a "computer ethics" class where indeed the main topics of discussion were copying, copyright law, the "hacker ethos", etc. The class was fantastic, but there was something missing, and it took me a good while to figure out what it was.

What was missing was a discussion of the most fundamental ethical problem of all when dealing with a certain discipline, particularly one like ours where "yesterday" means an hour ago and last year is barely last month. We try to run faster and faster, trying to "catch up" and "stay ahead of the curve" (and any number of other clichés). But we never, ever ask ourselves: should we do this at all?

In other words: what about the consequences?

Let's take a detour through history. Pull back in time: it is June, 1942. Nuclear weapons, discussed theoretically for some time, are rumored to be under development in Nazi Germany (the rumors started around 1939, but of course back then most people didn't quite realize the viciousness of the Nazis). The US government, urged by some of the most brilliant scientists in history (including Einstein), started the Manhattan Project, centered at Los Alamos, to develop its own nuclear weapon: a fission device, or A-bomb. (Fusion devices, also known as H-bombs, which use a fission reaction as the trigger and are orders of magnitude more powerful, would come later, built on the breakthroughs of the Manhattan Project.)

But then, after the first successful test at the Trinity site on July 16, 1945, something happened. The scientists, who up until that point had been so absorbed in technological questions that they had forgotten to think about the philosophical ones, realized what they had built. Oppenheimer, the scientific leader of the project, famously said:

I remembered the line from the Hindu scripture, the Bhagavad-Gita: Vishnu is trying to persuade the Prince that he should do his duty and to impress him he takes on his multi-armed form and says, "Now I am become Death, the destroyer of worlds."
While Kenneth Bainbridge, in charge of the test, later recalled telling Oppenheimer at that moment:
"Now we are all sons of bitches."
Following the test, the scientists got together and tried to stop the bomb from ever being used. To which Truman said (I'm paraphrasing):
"What did they think they were building it for? We can't uninvent it."
Which was, of course, quite true.

"All this sanctimonious preaching is well and good" (I hear you think) "but what the hell does it have to do with computer science?"

Well. :)

When Bill Joy's piece came out, there was a lot of discussion on the topic. Many reacted viscerally, attacking Joy as a doomsayer, a Cassandra, and so on. Eventually the topic died down. Not much happened. September 11 and then the war in Iraq, contrary to what one might expect, did nothing to revive it. Technology was called upon in aid of the military, spying, anti-terrorism efforts, and so on. The larger question, of whether we should stop to think for a moment before rushing to create things that "we can't uninvent", has been largely set aside. Joy was essentially trying to jump-start the discussion that should have happened before the Manhattan Project was started. True, given the Nazi threat, it might have gone ahead anyway. But the more important point is that if the Manhattan Project had never started, nuclear weapons might not exist today.


After WW2 Europe was in tatters, and Germany in particular was completely destroyed. There were only two powers left, only two that had the resources, the know-how, and the incentive to create nuclear weapons. So if the US had not developed them, it would be reasonable to ask: what about the Soviets?

As has been documented in books like The Sword and the Shield (based on KGB files), the Soviet Union, while powerful and full of brilliant scientists, could not have brought its own nuclear effort to fruition but for two reasons: 1) the Americans had nuclear weapons, and 2) the Soviets stole the most crucial parts of the technology from the Americans. The Soviet Union was well informed, through spies and "conscientious objectors", of the advances in the US nuclear effort. Key elements, such as the spherical implosion device, were copied verbatim. And even so, it took the Soviet Union four more years (until its first test on August 29, 1949) to duplicate the technology.

Is it obvious, then, that had the Manhattan Project never existed, nuclear weapons wouldn't have been developed? Of course not. But it is clear that the nature of the Cold War might have been radically altered (if there was to be a Cold War at all), and at a minimum nuclear weapons wouldn't have existed for several more years.

Now, historical revisionism is not my thing: what happened, happened. But we can learn from it. Had there been a meaningful discussion on nuclear power before the Manhattan Project, even if it had been completed, maybe we would have come up with ways to avert the nuclear arms race that followed. Maybe protective measures that took time, and trial, and error, to work out would have been in place earlier.

Maybe not. But at least it wouldn't have been for lack of trying.

"Fine. But why talk about computer science?" someone might say. "What about, say, bioengineering?" Take cloning, for example, a field similarly rife with both peril and promise. An ongoing discussion exists, even among lawmakers. Maybe the answer we arrive at in the end will be wrong. Maybe we'll bungle it anyway. But it's a good bet that whatever happens, we'll be walking into it with our eyes wide open. It will be our choice, not an unforeseen consequence that is almost forced upon us.

The difference between CS and everything else is that we seem to be blissfully unaware of the consequences of what we're doing. Consider for a second: of all the weapon systems that exist today, of all the increasingly sophisticated missiles and bombs, of all the combat airplanes designed since the early '80s, which would have been possible without computers?

The answer: Zero. Zilch. None.

Airplanes like the B-2 bomber or the F-117, in fact, cannot fly at all without computers; they're too unstable for humans to handle. Reagan's SDI (aka "Star Wars"), credited by some with bringing about the fall of the Soviet Union, was a perfect example of the influence of computers (unworkable at the time, true, but a perfect example nevertheless).

During the war in Iraq last year, as I watched the (conveniently) sanitized nightscope visuals of bombs falling on Baghdad and other places in Iraq, I couldn't help but think, constantly, of the number of programs and microchips and PCI buses that were making it possible. Forget about whether the war was right or wrong. What matters is that, for good or ill, it is the technology we built, and continue to build every day, that enables these capabilities for both defense and destruction.

So what's our share of the responsibility in this? If we are to believe the deafening silence on the matter, absolutely none.

This responsibility appears obvious when something goes wrong (as in this case, or on any of the other occasions when bugs have caused crashes, accidents, or equipment failures), but it is always there.

It could be argued that once the military-industrial complex (as Eisenhower aptly described it) took over, driven by market forces, which are inherently non-ethical (note: non-ethical, not un-ethical), we lost all hope of having any say in this. But is that the truth? Isn't it about people in the end?

And this is relevant today. Take cameras in cell phones. Wow, cool stuff, we said. But now that we've got 50 million of the little critters out there, suddenly people are screaming: the vanishing of privacy! Aiee! Well, why didn't we think of it before? How many people were involved at the early stages of this development? A few, as with anything. And how many thought about the consequences? How many tried to anticipate, and maybe even somehow circumvent, some of the problems we're facing today?

Wanna bet on that number?

Now, to make it absolutely clear: I'm not saying we should all just stow our keyboards away and take up farming or something of the sort. I'm all too aware that this sounds preachy and gloomy, but I put myself squarely with the rest. I am no better, or worse, and I mean that.

All I'm saying is that, when we make a choice to go forward, we should be aware of what we know, and what we don't know. We should have thought about the risks. We should be thinking about ways to minimize them. We should pause for a moment and, in Einstein's terms, perform a small gedankenexperiment: what are the consequences of what I'm doing? Do the benefits outweigh the risks? What would happen if anyone could build this? How hard is it to build? What would others do with it? And so on.

We should be discussing this topic in our universities, for starters. Talking about copyright is useful, but there are larger things at stake, the RIAA's pronouncements notwithstanding.

This is all the more necessary because we're reaching a point where technologies increasingly deal with self-replicating systems that are even more difficult to understand, not to mention control (computer viruses, anyone?), as Joy so clearly put it in his article.

We should be having a meaningful, ongoing conversation about what we do and why. Yes, market forces are all well and good, but in the end it comes down to people. And it's people, us, that should be thinking about these issues before we do things, not after.

These are difficult questions, with no clear-cut answers. Sometimes the questions themselves aren't even clear. But we should try, at least.

Because, when there's an oncoming train and you're tied to the tracks, closing your eyes and humming to yourself doesn't really do anything to get you out of there.

Categories: science, technology
Posted by diego on February 23 2004 at 10:27 PM

Copyright © Diego Doval 2002-2011.