March 21, 1998

Smart Machines, and Why We Fear Them

By ASTRO TELLER

PITTSBURGH -- The cultural definition of artificial intelligence -- or A.I., as it is known -- goes something like this: "A.I. is the science of how to get machines to do the things they do in the movies." No wonder the subject makes some people nervous.

The popular media portray artificial intelligence as one of the heights of human accomplishment, but also as an inevitable catalyst for the downfall of our species. From Frankenstein through HAL to "The Terminator," our culture enjoys wallowing in the fear of our creations.

In the real world, we see much the same love-hate relationship. The Internet enthralls us, but also makes us wonder if computers are devouring our privacy and personalities. We're fascinated by whether a computer can beat Garry Kasparov at chess, but when it does we're deluged with nervous commentary on what it all means.

These are the same understandable but misguided fears that, not so long ago, induced people to burn midwives at the stake as witches. When we fear the unknown and the new view of ourselves that naturally accompanies knowledge, we cloud our vision and block our path to achieving an enlightened civilization.

The real goal of A.I. is to build devices that can perceive, reason, learn and act at or above human performance levels. But even that definition makes people uncomfortable. The most prevalent argument against A.I. is similar to the popular argument against cloning: the "don't mess with Mother Nature" defense.

Unlike our fear of cloning, however, our discomfort with A.I. stems from an entrenched desire in Western culture to believe that humanity's place in the world is privileged, unique and superior.

Recent successes in artificial intelligence clearly tend toward intelligent aids, not ecological competitors. Cars are beginning to drive themselves using A.I. techniques. Factories now monitor themselves and request maintenance before breakdowns occur. A.I. programs can act as real-time translators, mediating phone calls between people who don't share a language. My television uses A.I. to quiet the commercials. Your VCR may well use it to reduce on-screen noise when playing a worn or damaged tape.

But the question of when A.I. programs will match or exceed human mental performance in various areas is a reasonable one. In some arenas, the answer is "today." A.I. machines are proving math theorems, sorting mail and putting paintbrush to canvas like the masters.

When A.I. will clear other mental hurdles, notably "self-awareness," is a largely subjective matter. Which is to say that when "it" happens depends very much on what you mean by "it." The Wright brothers' "Flyer" was a plane, even though it was missing most features of a Boeing 747. I make this distinction because people have a way of raising the bar as artificial intelligence makes progress, so that they don't have to admit that machines can be creative or intelligent.

Why is this so hard for us? I think three psychological forces have generated our antagonistic view of an admittedly volatile area of science:

All people are xenophobes to some extent. Evolution has, with good reason, dictated that animals will fear the "other." Thus we are all cautious of differences in ethnicity, gender, social class and so on. Imagine how magnified those fears become when a culture confronts something as potentially alien as an artificial intelligence. Will you trust your child with a robotic chauffeur, even knowing that, statistically, it will get into fewer accidents than a human driver? If not, isn't that a form of bigotry?

All people are Luddites to some extent. Who can blame them? A Luddite fears change that threatens job security. But history has shown us that when some jobs disappear, others are created. The Luddite in us is often just our unwillingness to learn the skills required to keep up. Unlike xenophobia, however, Luddism is caused by misunderstanding, not evolutionary necessity.

All people are narcissists to some extent. Five hundred years ago, the Copernican revolution showed that Earth circles the sun, not the converse; people became upset that their world was no longer the physical center of the universe. Eventually, most everyone got over it, largely because they still believed that humans were the purpose for the universe.

Some 350 years later the Darwinian revolution undermined that belief; the universe, instead of having been created for Homo sapiens, actually created us, and very recently. But many of us have recovered from that shock, too, probably because we still believe that humans are the center of the mental universe.

Today, A.I. threatens one of the last remaining things separating us from the "lesser" animals. But we should have learned by now that every time we give up a piece of our narcissism, we profit as a species.

The Copernican revolution, unsettling as it was, taught us about our universe. Similarly, the Darwinian revolution taught us about our bodies.

In the same way, building intelligent machines can teach us about our minds -- about who we are -- and those lessons will make our world a better place. To win that knowledge, though, our species will have to trade in another piece of its vanity.

Astro Teller, a doctoral candidate in A.I. at Carnegie Mellon University, is the author of "Exegesis," a novel about the emotional development of an artificial intelligence.

Copyright 1998 The New York Times Company