Mastering the Robot

Robo sapiens: The robot "Cog," backlit with light trails at the Artificial Intelligence Lab at MIT in Cambridge, Massachusetts. (Peter Menzel)


By Curt Suplee
Washington Post Staff Writer
Sunday, September 17, 2000; Page A01

In Cambridge, Mass., a larger-than-life-size android named Cog locks its video eyes on the faces of visitors while smoothly slithering a Slinky from hand to hand. At the Smithsonian, a gregarious, self-propelled gizmo that looks like a glorified Shop-Vac has taken visitors on museum tours. In Pittsburgh, a 4-foot, faceless but matronly "nursebot" named Flo briskly answers questions, such as "Hey, Flo! What's on NBC tonight?"

After decades of promises, hopes and disappointments, it appears that the long-awaited "robot revolution" may at last be getting underway.

Around the globe, quasi-autonomous devices have become increasingly common on factory floors, in hospital corridors and in farm fields. Scores more are in development or for sale. Physicians can use robotics to aid in ultraprecise bone and brain surgery.

Affluent parents can pick up a Sony cyber-pooch to amuse the kids or an ottoman-sized, video-equipped "AmigoBOT" to follow and monitor them while they play. The Pentagon is researching a dozen ways to put robots in the battlefield, from self-driving vehicles to swarms of tiny surveillance robots that would pool their information to create a comprehensive, multiangle view of combat zones. And this fall, the first interactive robot baby dolls will hit toy stores. Just last month, Brandeis University researchers announced a major milestone--a computerized system that automatically creates, evolves, improves and builds a variety of simple mobile creatures without any significant human intervention.

The rise in robot technology has been fueled by a number of factors, including spectacular advances in computer power, miniaturization of components, the availability of inexpensive sonar, infrared or laser sensors, improvements in speech-recognition and voice-generation technology, and--perhaps most important--the emergence several years ago of a new paradigm for designing quasi-autonomous objects.

"For 30 years, we've had no results to speak of," said artificial intelligence pioneer Hans P. Moravec of Carnegie Mellon University in Pittsburgh, one of the world's top robotics research sites. "But that's all going to change in the next 10 years."

In the near future, it is not unreasonable "to imagine multiple robotic devices in every business, home and office," said James A. Hendler, head of the University of Maryland's Autonomous Mobile Robotics Laboratory, now working at the Defense Advanced Research Projects Agency.

In fact, Moravec and several other experts are convinced that exponential growth in computing power may soon put robotic systems within reach of the kind of brainpower that could ultimately put humanity out of business.

"Over the next several decades, machine competence will rival--and ultimately surpass--any particular human skill one cares to cite," wrote Ray Kurzweil, inventor of computerized speech-recognition, reading and music systems, in his new book, "The Age of Spiritual Machines: When Computers Exceed Human Intelligence." The emergence of these new creatures, Kurzweil declares, "will be a development of greater import than any of the events that have shaped human history."

Autonomy? No

Whether sheer computer power can translate into genuinely human capability, however, is a hotly debated matter. A true android of the R2-D2 variety--that is, an autonomous robot that can make lots of decisions for itself, handle unfamiliar surroundings and situations, and converse usefully with people--may be a very long way off.

"The field has matured enormously," said Leslie Haelbling of the Massachusetts Institute of Technology in Cambridge, "but people have moved away from the grander goals" of creating truly human capability. Even mimicking a dog is well out of reach.

One major obstacle is that scientists have not yet created a device that can do what any young child does automatically: Recognize grandma when she's wearing sunglasses, has a new haircut, and is standing in a crowd with her face turned aside.

By the age of 2, any human can see the difference between a hole in the floor and a black spot painted on the floor.

Thanks to miniaturization of the kinds of infrared, laser-light and ultrasound sensors widely used as range finders for consumer cameras, today's robots can discern the distance to an object accurately. But so far, robots have no dependable way to tell a hole from a spot, much less a boy from a girl or a Ford from a Chevy.

Another impediment to rapid progress, experts say, is that until the late 1990s the price of components was so high that few researchers were able to do the kind of creative, blue-sky research that often produces breakthroughs.

That is changing. Small muscle-like motors and miniaturized joints are becoming cheaper all the time, and the spread of video cameras, laser devices and ultrasound range-finder technologies has driven down the cost of items once thought exotic.

"I feel like we're on the cusp," said Rodney Brooks, director of MIT's Artificial Intelligence Laboratory. "We're in the same situation that the country was in 1975 with individual computers. . . . We really haven't yet had the big first success like the Apple II, and we're wondering, 'Are we all fooling ourselves into thinking we're going to get to the Apple II stage?' If we do, it's going to take off."

Rules for Robots

Even if robots do not become as ubiquitous as the PC, less grandiose but extremely useful goals are being achieved. Carnegie Mellon has devised self-directing tractors that harvest hundreds of acres around the clock in California, combining location information from global positioning sensors with video image processing that identifies rows of uncut crops.

Similar systems have enabled the university's fleet of experimental unpiloted "NavLab" vehicles to drive across the United States at highway speed--without human assistance 98 percent of the time--by tracking the edge of the road and interpreting other sensor data.
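
The idea can be caricatured in a few lines of code. The sketch below is purely illustrative and not Carnegie Mellon's software: it assumes a GPS unit that reports how far the vehicle has drifted from its planned line, a vision system that reports the offset from the crop row it sees, and a simple proportional controller that blends the two into one steering correction.

```python
# Hypothetical sensor-fusion sketch: blend a GPS-derived cross-track error
# with a vision-derived crop-row offset into a single steering command.
# All names, weights and gains here are invented for illustration.

def steering_command(gps_cross_track_m, vision_row_offset_m,
                     gps_weight=0.4, vision_weight=0.6, gain=0.8):
    """Return a steering angle (radians) nudging the vehicle back toward
    the planned line. Positive offsets mean the vehicle has drifted
    right, so the correction steers left (negative angle)."""
    # Weighted blend of the two independent error estimates.
    error = gps_weight * gps_cross_track_m + vision_weight * vision_row_offset_m
    # Proportional control; a real system would add damping, speed
    # scaling and sanity checks on each sensor before trusting it.
    return -gain * error

# Example: GPS says we are 0.5 m right of the line; vision roughly agrees.
print(steering_command(0.5, 0.3))  # -0.304: a gentle left correction
```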

Earlier this year, during a gathering at Carnegie Mellon, a gaggle of robots acted as platter-bearing robo-waiters, serving drinks and cookies. The machines relied on only a few software rules, such as: Don't get too close to any obstacle and only offer things to moving objects, which presumably are people.

"Sure, occasionally it offers a cookie to a desk or wall," said Hendler of the University of Maryland. "But mostly it's people. And the system is extremely simple."

Eventually, if the cost of human labor for uncomplicated tasks grows too high (as it already has for some industrial and maintenance jobs), the market for robots that scrub, sweep, vacuum and mow the lawn could expand fast.

Indeed, the first robots to directly affect people's lives may not be humanoid at all, but more like benign sentinels, thanks to recent progress in 3D representation. "The dramatic advance in computer graphics during the 1980s to '90s will happen to computer vision in the period from 2000 to 2010," said Takeo Kanade, director of Carnegie Mellon's Robotics Institute.

At Carnegie Mellon, the University of Maryland, DARPA and elsewhere, researchers are developing sensor systems that can identify an individual or vehicle, learn its typical behaviors and routes, and then discern when something unusual is going on.

"Imagine, for example, that your normal daily habits are known to your house," said Kanade. "It knows where you are and what you're doing, and if you're not moving as much, then the house senses that there is something wrong with you and calls the appropriate help. The scenario is definitely there."

In the long run, many planners are assuming that demand for personal-care robots is bound to explode as the population ages and the cost of nursing home care (already averaging $50,000 a year) continues to increase.

"Seniors may be prone to forget to take their medications or therapies, and caregivers may be too overworked to remember," said Nicholas Roy, one of the developers of "Flo," the nursebot. But "these are the sorts of tasks that computers and robots are really good at."

The Smiling Android

Not surprisingly, many labs around the world are investigating the way people react to robots, an issue that splits into two main schools of thought. One, typified by researchers at the Science University of Tokyo and other Japanese centers, believes that only the most realistically human-like faces will be suitable for dealing with senior citizens (an enormous future problem in Japan's rapidly aging society) and infants.

The other is epitomized by Kismet. Designed by MIT graduate student Cynthia Breazeal, it's an 8-inch-high disembodied but highly animated face that looks like somebody crossbred a Furby and a Lego kit. "It's explicitly not like a person," Breazeal said. "People's expectations are so high as to what the human face should look like that even the slightest mistake creeps us out."

And yet people look instinctively for visual cues to how the other person is reacting. So Kismet has a range of expressions, from wide-open eyes for "happiness" and raised ears for "arousal" to upturned mouth for a smile. "It's a caricature that is intuitive and natural. We put in just enough information so that people will sort of fill it in," Breazeal said. "It's a cute, appealing creature that's probably smarter than an animal. You want to interact with it because it's fun."

But it also wants to interact with you: Kismet is programmed to keep itself happy by finding out how its actions affect a human. In this respect, it resembles a human infant that "can't satiate these drives on its own," said Breazeal.

Kismet's software provides three "drives"--stimulation, social interaction, and fatigue when it is overstimulated--and it attempts to manipulate a nearby human into playing with it by using nine different facial expressions and, recently, voice output. One ultimate goal, Breazeal said, is to observe humans and machines "mutually adapting to each other in a natural way."
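
In outline, such a drive system is straightforward, even though Kismet's real software is far richer. The toy sketch below, with invented names and numbers, keeps three drives between 0 and 1 and has the face advertise whichever is most urgent.

```python
# Toy drive system in the spirit of Kismet's three drives; every number
# and expression name here is invented for illustration.

def clamp(x):
    return max(0.0, min(1.0, x))

drives = {"stimulation": 0.5, "social": 0.5, "fatigue": 0.0}

EXPRESSIONS = {
    "stimulation": "wide eyes, perked ears",    # bored: solicit play
    "social": "sad face, seeking eye contact",  # lonely: solicit company
    "fatigue": "drooping lids, turning away",   # overstimulated: wind down
}

def update(stimulus_level):
    """One tick of the dynamics; stimulus_level runs 0 (ignored) to 1."""
    # Idle time raises the stimulation drive; play satisfies it.
    drives["stimulation"] = clamp(drives["stimulation"] + 0.1 - 0.3 * stimulus_level)
    # The social drive eases whenever someone is engaging the robot.
    drives["social"] = clamp(drives["social"] + (0.05 if stimulus_level < 0.2 else -0.1))
    # Sustained heavy stimulation builds fatigue.
    drives["fatigue"] = clamp(drives["fatigue"] + 0.2 * max(0.0, stimulus_level - 0.5))

def current_expression():
    return EXPRESSIONS[max(drives, key=drives.get)]

for _ in range(10):
    update(stimulus_level=0.9)   # a person playing very energetically
print(current_expression())     # fatigue wins: "drooping lids, turning away"
```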

A New Paradigm

Down the hall from Kismet is its big brother, Cog (for "cognitive"). MIT's experimental humanoid replica--at least from the waist up--is being used in an attempt to design and integrate artificial senses and ultimately recreate all the sensory and motor dynamics of a human body. They and dozens of other advanced android systems worldwide are the product of a new way of thinking.

In the late 1970s and early '80s, following exciting advances in information theory and understanding of learning, many experts thought they might have the robot problem licked with a combination of vast computer memory and rule-based artificial intelligence routines that provided "decision trees" with options for multiple contingencies.

"The traditional approach," said Carnegie Mellon's Kanade, could be described as "let me program what I think I'm doing." For example, interview people who are trying to find the right gate at an airport and carefully note the decision stages they use. Then program that same logic into a computer and provide it with an accurate map of the airport.

But conventional rule-based AI was soon abandoned as a means of designing robots that could make their own way in the world. There is no way to program a detailed map of terrain you've never seen and that keeps changing; and many situations are so complex or ambiguous that no hard-and-fast rules can apply. Something radically new was clearly needed.

In the mid-'80s, it arrived. Brooks and similar thinkers began to concentrate on systems that could make their own decisions in uncertain contexts. The new paradigm was not the decision tree, but the insect. Flies and mosquitoes find their way around quite nicely, rarely blundering into walls. And yet their nervous systems typically contain fewer than one million neurons and weigh less than a bit of lint. The human nervous system, by contrast, contains over 100 billion neurons.

So Brooks decided to concentrate on building systems with a small number of very simple but highly flexible programs that mimicked insects and similarly uncomplicated animals. Progress was accelerated by the arrival of a new technology called "neural networks," a means of imitating nervous systems in transistors. These devices could actually learn, in the sense that if they did something wrong, they'd revise their programmed instructions and try again until they got closer to being right.

All those innovations have produced robots that can act far more autonomously and spontaneously than their predecessors. But they are still very far from the popular image of a working humanoid robot--"robot" being a word coined early in the century by Czech playwright Karel Capek--that persists in movies and novels.
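
That "revise and try again" learning can be demonstrated with a single artificial neuron. The sketch below is a generic textbook perceptron, not any lab's actual code; the sensor readings and labels are invented, loosely echoing the hole-versus-spot problem described earlier.

```python
# One artificial neuron that nudges its weights after each mistake.
import random

def train(samples, epochs=50, rate=0.1):
    """samples: list of ((x1, x2), label) pairs with label 0 or 1."""
    w = [random.uniform(-0.5, 0.5) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            guess = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = label - guess  # nonzero only when the guess was wrong
            # Shift each weight in the direction that would have made
            # the answer better, then try again on the next example.
            w[0] += rate * error * x1
            w[1] += rate * error * x2
            b += rate * error
    return w, b

# Invented data: (measured depth, darkness) -> 1 if a real hole, 0 if paint.
samples = [((0.9, 0.8), 1), ((0.8, 0.7), 1), ((0.0, 0.9), 0), ((0.1, 0.8), 0)]
w, b = train(samples)
print(1 if w[0] * 0.85 + w[1] * 0.75 + b > 0 else 0)  # expected: 1 (a hole)
```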

"My romantic hope," said Brooks, "is that there's something there that we're just not seeing and putting in our models. After that, it'll take off just like personal computers did."

'Baby' Goes to Market

In the short term, Brooks is betting on toys.

"I think toys and entertainment are going to drive costs [of basic robotic components] down further," Brooks said, much as the home video market reduced the price of video cameras from thousands to hundreds of dollars in a decade.

Brooks, known chiefly for complex endeavors such as Cog and various smaller insectoid devices for terrestrial and space exploration, is chairman of iRobot Corp., a firm that hopes to have its first big commercial hit this fall with "My Real Baby."

To be sold by Hasbro, the chip-driven, sensor-covered infant "learns" to speak, progressing from burbling to words, and interacts unpredictably with its owner. It demands its bottle, feeds with vigorously lifelike sucking motions, goes to sleep when rocked and wakes up with appropriate changes in facial expression and mood. When it needs changing, it gets grumpy. The $100 robo-tot reportedly will have at least two competitors eventually: Mattel's "Miracle Moves Baby" and "My Dream Baby" from a company called MGA, a doll that reportedly grows by expansion and ultimately learns to walk.

Until recently, nobody really knew if there was a market for such things. Many people find the notion of artificial animals or people disconcerting or worse, and some psychologists have severe reservations about the value of kids treating machines like living things.

But that was before AIBO, an experimental robot dog developed by Sony. At a sticker price of $2,500 each, the remote-controlled pooch had 18 motors and a high-tech computer chip, permitting it a repertoire of about two dozen favorite canine behaviors such as sitting, begging and walking, along with a variety of moods.

In June 1999, Sony decided to test the market by trying to sell 3,000 of the dogs in Japan and 2,000 in the United States. In Japan, all the available units were sold in 20 minutes. Over here, it took four days. A second production run of 10,000 "special edition" AIBOs has generated 135,000 orders worldwide.

Finish Line?

The robot revolution now appears so plausible that, within the past few months, a number of America's most celebrated computer innovators have been writing what amounts to an advance obituary for the human race.

Once-promising Homo sapiens, they argue, may be out of business by the end of the 21st century, supplanted by devices that we ourselves perversely created as live-in companion-competitor-successors.

Bill Joy, cofounder and chief scientist of Sun Microsystems, predicts that an intelligent robot--quickly followed by self-replicating "robot species"--could emerge by 2030, and worries that computer researchers are contributing to "the technology that may replace our species."

At a minimum, he writes in Wired, robotics and allied technologies have placed humanity "on the cusp of the further perfection of extreme evil."

Moravec, in his new book, "Robot: Mere Machine to Transcendent Mind," confidently declares that by 2040 a $1,000 computer will have "human competency." Shortly thereafter, "by performing better and cheaper, the robots will displace humans from essential roles. Rather quickly, they could displace us from existence."

Brooks strongly disagrees. "What if they get too smart and want to take over from us? Frankly, I don't think that's any worry. There's not going to be any 'us' to take over from! Humans are going to take the best that robotic technology offers and merge it into our bodies. So the humans will be outstripping the robots at every step. Eventually, the machines will be incorporated inside us."

© 2000 The Washington Post Company





