- CMU-developed Synthetic Interview has commercial potential beyond museum exhibits
Ted and Toni Legarski of Glenmore, Pa., are in Pittsburgh for the weekend. While on a visit to the Senator John Heinz Regional History Center, the couple comes upon a life-size video of George Westinghouse--or at least an actor who looks just like him.
The distinguished-looking gentleman alternately strokes his moustache and grasps the lapels of his suit coat while pacing back and forth thoughtfully behind his office desk.
Built into his desktop is a large interactive touch screen strewn with images of Westinghouse's personal belongings, including a family photo album, a stack of patent papers and a ledger.
As Toni Legarski touches the screen, the hardy, silver-haired inventor turns to face them, introduces himself, and invites her to touch the stack of patents on his desk. When she does, an image of Nikola Tesla appears behind him as he tells the story of his association with Tesla.
She touches the photo album, and her video host comments on each photo. When Ted Legarski touches the ledger, the book opens revealing a series of questions about Westinghouse's life. Each question they pose electronically triggers a different response from the genius himself.
"It's a lot more interesting than just showing the artifacts," Ted Legarski says afterward. "To be able to talk to the man brings it alive. It takes you back to that time."
The Legarskis have just experienced their first Synthetic Interview, a patented technology developed at Carnegie Mellon University that merges computer science with video, audio and still photos. CMU's Entertainment Technology Center produced the "interview" with "Westinghouse" in addition to others with Albert Einstein, Benjamin Franklin, Abraham Lincoln and Charles Darwin.
Underlying the Synthetic Interview is an area of computer science called natural language processing, which analyzes user questions and classifies them by meaning. The sorted questions are then associated with appropriate answers in the form of text, audio or video. In most cases they're video clips.
Whenever a question is posed, its most appropriate answer is retrieved from the database and displayed or played. Right now, visitors can type a question on a keyboard or choose items from a touch screen, though someday, users will be able to talk out loud to their virtual hosts and receive their answer the same way, says Don Marinelli, professor of drama and arts management and executive producer and co-founder of the Entertainment Technology Center.
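The pipeline described above can be sketched in a few lines. This is a minimal illustration only, assuming a keyword-overlap matcher; the actual system's natural language processing and its answer bank are not public, and every question and clip name below is hypothetical.

```python
# Illustrative sketch of the Synthetic Interview retrieval idea: an incoming
# question is matched against canonical questions, each tied to a pre-recorded
# video clip, and the best-matching clip is played. The keyword-overlap scoring
# here is a stand-in for the real system's language processing.
import re

ANSWER_BANK = {
    # hypothetical entries: canonical question -> video clip file
    "how did you meet nikola tesla": "clips/tesla_story.mp4",
    "when did you invent the air brake": "clips/air_brake.mp4",
    "where were you born": "clips/birthplace.mp4",
}

def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into word tokens, dropping punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def best_clip(question: str) -> str:
    """Return the canonical question whose words overlap most with the input."""
    q = tokenize(question)
    return max(ANSWER_BANK, key=lambda canon: len(q & tokenize(canon)))

print(ANSWER_BANK[best_clip("How did you come to meet Tesla?")])
# -> clips/tesla_story.mp4
```

In the museum installation the retrieved "answer" is a video clip of the actor responding in character, which is why every answer must be scripted and filmed in advance.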
"Speech recognition is not quite there yet, especially for kids, whose speech isn't as predictable as adults," Marinelli says, "but it's getting there."
Synthetic Interviews are the product of a 20-year-plus collaboration between Scott Stevens, research professor in the Entertainment Technology Center and senior systems scientist in the Human-Computer Interaction Institute; and Mike Christel, research professor in the ETC and senior systems scientist in the Computer Science Department.
The interviews, Stevens says, incorporate technology derived from two innovations developed at Carnegie Mellon: the search algorithms originally developed for the Lycos web crawler, tweaked to parse sentences; and Informedia, a digital information system that enables quick retrieval of vast amounts of information from non-textual media such as still and moving pictures and sound recordings. Developed beginning in 1994 by Howard Wactlar, now vice provost for research computing, Informedia captures, processes and classifies non-verbal media by creating verbal metadata descriptions, so that photos, video and audio can be quickly sorted, retrieved and delivered to users.
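The Informedia idea, as the article describes it, can be illustrated with a toy index: each non-textual item carries a verbal metadata description, so ordinary text search can retrieve photos, video and audio. This is a hedged sketch, not Informedia's actual design; all file names and descriptions are invented for illustration.

```python
# Toy version of metadata-based media retrieval: store each media file with a
# verbal description, then answer text queries by searching the descriptions.
from dataclasses import dataclass, field

@dataclass
class MediaItem:
    path: str      # location of the photo/video/audio file
    metadata: str  # verbal description used for search

@dataclass
class MediaIndex:
    items: list[MediaItem] = field(default_factory=list)

    def add(self, path: str, metadata: str) -> None:
        self.items.append(MediaItem(path, metadata))

    def search(self, query: str) -> list[str]:
        """Return paths of items whose metadata mentions every query word."""
        words = query.lower().split()
        return [it.path for it in self.items
                if all(w in it.metadata.lower() for w in words)]

index = MediaIndex()
index.add("photos/album_p3.jpg", "Westinghouse family portrait, circa 1890")
index.add("clips/tesla_story.mp4", "Westinghouse recounts meeting Nikola Tesla")

print(index.search("tesla"))  # -> ['clips/tesla_story.mp4']
```

The real system generates such descriptions automatically (for example from speech transcripts and image analysis) rather than by hand, which is what makes vast non-textual archives searchable.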
Synthetic Interview has potential beyond museum exhibits. MedRespond, a CMU spinout company supported by the Pittsburgh Life Sciences Greenhouse, is commercializing the technology for use in the health care industry. The company is producing a synthetic interview that will be distributed on MedScape.com, a website for health care professionals. Scheduled to go live in December, the MedScape synthetic interview will be part of a continuing medical education program that helps physicians manage the risks associated with asthma medications used by children.
Following a video panel discussion by asthma experts, the site will open for viewer questions. Each question will be analyzed by natural language processing, then answered from a Synthetic Interview database of video responses recorded by the physicians who appeared in the panel discussion.
Virginia Pribanic, CEO of MedRespond, sees a bright future for synthetic interviews in these kinds of uses.
"Besides the MedScape (program), we have successful pilot projects for childhood nutrition, a patient cardiovascular management project that's in development and an 'ask the doctor' project that's currently in discussion, all with major health care players," she says.
Pribanic is also working independently with a West Coast group that is developing a social networking application for the technology. "Companies whose software can answer questions are going to be worth many billions of dollars," she says.