Takeo Kanade is the U.A. and Helen Whitaker University Professor of Computer Science
and Robotics. He was the Director of the Carnegie Mellon Robotics Institute
from 1992 to 2001. TK60, a symposium in honor of Dr. Kanade’s
60th birthday, was held on March 8-9, 2007.
Tell us a little about yourself. Let’s start with where you are from.
I was born in Japan in 1945. All of my education was in Kyoto, the “Old Capital of Japan,” and I became a junior faculty member at Kyoto University. In 1974, I met Raj Reddy, from Carnegie Mellon, in Kyoto. This was when I was still a student, writing face recognition code in assembly language! That year, Kyoto was hosting a US-Japan seminar. The seminar was really intended for professors to present their papers, not for graduate students, but I somehow managed to get an extra slot on the schedule and presented my face recognition research. A year later, I met Allen Newell, who came to Kyoto for a two-day visit during a sabbatical. I remember presenting some of my work to him, I think it was outdoor scene analysis, and then giving him and his wife a tour around Kyoto. I discussed my wish to visit a university in the United States. Newell and Reddy arranged it for me, and I came to Carnegie Mellon as a visitor in 1977. After a one-and-a-half-year stay, I went back to Kyoto, and then returned in 1980 as regular faculty. Since then, I’ve been here for 26 years!
What led to your decision to move to the United States?
I guess you could say that I decided to move because I saw it as an “intellectual adventure.” When I first came to the US, I wanted to see more advanced computer science. Back then, there was a large gap between Japanese and American technologies. When I first came here in 1977, I was very impressed because I was given a terminal at home with which I could connect to the PDP-10 at the department anytime, 24 hours a day, even though I was only a visitor. AI and computer architecture people were working together. I could talk with people like Herb Simon in passing in the computer terminal room. The way I saw it, I could be three times more productive here.
How have you seen the Carnegie Mellon Robotics Institute change from when you first came here in 1980?
One of the biggest changes is that we started out creating mostly industrial robotic applications. In the 1980s, the Robotics Institute’s slogan was “productivity.” Japanese manufacturing productivity was very high, and the US felt that it needed to catch up. Since then, the Institute has moved toward more advanced intelligent robotics, primarily focusing on the military, space, and entertainment. The Robotics Institute has also seen a large increase in size. When I was the director, from 1992 to 2001, we grew the department from 120 to 300 people. That was a large growth period for robotics. Now robotics is one of the largest departments at Carnegie Mellon.
You’ve written over 300 technical papers and hold over 20 patents in a variety of different fields. Where do you get your ideas for all of these different projects?
There’s really no secret to this—if you work hard, the papers and patents will come.
What is the most exciting project you have ever worked on?
I’ve worked on a number of technical and theoretical projects
with very exciting characteristics. One such characteristic is that simplicity works.
For example, I created a “structure from motion” technique
with which you can convert a videotape of you walking around your house into
a 3D model. This method, usually called Tomasi-Kanade factorization, is
very neat mathematics that is surprisingly simple. Many
other university professors have told me that when they teach this method to
their students, the students can write the code for it and make it work
by the end of the day. While other methods require a lot more background
knowledge and a lot of work to implement, with Tomasi-Kanade factorization
the implementation can be as short as a few tens of lines of MATLAB code,
and it works.
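To illustrate that simplicity, here is a minimal sketch of the factorization idea in Python with NumPy, standing in for the few-lines-of-MATLAB implementation described above. The function and variable names are my own, and this sketch omits the metric-upgrade step of the full method, so the recovered motion and shape are only determined up to an affine ambiguity:

```python
import numpy as np

def tomasi_kanade(W):
    """Factor a 2F x P measurement matrix W of P feature points tracked
    over F frames (rows 0..F-1 hold x-coordinates, rows F..2F-1 hold
    y-coordinates) into motion M (2F x 3) and shape S (3 x P)."""
    # Step 1: register the measurements by subtracting each row's mean,
    # which removes the per-frame translation.
    W_centered = W - W.mean(axis=1, keepdims=True)
    # Step 2: under orthographic projection the registered matrix has
    # rank at most 3, so take the rank-3 truncation of its SVD.
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    sqrt_s = np.sqrt(s[:3])
    M = U[:, :3] * sqrt_s          # camera motion, up to an affine ambiguity
    S = sqrt_s[:, None] * Vt[:3]   # 3D shape, up to the same ambiguity
    return M, S
```

The rank-3 observation is the whole trick: stacking all tracked image coordinates into one matrix makes the camera/shape structure fall out of a single SVD, which is why students can get it running in a day.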
We heard that your “virtualized reality” concept was used to create CBS’s “EyeVision” for the 2001 Super Bowl. Was that originally your idea?
CBS actually proposed the idea to us after hearing about our research and looking us up on the internet. The EyeVision project actually turned out to be a very painful effort, because we were only given six months to build the system. We didn’t even have a signed contract until 10 days before the Super Bowl, which was scary because there was a risk that CBS wouldn’t pay. This made securing equipment really challenging, and I ended up having to order a million dollars of equipment from a friend in Japan because no American companies would accept the order without secure financial backing.
We somehow finished it, though. EyeVision is a lot like some of the action scenes in the movie “The Matrix,” where actors are filmed by a ring of cameras. Using this technique, you can create a “stop time” replay in which the viewer feels like they are spinning around the actors. The challenge EyeVision faced is that you never know where the action is going to take place on a football field, so you have to have some 30 robotic cameras, placed on the upper deck of the football stadium, and you have to control them so that they point to the right place at every moment. We ended up using about 18 km of wiring to connect those cameras!
So, does this mean that you are a big sports fan?
Of course. Go Steelers! I like to tell people that I am the only professor
ever to appear in a Super Bowl broadcast. That is true. I actually appeared
in it personally! In fact, the development contract specified that I appear
for 25 seconds during the Super Bowl broadcast, in addition to two sets
of 30-second spots of free airtime for Carnegie Mellon to advertise during
the NCAA Basketball Finals. So, that was a moment of fame.
You’ve done a lot of work with different autonomous robots, like NavLab and Robocopter. Which was your favorite one to work on, and why?
Both are my favorites. I started on NavLab with my colleagues in the early ’80s, and in the mid ’90s it drove across the US without a driver. Robocopter started in the early ’90s with a then-student, won a competition in the late ’90s, and is now the best of its kind. Highly visible autonomous systems are a lot of fun. They are not easy, though. They are only possible with really hard-working people.
It seems like you’ve done a lot of things in your life… can you think of something you haven’t done yet that you would like to do someday?
I am not sure if I have done that much. Anyway, something that I’m
really occupied with right now is Quality of Life Technology. We’re
hoping to use intelligent systems to augment people’s minds and
bodies so that, for example, older people or people with disabilities
can live more independently and longer at home. This is really good for
everybody: older people, caregivers, and society. We hope to create
machines that understand people’s intents as well as their capabilities
in order to determine and satisfy their needs. We’ve found that
more is not always better. Giving someone more help than they need
can make them feel uncomfortable because it makes them feel less independent.
And lastly… do you have any advice for Carnegie Mellon students?
“Think like an amateur. Execute as an expert. That is the secret.”