Designing robots for human interaction is a multidisciplinary challenge that balances appearance and behavioral requirements. A robot's appearance evokes interaction affordances and triggers emotional responses; its behavior communicates internal states and can support action coordination and joint planning. Good human-robot interaction (HRI) design should enlist both facets to enable untrained humans to work fluently and intuitively with the robot.
In this talk I will present the design approach we have used over the past decade to develop several non-humanoid robotic systems. The principles underlying both appearance and behavioral design are movement, timing, and embodiment, acknowledging that human perception is highly sensitive to spatial cues, physical movement, and visual affordances.
We design our robots' appearance using 3D animation and industrial design techniques. Gestures, movements, and behaviors drive decisions on the robot's surface and mechanical design. Starting from freehand sketches, the robot's personality is built as a computer animated character, setting the parameters and limits of the robot's degrees of freedom. Then, material and form studies are combined with functional requirements to settle on the final system design. In this talk I will exemplify this process on the design of three non-humanoid robots: a robotic desk lamp, a robotic interactive musician, and a robotic speaker dock listening companion.
On the behavioral side, we design around the notion of human-robot fluency: the ability to smoothly mesh the robot's activity with that of the human partner. I will present computational cognitive architectures based on timing, joint movement, and embodied gestures, as well as experimental studies of users' responses to the timing of nonverbal acts. Specifically, I will discuss anticipatory action in a collaborative construction task, and a model of priming through embodied perceptual simulation. Both systems have been shown to have significant effects on the fluency of a human-robot team, and on humans' perception of the robot's intelligence, commitment, and even gender. Finally, I will present an interactive robotic Jazz improvisation system that uses embodied gestures for musical expression, enabling simultaneous, yet responsive, joint improvisation.
Dr. Guy Hoffman is Assistant Professor in the School of Communication at IDC Herzliya, and co-director of the IDC Media Innovation Lab. Previously, he was a research fellow at the Georgia Institute of Technology and at MIT. Hoffman holds a Ph.D. from MIT in the field of human-robot interaction, and an M.Sc. in Computer Science from Tel Aviv University. He also studied animation at Parsons School of Design in New York City. His research deals with human-robot interaction and collaboration, embodied cognition for robots, anticipation and timing in HRI and multi-agent MDPs, nonverbal communication in HRI, entertainment, theater, and musical performance robotics, and non-humanoid robot design.
Among other projects, Hoffman developed the world's first human-robot joint theater performance, as well as the first real-time improvising human-robot Jazz duet. Hoffman has designed several robots, including a robotic desk lamp, "AUR", which won the IEEE International Robot Design Competition. His research papers have won several top academic awards, including Best Paper awards at HRI and robotics conferences in 2004, 2006, 2007, 2008, and 2010. He was software and animation lead on the World Expo Digital Water Pavilion, one of TIME magazine's "Best Inventions of the Year", and was commissioned for a title-page illustration of the New York Times "Week in Review". Hoffman's work has been exhibited worldwide and covered in the international press, including CNN, the BBC, The New York Times, Süddeutsche Zeitung, Haaretz, Science, the New Scientist, PBS, NPR, and Comedy Central. In both 2010 and 2012, he was selected as one of Israel's most promising researchers under forty.
Faculty Host: Jodi Forlizzi
forlizzi [atsymbol] cs.cmu.edu