I work on manipulation. My goal is to enable robots to robustly and gracefully interact with the world to perform complex manipulation tasks in uncertain, unstructured, and cluttered environments.
I want to make this interaction faster, safer, and more elegant, and to achieve it with simpler actuation.
To this end, I founded and direct the Personal Robotics Lab, co-direct the Manipulation Lab, and lead the HERB effort of the QoLTbots systems area and the Mobile Manipulation effort of the Mobility and Manipulation Thrust at the Quality of Life Technologies NSF ERC.
I am currently focusing on two topics: Physics-based Manipulation and The Mathematics of Human-Robot Interaction. They are heavily intertwined, both born out of the goal of enabling robots to perform complex manipulation tasks with and around people.
Physics-based Manipulation:
I focus on using physics in the design of actions, algorithms, and hands for manipulation:
Manipulation is more than pick-and-place. We are developing nonprehensile physics-based actions and algorithms that reconfigure the clutter obstructing a primary task by pulling, pushing, sweeping, and sliding it out of the way.
We have also recently shown that a class of tactile localization problems can be formulated to be submodular.
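As a reminder of what that structure buys (a generic statement, not the specific objective in our papers): a set function over sensing actions is submodular when it exhibits diminishing returns, and greedy action selection then carries a constant-factor guarantee.

```latex
% Diminishing returns: an extra sensing action a helps less once more has been sensed.
\[
  F(A \cup \{a\}) - F(A) \;\ge\; F(B \cup \{a\}) - F(B)
  \qquad \text{for all } A \subseteq B \subseteq \Omega,\ a \in \Omega \setminus B.
\]
% For monotone submodular F with F(\emptyset) = 0 and a budget of k actions,
% greedy selection is near-optimal (Nemhauser et al., 1978):
\[
  F(A_{\mathrm{greedy}}) \;\ge\; \left(1 - \tfrac{1}{e}\right) \max_{|A| \le k} F(A).
\]
```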
I am particularly fond of functional gradient methods, which have been used with much success in physics.
We have developed CHOMP, a functional gradient optimizer for robot motion planning, and variants like GSCHOMP that exploit the structure of manipulation problems.
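To make the flavor of these methods concrete, here is a minimal sketch of a CHOMP-style covariant update on a discretized trajectory; the finite-difference smoothness metric and the assumed obstacle-cost gradient below are illustrative, not our released implementation.

```python
import numpy as np

def chomp_style_update(xi, obstacle_grad, eta=10.0, n_iters=100):
    """A CHOMP-flavored covariant gradient step on a discretized trajectory.

    xi            : (T, d) array of waypoints.
    obstacle_grad : callable (T, d) -> (T, d), gradient of an obstacle cost
                    (CHOMP derives this from a signed distance field; here it
                    is simply assumed to be given).
    """
    T, _ = xi.shape
    # Finite differencing: (K @ xi)[t] approximates the velocity at waypoint t.
    K = np.eye(T) - np.eye(T, k=-1)
    # A = K^T K is the smoothness metric that makes the step covariant.
    A = K.T @ K
    A_inv = np.linalg.inv(A)

    for _ in range(n_iters):
        smooth_grad = A @ xi                     # gradient of 0.5 * ||K xi||^2
        grad = smooth_grad + obstacle_grad(xi)   # functional gradient of the total cost
        # Precondition by A^{-1}: a perturbation spreads smoothly over the whole
        # trajectory instead of moving a single waypoint.
        xi = xi - (1.0 / eta) * (A_inv @ grad)
    return xi
```

The A^{-1} preconditioning is what distinguishes the covariant update from plain gradient descent; in practice the endpoints are pinned and only interior waypoints are updated, bookkeeping this sketch omits.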
I believe simple hands can do complex things. I am working on building robustness into the design and algorithms of simple hands, embracing physics and underactuation to stabilize objects.
The Mathematics of Human-Robot Interaction:
I focus on formalizing Human-Robot Interaction principles using machine learning, motion planning, and functional gradient algorithms:
We have been working on enabling seamless and fluent human-robot handovers. We have developed a taxonomy of human and dog handovers, designed expressive grasps and motions, and used time-series analysis to learn the communication of intent. Our JHRI paper summarizes this work.
Our latest work formalizes predictable and legible motion as inference problems, bringing together concepts in psychology, animation, and machine learning.
We are presently working on generating legible motion via functional gradient optimization.
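One way to write the inference view down (a sketch in notation I am assuming here, not the exact formulation in the papers): predictability asks for the trajectory an observer expects given the goal, while legibility asks for the trajectory whose prefix lets an observer infer the goal as early as possible.

```latex
% Predictable motion: the trajectory an observer expects given the goal G,
% with trajectory probability modeled via a cost C (a modeling assumption).
\[
  \xi_{\mathrm{predictable}} = \arg\max_{\xi \in \Xi_{S \to G}} P(\xi \mid G),
  \qquad P(\xi \mid G) \propto \exp\bigl(-C(\xi)\bigr).
\]
% Legible motion: score how well the trajectory so far supports Bayesian
% inference of the intended goal (the weighting w(t) is an assumption).
\[
  \xi_{\mathrm{legible}} = \arg\max_{\xi} \int P\bigl(G \mid \xi_{S \to \xi(t)}\bigr)\, w(t)\, dt,
  \qquad
  P(G \mid \xi_{S \to Q}) \propto P(\xi_{S \to Q} \mid G)\, P(G).
\]
```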
I have greatly enjoyed collaborating with HRI researchers, and our joint exploration has broadened my understanding of humans, robots, and mathematics.
Alvaro Collet,
MS Robotics (coadvised with Chris Atkeson) 2007-09,
Thesis: Object Recognition and Full Pose Registration from a Single Image for Robotic Manipulation
Garratt Gallagher,
MS Robotics (coadvised with Drew Bagnell) 2007-09,
Thesis: GATMO: A Generalized Approach to Tracking Movable Objects
Martin Herrmann,
MS Universität Karlsruhe (coadvised with Dr.-Ing. Uwe Hanebeck) 2009,
Thesis: Active Scene and Object Reconstruction for Robotic Manipulation from Vision and Laser
Christopher Dellin,
MS Robotics 2009-12,
Thesis: Configuration Space Geometry of Multi-Object Manipulation
Kyle Strabala,
MS Robotics 2010-12,
Thesis: Learning the Communication of Intent Prior to Physical Collaboration
My old grad school webpage from 2005.
Siddhartha Srinivasa