Paper

S.B. Kang and K. Ikeuchi, "Toward automatic robot instruction from perception -- Mapping human grasps to manipulator grasps," to appear in IEEE Trans. on Robotics and Automation, vol. 12, no. 6, Dec. 1996.

Abstract

Conventional methods for programming a robot are either inflexible or demand significant expertise. While the notion of automatic programming by high-level goal specification addresses these issues, the overwhelming complexity of planning manipulator grasps and paths remains a formidable obstacle to practical implementation. Our approach is to program the robot by direct human demonstration. Our system observes a human performing the task, recognizes the human grasp, and maps it onto the manipulator. Using human actions to guide robot execution greatly reduces the planning complexity.
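To make the observe-recognize-map-execute cycle concrete, the following Python sketch walks through it in toy form. The data type, the palm-contact recognition rule, and the three-fingered gripper are illustrative assumptions, not the system described in the paper.

from dataclasses import dataclass

# A minimal, self-contained sketch of the observe -> recognize -> map -> execute
# cycle outlined above. Every type and rule here is a hypothetical placeholder
# chosen for illustration, not the authors' implementation.

@dataclass
class Observation:
    fingers_in_contact: int   # human fingers touching the object
    palm_in_contact: bool     # whether the palm also touches the object

def recognize_human_grasp(obs: Observation) -> str:
    # Toy rule: palm contact suggests a power (enveloping) grasp,
    # fingertip-only contact suggests a fingertip precision grasp.
    return "power" if obs.palm_in_contact else "fingertip precision"

def map_to_manipulator(grasp_type: str, obs: Observation, gripper_fingers: int = 3) -> dict:
    # Toy mapping: a power grasp wraps as many gripper fingers as possible;
    # a precision grasp opposes two fingertips.
    used = gripper_fingers if grasp_type == "power" else min(2, obs.fingers_in_contact)
    return {"grasp_type": grasp_type, "gripper_fingers_used": used}

if __name__ == "__main__":
    obs = Observation(fingers_in_contact=5, palm_in_contact=True)
    grasp = recognize_human_grasp(obs)
    print(map_to_manipulator(grasp, obs))
    # -> {'grasp_type': 'power', 'gripper_fingers_used': 3}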

After the human task execution has been recorded, temporal task segmentation is carried out to identify task breakpoints. This step facilitates human grasp recognition and object motion extraction for robot execution of the task. This paper describes how an observed human grasp can be mapped to that of a given general-purpose manipulator for task replication.
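As a loose illustration of breakpoint detection, the Python sketch below flags frames where a hand speed profile crosses a small threshold, on the assumption that the hand slows down while grasping or releasing the object. Both the thresholding rule and the sample data are assumptions made for illustration; they are not the segmentation procedure used in the paper.

from typing import List

def find_breakpoints(hand_speed: List[float], threshold: float = 0.05) -> List[int]:
    """Return frame indices where the hand speed profile crosses the threshold,
    i.e. candidate boundaries between task phases.

    This thresholding rule is an illustrative assumption, not the paper's method.
    """
    breakpoints = []
    for i in range(1, len(hand_speed)):
        slowed_down = hand_speed[i - 1] >= threshold > hand_speed[i]
        sped_up = hand_speed[i - 1] < threshold <= hand_speed[i]
        if slowed_down or sped_up:
            breakpoints.append(i)
    return breakpoints

# Example: speeds drop near zero while the hand grasps or releases the object.
speeds = [0.30, 0.25, 0.10, 0.02, 0.01, 0.03, 0.20, 0.28, 0.12, 0.02, 0.01]
print(find_breakpoints(speeds))   # -> [3, 6, 9]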

Planning the manipulator grasp based upon the observed human grasp is done at two levels: the functional level and the physical level. Initially, at the functional level, grasp mapping is carried out at the level of virtual fingers, where a virtual finger is a group of real fingers acting against an object surface in a similar manner. Subsequently, at the physical level, the geometric properties of the object and the manipulator are considered in fine-tuning the manipulator grasp. Our work concentrates on power (enveloping) grasps and fingertip precision grasps. We conclude by showing an example of a complete programming cycle, from human demonstration to robot execution.
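The following Python sketch illustrates the functional-level idea in toy form: real fingers pressing on the same object surface are grouped into virtual fingers, and the manipulator's fingers are then distributed across those groups. The contact labels, the grouping rule, and the three-fingered gripper are assumptions made for illustration, not the mapping procedure developed in the paper.

from collections import defaultdict
from typing import Dict, List

def group_virtual_fingers(contacts: Dict[str, str]) -> Dict[str, List[str]]:
    """contacts maps each human finger to the object face it presses against;
    fingers acting on the same face form one virtual finger."""
    virtual_fingers = defaultdict(list)
    for finger, face in contacts.items():
        virtual_fingers[face].append(finger)
    return dict(virtual_fingers)

def assign_manipulator_fingers(virtual_fingers: Dict[str, List[str]],
                               gripper_fingers: List[str]) -> Dict[str, List[str]]:
    """Round-robin the gripper's fingers over the virtual fingers,
    starting with the largest group."""
    assignment = {face: [] for face in virtual_fingers}
    faces = sorted(virtual_fingers, key=lambda f: len(virtual_fingers[f]), reverse=True)
    for i, gripper_finger in enumerate(gripper_fingers):
        assignment[faces[i % len(faces)]].append(gripper_finger)
    return assignment

# Example: a fingertip precision grasp of a box, thumb opposing two fingers.
human_contacts = {"thumb": "face_A", "index": "face_B", "middle": "face_B"}
vf = group_virtual_fingers(human_contacts)
print(vf)  # {'face_A': ['thumb'], 'face_B': ['index', 'middle']}
print(assign_manipulator_fingers(vf, ["g1", "g2", "g3"]))
# {'face_A': ['g2'], 'face_B': ['g1', 'g3']}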