Research Abstracts - 2006
Imitation Learning of Whole-Body Grasps

Kaijen Hsiao & Tomas Lozano-Perez


We are creating a system that uses imitation learning to teach a robot to grasp objects with both hand grasps and whole-body grasps, which use the arms, torso, and entire hand surfaces in addition to the fingertips.


Humans often learn to manipulate objects by observing other people; in much the same way, robots can use imitation learning to acquire useful skills. To give our robots the full range of object manipulation abilities that humans possess, we would like them to manipulate objects using not only their fingertips but other body surfaces as well. Moreover, building a robot that can learn to use arbitrary surfaces of its arms and torso to pick up objects forces us to create a framework that could potentially generalize to manipulation tasks more complex than grasping.


Demonstration grasp trajectories are created by teleoperating a simulated robot to pick up simulated objects; each trajectory is stored as a sequence of keyframes at which contacts with the object are gained or lost. When presented with a new object, the system compares it against the objects in a stored database to select a demonstrated grasp used on a similar object. In our somewhat simplified implementation, both objects are modeled as combinations of primitives (boxes, cylinders, and spheres), and the template grasp is adapted to the new object by assuming that the new object is a transformed version of the template. Specifically, 'chunks' containing one or more primitives from the template object are matched with 'chunks' from the new object, and matched chunks are grasped the same way. A trajectory is then found that moves through the keyframes of the adapted grasp sequence, and the full trajectory is tested for feasibility by executing it in simulation.
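The matching-and-adaptation step above can be sketched in code. This is a minimal illustration, not the system's actual implementation: the class names, the similarity score, and the axis-wise rescaling rule for contact points are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Primitive:
    """One shape primitive in an object model (hypothetical representation)."""
    kind: str                          # "box", "cylinder", or "sphere"
    dims: Tuple[float, float, float]   # extents along x, y, z

@dataclass
class Keyframe:
    """A point in the grasp sequence where contacts are gained or lost.
    A real system would also store the hand/body posture and which
    surfaces are in contact; here we keep only contact positions
    expressed in the template object's frame."""
    contacts: List[Tuple[float, float, float]]

def similarity(a: List[Primitive], b: List[Primitive]) -> float:
    """Crude shape similarity for database lookup: primitives must match
    in kind, and closer dimensions score higher (0 means no match)."""
    if len(a) != len(b):
        return 0.0
    score = 0.0
    for pa, pb in zip(a, b):
        if pa.kind != pb.kind:
            return 0.0
        score += 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(pa.dims, pb.dims)))
    return score / len(a)

def adapt_keyframe(kf: Keyframe, template: Primitive, target: Primitive) -> Keyframe:
    """Treat the target primitive as a scaled version of the template:
    rescale each contact point axis-by-axis by the ratio of extents."""
    scale = tuple(t / s for t, s in zip(target.dims, template.dims))
    return Keyframe([(x * scale[0], y * scale[1], z * scale[2])
                     for x, y, z in kf.contacts])

# Example: adapt a contact on a 2x2x2 box to a 4x2x1 box.
template = Primitive("box", (2.0, 2.0, 2.0))
target = Primitive("box", (4.0, 2.0, 1.0))
adapted = adapt_keyframe(Keyframe([(1.0, 0.0, 1.0)]), template, target)
```

In the full system each adapted keyframe sequence would still have to pass the feasibility check: a motion planner connects the keyframes and the whole trajectory is executed in simulation before the grasp is accepted.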


Currently, the system successfully uses this method to pick up 92 out of 100 randomly generated test objects (each composed of up to three primitives, drawn from boxes, cylinders, and spheres, arranged in a row) in simulation. Sample adapted grasps are shown in Figure 1. We are now examining our system under a transfer learning framework, in which learning on one task is used to speed up learning on a somewhat different task. For instance, in one experiment the system learns only a few grasps of boxes alone, then reuses part of its knowledge of how to pick up boxes to speed up learning a larger set of grasps on a second set of objects made of up to three primitives in a row. We are also examining methods of extending our overall framework to more general situations.

Figure 1: Example Demonstration and Adapted Grasps

Research Support:

This research is primarily supported by a DARPA grant on transfer learning.


Computer Science and Artificial Intelligence Laboratory (CSAIL)
The Stata Center, Building 32 - 32 Vassar Street - Cambridge, MA 02139 - USA
tel:+1-617-253-0073 - publications@csail.mit.edu