Research Abstracts - 2007

Agile Legged Robot Locomotion

Robert T. Effinger, Andreas G. Hofmann & Brian C. Williams

Abstract

This abstract outlines my research plans over the coming year. I plan to work on a flexible and adaptive approach to legged robot locomotion, called Hybrid Model-based Control (HMBC). HMBC was developed by the Model-based and Embedded Robotics (MERS) group at MIT [8], and accepts as input flexible state and temporal goals, which are then leveraged to adapt to disturbances and failures. In the near term, I plan to contribute to this line of research by developing and testing agile gaits for a LittleDog robot using HMBC, and by helping to build an overhead localization system for LittleDog. In the longer term, I plan to help develop a receding-horizon version of HMBC. Ultimately, for my Ph.D., I plan to develop novel extensions to the HMBC architecture and, through a NASA Graduate Student Research Program (GSRP) fellowship at Johnson Space Center, to apply these concepts to NASA’s robotic explorers.

Introduction

NASA’s Vision for Space Exploration [1] outlines a bold plan to explore our solar system and beyond. Key milestones include:

  • A human return to the Moon by 2020 with robotic precursor missions no later than 2008.
  • Robotic exploration of Mars, Jupiter’s moons, asteroids and other bodies to search for evidence of life, to understand the history of the solar system, and to search for resources.
  • Assembling large-scale telescopes on orbit and on the Moon to search for Earth-like planets and habitable environments around other stars.

Each of these milestones is built on a common underlying tenet: that the use of robotic assistants will decrease mission costs, increase scientific return, and reduce astronaut risk. More specifically, NASA envisions dexterous robotic assistants scouting out and assembling a lunar base, climbing into the cracks and crevices of foreign celestial bodies in search of evidence for life, and scurrying lightly across vibration-sensitive truss assemblies to perform on-orbit construction and maintenance.

As a first step, NASA has begun to develop a fleet of robotic assistants, including Robonaut [2,3,4] and Spidernaut [5], shown in Figures 1a and 1b, respectively. These robots are dexterous, powerful, and mechanically capable of implementing the myriad of complex tasks called for in NASA’s Vision for Space Exploration.

Figure 1a: Robonaut*            Figure 1b: Spidernaut*

A central problem remains, however. Techniques that can adequately plan for and control these robots autonomously have not yet been developed. Currently, Robonaut and Spidernaut operate only under tele-operator control and in predictable environments. Furthermore, existing locomotion techniques for humanoid and arachnid class robots are overly conservative, forcing them to move slowly and within narrowly defined regions of controllability. Overcoming these limitations will be a key contribution towards achieving NASA’s Vision for Space Exploration.

NASA’s next generation of robotic assistants will be expected to operate reliably in harsh and unpredictable environments such as the Moon, Mars, and outer-space. To do so they will need to be capable of robust and agile locomotion such as: climbing, balancing, running, and jumping. In addition, they will need to adapt to disturbances and failures, such as stumbling over a rock, missing a rung on a ladder, and falling down.

Existing autonomous control techniques, most of which come from industrial robotics, are not intended to be flexible or adaptive. They assume static, controlled environments, and achieve speed and precision through high-gain controllers that closely follow a predefined trajectory of set-points. If an unexpected condition occurs, the robot immediately enters a safe mode to avoid injuring itself or nearby humans. This paradigm of autonomous control is not sufficient for the types of agile locomotion called for in NASA’s Vision for Space Exploration.

NASA’s next generation of robotic assistants will require a new type of autonomous control system: one that is flexible and adaptive, and that operates safely in unstructured environments and in close proximity to humans. In addition, this system must be capable of operating non-conservatively, at the boundaries of controllability, in order to achieve high performance. In the next few sections, we outline a plan to develop such a system.

Prior Work – Hybrid Model-based Control (HMBC)

Enabling robust, stable, and high-performance locomotion for articulated robots, such as Robonaut and Spidernaut, is difficult for two main reasons:

  1. Complex articulated robots, such as bipeds, quadrupeds, and arachnids, are high-dimensional, nonlinear systems that have unpredictable and highly nonlinear interactions with their surroundings.
  2. Computing flexible control actions that can adapt to disturbances and failures on-the-fly is computationally intensive.

Next, we describe these two difficulties in more detail, and describe how HMBC is able to mitigate them.

1.) Handling high-dimensionality and nonlinearity:

To handle the high-dimensionality and nonlinearity inherent to articulated robots, HMBC uses a feedback linearizing multivariable controller, called a dynamic virtual model controller (DVMC) [6], which transforms a nonlinear, tightly coupled system into a loosely coupled set of linear 2nd-order single-input single-output (SISO) systems. This SISO abstraction technique has been used to demonstrate complicated tasks such as balancing and recovering from lateral push disturbances on walking bipeds [7], as shown in Figure 2. A key contribution of this research will be to implement DVMCs for NASA’s arachnid class robots.

Figure 2: A biped recovering from a lateral push disturbance.**
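The feedback-linearizing idea behind the DVMC can be illustrated with a minimal computed-torque sketch. This is not the MERS implementation; the function names and gain values are illustrative assumptions. The point is that cancelling the nonlinear rigid-body terms leaves each joint behaving as an independent linear 2nd-order SISO system.

```python
import numpy as np

def computed_torque(M, C, g, q_dot, v):
    """Feedback-linearizing (computed-torque) control law.

    For rigid-body dynamics  M(q) q'' + C(q, q') q' + g(q) = tau,
    choosing  tau = M v + C q' + g  cancels the nonlinear coupling,
    so each joint behaves as an independent double integrator q'' = v.
    """
    return M @ v + C @ q_dot + g

def siso_pd(q, q_dot, q_ref, kp=25.0, kd=10.0):
    """Per-joint linear 2nd-order (SISO) servo on the linearized system."""
    return kp * (q_ref - q) - kd * q_dot
```

With the nonlinear terms cancelled, each decoupled joint can then be servoed by an ordinary PD loop such as `siso_pd`, which is what makes the loosely coupled SISO abstraction tractable for a higher-level executive.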

2.) Computing flexible and adaptive control actions:

To compute flexible and adaptive control actions, HMBC uses a novel autonomous control architecture, called a Hybrid Model-based Executive [8] (HMBE). An HMBE takes as input a special type of control program called a qualitative state plan (QSP). QSPs are novel in that they support flexible state and temporal goals, which are then leveraged by the HMBE to adapt to disturbances on-the-fly [9]. An example QSP of a walking biped is shown in Figure 3a. The DVMC, as shown in Figure 3b, allows a hybrid task-level executive to control the robot as a set of loosely coupled, linear systems, which makes it easier to predict the future evolution of the robot’s state resulting from control inputs [8].

Figure 3a: An Example QSP**            Figure 3b: HMBC Architecture**

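As a rough illustration of what makes a QSP flexible, the sketch below models each plan step as a goal region and a temporal window rather than a fixed set-point and time. This is a hypothetical representation for exposition, not the HMBE's actual data structures.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """One step of a qualitative state plan: a goal region the state must
    reach within a flexible [lb, ub] duration (names are illustrative)."""
    name: str
    goal_region: dict   # state variable -> (low, high) allowed interval
    duration: tuple     # (lower bound, upper bound) in seconds

@dataclass
class QSP:
    activities: list = field(default_factory=list)

    def add(self, act):
        self.activities.append(act)

    def satisfied(self, name, state):
        """Check whether a state lies inside an activity's goal region."""
        act = next(a for a in self.activities if a.name == name)
        return all(lo <= state[v] <= hi
                   for v, (lo, hi) in act.goal_region.items())

# A fragment of a hypothetical walking plan: shift the center of mass over
# the left foot within 0.5-2.0 s (flexible), then lift the right foot.
plan = QSP()
plan.add(Activity("com_over_left_foot", {"com_x": (0.05, 0.15)}, (0.5, 2.0)))
plan.add(Activity("right_foot_up", {"rfoot_z": (0.08, 0.20)}, (0.3, 1.5)))
```

Because each goal is a region and each duration an interval, an executive has slack to absorb a disturbance (e.g., a push that delays the weight shift) without replanning from scratch.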
Proposed Extensions

Key contributions of my proposed research will be to investigate new ways to extend the HMBC autonomous control architecture, including:

  • Applying the architecture to quadrupeds and arachnid class robots in addition to bipeds.
  • Building a repertoire of agile QSPs for quadruped and arachnid class robots, such as climbing, balancing, running, and jumping.
  • Helping to build an overhead localization system for LittleDog.
  • Helping to build a receding-horizon version of the HMBE controller.
  • Automatically selecting corrective strategies in response to larger disturbances.
  • Incorporating measures of probability of success into the QSP representation.

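The receding-horizon idea above can be sketched as a loop that plans over only a short window of upcoming activities, commits the first resulting control, and then re-plans from the newly observed state. Here `solve` and `execute` are placeholders standing in for the HMBE planner and the DVMC layer, not real interfaces.

```python
def receding_horizon_control(activities, state, solve, execute, horizon=3):
    """Receding-horizon executive sketch (hypothetical interfaces).

    At each step: plan over only the next `horizon` activities (cheap),
    execute just the first control, observe the new state, and slide the
    horizon forward. Returns the trace of visited states.
    """
    trace = [state]
    while activities:
        window = activities[:horizon]        # plan over a short window only
        controls = solve(window, state)      # placeholder planner call
        state = execute(controls[0], state)  # commit only the first control
        activities = activities[1:]          # horizon slides forward
        trace.append(state)
    return trace
```

Planning over a short window keeps each planning episode fast enough to run in the loop, which is what lets the executive absorb disturbances by re-planning rather than by aborting.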
Experimental Approach

We plan to validate our approach in simulation and on real robots, both at MIT and at NASA’s Johnson Space Center. At MIT, we plan to validate our approach in simulation using a quadruped simulator built with MATLAB’s SimMechanics and Multi-Parametric toolboxes, and on a quadruped robot, called LittleDog, shown in Figure 4a. To ensure applicability towards NASA’s Vision, we also plan to test our approach on NASA’s Robonaut Simulator, shown in Figure 4b, and on NASA’s Robonaut and Spidernaut robots, as availability permits.

Figure 4a: LittleDog Robot            Figure 4b: Robonaut Simulator

Predicted Outcomes

We predict that this project will enable a repertoire of agile behaviors for articulated robots, such as climbing, balancing, running, and jumping. In the next section, we describe the high-level milestones towards achieving this outcome, along with their associated deadlines. Some milestones are omitted for brevity.

Proposed Timeline

Year 1:

  • Develop QSPs for balancing, running, and jumping for quadrupeds.
  • Test the QSPs on the quadruped simulator, and on LittleDog.
  • Help develop an overhead localization system for LittleDog.
  • Develop a receding-horizon version of the HMBE controller.
  • In summer at JSC: Extend the quadruped QSPs to apply to JSC’s arachnid class robots.

Year 2:

  • Implement a corrective strategy selection algorithm to recover from larger disturbances.
  • In summer at JSC: Incorporate probability of success into QSPs, and apply the results.

Year 3:

  • Investigate fuzzy computation techniques as potential improvements to HMBE and QSP.
  • In summer at JSC: Test our HMBE approach on RoboSim, Robonaut and Spidernaut.

Conclusion

NASA’s next generation of robotic assistants will need to climb, balance, run, and jump. To this end, we propose novel extensions to a flexible and adaptive autonomous control architecture, called Hybrid Model-based Control. This proposed system will enable robust and agile locomotion of articulated robots. Developing such a capability is an essential step in achieving NASA’s Vision for Space Exploration.

(*Figures 1a and 1b are courtesy of NASA’s Johnson Space Center)

(**Figures 2 and 3 courtesy of Andreas Hofmann, Ph.D. [8])

References

[1] National Aeronautics and Space Administration. The Vision For Space Exploration, 2004. NASA Press Release: NP-2004-01-334-HQ, Jan. 14, 2004.

[2] Ambrose, R.O., Askew S.R., Bluethmann, W., and Diftler, M.A., "Humanoids Designed to do Work", submitted to IEEE, Robotics and Automation Society, Humanoids Conference, Tokyo, November 2001.

[3] Scott, Phil, "I, Robonaut", Scientific American, April 2001.

[4] Ambrose, Robert O., Askew, R. S., Bluethmann, W., Diftler, M.A., Goza, S.M., Magruder, D., Rehnmark, F., "The Development of the Robonaut System for Space Operations", ICAR 2001, Invited Session on Space Robotics, Budapest, August 20, 2001.

[5] http://spidernaut.jsc.nasa.gov/index.html

[6] Hofmann A, Massaquoi S, Popovic M, Herr H. A Sliding Controller for Bipedal Balancing Using Integrated Movement of Non-Contact Limbs. IEEE/RSJ International Conference on Intelligent Robots and Systems; 2004 October; Sendai, Japan.

[7] Robert Effinger, Andreas Hofmann, and Brian C. Williams, "Progress Towards Task-Level Collaboration between Astronauts and Their Robotic Assistants," Proceedings of the 8th International Symposium on Artificial Intelligence, Robotics, and Automation in Space (iSAIRAS-05), Munich, Germany, Sept. 2005.

[8] Hofmann, A. G. "Robust Execution of Bipedal Walking Tasks from Biomechanical Principles", Ph.D. Thesis, MIT, 2005.

[9] Hofmann, A. G., Williams, B. C. "Exploiting Spatial and Temporal Flexibility for Plan Execution of Hybrid, Under-actuated Systems", AAAI 2006.

 


MIT logo Computer Science and Artificial Intelligence Laboratory (CSAIL)
The Stata Center, Building 32 - 32 Vassar Street - Cambridge, MA 02139 - USA
tel:+1-617-253-0073 - publications@csail.mit.edu