Agile Legged Robot Locomotion

Robert T. Effinger, Andreas G. Hofmann & Brian C. Williams

Abstract

This abstract outlines my research plans over the coming year. I plan to work on a flexible and adaptive approach to legged robot locomotion, called Hybrid Model-based Control (HMBC). HMBC was developed by the Model-based and Embedded Robotics (MERS) group at MIT [8], and accepts as input flexible state and temporal goals, which it leverages to adapt to disturbances and failures. In the near term, I plan to contribute to this line of research by developing and testing agile gaits for a LittleDog robot using HMBC, and by helping to build an overhead localization system for LittleDog. In the longer term, I plan to help develop a receding-horizon version of HMBC. Ultimately, for my Ph.D., I plan to develop novel extensions to the HMBC architecture and, through a NASA Graduate Student Research Program (GSRP) fellowship at Johnson Space Center, to apply these concepts to NASA's robotic explorers.

Introduction

NASA's Vision for Space Exploration [1] outlines a bold plan to explore our solar system and beyond. Key milestones include:
Each of these milestones is built on a common underlying tenet: that the use of robotic assistants will decrease mission costs, increase scientific return, and reduce astronaut risk. More specifically, NASA envisions dexterous robotic assistants scouting out and assembling a lunar base, climbing into the cracks and crevices of foreign celestial bodies in search of evidence for life, and scurrying lightly across vibration-sensitive truss assemblies to perform on-orbit construction and maintenance. As a first step, NASA has begun to develop a fleet of robotic assistants, including Robonaut [2,3,4] and Spidernaut [5], shown in Figures 1a and 1b, respectively. These robots are dexterous, powerful, and mechanically capable of performing the myriad of complex tasks called for in NASA's Vision for Space Exploration.

Figure 1a: Robonaut*
Figure 1b: Spidernaut*

A central problem remains, however: techniques that can adequately plan for and control these robots autonomously have not yet been developed. Currently, Robonaut and Spidernaut operate only under tele-operator control and in predictable environments. Furthermore, existing locomotion techniques for humanoid and arachnid-class robots are overly conservative, forcing them to move slowly and within narrowly defined regions of controllability. Overcoming these limitations will be a key contribution towards achieving NASA's Vision for Space Exploration.

NASA's next generation of robotic assistants will be expected to operate reliably in harsh and unpredictable environments such as the Moon, Mars, and outer space. To do so, they will need to be capable of robust and agile locomotion, such as climbing, balancing, running, and jumping. In addition, they will need to adapt to disturbances and failures, such as stumbling over a rock, missing a rung on a ladder, and falling down. Existing autonomous control techniques, most of which come from industrial robotics, are not intended to be flexible or adaptive. They assume static and controlled environments, and achieve speed and precision through high-gain controllers that closely follow a predefined trajectory of set-points. If an unexpected condition occurs, the robot immediately goes into a safe mode in order to avoid injuring itself or nearby humans. This paradigm of autonomous control is not sufficient for the types of agile locomotion called for in NASA's Vision for Space Exploration. NASA's next generation of robotic assistants will require a new type of autonomous control system: one that is flexible and adaptive, and that operates safely in unstructured environments and in close proximity to humans. In addition, this system must be capable of operating non-conservatively, at the boundaries of controllability, in order to achieve high performance. In the next few sections, we outline a plan to develop such a system.

Prior Work – Hybrid Model-based Control (HMBC)

Enabling robust, stable, and high-performance locomotion for articulated robots, such as Robonaut and Spidernaut, is difficult for two main reasons:

1.) handling the high dimensionality and nonlinearity inherent to articulated robots, and
2.) computing control actions that are flexible and adaptive.
Next, we describe these two difficulties in more detail, and describe how Hybrid Model-based Control is able to mitigate them.

1.) Handling high dimensionality and nonlinearity: To handle the high dimensionality and nonlinearity inherent to articulated robots, HMBC uses a feedback-linearizing multivariable controller, called a dynamic virtual model controller (DVMC) [6], which transforms a nonlinear, tightly coupled system into a loosely coupled set of linear second-order single-input single-output (SISO) systems. This SISO abstraction technique has been used to demonstrate complicated tasks such as balancing and recovering from lateral push disturbances on walking bipeds [7], as shown in Figure 2. A key contribution of this research will be to implement DVMCs for NASA's arachnid-class robots.

Figure 2: A biped recovering from a lateral push disturbance.**
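To give a rough sense of the decoupling that a feedback-linearizing controller provides, consider the following generic computed-torque sketch (an illustration of the general technique, not the specific DVMC formulation in [6]). For rigid-body dynamics

\[ M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = \tau, \]

choosing the control law

\[ \tau = M(q)\,u + C(q,\dot{q})\,\dot{q} + g(q) \]

cancels the nonlinear coupling and leaves \( \ddot{q} = u \), so each degree of freedom behaves as an independent double integrator. A simple per-channel law such as

\[ u_i = \ddot{q}^{\,\mathrm{des}}_i + k_d\,(\dot{q}^{\,\mathrm{des}}_i - \dot{q}_i) + k_p\,(q^{\,\mathrm{des}}_i - q_i) \]

then regulates each channel as a linear second-order SISO system, which is the abstraction that a task-level executive can reason over.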
2.) Computing flexible and adaptive control actions: To compute flexible and adaptive control actions, HMBC uses a novel autonomous control architecture, called a Hybrid Model-based Executive (HMBE) [8]. An HMBE takes as input a special type of control program called a qualitative state plan (QSP). QSPs are novel in that they support flexible state and temporal goals, which the HMBE then leverages to adapt to disturbances on the fly [9]. An example QSP for a walking biped is shown in Figure 3a. The DVMC, as shown in Figure 3b, allows a hybrid task-level executive to control the robot as a set of loosely coupled, linear systems, which makes it easier to predict the future evolution of the robot's state resulting from control inputs [8].

Figure 3a: An Example QSP**
Figure 3b: HMBC Architecture**
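To make the flavor of a QSP more concrete, the sketch below encodes one step of a walking gait as a sequence of activities, each with a qualitative state goal and flexible duration bounds. The class and field names are illustrative assumptions for this abstract only, not the actual QSP representation defined in [8,9].

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class QSPActivity:
    """One activity in a qualitative state plan (illustrative sketch only).

    Instead of a fixed trajectory of set-points, an activity specifies a
    qualitative goal region that the state must reach, together with a
    flexible [lower, upper] bound on how long reaching it may take.
    """
    name: str
    state_goal: str                        # qualitative goal region
    duration_bounds: Tuple[float, float]   # flexible temporal bounds (seconds)


@dataclass
class QualitativeStatePlan:
    """An ordered sequence of activities. The executive (HMBE) chooses the
    actual timings and trajectories at run time, within these bounds, which
    is the slack it uses to absorb disturbances."""
    activities: List[QSPActivity] = field(default_factory=list)


# One step of a biped gait, expressed as flexible state and temporal goals:
step = QualitativeStatePlan(activities=[
    QSPActivity("shift weight",   "center of mass over right-foot support polygon", (0.3, 0.8)),
    QSPActivity("swing left leg", "left foot inside forward step region",           (0.4, 1.0)),
    QSPActivity("touch down",     "left foot in contact; both feet loaded",         (0.1, 0.5)),
])
```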
Proposed Extensions

Key contributions of my proposed research will be to investigate new ways to extend the HMBC autonomous control architecture, including:

Experimental Approach

We plan to validate our approach in simulation and on real robots, both at MIT and at NASA's Johnson Space Center. At MIT, we plan to validate our approach in simulation, using a quadruped simulation built with MATLAB's SimMechanics and Multi-Parametric toolkits, and on a quadruped robot called LittleDog, shown in Figure 4a. To ensure applicability towards NASA's Vision, we also plan to test our approach on NASA's Robonaut Simulator, shown in Figure 4b, and on NASA's Robonaut and Spidernaut robots, as availability permits.

Figure 4a: LittleDog Robot
Figure 4b: Robonaut Simulator

Predicted Outcomes

We predict that this project will enable a repertoire of agile behaviors for articulated robots, such as climbing, balancing, running, and jumping. In the next section, we describe the high-level milestones towards achieving this outcome, along with their associated deadlines. Some milestones have been omitted for brevity.

Proposed Timeline

Year 1:
Year 2:
Year 3:
Conclusion

NASA's next generation of robotic assistants will need to climb, balance, run, and jump. To this end, we propose novel extensions to a flexible and adaptive autonomous control architecture, called Hybrid Model-based Control. The proposed system will enable robust and agile locomotion for articulated robots. Developing such a capability is an essential step in achieving NASA's Vision for Space Exploration.

(*Figures 1a and 1b are courtesy of NASA's Johnson Space Center.)
(**Figures 2 and 3 are courtesy of Andreas Hofmann, Ph.D. [8].)

References

[1] National Aeronautics and Space Administration. The Vision for Space Exploration. NASA Press Release NP-2004-01-334-HQ, January 14, 2004.
[2] Ambrose, R. O., Askew, S. R., Bluethmann, W., and Diftler, M. A. "Humanoids Designed to do Work." Submitted to the IEEE Robotics and Automation Society Humanoids Conference, Tokyo, November 2001.
[3] Scott, P. "I, Robonaut." Scientific American, April 2001.
[4] Ambrose, R. O., Askew, R. S., Bluethmann, W., Diftler, M. A., Goza, S. M., Magruder, D., and Rehnmark, F. "The Development of the Robonaut System for Space Operations." ICAR 2001, Invited Session on Space Robotics, Budapest, August 20, 2001.
[5] http://spidernaut.jsc.nasa.gov/index.html
[6] Hofmann, A., Massaquoi, S., Popovic, M., and Herr, H. "A Sliding Controller for Bipedal Balancing Using Integrated Movement of Non-Contact Limbs." IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, October 2004.
[7] Effinger, R., Hofmann, A., and Williams, B. C. "Progress Towards Task-Level Collaboration between Astronauts and Their Robotic Assistants." Proceedings of the 8th International Symposium on Artificial Intelligence, Robotics, and Automation in Space (iSAIRAS-05), Munich, Germany, September 2005.
[8] Hofmann, A. G. "Robust Execution of Bipedal Walking Tasks from Biomechanical Principles." Ph.D. Thesis, MIT, 2005.
[9] Hofmann, A. G., and Williams, B. C. "Exploiting Spatial and Temporal Flexibility for Plan Execution of Hybrid, Under-actuated Systems." AAAI 2006.