Linkage Robots
Consider the following three systems:
Simulated constructions in MultiShady, a new modular robot system we have proposed [1]. Each is a high-DoF linkage with actuated revolute joints at every intersection of differently colored segments.
ATHLETE, a new 36-DoF hybrid legged/wheeled robot under development at NASA/JPL [2]. The system includes the capability to use its appendages both for locomotion and for manipulating tools. Images courtesy of JPL.
Sketches of the human body—a linkage, at a high level—in contact with the environment, including cases where parts of the environment themselves take the form of moving linkages. Adapted from Zatsiorsky [3].
While these three examples relate to fairly disparate areas of robotics, it's not hard to see that there is some common deep structure among them. Namely, we can identify linkages at a fundamental level in each system. This provides a basis for developing theory and algorithms which apply both deeply and broadly, not only across these examples, but also to any other robotic system fundamentally based on a linkage. We call the class of such systems linkage robots.
We are developing theory and algorithms for linkage robots in two complementary directions. First, we explore the seemingly basic issue of how to specify a desired motion in a linkage robot, taking a perspective we call generalized kinematic control. Second, we observe that when the kinematics of a linkage include closed loops, as some of our example systems clearly do, they can easily become inconsistently overconstrained except for exact choices of geometric parameters. Such exactness is impossible to achieve in practice due to various forms of uncertainty. To apply generalized kinematic control to real systems, we therefore also need a correspondingly general way to ensure that closed loops remain consistent without requiring geometric exactness; for this we explore the use of actuators with both intentional compliance and high-level sensing of the compliant motions, a combination we call proprioception.
We first consider the fundamental and practical question of how to tell a linkage robot what to do at a kinematic level—i.e., how to move. The entity making such a motion specification may either be a human operator or a higher-level computational system. It may at first seem that this movement specification problem should not be very challenging, especially given the power of modern methods in kinematic computation and motion planning. And it is true that, in many cases of interest, forward and inverse kinematic control can be defined and implemented for linkage robots so that we may give either joint or link trajectories to define a motion [4]. Recent developments are now even making it feasible to compute solutions to the piano movers problem for a fairly general class of linkage robots operating in geometric environments [2].
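To make the baseline concrete, here is a minimal sketch of inverse kinematic control in the style of the damped least squares method surveyed by Buss [4], for a hypothetical planar serial chain. The link lengths, damping constant, and iteration count are illustrative, not taken from any of the systems above.

```python
import numpy as np

def fk(thetas, link_lengths):
    """Forward kinematics of a planar serial chain: end-effector (x, y)."""
    angles = np.cumsum(thetas)
    return np.array([np.sum(link_lengths * np.cos(angles)),
                     np.sum(link_lengths * np.sin(angles))])

def jacobian(thetas, link_lengths):
    """2 x n Jacobian of the end-effector position."""
    angles = np.cumsum(thetas)
    J = np.zeros((2, len(thetas)))
    for i in range(len(thetas)):
        # Joint i moves every link from i onward.
        J[0, i] = -np.sum(link_lengths[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(link_lengths[i:] * np.cos(angles[i:]))
    return J

def ik_dls(target, thetas, link_lengths, damping=0.1, iters=200):
    """Damped-least-squares inverse kinematics toward a target position."""
    for _ in range(iters):
        e = target - fk(thetas, link_lengths)
        J = jacobian(thetas, link_lengths)
        # (J J^T + lambda^2 I) is always invertible, taming singularities.
        dtheta = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), e)
        thetas = thetas + dtheta
    return thetas
```

The damping term trades a small tracking bias for well-behaved steps near kinematic singularities, which is exactly the property that makes such methods attractive as building blocks.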
The issue is that, in many situations of interest, none of these methods is actually convenient.
To this end we are developing a suite of kinematic control techniques which are higher-level than basic forward and inverse kinematic control, but which offer more fine-grained control than piano moving.
As a foundation, we develop a unified computational framework in which forward and inverse kinematic control become the same thing, which we call direct kinematic control. This is straightforward: we construct an algebraic system corresponding to the robot kinematics; then both types of control amount to specifying trajectories for subsets of the variables. We track such trajectories numerically using algorithms from polynomial homotopy continuation [5]. Already this gives certain desirable properties such as motion continuity even across kinematic singularities [6].
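As an illustration of the idea, though not of our actual implementation (which uses polynomial homotopy continuation [5]), the following sketch tracks a driven-variable trajectory through the loop-closure equations of a planar four-bar linkage. The crank angle theta is the specified variable; the coupler and rocker angles (phi, psi) are tracked numerically, with each Newton correction seeded by the previous solution. The link lengths are arbitrary illustrative values.

```python
import numpy as np

# Four-bar linkage: crank a, coupler b, rocker c, ground g (along the x-axis).
a, b, c, g = 1.0, 3.0, 2.5, 3.0

def residual(theta, phi, psi):
    """Loop-closure equations F(theta, phi, psi) = 0."""
    return np.array([
        a*np.cos(theta) + b*np.cos(phi) - c*np.cos(psi) - g,
        a*np.sin(theta) + b*np.sin(phi) - c*np.sin(psi),
    ])

def jac(theta, phi, psi):
    """Jacobian of the residual with respect to the unknowns (phi, psi)."""
    return np.array([
        [-b*np.sin(phi),  c*np.sin(psi)],
        [ b*np.cos(phi), -c*np.cos(psi)],
    ])

def track(theta_path, phi0, psi0, newton_iters=8):
    """Track (phi, psi) along a driven theta trajectory; each solve is
    seeded with the previous solution (a simple predictor-corrector)."""
    phi, psi = phi0, psi0
    out = []
    for theta in theta_path:
        for _ in range(newton_iters):
            step = np.linalg.solve(jac(theta, phi, psi),
                                   -residual(theta, phi, psi))
            phi, psi = phi + step[0], psi + step[1]
        out.append((phi, psi))
    return out
```

Because each solution seeds the next, the tracked path stays on one continuous branch of the solution set rather than jumping between configurations.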
Upon this algebraic framework we then add the ability to define virtual objects and constraints—virtual additions to the robot linkage which can be crafted to reduce the overall DoF of a kinematically redundant robot while constraining the robot to a particular designed motion. This can make direct kinematic control more useful by reducing or eliminating the need for heuristics. The virtual additions are defined by the user (again, either a human or a higher-level computational system) and can include instances from a specified set of object types—points, lines, planes, spheres, cylinders, etc.—and a specified set of constraints such as incidence, parallelism, perpendicularity, tangency, etc.
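A minimal sketch of how virtual additions can enter such a framework: each virtual object or constraint simply contributes extra equations to the algebraic system. Here a hypothetical redundant planar 3R chain is reduced to a single commanded DoF by a virtual line constraint on its tip plus a virtual coupling between two joints; all numbers are illustrative, and the finite-difference Newton solve stands in for our actual machinery.

```python
import numpy as np

L = np.array([1.0, 1.0, 1.0])  # link lengths of a redundant planar 3R chain

def tip(q):
    """End-effector position of the chain."""
    ang = np.cumsum(q)
    return np.array([np.sum(L*np.cos(ang)), np.sum(L*np.sin(ang))])

def constraints(q, x_des):
    """Robot equations plus virtual additions, stacked as F(q) = 0:
      1) the tip x-coordinate tracks the commanded value x_des,
      2) virtual line: the tip stays on the horizontal line y = 0.5,
      3) virtual coupling: joints 1 and 3 track each other (q3 = q1)."""
    p = tip(q)
    return np.array([p[0] - x_des, p[1] - 0.5, q[2] - q[0]])

def solve(q, x_des, iters=30, h=1e-6):
    """Newton solve of the square 3x3 system, finite-difference Jacobian."""
    for _ in range(iters):
        F = constraints(q, x_des)
        J = np.zeros((3, 3))
        for j in range(3):
            dq = np.zeros(3); dq[j] = h
            J[:, j] = (constraints(q + dq, x_des) - F) / h
        q = q + np.linalg.solve(J, -F)
    return q
```

The point of the sketch is the shape of the system: the two virtual constraints turn a 3-DoF redundant chain into a square, fully determined system driven by the single specified variable x_des.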
Virtual objects and constraints provide a direct and (arguably) convenient mechanism for expressing intended high-level qualities of kinematic motion in linkage robots. These qualities can be semantic in nature, for example, enforcing full or partial symmetries, or aligning the robot's motions along task-relevant geometric trajectories.
Next we observe that many linkage robots of interest contain repeated sub-structure. All the examples above show this: the 15 trapezoidal MultiShady tower blocks, ATHLETE's six serial-chain appendages, and the human's legs. It would thus seem useful to define motion (or classes of motion) for individual sub-structures—i.e., sub-linkages—and then to assign instances of such motion specifications to instances of the sub-linkages as they appear in the robot. In general the sub-linkages, and thus the sub-linkage motion specifications, can form a hierarchy. We extend the virtual object/constraint capabilities to support such hierarchical motion specifications by defining kinematic abstraction: a system of virtual objects can be crafted to virtually replace a particular sub-linkage in the robot. Conceptually, it is as if the actual sub-linkage is erased and the virtual construction stands in for it, spliced appropriately to all kinematically adjacent links. The configuration of the virtual replacement is constrained by the rest of the linkage (and/or directly by the user), and that configuration is then mapped to a corresponding configuration of the actual replaced sub-linkage. A block of the MultiShady tower, for example, can be viewed at three levels of kinematic abstraction.
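To illustrate the flavor of kinematic abstraction with a deliberately tiny example (not the MultiShady machinery itself), a planar 2R sub-linkage can be virtually replaced by a single telescoping link from its base to its tip. The two maps below translate between the actual configuration (joint angles) and the virtual one (length and angle of the replacement); the closed-form 2R inverse kinematics is standard.

```python
import numpy as np

l1, l2 = 1.0, 1.0  # actual 2R sub-linkage replaced by a virtual telescoping link

def abstract(q1, q2):
    """Map an actual 2R configuration to its virtual replacement:
    a single telescoping link of length r at angle gamma (base to tip)."""
    x = l1*np.cos(q1) + l2*np.cos(q1 + q2)
    y = l1*np.sin(q1) + l2*np.sin(q1 + q2)
    return np.hypot(x, y), np.arctan2(y, x)

def concretize(r, gamma, elbow=+1):
    """Map a virtual configuration (r, gamma) back to actual joint angles
    via closed-form 2R inverse kinematics (elbow sign picks the branch)."""
    c2 = (r**2 - l1**2 - l2**2) / (2*l1*l2)
    q2 = elbow * np.arccos(np.clip(c2, -1.0, 1.0))
    q1 = gamma - np.arctan2(l2*np.sin(q2), l1 + l2*np.cos(q2))
    return q1, q2
```

The elbow argument makes explicit that the virtual-to-actual map is one-to-many in general; resolving such branch choices consistently is part of what a kinematic abstraction system must do.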
Together we call this suite of techniques generalized kinematic control: direct kinematic control with virtual objects/constraints and kinematic abstraction.
In theory, generalized kinematic control is sufficient to direct linkage robots to perform a broad class of useful movements: the computed joint-state (angle) trajectories could be sent directly as commands to position-controlled actuated joints. However, because this scheme would be entirely feed-forward and rigid, it could tolerate no uncertainty, and uncertainty is unavoidable in practical linkage robots outside of simulation. In some restricted cases the robot and task can be made inherently robust to the expected levels of uncertainty, but to implement linkage robots outside these restrictions we will need to somehow augment generalized kinematic control. Two promising additions are feedback and compliance; we propose both, localized to the joints and their actuators. Our motivation for this particular scheme is that the joints of a linkage robot are extremely convenient places to extract information about the physical state of the robot and of any parts of the environment it may be contacting. However, if the joint actuators were rigid, such feedback would be of little use, since from the controller's perspective it would contain little or no novel information.
We call the particular combination of actuator compliance and feedback at joint DoFs proprioception, in direct extension of the term's usage in biology. We have already constructed one linkage robot with proprioception, which we call Shady.
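A toy model conveys why compliance and sensing reinforce each other. Below, a joint is sketched as a series-elastic actuator: the motor drives the joint through a spring, so measuring the spring deflection simultaneously admits compliant motion and yields a torque estimate that a simple loop can regulate. The stiffness and gain values are illustrative; this is not Shady's actual actuator design.

```python
# Proprioceptive joint sketch: a series-elastic actuator in which the spring
# deflection is both the source of compliance and the torque sensor.

K = 50.0  # spring stiffness (N*m/rad), illustrative

def sensed_torque(theta_motor, theta_joint):
    """Proprioceptive feedback: spring deflection reveals joint torque."""
    return K * (theta_motor - theta_joint)

def torque_control_step(theta_motor, theta_joint, tau_desired, gain=0.01):
    """Command torque by repositioning the motor side of the spring."""
    tau = sensed_torque(theta_motor, theta_joint)
    return theta_motor + gain * (tau_desired - tau)

# Joint blocked by a contact: the environment holds theta_joint at 0, yet
# the compliant joint can still converge to a commanded contact torque.
theta_m, theta_j = 0.0, 0.0
for _ in range(500):
    theta_m = torque_control_step(theta_m, theta_j, tau_desired=2.0)
```

With a rigid actuator the same blocked-joint scenario would produce either no information or unbounded forces; the spring is what turns the position sensor into a useful contact-force channel.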
We envision a scheme where a linkage robot may engage its surroundings via one or more contacts such that the entire resulting physical system—robot, contacts, and environment—forms a single linkage. Some joints in such a linkage may actually be embedded in the environment, especially in man-made contexts such as the "driving" situations illustrated above, or the common act of turning or pulling a doorknob or handle.
We are exploring the control of arbitrary-topology linkage robots from two complementary directions. We propose generalized kinematic control to deepen the informational content of purely kinematic, feed-forward motion specifications communicated from the operator or high-level controller. To address uncertainty, which will be significant in practice, especially in the common case of overactuated closed-chain systems, we propose proprioception—the combination of actuated joints with low-impedance capability and sensor feedback at those joints. These two approaches address the control of linkage robots from opposing directions; our vision is that they will meet in the middle to enable a new level of capability across all linkage robot systems.
[1] Marsette Vona, Carrick Detweiler, and Daniela Rus. Shady: Robust truss climbing with mechanical compliances. In International Symposium on Experimental Robotics (ISER), 2006.
[2] Kris Hauser, Timothy Bretl, Jean-Claude Latombe, and Brian Wilcox. Motion planning for a six-legged lunar robot. In Workshop on the Algorithmic Foundations of Robotics (WAFR), 2006.
[3] Vladimir M. Zatsiorsky. Kinematics of Human Motion. Human Kinetics, 1998.
[4] Samuel R. Buss. Introduction to inverse kinematics with Jacobian transpose, pseudoinverse and damped least squares methods. Unpublished manuscript, available on the web, April 2004.
[5] Hervé Lamure and Dominique Michelucci. Solving geometric constraints by homotopy. In Proceedings of the ACM Solid Modeling Conference, pp. 263–269, Salt Lake City, Utah, 1995.
[6] Ulrich Kortenkamp. Foundations of Dynamic Geometry. PhD thesis, Swiss Federal Institute of Technology (ETH) Zurich, 1999.
Computer Science and Artificial Intelligence Laboratory (CSAIL), The Stata Center, Building 32, 32 Vassar Street, Cambridge, MA 02139, USA. tel: +1-617-253-0073, publications@csail.mit.edu