
CSAIL Digital Archive - Artificial Intelligence Laboratory Series
Publications 900 through 999
AI Publications. Last update: Sun Mar 19 05:05:02 2006

AITR-900
Author[s]: David Mark Siegel
Contact Sensors for Dexterous Robotic Hands
June 1986
This thesis examines a tactile sensor and a thermal sensor for use with the Utah-MIT dexterous four-fingered hand. Sensory feedback is critical for full utilization of its advanced manipulatory capabilities. The hand itself provides tendon tension and joint angle information. However, planned control algorithms require more information than these sources can provide. The tactile sensor utilizes capacitive transduction with a novel design based entirely on silicone elastomers. It provides an 8 x 8 array of force cells with 1.9 mm center-to-center spacing. A pressure resolution of 8 significant bits is available over a 0 to 200 grams per square mm range. The thermal sensor measures a material's heat conductivity by radiating heat into an object and measuring the resulting temperature variations. This sensor has a 4 x 4 array of temperature cells with 3.5 mm center-to-center spacing. Experiments show that the thermal sensor can discriminate among materials by detecting differences in their thermal conduction properties. Both sensors meet the stringent mounting requirements posed by the Utah-MIT hand. Combining them to form a sensor with both tactile and thermal capabilities will ultimately be possible. The computational requirements for controlling a sensor-equipped dexterous hand are severe. Conventional single-processor computers do not provide adequate performance. To overcome these difficulties, a computational architecture based on interconnecting high-performance microcomputers, together with a set of software primitives tailored for sensor-driven control, has been proposed. The system has been implemented and tested on the Utah-MIT hand.
The hand, equipped with tactile and thermal sensors and controlled by its computational architecture, is one of the most advanced robotic manipulatory devices available worldwide. Other ongoing projects will exploit these tools and allow the hand to perform tasks that exceed the capabilities of current-generation robots.

AITR-901
Author[s]: Kenneth W. Haase, Jr.
ARLO: Another Representation Language Offer
October 1986
This paper describes ARLO, a representation language loosely modelled after Greiner and Lenat's RLL-1. ARLO is a structure-based representation language for describing structure-based representation languages, including itself. A given representation language is specified in ARLO by a collection of structures describing how its descriptions are interpreted, defaulted, and verified. This high-level description is compiled into Lisp code and ARLO structures whose interpretation fulfills the specified semantics of the representation. In addition, ARLO itself, as a representation language for expressing and compiling partial and complete language specifications, is described and interpreted in the same manner as the languages it describes and implements. This self-description can be extended or modified to expand or alter the expressive power of ARLO's initial configuration. Languages which, like ARLO, describe themselves provide powerful media for systems which perform automatic self-modification, optimization, debugging, or documentation. AI systems implemented in such a self-descriptive language can reflect on their own capabilities and limitations, applying general learning and problem-solving strategies to enlarge the former or alleviate the latter.

AIM-902
Author[s]: Richard H. Lathrop, Teresa A. Webster and Temple F. Smith
ARIADNE: Pattern-Directed Inference and Hierarchical Abstraction in Protein Structure Recognition
May 1987
There are many situations in which a very detailed low-level description encodes, through a hierarchical organization, a recognizable higher-order pattern. The macromolecular structural conformations of proteins exhibit higher-order regularities whose recognition is complicated by many factors. ARIADNE searches for similarities between structural descriptors and hypothesized protein structure at levels more abstract than the primary sequence, based on differential similarity to rule antecedents and the controlled use of tentative higher-order structural hypotheses. Inference is grounded solely in knowledge derivable from the primary sequence, and exploits secondary structure predictions. A novel proposed alignment and functional domain identification of the aminoacyl-tRNA synthetases was found using this system.

AIM-903
Author[s]: Richard H. Lathrop, Robert J. Hall, and Robert S. Kirk
Functional Abstraction From Structure in VLSI Simulation Models
May 1987
High-level functional (or behavioral) simulation models are difficult, time-consuming, and expensive to develop. We report on a method for automatically generating the program code for a high-level functional simulation model. The high-level model is produced directly from the program code for the circuit components' functional models and a netlist description of their connectivity. A prototype has been implemented in LISP for the SIMMER functional simulator.

AITR-904
Author[s]: Linda M. Wills
Automated Program Recognition
February 1987
The key to understanding a program is recognizing familiar algorithmic fragments and data structures in it. Automating this recognition process will make it easier to perform many tasks which require program understanding, e.g., maintenance, modification, and debugging.
This report describes a recognition system, called the Recognizer, which automatically identifies occurrences of stereotyped computational fragments and data structures in programs. The Recognizer is able to identify these familiar fragments and structures even though they may be expressed in a wide range of syntactic forms. It does so systematically and efficiently by using a parsing technique. Two important advances have made this possible. The first is a language-independent graphical representation for programs and programming structures which canonicalizes many syntactic features of programs. The second is an efficient graph parsing algorithm.

AITR-905
Author[s]: Van-Duc Nguyen
The Synthesis of Stable Force-Closure Grasps
July 1986
This thesis addresses the problem of synthesizing grasps that are force-closure and stable. The synthesis of force-closure grasps constructs independent regions of contact for the fingertips, such that the motion of the grasped object is totally constrained. The synthesis of stable grasps constructs virtual springs at the contacts, such that the grasped object is stable and has a desired stiffness matrix about its stable equilibrium. A grasp on an object is force-closure if and only if we can exert, through the set of contacts, arbitrary forces and moments on the object. Force-closure therefore implies that equilibrium exists, because zero force and zero moment are spanned. In the reverse direction, we prove that a non-marginal equilibrium grasp is also a force-closure grasp if it has at least two point contacts with friction in 2D, or two soft-finger contacts or three hard-finger contacts in 3D. Next, we prove that all force-closure grasps can be made stable by using either active or passive springs at the contacts. The thesis develops a simple relation between the stability and stiffness of the grasp and the spatial configuration of the virtual springs at the contacts.
The stiffness of the grasp depends also on whether the points of contact stick, or slide without friction on straight or curved surfaces of the object. The thesis presents fast and simple algorithms for directly constructing stable force-closure grasps based on the shape of the grasped object. The formal framework of force-closure and stable grasps provides a partial explanation of why we grasp objects stably so easily, and of why our fingers are better soft than hard.

AITR-906
Author[s]: Robert Joseph Hall
Learning by Failing to Explain
May 1986
Explanation-based Generalization requires that the learner obtain an explanation of why a precedent exemplifies a concept. It is, therefore, useless if the system fails to find this explanation. However, it is not necessary to give up and resort to purely empirical generalization methods. In fact, the system may already know almost everything it needs to explain the precedent. Learning by Failing to Explain is a method which is able to exploit current knowledge to prune complex precedents, isolating the mysterious parts of the precedent. The idea has two parts: the notion of partially analyzing a precedent to get rid of the parts which are already explainable, and the notion of re-analyzing old rules in terms of new ones, so that more general rules are obtained.

AIM-907
Author[s]: Charles Rich and Richard C. Waters
Toward a Requirements Apprentice: On the Boundary Between Informal and Formal Specifications
July 1986
Requirements acquisition is one of the most important and least well supported parts of the software development process. The Requirements Apprentice (RA) will assist a human analyst in the creation and modification of software requirements. Unlike current requirements analysis tools, which assume a formal description language, the focus of the RA is on the boundary between informal and formal specifications.
The RA is intended to support the earliest phases of creating a requirement, in which incompleteness, ambiguity, and contradiction are inevitable features. From an artificial intelligence perspective, the central problem the RA faces is one of knowledge acquisition. It has to develop a coherent internal representation from an initial set of disorganized statements. To do so, the RA will rely on a variety of techniques, including dependency-directed reasoning, hybrid knowledge representation, and the reuse of common forms (clichés). The Requirements Apprentice is being developed in the context of the Programmer's Apprentice project, whose overall goal is the creation of an intelligent assistant for all aspects of software development.

AITR-908
Author[s]: John G. Harris
The Coupled Depth/Slope Approach to Surface Reconstruction
June 1986
Reconstructing a surface from sparse sensory data is a well known problem in computer vision. Early vision modules typically supply sparse depth, orientation, and discontinuity information. The surface reconstruction module incorporates these sparse and possibly conflicting measurements of a surface into a consistent, dense depth map. The coupled depth/slope model developed here provides a novel computational solution to the surface reconstruction problem. This method explicitly computes dense slope representations as well as dense depth representations. This marked change from previous surface reconstruction algorithms allows a natural integration of orientation constraints into the surface description, a feature not easily incorporated into earlier algorithms. In addition, the coupled depth/slope model generalizes to allow for varying amounts of smoothness at different locations on the surface. This computational model helps conceptualize the problem and leads to two possible implementations: analog and digital.
The model can be implemented as an electrical or biological analog network, since the only computations required at each locally connected node are averages, additions, and subtractions. A parallel digital algorithm can be derived by using finite difference approximations. The resulting system of coupled equations can be solved iteratively on a mesh-of-processors computer, such as the Connection Machine. Furthermore, concurrent multi-grid methods are designed to speed the convergence of this digital algorithm.

AIM-909
Author[s]: Anya Hurlbert and Tomaso Poggio
Learning a Color Algorithm from Examples
June 1987
We show that a color algorithm capable of separating illumination from reflectance in a Mondrian world can be learned from a set of examples. The learned algorithm is equivalent to filtering the image data, in which reflectance and illumination are mixed, through a center-surround receptive field in individual chromatic channels. The operation resembles the "retinex" algorithm recently proposed by Edwin Land. This result is a specific instance of our earlier results that a standard regularization algorithm can be learned from examples. It illustrates that the natural constraints needed to solve a problem in inverse optics can be extracted directly from a sufficient set of input data and the corresponding solutions. The learning procedure has been implemented as a parallel algorithm on the Connection Machine System.

AIM-910
Author[s]: Steven D. Eppinger and Warren P. Seering
On Dynamic Models of Robot Force Control
July 1986
For precise robot control, endpoint compliance strategies utilize feedback from a force sensor located near the tool/workpiece interface. Such endpoint force control systems have been observed in the laboratory to exhibit unsatisfactory closed-loop performance. This paper discusses the particular dynamic properties of robot systems which can lead to instability and limit performance.
A series of lumped-parameter models is developed in an effort to predict the closed-loop dynamics of a force-controlled single-axis arm. The models include some effects of robot structural dynamics, sensor compliance, and workpiece dynamics. The qualitative analysis shows that the robot dynamics contribute to force-controlled instability. Recommendations are made for models to be used in control system design.

AIM-911
Author[s]: David McAllester and Ramin Zabih
Boolean Classes
September 1986
Object-oriented programming languages all involve the notions of class and object. We extend the notion of class so that any Boolean combination of classes is also a class. Boolean classes allow greater precision and conciseness in naming the class of objects governed by a particular method. A class can be viewed as a predicate which is either true or false of any given object. Unlike predicates, however, classes have an inheritance hierarchy which is known at compile time. Boolean classes extend the notion of class, making classes more like predicates, while preserving the compile-time computable inheritance hierarchy.

AITR-912
Author[s]: Chae Hun An
Trajectory and Force Control of a Direct Drive Arm
September 1986
Using the MIT Serial Link Direct Drive Arm as the main experimental device, various issues in trajectory and force control of manipulators were studied in this thesis. Since accurate modeling is important for any controller, issues of estimating the dynamic model of a manipulator and its load were addressed first. Practical and effective algorithms were developed from the Newton-Euler equations to estimate the inertial parameters of manipulator rigid-body loads and links. Load estimation was implemented both on a PUMA 600 robot and on the MIT Serial Link Direct Drive Arm. With the link estimation algorithm, the inertial parameters of the direct drive arm were obtained.
For both load and link estimation results, the estimated parameters are good models of the actual system for control purposes, since torques and forces can be predicted accurately from these estimated parameters. The estimated model of the direct drive arm was then used to evaluate trajectory-following performance by feedforward and computed-torque control algorithms. The experimental evaluations showed that dynamic compensation can greatly improve trajectory-following accuracy. Various stability issues of force control were studied next. It was determined that there are two types of instability in force control. Dynamic instability, present in all of the previous force control algorithms discussed in this thesis, is caused by the interaction of a manipulator with a stiff environment. Kinematic instability is present only in the hybrid control algorithm of Raibert and Craig, and is caused by the interaction of the inertia matrix with the Jacobian inverse coordinate transformation in the feedback path. Several methods were suggested and demonstrated experimentally to solve these stability problems. The results of the stability analyses were then incorporated in implementing a stable force/position controller on the direct drive arm by the modified resolved acceleration method, using both joint torque and wrist force sensor feedback.

AIM-914
Author[s]: Christof Koch, Tomaso Poggio and Vincent Torre
Computations in the Vertebrate Retina: Gain Enhancement, Differentiation and Motion Discrimination
September 1986
The vertebrate retina, which provides the visual input to the brain and its main interface with the outside world, is a very attractive model system for approaching the question of the information-processing role of biological mechanisms of nerve cells.
It is as yet impossible to provide a complete circuit diagram of the retina, but it is now possible to identify a few simple computations that the retina performs and to relate them to specific biophysical mechanisms and circuit elements. In this paper we consider three operations carried out by most retinae: amplification, temporal differentiation, and computation of the direction of motion of visual patterns.

AIM-915
Author[s]: Anya Hurlbert and Tomaso Poggio
Visual Attention in Brains and Computers
September 1986
Existing computer programs designed to perform visual recognition of objects suffer from a basic weakness: the inability to spotlight regions in the image that potentially correspond to objects of interest. The brain's mechanisms of visual attention, elucidated by psychophysicists and neurophysiologists, may suggest a solution to the computer's problem of object recognition.

AIM-916
Author[s]: Alessandro Verri and Tomaso Poggio
Regularization Theory and Shape Constraints
September 1986
Many problems of early vision are ill-posed; to recover unique, stable solutions, regularization techniques can be used. These techniques lead to meaningful results provided that solutions belong to suitable compact sets. Often some additional constraints on the shape or the behavior of the possible solutions are available. This note discusses which of these constraints can be embedded in the classic theory of regularization, and how, in order to improve the quality of the recovered solution. Connections with mathematical programming techniques are also discussed. In conclusion, regularization of early vision problems may be improved by the use of constraints on the shape of the solution (such as monotonicity and upper and lower bounds), when available.
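The kind of constraint embedding discussed in AIM-916 above can be illustrated with a minimal sketch (our own illustration, not code from the memo): a one-dimensional data-fitting problem regularized by a smoothness stabilizer, with assumed upper and lower bounds enforced by projecting each gradient step back into the feasible box.

```python
def smooth_with_bounds(data, lo, hi, lam=1.0, steps=500, lr=0.05):
    """Sketch of shape-constrained regularization: minimise
    sum (z_i - d_i)^2 + lam * sum (z_{i+1} - z_i)^2 by gradient descent,
    projecting after each step onto the box lo <= z_i <= hi.
    Upper and lower bounds are one of the constraint types the memo
    mentions; monotonicity could be enforced by a similar projection."""
    n = len(data)
    # start from the data, clipped into the feasible box
    z = [min(max(v, lo), hi) for v in data]
    for _ in range(steps):
        # gradient of the data term
        g = [2.0 * (z[i] - data[i]) for i in range(n)]
        # gradient of the smoothness term
        for i in range(n - 1):
            t = 2.0 * lam * (z[i + 1] - z[i])
            g[i] -= t
            g[i + 1] += t
        # descend, then project onto the constraints
        z = [min(max(z[i] - lr * g[i], lo), hi) for i in range(n)]
    return z
```

An outlying sample that violates the bounds is pulled back inside them while the rest of the profile stays smooth, e.g. `smooth_with_bounds([0.0, 5.0, 0.0], 0.0, 1.0)` yields values confined to [0, 1].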
AIM-917
Author[s]: Alessandro Verri and Tomaso Poggio
Motion Field and Optical Flow: Qualitative Properties
December 1986
In this paper we show that the optical flow, a 2D field that can be associated with the variation of the image brightness pattern, and the 2D motion field, the projection on the image plane of the 3D velocity field of a moving scene, are in general different, unless very special conditions are satisfied. The optical flow, therefore, is ill-suited for computing structure from motion and for reconstructing the 3D velocity field, problems that require an accurate estimate of the 2D motion field. We then suggest a different use of the optical flow. We argue that stable qualitative properties of the 2D motion field give useful information about the 3D velocity field and the 3D structure of the scene, and that they can usually be obtained from the optical flow. To support this approach we show how the (smoothed) optical flow and the 2D motion field, interpreted as vector fields tangent to flows of planar dynamical systems, may have the same qualitative properties from the point of view of the theory of structural stability of dynamical systems.

AITR-918
Author[s]: Guy Blelloch
AFL-1: A Programming Language for Massively Concurrent Computers
November 1986
Computational models are arising in which programs are constructed by specifying large networks of very simple computational devices. Although such models can potentially make use of a massive amount of concurrency, their usefulness as a programming model for the design of complex systems will ultimately be decided by the ease with which such networks can be programmed (constructed). This thesis outlines a language for specifying computational networks. The language (AFL-1) consists of a set of primitives, and a mechanism to group these elements into higher-level structures. An implementation of this language runs on the Thinking Machines Corporation Connection Machine.
Two significant examples were programmed in the language: an expert system (CIS) and a planning system (AFPLAN). These systems are explained and analyzed in terms of how they compare with similar systems written in conventional languages.

AIM-919
Author[s]: Ellen C. Hildreth and Christof Koch
The Analysis of Visual Motion: From Computational Theory to Neuronal Mechanisms
December 1986
This paper reviews a number of aspects of visual motion analysis in biological systems from a computational perspective. We illustrate the kinds of insights that have been gained through computational studies and how these observations can be integrated with experimental studies from psychology and the neurosciences to understand the particular computations used by biological systems to analyze motion. The particular areas of motion analysis that we discuss include early motion detection and measurement, the optical flow computation, motion correspondence, the detection of motion discontinuities, and the recovery of three-dimensional structure from motion.

AIM-924
Author[s]: Mario Bertero, Tomaso Poggio and Vincent Torre
Ill-Posed Problems in Early Vision
May 1987
The first processing stage in computational vision, also called early vision, consists in decoding 2D images in terms of properties of 3D surfaces. Early vision includes problems such as the recovery of motion and optical flow, shape from shading, surface interpolation, and edge detection. These are inverse problems, which are often ill-posed or ill-conditioned. We review here the relevant mathematical results on ill-posed and ill-conditioned problems and introduce the formal aspects of regularization theory in the linear and non-linear case. More general stochastic regularization methods are also introduced. Specific topics in early vision and their regularization are then analyzed rigorously, characterizing existence, uniqueness, and stability of solutions.
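A toy instance of the regularization machinery reviewed in AIM-924 above (our sketch, not from the memo): for a linear inverse problem A z = d, standard (Tikhonov) regularization replaces the unstable least-squares solution with z = (A^T A + lambda I)^(-1) A^T d, trading a small bias for stability against noise in the data d. For a 2x2 system the formula can be written out by hand.

```python
def tikhonov_2x2(A, d, lam):
    """Tikhonov regularization sketch for a 2x2 linear inverse problem:
    minimise ||A z - d||^2 + lam * ||z||^2, whose closed-form solution is
    z = (A^T A + lam I)^(-1) A^T d.  With lam = 0 an ill-conditioned A
    amplifies small perturbations of d; lam > 0 restores stability."""
    (a, b), (c, e) = A
    # M = A^T A + lam I (symmetric 2x2)
    m11 = a * a + c * c + lam
    m12 = a * b + c * e
    m22 = b * b + e * e + lam
    # right-hand side A^T d
    r1 = a * d[0] + c * d[1]
    r2 = b * d[0] + e * d[1]
    # solve M z = r by Cramer's rule
    det = m11 * m22 - m12 * m12
    return ((m22 * r1 - m12 * r2) / det, (m11 * r2 - m12 * r1) / det)
```

With the ill-conditioned matrix diag(1, 1e-6), perturbing the data by 1e-6 moves the unregularized solution by order 1, while the regularized solution barely moves: the stabilizer restores the continuous dependence on the data that well-posedness requires.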
AITR-925
Author[s]: Guillermo Juan Rozas
A Computational Model for Observation in Quantum Mechanics
March 1987
A computational model of observation in quantum mechanics is presented. The model provides a clean and simple computational paradigm which can be used to illustrate and possibly explain some of the unintuitive and unexpected behavior of some quantum mechanical systems. As examples, the model is used to simulate three seminal quantum mechanical experiments. The results obtained agree with the predictions of quantum mechanics (and physical measurements), yet the model is perfectly deterministic and maintains a notion of locality.

AIM-927
Author[s]: Davi Geiger and Alan Yuille
Stereo and Eye Movement
January 1988
We describe a method to solve the stereo correspondence problem using controlled eye (or camera) movements. These eye movements essentially supply additional image frames which can be used to constrain the stereo matching. Because the eye movements are small, traditional methods of stereo with multiple frames will not work. We develop an alternative approach using a systematic analysis to define a probability distribution for the errors. Our matching strategy then matches the most probable points first, thereby reducing the ambiguity for the remaining matches. We demonstrate this algorithm with several examples.

AIM-928
Author[s]: James J. Little
Parallel Algorithms for Computer Vision on the Connection Machine
November 1986
The Connection Machine is a fine-grained parallel computer having up to 64K processors. It supports both local communication among the processors, which are situated in a two-dimensional mesh, and high-bandwidth communication among processors at arbitrary locations, using a message-passing network. We present solutions to a set of Image Understanding problems for the Connection Machine.
These problems were proposed by DARPA to evaluate architectures for Image Understanding systems, and are intended to comprise a representative sample of fundamental procedures to be used in Image Understanding. The solutions on the Connection Machine embody general methods for filtering images, determining connectivity among image elements, determining spatial relations of image elements, and computing graph properties, such as matchings and shortest paths.

AIM-930
Author[s]: J.R. Quinlan
Simplifying Decision Trees
December 1986
Many systems have been developed for constructing decision trees from collections of examples. Although the decision trees generated by these methods are accurate and efficient, they often suffer the disadvantage of excessive complexity that can render them incomprehensible to experts. It is questionable whether opaque structures of this kind can be described as knowledge, no matter how well they function. This paper discusses techniques for simplifying decision trees without compromising their accuracy. Four methods are described, illustrated, and compared on a testbed of decision trees from a variety of domains.

AIM-931
Author[s]: Shimon Ullman
An Approach To Object Recognition: Aligning Pictorial Descriptions
December 1986
This paper examines the problem of shape-based object recognition and proposes a new approach, the alignment of pictorial descriptions. The first part of the paper reviews general approaches to visual object recognition and divides these approaches into three broad classes: invariant properties methods, object decomposition methods, and alignment methods. The second part presents the alignment method. In this approach the recognition process is divided into two stages. The first determines the transformation in space that is necessary to bring the viewed object into alignment with possible object models. The second stage determines the model that best matches the viewed object.
The proposed alignment method also uses abstract descriptions, but unlike structural description methods it uses them pictorially, rather than in symbolic structural descriptions.

AITR-932
Author[s]: Steven Jeffrey Gordon
Automated Assembly Using Feature Localization
December 1986
Automated assembly of mechanical devices is studied by researching methods of operating assembly equipment in a variable manner; that is, systems which may be configured to perform many different assembly operations are studied. The general parts assembly operation involves the removal of alignment errors within some tolerance and without damaging the parts. Two methods for eliminating alignment errors are discussed: a priori suppression, and measurement and removal. Both methods are studied, with the more novel measurement-and-removal technique being studied in greater detail. During the study of this technique, a fast and accurate six degree-of-freedom position sensor based on a light-stripe vision technique was developed. Specifications for the sensor were derived from an assembly-system error analysis. Studies on extracting accurate information from the sensor by optimally reducing redundant information, filtering quantization noise, and careful calibration procedures were performed. Prototype assembly systems for both error elimination techniques were implemented and used to assemble several products. The assembly system based on the a priori suppression technique uses a number of mechanical assembly tools and software systems which extend the capabilities of industrial robots. The need for the tools was determined through an assembly task analysis of several consumer and automotive products. The assembly system based on the measurement-and-removal technique used the six degree-of-freedom position sensor to measure part misalignments. Robot commands for aligning the parts were automatically calculated based on the sensor data and executed.
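The two-stage alignment process described in AIM-931 above can be reduced to a minimal sketch (an illustrative 2-D similarity-transform version of our own, not the paper's full 3-D method): first solve for the transform from a minimal set of model/image correspondences, then verify a candidate model by checking how well the transform maps its remaining features onto the image.

```python
def align(m1, m2, i1, i2):
    """Stage 1 sketch: from two model/image point correspondences, solve
    for the unique 2-D rotation + scale + translation taking model points
    m1, m2 onto image points i1, i2.  Complex arithmetic packs rotation
    and scale into a single multiplier.  Returns a function that applies
    the recovered transform to any model point."""
    z1, z2, w1, w2 = (complex(*p) for p in (m1, m2, i1, i2))
    a = (w2 - w1) / (z2 - z1)   # rotation and scale as one complex number
    b = w1 - a * z1             # translation

    def apply(p):
        w = a * complex(*p) + b
        return (w.real, w.imag)
    return apply

def verify(apply, model_pts, image_pts, tol=1e-6):
    """Stage 2 sketch: accept the candidate model only if the transform
    maps its remaining feature points onto the observed image features."""
    return all(abs(complex(*apply(m)) - complex(*i)) <= tol
               for m, i in zip(model_pts, image_pts))
```

For example, aligning model points (0, 0), (1, 0) with image points (1, 1), (1, 2) recovers a 90-degree rotation plus translation, and `verify` then scores any further model features against the image.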
AIM-933A
Author[s]: Charles Rich and Richard C. Waters
The Programmer's Apprentice: A Program Design Scenario
November 1987
A scenario is used to illustrate the capabilities of a proposed Design Apprentice, focusing on the area of detailed, low-level design. Given a specification, the Design Apprentice will be able to make many of the design decisions needed to synthesize the required program. The Design Apprentice will also be able to detect various kinds of contradictions and omissions in a specification.

AITR-936
Author[s]: Stephen J. Buckley
Planning and Teaching Compliant Motion Strategies
January 1987
This thesis presents a new high-level robot programming system. The programming system can be used to construct strategies consisting of compliant motions, in which a moving robot slides along obstacles in its environment. The programming system is referred to as high-level because the user is spared many robot-level details, such as the specification of conditional tests, motion termination conditions, and compliance parameters. Instead, the user specifies task-level information, including a geometric model of the robot and its environment. The user may also have to specify some suggested motions. There are two main system components. The first component is an interactive teaching system which accepts motion commands from a user and attempts to build a compliant motion strategy using the specified motions as building blocks. The second component is an autonomous compliant motion planner, which is intended to spare the user from dealing with "simple" problems. The planner simplifies the representation of the environment by decomposing the configuration space of the robot into a finite state space, whose states are vertices, edges, faces, and combinations thereof. States are linked to each other by arcs, which represent reliable compliant motions. Using best-first search, states are expanded until a strategy is found from the start state to a goal state.
This component represents one of the first implemented compliant motion planners. The programming system has been implemented on a Symbolics 3600 computer, and tested on several examples. One of the resulting compliant motion strategies was successfully executed on an IBM 7565 robot manipulator.

AIM-937
Author[s]: Daniel P. Huttenlocher and Shimon Ullman
Recognizing Rigid Objects by Aligning Them with an Image
January 1987
This paper presents an approach to recognition where an object is first aligned with an image using a small number of pairs of model and image features, and then the aligned model is compared directly against the image. To demonstrate the method, we present some examples of recognizing flat rigid objects with arbitrary three-dimensional position, orientation, and scale, from a single scale-space segmentation of edge contours. The method is extended to the domain of non-flat objects as well.

AIM-939
Author[s]: Shahriar Negahdaripour and Berthold K.P. Horn
A Direct Method for Locating the Focus of Expansion
January 1987
We address the problem of recovering the motion of a monocular observer relative to a rigid scene. We do not make any assumptions about the shapes of the surfaces in the scene, nor do we use estimates of the optical flow or point correspondences. Instead, we exploit the spatial gradient and the time rate of change of brightness over the whole image, and explicitly impose the constraint that the surface of an object in the scene must be in front of the camera for it to be imaged.

AIM-940
Author[s]: Shahriar Negahdaripour
Ambiguities of a Motion Field
January 1987
We study the conditions under which a perspective motion field can have multiple interpretations. Furthermore, we show that in most cases the ambiguity in the interpretation of a motion field can be resolved by imposing the physical constraint that depth is positive over the image region onto which the surface projects.
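The positive-depth constraint invoked in AIM-939 and AIM-940 above can be illustrated with a small sketch (our own construction, restricted to pure translation with focal length 1, unlike the papers' more general treatment): a candidate translation is accepted only if the depths it implies at every image point are consistent and positive, which rules out sign-flipped interpretations of the same motion field.

```python
def depths_if_consistent(flow, trans, tol=1e-6):
    """Positive-depth test (sketch).  Under pure translation (U, V, W),
    the motion field at image point (x, y) with depth Z is
        u = (x*W - U) / Z,   v = (y*W - V) / Z   (focal length 1).
    Given flow samples {(x, y): (u, v)} and a candidate translation,
    recover Z at each point; return the list of depths, or None if any
    depth is non-positive or the two flow components disagree."""
    U, V, W = trans
    depths = []
    for (x, y), (u, v) in flow.items():
        # recover Z from whichever component is non-degenerate
        if abs(u) > tol:
            Z = (x * W - U) / u
        elif abs(v) > tol:
            Z = (y * W - V) / v
        else:
            continue  # zero flow constrains nothing here
        # both flow components must be explained by the same depth
        if abs(u * Z - (x * W - U)) > tol or abs(v * Z - (y * W - V)) > tol:
            return None
        if Z <= 0:
            return None  # surface would lie behind the camera
        depths.append(Z)
    return depths
```

A flow field generated by translation (1, 0, 0) is also explained algebraically by (-1, 0, 0) with all depths negated; the positivity check rejects the second reading, which is the essence of the disambiguation argument.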
AIM-941 Author[s]: Eric Saund Dimensionality-Reduction Using Connectionist Networks January 1987 This paper presents a method for using the self-organizing properties of connectionist networks of simple computing elements to discover a particular type of constraint in multidimensional data. The method performs dimensionality-reduction in a wide class of situations for which an assumption of linearity need not be made about the underlying constraint surface. We present a scheme for representing the values of continuous (scalar) variables in subsets of units. The backpropagation weight updating method for training connectionist networks is extended by the use of auxiliary pressure in order to coax hidden units into the prescribed representation for scalar-valued variables. AITR-942 Author[s]: Christopher Granger Atkeson Roles of Knowledge in Motor Learning February 1987 The goal of this thesis is to apply the computational approach to motor learning, i.e., describe the constraints that enable performance improvement with experience and also the constraints that must be satisfied by a motor learning system, describe what is being computed in order to achieve learning, and why it is being computed. The particular tasks used to assess motor learning are loaded and unloaded free arm movement, and the thesis includes work on rigid body load estimation, arm model estimation, optimal filtering for model parameter estimation, and trajectory learning from practice. Learning algorithms have been developed and implemented in the context of robot arm control. The thesis demonstrates some of the roles of knowledge in learning. Powerful generalizations can be made on the basis of knowledge of system structure, as is demonstrated in the load and arm model estimation algorithms. Improving the performance of parameter estimation algorithms used in learning involves knowledge of the measurement noise characteristics, as is shown in the derivation of optimal filters. 
Using trajectory errors to correct commands requires knowledge of how command errors are transformed into performance errors, i.e., an accurate model of the dynamics of the controlled system, as is demonstrated in the trajectory learning work. The performance demonstrated by the algorithms developed in this thesis should be compared with algorithms that use less knowledge, such as table-based schemes to learn arm dynamics, previous single trajectory learning algorithms, and much of traditional adaptive control. AIM-945 Author[s]: Carl E. Hewitt Offices are Open Systems February 1987 This paper takes a prescriptive stance on how to establish the information-processing foundations for taking action and making decisions in office work from an open system perspective. We propose due process as a central activity in organizational information processing. AIM-946 Author[s]: Alan Bawden Reification without Evaluation June 1988 Constructing self-referential systems, such as Brian Smith's 3-Lisp language, is actually more straightforward than you think. Anyone can build an infinite tower of processors (where each processor implements the processor at the next level below) by employing some common sense and one simple trick. In particular, it is not necessary to re-design quotation, take a stand on the relative merits of evaluation vs. normalization, or treat continuations as meta-level objects. This paper presents a simple programming language interpreter that illustrates how this can be done. By keeping its expression evaluator entirely separate from the mechanisms that implement its infinite tower, this interpreter avoids many troublesome aspects of previous self-referential programming languages. Given these basically straightforward techniques, processor towers might be easily constructed for a wide variety of systems to enable them to manipulate and reason about themselves. AIM-947 Author[s]: Bonnie J. 
Dorr Principle-Based Parsing for Machine Translation December 1987 Many syntactic parsing strategies for machine translation systems are based entirely on context-free grammars. These parsers require an overwhelming number of rules; thus, translation systems using rule-based parsers either have limited linguistic coverage, or they have poor performance due to formidable grammar size. This report shows how a principle-based parser with a 'co-routine' design improves parsing for translation. The parser consists of a skeletal structure-building mechanism that operates in conjunction with a linguistically based constraint module, passing control back and forth until a set of underspecified skeletal phrase-structures is converted into a fully instantiated parse tree. The modularity of the parsing design accommodates linguistic generalization, reduces the grammar size, allows extension to other languages, and is compatible with studies of human language processing. AIM-948 Author[s]: Steven D. Eppinger and Warren P. Seering Understanding Bandwidth Limitations in Robot Force Control August 1987 This paper provides an analytical overview of the dynamics involved in force control. Models are developed which demonstrate, for the one-axis explicit force control case, the effects on system closed-loop bandwidth of: a) robot system dynamics that are not usually considered in the controller design; b) drive-train and task nonlinearities; and c) actuator and controller dynamics. The merits and limitations of conventional solutions are weighed, and some new solutions are proposed. Conclusions are drawn which give insights into the relative importance of the effects discussed. AIM-949 Author[s]: Richard C. Waters Program Translation via Abstraction and Reimplementation December 1986 Essentially all program translators (both source-to-source translators and compilers) operate via transliteration and refinement. 
This approach is fundamentally limited in the quality of the output it can produce. In particular, it tends to be insufficiently sensitive to global features of the source program and too sensitive to irrelevant local details. This paper presents the alternate translation paradigm of abstraction and reimplementation, which is one of the goals of the Programmer's Apprentice project. A translator has been constructed which translates Cobol programs into Hibol (a very high level, business data processing language). A compiler has been designed which generates extremely efficient PDP-11 object code for Pascal programs. AIM-950 Author[s]: Kenneth Man-Kam Yip Extracting Qualitative Dynamics from Numerical Experiments March 1987 The Phase Space is a powerful tool for representing and reasoning about the qualitative behavior of nonlinear dynamical systems. Significant physical phenomena of the dynamical system---periodicity, recurrence, stability and the like---are reflected by outstanding geometric features of the trajectories in the phase space. This paper presents an approach for the automatic reconstruction of the full dynamical behavior from the numerical results by exploiting knowledge of Dynamical Systems Theory and techniques from computational geometry and computer vision. The approach is applied to an important class of dynamical systems, the area-preserving maps, which often arise from the study of Hamiltonian systems. AIM-951 Author[s]: Daniel S. Weld Comparative Analysis November 1987 Comparative analysis is the problem of predicting how a system will react to perturbations in its parameters, and why. For example, comparative analysis could be asked to explain why the period of an oscillating spring/block system would increase if the mass of the block were larger. 
This paper formalizes the problem of comparative analysis and presents a technique, differential qualitative (DQ) analysis, which solves the task, providing explanations suitable for use by design systems, automated diagnosis, intelligent tutoring systems, and explanation-based generalization. DQ analysis uses inference rules to deduce qualitative information about the relative change of system parameters. Multiple perspectives are used to represent relative change values over intervals of time. DQ analysis has been implemented, tested on a dozen examples, and proven sound. Unfortunately, the technique is incomplete; it always terminates, but does not always return an answer. AIM-952 Author[s]: Guy E. Blelloch and James J. Little Parallel Solutions to Geometric Problems on the Scan Model of Computation February 1988 This paper describes several parallel algorithms that solve geometric problems. The algorithms are based on a vector model of computation---the scan model. The purpose of this paper is both to show how the model can be used and to show a set of interesting algorithms, most of which have been implemented on the Connection Machine, a highly parallel single instruction multiple data (SIMD) computer. AIM-953 Author[s]: Henry M. Wu Scheme 86: An Architecture for Microcoding a Scheme Interpreter August 1988 I describe the design and implementation plans for a computer that is optimized as a microcoded interpreter for Scheme. The computer executes SCode, a typed-pointer representation. The memory system has low latency as well as high throughput. Multiple execution units in the processor complete complex operations in less than one memory cycle, allowing efficient use of memory bandwidth. The processor provides hardware support for tagged data objects and runtime type checking. 
I will discuss the motivation for this machine, its architecture, why it can interpret Scheme efficiently, and the computer-aided design tools developed for building this computer. AIM-954 Author[s]: Charles Rich and Richard C. Waters Formalizing Reusable Software Components in the Programmer's Apprentice February 1987 There has been a long-standing desire in computer science for a way of collecting and using libraries of standard software components. The limited success in actually doing this stems not from any resistance to the idea, nor from any lack of trying, but rather from the difficulty of choosing an appropriate formalism for representing components. For a formalism to be maximally useful, it must satisfy five key desiderata: expressiveness, convenient combinability, semantic soundness, machine manipulability, and programming language independence. The Plan Calculus formalism developed as part of the Programmer's Apprentice project satisfies each of these desiderata quite well. It does this by combining ideas from flowchart schemas, data abstraction, logical formalisms, and program transformations. The efficacy of the Plan Calculus has been demonstrated in part by a prototype program editor called the Knowledge-Based Editor in Emacs. This editor makes it possible for a programmer to construct a program rapidly and reliably by combining components represented as plans. AIM-955 Author[s]: Harold Abelson and Gerald Jay Sussman The Dynamicist's Workbench: I. Automatic Preparation of Numerical Experiments May 1987 The dynamicist's workbench is a system for automating some of the work of experimental dynamics. We describe a portion of our system that deals with the setting up and execution of numerical simulations. This part of the workbench includes a spectrum of computational tools---numerical methods, symbolic algebra, and semantic constraints. These tools are designed so that combined methods, tailored to particular problems, can be constructed. 
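The numerical-methods end of the workbench's tool spectrum can be illustrated with the kind of building block such a system composes. This is a generic sketch, not code from the workbench itself (which was a Lisp-based system); the fixed-step fourth-order Runge-Kutta integrator and all names here are illustrative assumptions.

```python
# A minimal numerical-experiment building block: a fixed-step fourth-order
# Runge-Kutta integrator for the scalar ODE dx/dt = f(t, x).

def rk4_step(f, t, x, h):
    """Advance the state x by one step of size h using classical RK4."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(f, t0, x0, h, steps):
    """Run a numerical experiment: integrate f from (t0, x0) for a fixed
    number of steps, returning the trajectory of states."""
    xs = [x0]
    t, x = t0, x0
    for _ in range(steps):
        x = rk4_step(f, t, x, h)
        t += h
        xs.append(x)
    return xs
```

A system in the spirit of the workbench would generate or select such an integrator automatically, attach semantic constraints (e.g., conserved quantities to monitor), and hand the resulting trajectory to later analysis stages.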
AIM-957 Author[s]: John Canny and Bruce Donald Simplified Voronoi Diagrams April 1987 The Voronoi diagram has proved to be a useful tool in a variety of contexts in computational geometry. Our interest here is in using the diagram to simplify the planning of collision-free paths for a robot among obstacles, the so-called generalized movers' problem. The Voronoi diagram, as usually defined, is a strong deformation retract of free space so that free space can be continuously deformed onto the diagram. In particular, any path in free space can be continuously deformed onto the diagram. This means that the diagram is complete for path planning, i.e., searching the original space for paths can be reduced to a search on the diagram. Reducing the dimension of the set to be searched usually reduces the time complexity of the search. Secondly, the diagram leads to robust paths, i.e., paths that are maximally clear of obstacles. AIM-958A Author[s]: Richard C. Waters Obviously Synchronizable Series Expressions: Part I: User's Manual for the OSS Macro Package March 1988 The benefits of programming in a functional style are well known. In particular, algorithms that are expressed as compositions of functions operating on series/vectors/streams of data elements are much easier to understand and modify than equivalent algorithms expressed as loops. Unfortunately, many programmers hesitate to use series expressions, because they are typically implemented very inefficiently. A Common Lisp macro package (OSS) has been implemented which supports a restricted class of series expressions, obviously synchronizable series expressions, which can be evaluated very efficiently by automatically converting them into loops. Using this macro package, programmers can obtain the advantages of expressing computations as series expressions without incurring any run-time overhead. AIM-958 Author[s]: Richard C. 
Waters Obviously Synchronizable Series Expressions: Part I: User's Manual for the OSS Macro Package October 1987 The benefits of programming in a functional style are well known. In particular, algorithms that are expressed as compositions of functions operating on series/vectors/streams of data elements are much easier to understand and modify than equivalent algorithms expressed as loops. Unfortunately, many programmers hesitate to use series expressions, because they are typically implemented very inefficiently. A Common Lisp macro package (OSS) has been implemented which supports a restricted class of series expressions, obviously synchronizable series expressions, which can be evaluated very efficiently by automatically converting them into loops. Using this macro package, programmers can obtain the advantages of expressing computations as series expressions without incurring any run-time overhead. AIM-959A Author[s]: Richard C. Waters Obviously Synchronizable Series Expressions: Part II: Overview of the Theory and Implementation March 1988 The benefits of programming in a functional style are well known. In particular, algorithms that are expressed as compositions of functions operating on series/vectors/streams of data elements are much easier to understand and modify than equivalent algorithms expressed as loops. Unfortunately, many programmers hesitate to use series expressions, because they are typically implemented very inefficiently, the prime source of inefficiency being the creation of intermediate series objects. A restricted class of series expressions, obviously synchronizable series expressions, is defined which can be evaluated very efficiently. At the cost of introducing restrictions which place modest limits on the series expressions which can be written, the restrictions guarantee that the creation of intermediate series objects is never necessary. 
This makes it possible to automatically convert obviously synchronizable series expressions into highly efficient loops using straightforward algorithms. AIM-959 Author[s]: Richard C. Waters Obviously Synchronizable Series Expressions: Part II: Overview of the Theory and Implementation November 1987 The benefits of programming in a functional style are well known. In particular, algorithms that are expressed as compositions of functions operating on series/vectors/streams of data elements are much easier to understand and modify than equivalent algorithms expressed as loops. Unfortunately, many programmers hesitate to use series expressions, because they are typically implemented very inefficiently, the prime source of inefficiency being the creation of intermediate series objects. A restricted class of series expressions, obviously synchronizable series expressions, is defined which can be evaluated very efficiently. At the cost of introducing restrictions which place modest limits on the series expressions which can be written, the restrictions guarantee that the creation of intermediate series objects is never necessary. This makes it possible to automatically convert obviously synchronizable series expressions into highly efficient loops using straightforward algorithms. AIM-962 Author[s]: Thomas R. Kennedy III Using Program Transformation to Improve Program Translation May 1987 Direct, construct-by-construct translation from one high level language to another often produces convoluted, unnatural, and unreadable results, particularly when the source and target languages support different models of programming. A more readable and natural translation can be obtained by augmenting the translator with a program transformation system. AITR-963 Author[s]: Gil J. 
Ettinger Hierarchical Object Recognition Using Libraries of Parameterized Model Sub-Parts May 1987 This thesis describes the development of a model-based vision system that exploits hierarchies of both object structure and object scale. The focus of the research is to use these hierarchies to achieve robust recognition based on effective organization and indexing schemes for model libraries. The goal of the system is to recognize parameterized instances of non-rigid model objects contained in a large knowledge base despite the presence of noise and occlusion. Robustness is achieved by developing a system that can recognize viewed objects that are scaled or mirror-image instances of the known models or that contain component sub-parts with different relative scaling, rotation, or translation than in models. The approach taken in this thesis is to develop an object shape representation that incorporates a component sub-part hierarchy, to allow for efficient and correct indexing into an automatically generated model library as well as for relative parameterization among sub-parts, and a scale hierarchy, to allow for a general-to-specific recognition procedure. After analysis of the issues and inherent tradeoffs in the recognition process, a system is implemented using a representation based on significant contour curvature changes and a recognition engine based on geometric constraints of feature properties. Examples of the system’s performance are given, followed by an analysis of the results. In conclusion, the system’s benefits and limitations are presented. AIM-964 Author[s]: Eric Sven Ristad Complexity of Human Language Comprehension December 1988 The goal of this article is to reveal the computational structure of modern principle-and-parameter (Chomskian) linguistic theories: what computational problems do these informal theories pose, and what is the underlying structure of those computations? 
To do this, I analyze the computational complexity of human language comprehension: what linguistic representation is assigned to a given sound? This problem is factored into smaller, interrelated (but independently statable) problems. For example, in order to understand a given sound, the listener must assign a phonetic form to the sound; determine the morphemes that compose the words in the sound; and calculate the linguistic antecedent of every pronoun in the utterance. I prove that these and other subproblems are all NP-hard, and that language comprehension is itself PSPACE-hard. AIM-965 Author[s]: Heinrich H. Bulthoff and Hanspeter A. Mallot Interaction of Different Modules in Depth Perception: Stereo and Shading May 1987 A method has been developed to measure the perceived depth of computer generated images of simple solid objects. Computer graphic techniques allow for independent control of different depth cues (stereo, shading, and texture) and thereby enable the investigator to study psychophysically the interaction of modules for depth perception. Accumulation of information from shading and stereo and vetoing of depth from shading by edge information have been found. Cooperativity and other types of interactions are discussed. If intensity edges are missing, as in a smooth-shaded surface, the image intensities themselves could be used for stereo matching. The results are compared with computer vision algorithms for both single modules and their integration for 3D vision. AIM-966 Author[s]: Robert J. Hall A Fully Abstract Semantics for Event-Based Simulation May 1987 This paper shows that, provided circuits contain no zero-delay loops, a tight relationship, full abstraction, exists between a natural event-based operational semantics for circuits and a natural denotational semantics for circuits based on causal functions on value timelines. 
The paper also discusses what goes wrong if zero-delay loops are allowed, and illustrates the application of this semantic relationship to modeling questions. AIM-967 Author[s]: Robert J. Hall, Richard H. Lathrop and Robert S. Kirk A Multiple Representation Approach to Understanding the Time Behavior of Digital Circuits May 1987 We put forth a multiple representation approach to deriving the behavioral model of a digital circuit automatically from its structure and the behavioral simulation models of its components. One representation supports temporal reasoning for composition and amplification, another supports simulation and a third helps to partition the translation problem. A working prototype, FUNSTRUX, is described. AITR-968 Author[s]: Harry Voorhees Finding Texture Boundaries in Images June 1987 Texture provides one cue for identifying the physical cause of an intensity edge, such as occlusion, shadow, surface orientation or reflectance change. Marr, Julesz, and others have proposed that texture is represented by small lines or blobs, called 'textons' by Julesz [1981a], together with their attributes, such as orientation, elongation, and intensity. Psychophysical studies suggest that texture boundaries are perceived where distributions of attributes over neighborhoods of textons differ significantly. However, these studies, which deal with synthetic images, neglect to consider two important questions: How can these textons be extracted from images of natural scenes? And how, exactly, are texture boundaries then found? This thesis proposes answers to these questions by presenting an algorithm for computing blobs from natural images and a statistic for measuring the difference between two sample distributions of blob attributes. As part of the blob detection algorithm, methods for estimating image noise are presented, which are applicable to edge detection as well. 
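The Voorhees thesis above measures the difference between two sample distributions of blob attributes, but the abstract does not reproduce the statistic itself. As a hedged stand-in, a two-sample Kolmogorov-Smirnov statistic illustrates the general idea of comparing attribute distributions across neighborhoods; the thesis's actual statistic may differ, and all names here are illustrative.

```python
# Illustrative two-sample statistic for comparing distributions of blob
# attributes (e.g., orientation or elongation) from two image neighborhoods:
# the Kolmogorov-Smirnov statistic, the maximum gap between empirical CDFs.

def ks_statistic(a, b):
    """Return the two-sample KS statistic for samples a and b (in [0, 1])."""
    a, b = sorted(a), sorted(b)
    values = sorted(set(a) | set(b))

    def ecdf(sample, v):
        # Fraction of the sample at or below v.
        return sum(1 for s in sample if s <= v) / len(sample)

    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in values)
```

A texture-boundary detector in this spirit would declare a boundary between two neighborhoods when the statistic over their blob attributes exceeds a significance threshold.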
AIM-970 Author[s]: Ed Gamble and Tomaso Poggio Visual Integration and Detection of Discontinuities: The Key Role of Intensity Edges October 1987 Integration of several vision modules is likely to be one of the keys to the power and robustness of the human visual system. The problem of integrating early vision cues is also emerging as a central problem in current computer vision research. In this paper we suggest that integration is best performed at the location of discontinuities in early processes, such as discontinuities in image brightness, depth, motion, texture and color. Coupled Markov Random Field models, based on Bayes estimation techniques, can be used to combine vision modalities with their discontinuities. These models generate algorithms that map naturally onto parallel fine-grained architectures such as the Connection Machine. We derive a scheme to integrate intensity edges with stereo depth and motion field information and show results on synthetic and natural images. The use of intensity edges to integrate other visual cues and to help discover discontinuities emerges as a general and powerful principle. AITR-972 Author[s]: Robert C. Berwick Principle-Based Parsing June 1987 During the past few years, there has been much discussion of a shift from rule-based systems to principle-based systems for natural language processing. This paper outlines the major computational advantages of principle-based parsing, its differences from the usual rule-based approach, and surveys several existing principle-based parsing systems used for handling languages as diverse as Warlpiri, English, and Spanish, as well as language translation. AIM-973 Author[s]: T. Poggio and C. Koch Synapses That Compute Motion June 1987 Biophysics of computation is a new field that attempts to characterize the role in information processing of the several biophysical mechanisms in neurons, synapses, and membranes that have been uncovered in recent years. 
In this article, we review a synaptic mechanism, based on the interaction between excitation and silent inhibition, that implements a veto-like operation. Synapses of this type may underlie selectivity to the direction of motion in the vertebrate retina. AITR-974 Author[s]: Michael D. Riley Time-Frequency Representations for Speech Signals May 1987 This work addresses two related questions. The first question is what joint time-frequency energy representations are most appropriate for auditory signals, in particular, for speech signals in sonorant regions. The quadratic transforms of the signal are examined, a large class that includes, for example, the spectrograms and the Wigner distribution. Quasi-stationarity is not assumed, since this would neglect dynamic regions. A set of desired properties is proposed for the representation: (1) shift-invariance, (2) positivity, (3) superposition, (4) locality, and (5) smoothness. Several relations among these properties are proved: shift-invariance and positivity imply the transform is a superposition of spectrograms; positivity and superposition are equivalent conditions when the transform is real; positivity limits the simultaneous time and frequency resolution (locality) possible for the transform, defining an uncertainty relation for joint time-frequency energy representations; and locality and smoothness trade off by the 2-D generalization of the classical uncertainty relation. The transform that best meets these criteria is derived, which consists of two-dimensionally smoothed Wigner distributions with (possibly oriented) 2-D Gaussian kernels. These transforms are then related to time-frequency filtering, a method for estimating the time-varying ‘transfer function’ of the vocal tract, which is somewhat analogous to cepstral filtering generalized to the time-varying case. Natural speech examples are provided. 
The second question addressed is how to obtain a rich, symbolic description of the phonetically relevant features in these time-frequency energy surfaces, the so-called schematic spectrogram. Time-frequency ridges, the 2-D analog of spectral peaks, are one feature that is proposed. If non-oriented kernels are used for the energy representation, then the ridge tops can be identified with zero-crossings in the inner product of the gradient vector and the direction of greatest downward curvature. If oriented kernels are used, the method can be generalized to give better orientation selectivity (e.g., at intersecting ridges) at the cost of poorer time-frequency locality. Many speech examples are given showing the performance for some traditionally difficult cases: semivowels and glides, nasalized vowels, consonant-vowel transitions, female speech, and imperfect transmission channels. AIM-975 Author[s]: Michael A. Gennert and Shahriar Negahdaripour Relaxing the Brightness Constancy Assumption in Computing Optical Flow June 1987 Optical flow is the apparent (or perceived) motion of image brightness patterns arising from relative motion of objects and observer. Estimation of the optical flow requires the application of two kinds of constraint: the flow field smoothness constraint and the brightness constancy constraint. The brightness constancy constraint permits one to match image brightness values across images, but is very restrictive. We propose replacing this constraint with a more general constraint, which permits a linear transformation between image brightness values. The transformation parameters are allowed to vary smoothly so that inexact matching is allowed. We describe the implementation on a highly parallel computer and present sample results. AIM-977 Author[s]: Sundar Narasimhan, David M. Siegel and John M. 
Hollerbach A Standard Architecture for Controlling Robots July 1988 This paper describes a fully implemented computational architecture that controls the Utah-MIT dextrous hand and other complex robots. Robots like the Utah-MIT hand are characterized by large numbers of actuators and sensors, and require high servo rates. Consequently, powerful and flexible computer architectures are needed to control them. The architecture described in this paper derives its power from the highly efficient real-time environment provided for its control processors, coupled with a development host that enables flexible program development. By mapping the memory of a dedicated group of processors into the address space of a host computer, efficient sharing of system resources between them is possible. The software is characterized by a few simple design concepts but provides the facilities out of which more powerful utilities like a multi-processor pseudoterminal emulator, a transparent and fast file server, and a flexible symbolic debugger could be constructed. AITR-978 Author[s]: Daniel Wayne Weise Formal Multilevel Hierarchical Verification of Synchronous MOS Circuits June 1987 I have designed and implemented a system for the multilevel verification of synchronous MOS VLSI circuits. The system, called Silica Pithecus, accepts the schematic of an MOS circuit and a specification of the circuit’s intended digital behavior. Silica Pithecus determines if the circuit meets its specification. If the circuit fails to meet its specification Silica Pithecus returns to the designer the reason for the failure. Unlike earlier verifiers which modelled primitives (e.g., transistors) as unidirectional digital devices, Silica Pithecus models primitives more realistically. Transistors are modelled as bidirectional devices of varying resistances, and nodes are modelled as capacitors. Silica Pithecus operates hierarchically, interactively, and incrementally. 
Major contributions of this research include a formal understanding of the relationship between different behavioral descriptions (e.g., signal, boolean, and arithmetic descriptions) of the same device, and a formalization of the relationship between the structure, behavior, and context of a device. Given these formal structures, my methods find sufficient conditions on the inputs of circuits which guarantee the correct operation of the circuit in the desired descriptive domain. These methods are algorithmic and complete. They also handle complex phenomena such as races and charge sharing. Informal notions such as races and hazards are shown to be derivable from the correctness conditions used by my methods. AITR-979 Author[s]: David Allen McAllester ONTIC: A Knowledge Representation System for Mathematics July 1987 Ontic is an interactive system for developing and verifying mathematics. Ontic’s verification mechanism is capable of automatically finding and applying information from a library containing hundreds of mathematical facts. Starting with only the axioms of Zermelo-Fraenkel set theory, the Ontic system has been used to build a data base of definitions and lemmas leading to a proof of the Stone representation theorem for Boolean lattices. The Ontic system has been used to explore issues in knowledge representation, automated deduction, and the automatic use of large data bases. AITR-980 Author[s]: James V. Mahoney Image Chunking: Defining Spatial Building Blocks for Scene Analysis August 1987 Rapid judgments about the properties and spatial relations of objects are the crux of visually guided interaction with the world. Vision begins, however, with essentially pointwise representations of the scene, such as arrays of pixels or small edge fragments. For adequate time-performance in recognition, manipulation, navigation, and reasoning, the processes that extract meaningful entities from the pointwise representations must exploit parallelism. 
This report develops a framework for the fast extraction of scene entities, based on a simple, local model of parallel computation. An image chunk is a subset of an image that can act as a unit in the course of spatial analysis. A parallel preprocessing stage constructs a variety of simple chunks uniformly over the visual array. On the basis of these chunks, subsequent serial processes locate relevant scene components and assemble detailed descriptions of them rapidly. This thesis defines image chunks that facilitate the most potentially time-consuming operations of spatial analysis---boundary tracing, area coloring, and the selection of locations at which to apply detailed analysis. Fast parallel processes for computing these chunks from images, and chunk-based formulations of indexing, tracing, and coloring, are presented. These processes have been simulated and evaluated on the Lisp Machine and the Connection Machine. AITR-982 Author[s]: Bruce Randall Donald Error Detection and Recovery for Robot Motion Planning with Uncertainty July 1987 Robots must plan and execute tasks in the presence of uncertainty. Uncertainty arises from sensing errors, control errors, and uncertainty in the geometry of the environment. The last, which is called model error, has received little previous attention. We present a framework for computing motion strategies that are guaranteed to succeed in the presence of all three kinds of uncertainty. The motion strategies comprise sensor-based gross motions, compliant motions, and simple pushing motions. AIM-983 Author[s]: W. Eric L. Grimson On the Recognition of Curved Objects July 1987 Determining the identity and pose of occluded objects from noisy data is a critical part of a system's intelligent interaction with an unstructured environment. 
Previous work has shown that local measurements of the position and surface orientation of small patches of an object's surface may be used in a constrained search process to solve this problem for the case of rigid polygonal objects using two-dimensional sensory data, or rigid polyhedral objects using three-dimensional data. This note extends the recognition system to deal with the problem of recognizing and locating curved objects. The extension is done in two dimensions, and applies to the recognition of two-dimensional objects from two-dimensional data, or to the recognition of three-dimensional objects in stable positions from two-dimensional data. AIM-984 Author[s]: Rodney A. Brooks, Anita M. Flynn and Thomas Marill Self Calibration of Motion and Stereo Vision for Mobile Robot Navigation August 1987 We report on experiments with a mobile robot using one vision process (forward motion vision) to calibrate another (stereo vision) without resorting to any external units of measurement. Both are calibrated to a velocity-dependent coordinate system which is natural to the task of obstacle avoidance. The foundations of these algorithms, in a world of perfect measurement, are quite elementary. The contribution of this work is to make them noise tolerant while remaining computationally simple. Both the algorithms and the calibration procedure are easy to implement and have shallow computational depth, making them (1) run at reasonable speed on moderate uni-processors, (2) appear practical to run continuously, maintaining an up-to-the-second calibration on a mobile robot, and (3) appear to be good candidates for massively parallel implementations. AIM-985 Author[s]: W. Eric L. Grimson On the Recognition of Parameterized Objects October 1987 Determining the identity and pose of occluded objects from noisy data is a critical step in interacting intelligently with an unstructured environment. 
Previous work has shown that local measurements of position and surface orientation may be used in a constrained search process to solve this problem, for the case of rigid objects, either two-dimensional or three-dimensional. This paper considers the more general problem of recognizing and locating objects that can vary in parameterized ways. We consider objects with rotational, translational, or scaling degrees of freedom, and objects that undergo stretching transformations. We show that the constrained search method can be extended to handle the recognition and localization of such generalized classes of object families. AIM-986 Author[s]: Harold Abelson and Gerald Jay Sussman Lisp: A Language for Stratified Design August 1987 We exhibit programs that illustrate the power of Lisp as a language for expressing the design and organization of computational systems. The examples are chosen to highlight the importance of abstraction in program design and to draw attention to the use of procedures to express abstractions. AIM-987 Author[s]: Alan Yuille Energy Functions for Early Vision and Analog Networks November 1987 This paper describes attempts to model the modules of early vision in terms of minimizing energy functions, in particular energy functions allowing discontinuities in the solution. It examines the success of using Hopfield-style analog networks for solving such problems. Finally it discusses the limitations of the energy function approach. AITR-988 Author[s]: Kenneth W. Haase, Jr. TYPICAL: A Knowledge Representation System for Automated Discovery and Inference August 1987 TYPICAL is a package for describing and making automatic inferences about a broad class of SCHEME predicate functions. These functions, called types following popular usage, delineate classes of primitive SCHEME objects, composite data structures, and abstract descriptions. 
TYPICAL types are generated by an extensible combinator language from either existing types or primitive terminals. These generated types are located in a lattice of predicate subsumption which captures necessary entailment between types; if satisfaction of one type necessarily entails satisfaction of another, the first type is below the second in the lattice. The inference made by TYPICAL computes the position of the new definition within the lattice and establishes it there. This information is then accessible both to later inferences and to other programs (reasoning systems, code analyzers, etc.) which may need the information for their own purposes. TYPICAL was developed as a representation language for the discovery program Cyrano; particular examples are given of TYPICAL’s application in the Cyrano program. AIM-989 Author[s]: Alan Yuille and Shimon Ullman Rigidity and Smoothness of Motion November 1987 Many theories of structure from motion divide the process into two parts which are solved using different assumptions. Smoothness of the velocity field is often assumed to solve the motion correspondence problem, and then rigidity is used to recover the 3D structure. We prove results showing that, in a statistical sense, smoothness of the velocity field follows from rigidity of the motion. AITR-992 Author[s]: David L. Brock Enhancing the Dexterity of a Robot Hand Using Controlled Slip May 1987 Humans can effortlessly manipulate objects in their hands, dexterously sliding and twisting them within their grasp. Robots, however, have none of these capabilities; they simply grasp objects rigidly in their end effectors. To investigate this common form of human manipulation, an analysis of controlled slipping of a grasped object within a robot hand was performed. The Salisbury robot hand demonstrated many of these controlled slipping techniques, illustrating many results of this analysis. 
First, the possible slipping motions were found as a function of the location, orientation, and types of contact between the hand and object. Second, for a given grasp, the contact types were determined as a function of the grasping force and the external forces on the object. Finally, by changing the grasping force, the robot modified the constraints on the object and effected controlled slipping motions. AITR-993 Author[s]: Michael B. Kashket A Government-Binding Based Parser for Warlpiri, a Free-Word Order Language January 1987 Free-word order languages have long posed significant problems for standard parsing algorithms. This thesis presents an implemented parser, based on Government-Binding (GB) theory, for a particular free-word order language, Warlpiri, an aboriginal language of central Australia. The words in a sentence of a free-word order language may swap about relatively freely with little effect on meaning: the permutations of a sentence mean essentially the same thing. It is assumed that this similarity in meaning is directly reflected in the syntax. The parser presented here properly processes free word order because it assigns the same syntactic structure to the permutations of a single sentence. The parser also handles fixed word order, as well as other phenomena. On the view presented here, there is no such thing as a “configurational” or “non-configurational” language. Rather, there is a spectrum of languages that are more or less ordered. The operation of this parsing system is quite different in character from that of more traditional rule-based parsing systems, e.g., context-free parsers. In this system, parsing is carried out via the construction of two different structures, one encoding precedence information and one encoding hierarchical information. This bipartite representation is the key to handling both free- and fixed-order phenomena. This thesis first presents an overview of the portion of Warlpiri that can be parsed. 
Following this is a description of the linguistic theory on which the parser is based. The chapter after that describes the representations and algorithms of the parser. In conclusion, the parser is compared to related work. The appendix contains a substantial list of test cases – both grammatical and ungrammatical – that the parser has actually processed. AIM-994 Author[s]: Berthold K.P. Horn Relative Orientation September 1987 Before corresponding points in images taken with two cameras can be used to recover distances to objects in a scene, one has to determine the position and orientation of one camera relative to the other. This is the classic photogrammetric problem of relative orientation, central to the interpretation of binocular stereo information. Described here is a particularly simple iterative scheme for recovering relative orientation that, unlike existing methods, does not require a good initial guess for the baseline and the rotation. AITR-995 Author[s]: Feng Zhao An O(N) Algorithm for Three-Dimensional N-Body Simulations October 1987 We develop an algorithm that computes the gravitational potentials and forces on N point masses interacting in three-dimensional space. The algorithm, based on analytical techniques developed by Rokhlin and Greengard, runs in order N time. In contrast to other fast N-body methods such as tree codes, which only approximate the interaction potentials and forces, this method is exact – it computes the potentials and forces to within any prespecified tolerance up to machine precision. We present an implementation of the algorithm for a sequential machine. We numerically verify the algorithm, and compare its speed with that of an O(N^2) direct force computation. We also describe a parallel version of the algorithm that runs on the Connection Machine in order O(log N) time. We compare experimental results with those of the sequential implementation and discuss how to minimize communication overhead on the parallel machine. 
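For readers unfamiliar with the baseline that AITR-995 benchmarks against, the O(N^2) direct force computation can be sketched as follows. This is a minimal illustration in Python, not code from the report; the function name, array layout, and unit gravitational constant are assumptions made for the example.

```python
import numpy as np

def direct_forces(pos, mass, G=1.0):
    """Direct O(N^2) pairwise gravitational forces on N point masses.

    pos:  (N, 3) float array of positions; mass: (N,) float array of masses.
    Returns an (N, 3) array of net forces, F_i = G m_i sum_j m_j d_ij / |d_ij|^3.
    """
    n = len(mass)
    forces = np.zeros((n, 3))
    for i in range(n):
        d = pos - pos[i]                  # vectors from body i to every body j
        r2 = np.einsum('ij,ij->i', d, d)  # squared distances |d_ij|^2
        r2[i] = np.inf                    # exclude the self-interaction term
        forces[i] = G * mass[i] * ((mass / r2**1.5)[:, None] * d).sum(axis=0)
    return forces
```

Every pair is visited, hence the quadratic cost; the fast multipole approach of the thesis avoids this by summarizing the far field analytically while still meeting any prespecified tolerance.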
AIM-996 Author[s]: Rado Jasinschi and Alan Yuille Non-Rigid Motion and Regge Calculus November 1987 We study the problem of recovering the structure from motion of figures which are allowed to perform a controlled non-rigid motion. We use Regge Calculus to approximate a general surface by a net of triangles. The non-rigid flexing motion we deal with corresponds to keeping the triangles rigid and allowing bending only at the joins between triangles. We show that depth information can be obtained by using a modified version of the Incremental Rigidity Scheme devised by Ullman (1984). We modify this scheme to allow for flexing motion and call our version the Incremental Semirigidity Scheme. AIM-997 Author[s]: Matthew Halfant and Gerald Jay Sussman Abstraction in Numerical Methods October 1987 We illustrate how the liberal use of high-order procedural abstractions and infinite streams helps us to express some of the vocabulary and methods of numerical analysis. We develop a software toolbox encapsulating the technique of Richardson extrapolation, and we apply these tools to the problems of numerical integration and differentiation. By separating the idea of Richardson extrapolation from its use in particular circumstances, we indicate how numerical programs can be written that exhibit the structure of the ideas from which they are formed. AIM-998 Author[s]: Bonnie Jean Dorr UNITRAN: An Interlingual Machine Translation System December 1987 This report describes the UNITRAN (UNIversal TRANslator) system, an implementation of a principle-based approach to natural language translation. The system is "interlingual", i.e., the model is based on universal principles that hold across all languages; the distinctions among languages are handled by settings of parameters associated with the universal principles. Interaction effects of linguistic principles are handled by the system so that the programmer does not need to specifically spell out the details of rule applications. 
Only a small set of principles covers all languages; thus, the unmanageable grammar size of alternative approaches is no longer a problem. AIM-999 Author[s]: Gerald Roylance Expressing Mathematical Subroutines Constructively November 1987 The typical subroutines that compute $\sin(x)$ and $\exp(x)$ bear little resemblance to our mathematical knowledge of these functions: they are composed of concrete arithmetic expressions that include many mysterious numerical constants. Instead of programming these subroutines conventionally, we can express their construction using symbolic ideas such as periodicity and Taylor series. Such an approach has many advantages: the code is closer to the mathematical basis of the function, less vulnerable to errors, and trivially adaptable to various precisions.
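The constructive style that AIM-999 advocates can be illustrated with a short sketch: build $\sin(x)$ from its periodicity (for range reduction) and its Taylor series, with no opaque tuned constants. This is an assumed Python rendering for illustration only, not the memo's own code, and the function name and term count are choices made for the example.

```python
import math

def constructive_sin(x, terms=12):
    """sin(x) expressed constructively: periodicity, then a Taylor series.

    Series: sin(x) = sum_{n>=0} (-1)^n x^(2n+1) / (2n+1)!
    """
    # Periodicity: sin(x + 2*pi*k) = sin(x), so fold x into [-pi, pi].
    x = math.remainder(x, 2.0 * math.pi)
    term, total = x, x                      # the n = 0 term of the series
    for n in range(1, terms):
        # Ratio between consecutive terms: -(x^2) / ((2n)(2n+1)).
        term *= -x * x / ((2 * n) * (2 * n + 1))
        total += term
    return total
```

Because the series appears as a recurrence rather than as hard-coded coefficients, adapting the routine to a different precision is just a matter of changing the term count, which is the adaptability advantage the abstract claims.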

 Computer Science and Artificial Intelligence Laboratory (CSAIL) The Stata Center, Building 32 - 32 Vassar Street - Cambridge, MA 02139 - USA tel:+1-617-253-0073 - publications@csail.mit.edu