
CSAIL Digital Archive - Artificial Intelligence Laboratory Series
Publications 1000 through 1099

AI Publications. Last update: Sun Mar 19 05:05:02 2006


Author[s]: Bonnie Jean Dorr

UNITRAN: A Principle-Based Approach to Machine Translation

December 1987



Machine translation has been a particularly difficult problem in the area of Natural Language Processing for over two decades. Early approaches to translation failed in part because the interaction effects of complex phenomena made translation appear unmanageable. Later approaches to the problem have succeeded (although only bilingually), but are based on many language-specific rules of a context-free nature. This report presents an alternative approach to natural language translation that relies on principle-based descriptions of grammar rather than rule-oriented descriptions. The model that has been constructed is based on abstract principles as developed by Chomsky (1981) and several other researchers working within the “Government and Binding” (GB) framework. Thus, the grammar is viewed as a modular system of principles rather than a large set of ad hoc language-specific rules.


Author[s]: Aaron F. Bobick

Natural Object Categorization

November 1987



This thesis addresses the problem of categorizing natural objects. To provide a criterion for categorization we propose that the purpose of a categorization is to support the inference of unobserved properties of objects from the observed properties. Because no such set of categories can be constructed in an arbitrary world, we present the Principle of Natural Modes as a claim about the structure of the world. We first define an evaluation function that measures how well a set of categories supports the inference goals of the observer. Entropy measures for property uncertainty and category uncertainty are combined through a free parameter that reflects the goals of the observer. Natural categorizations are shown to be those that are stable with respect to this free parameter. The evaluation function is tested in the domain of leaves and is found to be sensitive to the structure of the natural categories corresponding to the different species. We next develop a categorization paradigm that utilizes the categorization evaluation function in recovering natural categories. A statistical hypothesis generation algorithm is presented that is shown to be an effective categorization procedure. Examples drawn from several natural domains are presented, including data known to be a difficult test case for numerical categorization techniques. We next extend the categorization paradigm such that multiple levels of natural categories are recovered; by recursively invoking the categorization procedure, both the genera and the species are recovered in a population of anaerobic bacteria. Finally, a method is presented for evaluating the utility of features in recovering natural categories. This method also provides a mechanism for determining which features are constrained by the different processes present in a world with multiple modes.


Author[s]: Charles Rich and Richard C. Waters

The Programmer's Apprentice Project: A Research Overview

November 1987



The goal of the Programmer's Apprentice project is to develop a theory of how expert programmers analyze, synthesize, modify, explain, specify, verify, and document programs. This research goal overlaps both artificial intelligence and software engineering. From the viewpoint of artificial intelligence, we have chosen programming as a domain in which to study fundamental issues of knowledge representation and reasoning. From the viewpoint of software engineering, we seek to automate the programming process by applying techniques from artificial intelligence.


Author[s]: Charles Rich

Inspection Methods in Programming: Cliches and Plans

December 1987



Inspection methods are a kind of engineering problem solving based on the recognition and use of standard forms or cliches. Examples are given of program analysis, program synthesis and program validation by inspection. A formalism, called the Plan Calculus, is defined and used to represent programming cliches in a convenient, canonical, and programming-language-independent fashion.


Author[s]: Eric W. Aboaf, Christopher G. Atkeson and David J. Reinkensmeyer

Task-Level Robot Learning: Ball Throwing

December 1987



We are investigating how to program robots so that they learn tasks from practice. One method, task-level learning, provides advantages over simply perfecting models of the robot's lower level systems. Task-level learning can compensate for the structural modeling errors of the robot's lower level control systems and can speed up the learning process by reducing the degrees of freedom of the models to be learned. We demonstrate two general learning procedures---fixed-model learning and refined-model learning---on a ball-throwing robot system.


Author[s]: Daphna Weinshall

Qualitative Depth and Shape from Stereo, in Agreement with Psychophysical Evidence

December 1987



Obtaining exact depth from binocular disparities is hard if camera calibration is needed. We will show that qualitative depth information can be obtained from stereo disparities with almost no computations and with no prior knowledge (or computation) of camera parameters. We derive two expressions that order all matched points in the images in two distinct depth-consistent ways from image coordinates only. One is a tilt-related order $\lambda$, the other is a depth-related order $\chi$. Using $\lambda$ demonstrates some anomalies and unusual characteristics that have been observed in psychophysical experiments. The same approach is applied to qualitatively estimate changes in the curvature of a contour on the surface of an object, with either $x$- or $y$-coordinate fixed.
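The central idea, a depth-consistent ordering recovered without calibration, can be illustrated with a minimal sketch. The report's orders $\lambda$ and $\chi$ are derived from image coordinates of matched pairs; the sketch below instead uses the simpler textbook fact that, for roughly parallel cameras, larger disparity means a nearer point, so it is an illustrative assumption, not the paper's construction, and the point data are made up.

```python
# Illustrative sketch: a calibration-free, qualitative depth ordering.
# Assumes roughly parallel cameras, where larger horizontal disparity
# means a nearer point (an assumption; not the report's lambda/chi orders).

def depth_order(matches):
    """matches: list of (name, x_left, x_right) image x-coordinates.
    Returns names ordered from nearest to farthest."""
    disparity = lambda m: m[1] - m[2]  # x_left - x_right
    return [name for name, _, _ in sorted(matches, key=disparity, reverse=True)]

# hypothetical matched points
points = [("tree", 120.0, 95.0), ("house", 80.0, 74.0), ("hill", 40.0, 39.0)]
print(depth_order(points))  # nearest first
```

No camera parameters appear anywhere in the computation, which is the point of the qualitative approach.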


Author[s]: Jacob Katzenelson and Richard Zippel

Software Structuring Principles for VLSI CAD

December 1987



A frustrating aspect of the frequent changes to large VLSI CAD systems is that so few of the existing programs can be reused. It takes too much time and effort to find the reusable pieces and recast them for the new use. Our thesis is that such systems can be designed for reusability by designing the software as layers of problem-oriented languages, which are implemented by suitably extending a "base" language. We illustrate this methodology with respect to VLSI CAD programs and a particular language layer: a language for handling networks. We present two different implementations. The first uses UNIX and Enhanced C. The second approach uses Common Lisp on a Lisp machine.


Author[s]: Alfonso Garcia Reynoso

Structural Dynamics Model of a Cartesian Robot

October 1985



Methods are developed for predicting vibration response characteristics of systems which change configuration during operation. A cartesian robot, an example of such a position-dependent system, served as a test case for these methods and was studied in detail. The chosen system model was formulated using the technique of Component Mode Synthesis (CMS). The model assumes that the system is slowly varying, and connects the carriages to each other and to the robot structure at the slowly varying connection points. The modal data required for each component is obtained experimentally in order to get a realistic model. The analysis results in prediction of vibrations that are produced by the inertia forces as well as gravity and friction forces which arise when the robot carriages move with some prescribed motion. Computer simulations and experimental determinations are conducted in order to calculate the vibrations at the robot end-effector. Comparisons are shown to validate the model in two ways: for fixed configuration the mode shapes and natural frequencies are examined, and then for changing configuration the residual vibration at the end of the move is evaluated. A preliminary study was done on a geometrically nonlinear system which also has position-dependency. The system consisted of a flexible four-bar linkage with elastic input and output shafts. The behavior of the rocker-beam is analyzed for different boundary conditions to show how some limiting cases are obtained. A dimensional analysis leads to an evaluation of the consequences of dynamic similarity on the resulting vibration.


Author[s]: Jintae Lee

Knowledge Base Integration: What Can We Learn from Database Integration Research?

January 1988



This paper examines the issues and the solutions that have been studied in database (DB) integration research and tries to draw lessons from them for knowledge base (KB) integration.


Author[s]: Neil C. Singer and Warren P. Seering

Utilizing Dynamic Stability to Orient Parts

February 1988



The intent of this research is to study the dynamic behavior of a solid body resting on a moving surface. Results of the study are then used to propose methods for controlling the orientation of parts in preparation for automatic assembly. Two dynamic models are discussed in detail. The first examines the impacts required to cause reorientation of a part. The second investigates the use of oscillatory motion to selectively reorient parts. This study demonstrates that the dynamic behaviors of solid bodies, under the conditions mentioned above, vary considerably with small changes in geometry or orientation.


Author[s]: Kenneth Haase

Soft Objects: A Paradigm for Object Oriented Programming

March 1990



This paper introduces soft objects, a new paradigm for object oriented programming. This paradigm replaces the traditional notion of object classes with the specification of transforming procedures which transform simpler objects into more complicated objects. These transforming procedures incrementally construct new objects by adding new state or providing handlers for new messages. Unlike other incremental approaches (e.g. the inherited exist handlers of Object Logo [Drescher, 1987]), transforming procedures are strict functions which always return new objects; rather than conflating objects and object abstractions (classes), soft objects distinctly separate objects and their abstractions. The composition of these transforming procedures replaces the inheritance schemes of class oriented approaches; the order of composition of transforming procedures makes explicit the inheritance indeterminacies introduced by multiple super classes. Issues regarding semantics, efficiency, and security are discussed in the context of several alternative implementation models and the code of a complete implementation is provided in an appendix.


Author[s]: Bonnie J. Dorr

A Lexical Conceptual Approach to Generation for Machine Translation

January 1988



Current approaches to generation for machine translation make use of direct-replacement templates, large grammars, and knowledge-based inferencing techniques. Not only are the rules language-specific, but they are too simplistic to handle sentences that exhibit more complex phenomena. Furthermore, these systems are not easily extendable to other languages because the rules that map the internal representation to the surface form are entirely dependent on both the domain of the system and the language being generated. Finally, an adequate interlingual representation has not yet been discovered; thus, knowledge-based inferencing is necessary and syntactic cross-linguistic generalization cannot be exploited. This report introduces a plan for the development of a theoretically based computational scheme of natural language generation for a translation system. The emphasis of the project is the mapping from the lexical conceptual structure of sentences to an underlying or "base" syntactic structure called deep structure. This approach tackles the problems of thematic and structural divergence, i.e., it allows generation of target language sentences that are not thematically or structurally equivalent to their conceptually equivalent source language counterparts. Two other, more secondary tasks, construction of a dictionary and mapping from deep structure to surface structure, will also be discussed. The generator operates on a constrained grammatical theory rather than on a set of surface level transformations. If the endeavor succeeds, there will no longer be a need for large, detailed grammars; general knowledge-based inferencing will not be necessary; lexical selection and syntactic realization will be facilitated; and the model will be general enough for extension to other languages.


Author[s]: Rodney A. Brooks, Jonathan Connell and Peter Ning

Herbert: A Second Generation Mobile Robot

January 1988



In mobile robot research we believe that the structure of the platform, its capabilities, the choice of sensors, their capabilities, and the choice of processors, both onboard and offboard, greatly constrain the direction of research activity centered on the platform. We examine the design and tradeoffs in a low-cost mobile platform we have built, paying careful attention to issues of sensing, manipulation, onboard processing, and debuggability of the total system. The robot, named Herbert, is a completely autonomous mobile robot with an onboard parallel processor and special hardware support for the subsumption architecture [Brooks (1986)], an onboard manipulator, and a laser range scanner. All processors are simple low-speed 8-bit microprocessors. The robot is capable of real-time three-dimensional vision while simultaneously carrying out manipulator and navigation tasks.


Author[s]: Yishai A. Feldman and Charles Rich

Pattern-Directed Invocation with Changing Equations

May 1988



The interaction of pattern-directed invocation with equality in an automated reasoning system gives rise to a completeness problem. In such systems, a demon needs to be invoked not only when its pattern exactly matches a term in the reasoning database, but also when it is possible to create a variant that matches. An incremental algorithm has been developed which solves this problem without generating all possible variants of terms in the database. The algorithm is shown to be complete for a class of demons, called transparent demons, in which there is a well-behaved logical relationship between the pattern and the body of the demon.


Author[s]: Peter Heinrich Meckl

Control of Vibration in Mechanical Systems Using Shaped Reference Inputs

January 1988



Dynamic systems which undergo rapid motion can excite natural frequencies that lead to residual vibration at the end of motion. This work presents a method to shape force profiles that reduce excitation energy at the natural frequencies in order to reduce residual vibration for fast moves. Such profiles are developed using a ramped sinusoid function and its harmonics, choosing coefficients to reduce spectral energy at the natural frequencies of the system. To improve robustness with respect to parameter uncertainty, spectral energy is reduced for a range of frequencies surrounding the nominal natural frequency. An additional set of versine profiles is also constructed to permit motion at constant speed for velocity-limited systems. These shaped force profiles are incorporated into a simple closed-loop system with position and velocity feedback. The force input is doubly integrated to generate a shaped position reference for the controller to follow. This control scheme is evaluated on the MIT Cartesian Robot. The shaped inputs generate motions with minimum residual vibration when actuator saturation is avoided. Feedback control compensates for the effect of friction. Using only knowledge of the natural frequencies of the system to shape the force inputs, vibration can also be attenuated in modes which vibrate in directions other than the motion direction. When moving several axes, the use of shaped inputs allows minimum residual vibration even when the natural frequencies are dynamically changing by a limited amount.


Author[s]: W. Eric L. Grimson

The Combinatorics of Object Recognition in Cluttered Environments Using Constrained Search

February 1988



When clustering techniques such as the Hough transform are used to isolate likely subspaces of the search space, empirical performance in cluttered scenes improves considerably. In this paper we establish formal bounds on the combinatorics of this approach. Under some simple assumptions, we show that the expected complexity of recognizing isolated objects is quadratic in the number of model and sensory fragments, but that the expected complexity of recognizing objects in cluttered environments is exponential in the size of the correct interpretation. We also provide formal bounds on the efficacy of using the Hough transform to preselect likely subspaces, showing that the problem remains exponential, but that in practical terms, the size of the problem is significantly decreased.
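The preselection step can be sketched as follows: every model/data fragment pairing votes for a coarse bin in transformation space, and only well-supported bins are handed to the expensive constrained search. This is a generic Hough-style voting scheme over translation bins only, an illustrative assumption rather than the paper's exact formulation (which also treats rotation and sensor error); the point features, bin size, and vote threshold are made up.

```python
# Illustrative Hough-style preselection: every model/data fragment pairing
# votes for a coarse translation bin; only bins with enough consistent
# votes are passed to the expensive constrained search.
# Point features, bin size, and vote threshold are illustrative assumptions.
from collections import defaultdict

def hough_preselect(model_pts, data_pts, bin_size=4.0, min_votes=3):
    votes = defaultdict(list)
    for m in model_pts:
        for d in data_pts:
            tx, ty = d[0] - m[0], d[1] - m[1]      # implied translation
            key = (round(tx / bin_size), round(ty / bin_size))
            votes[key].append((m, d))
    # keep only subspaces with enough mutually consistent pairings
    return {k: v for k, v in votes.items() if len(v) >= min_votes}

model = [(0, 0), (1, 0), (0, 2)]
scene = [(10, 5), (11, 5), (10, 7), (30, -2)]  # model shifted by (10, 5) plus clutter
buckets = hough_preselect(model, scene)        # one well-supported subspace survives
```

The search inside each surviving bucket remains exponential in the interpretation size, as the analysis above shows, but the preselection discards most of the clutter pairings up front.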


Author[s]: Richard C. Waters

System Validation via Constraint Modeling

February 1988



Constraint modeling could be a very important system validation method, because its abilities are complementary to both testing and code inspection. In particular, even though the ability of constraint modeling to find errors is limited by the simplifications which are introduced when making a constraint model, constraint modeling can locate important classes of errors which are caused by non-local faults (i.e., are hard to find with code inspection) and manifest themselves as failures only in unusual situations (i.e., are hard to find with testing).


Author[s]: Pyung H. Chang

A Dexterity Measure for the Kinematic Control of Robot Manipulator with Redundancy

February 1988



We have derived a new performance measure, the product of the minors of the Jacobian matrix, that tells how far kinematically redundant manipulators are from singularity. It was demonstrated that previously used performance measures, namely the condition number and the manipulability measure, allowed the arm to change configurations, causing repeatability problems and discontinuity effects. The new measure, on the other hand, assures that the arm solution remains in the same configuration, thus effectively preventing these problems.


Author[s]: Pyung H. Chang

Analysis and Control of Robot Manipulators with Kinematic Redundancy

May 1987



A closed-form solution formula for the kinematic control of manipulators with redundancy is derived, using the Lagrangian multiplier method. A differential relationship equivalent to the Resolved Motion Method has also been derived. The proposed method is proved to provide the exact equilibrium state for the Resolved Motion Method. This exactness fixes the repeatability problem in the Resolved Motion Method, and establishes a fixed transformation from the workspace to the joint space. Owing to this exactness, the method is also demonstrated to give more accurate trajectories than the Resolved Motion Method. In addition, a new performance measure for redundancy control has been developed. This measure, if used with kinematic control methods, helps achieve dexterous movements, including singularity avoidance. Compared to other measures, such as the manipulability measure and the condition number, this measure tends to give superior performance in terms of preserving the repeatability property and providing smoother joint velocity trajectories. Using the fixed-transformation property, Taylor’s Bounded Deviation Paths Algorithm has been extended to redundant manipulators.


Author[s]: David W. Jacobs

The Use of Grouping in Visual Object Recognition

January 1988



The report describes a recognition system called GROPER, which performs grouping by using distance and relative orientation constraints that estimate the likelihood of different edges in an image coming from the same object. The thesis presents both a theoretical analysis of the grouping problem and a practical implementation of a grouping system. GROPER also uses an indexing module to allow it to make use of knowledge of different objects, any of which might appear in an image. We test GROPER by comparing it to a similar recognition system that does not use grouping.


Author[s]: Christopher G. Atkeson, Eric W. Aboaf, Joseph McIntyre and David J. Reinkensmeyer

Model-Based Robot Learning

April 1988



Models play an important role in learning from practice. Models of a controlled system can be used as learning operators to refine commands on the basis of performance errors. The examples used to demonstrate this include positioning a limb at a visual target and following a defined trajectory. Better models lead to faster correction of command errors, requiring less practice to attain a given level of performance. The benefits of accurate modeling are improved performance in all aspects of control, while the risks of inadequate modeling are poor learning performance, or even degradation of performance with practice.
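A minimal sketch of a model used as a learning operator, in the spirit described above: an (imperfect) inverse model converts each trial's performance error into a command correction, so a closer model yields faster convergence over practice trials. The scalar plant and model gains below are illustrative assumptions, not the report's tasks.

```python
# Sketch of a model used as a learning operator: each practice trial's
# performance error is mapped through an (imperfect) inverse model into a
# command correction. The scalar gains below are illustrative assumptions.

def practice(command, target, plant, inv_model, trials=5):
    """Refine a command over repeated practice trials."""
    errors = []
    for _ in range(trials):
        error = target - plant(command)   # observed performance error
        errors.append(abs(error))
        command += inv_model(error)       # model-based command refinement
    return command, errors

plant = lambda u: 0.8 * u       # true (unknown) system gain
inv_model = lambda e: e / 0.9   # inverse of a slightly wrong model (gain 0.9)
cmd, errs = practice(0.0, 1.0, plant, inv_model)
# errors shrink every trial because the model, though imperfect, is close
```

With a more accurate inverse model (gain nearer 0.8) the error shrinks faster per trial, which mirrors the abstract's claim that better models require less practice; a model wrong by more than a factor of two here would make practice diverge.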


Author[s]: Jonathan H. Connell

A Behavior-Based Arm Controller

June 1988



In this paper we describe a working, implemented controller for a real, physical mobile robot arm. The controller is composed of a collection of 15 independent behaviors which run, in real time, on a set of 8 loosely coupled on-board 8-bit microprocessors. We describe how these behaviors cooperate to actually seek out and retrieve objects using local sensory data. We also discuss the methodology used to decompose this collection task and the types of spatial representation and reasoning used by the system.


Author[s]: Gary L. Drescher

Demystifying Quantum Mechanics: A Simple Universe with Quantum Uncertainty

December 1988



An artificial universe is defined that has entirely deterministic laws with exclusively local interactions, and that exhibits the fundamental quantum uncertainty phenomenon: superposed states mutually interfere, but only to the extent that no observation distinguishes among them. Showing how such a universe is possible elucidates interpretational issues of actual quantum mechanics. The artificial universe is a much-simplified version of Everett's real-world model (the so-called multiple-worlds formulation). In the artificial world, as in Everett's model, the tradeoff between interference and observation is deducible from the universe's formalism. Artificial-world examples analogous to the quantum double-slit experiment and the EPR experiment are presented.


Author[s]: Neil Singer and Warren Seering

Preshaping Command Inputs to Reduce System Vibration

January 1988



A method is presented for generating shaped command inputs which significantly reduce or eliminate endpoint vibration. Desired system inputs are altered so that the system completes the requested move without residual vibration. A short move time penalty is incurred (on the order of one period of the first mode of vibration). The preshaping technique is robust under system parameter uncertainty and may be applied to both open and closed loop systems. The Draper Laboratory's Space Shuttle Remote Manipulator System simulator (DRS) is used to evaluate the method. Results show a factor of 25 reduction in endpoint residual vibration for typical moves of the DRS.
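A hedged sketch of the idea: convolve the desired command with a short impulse sequence whose residual-vibration contributions cancel at the system's natural frequency, at the cost of lengthening the move by part of one vibration period. The two-impulse "zero-vibration" formula used below is the standard textbook one and is assumed here, not taken verbatim from the report.

```python
# Sketch of command preshaping: convolve the desired input with two
# impulses whose residual-vibration contributions cancel at the natural
# frequency. The standard "zero-vibration" two-impulse formula is assumed
# here; it is not taken verbatim from the report.
import math

def zv_shaper(freq_hz, damping):
    """Return (times, amplitudes) of a two-impulse shaper."""
    wd = 2 * math.pi * freq_hz * math.sqrt(1 - damping ** 2)  # damped frequency
    K = math.exp(-damping * math.pi / math.sqrt(1 - damping ** 2))
    return [0.0, math.pi / wd], [1 / (1 + K), K / (1 + K)]    # amplitudes sum to 1

def shape(command, times, amps, dt):
    """Convolve a sampled command with the impulse sequence."""
    out = [0.0] * (len(command) + int(round(times[-1] / dt)) + 1)
    for t, a in zip(times, amps):
        shift = int(round(t / dt))
        for i, c in enumerate(command):
            out[i + shift] += a * c
    return out

times, amps = zv_shaper(freq_hz=2.0, damping=0.05)
shaped = shape([1.0] * 50, times, amps, dt=0.01)  # shaped step command
```

Because the impulse amplitudes sum to one, the shaped command reaches the same final value as the original; the second impulse arrives half a damped period after the first, which is the move-time penalty the abstract mentions.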


Author[s]: Eric Saund

Symbolic Construction of a 2D Scale-Space Image

April 1988



The shapes of naturally occurring objects characteristically involve spatial events occurring at many scales. This paper offers a symbolic approach to constructing a primitive shape description across scales for 2D binary (silhouette) shape images: grouping operations are performed over collections of tokens residing on a Scale-Space Blackboard. Two types of grouping operations are identified that, respectively: (1) aggregate edge primitives at one scale into edge primitives at a coarser scale and (2) group edge primitives into partial-region assertions, including curved-contours, primitive-corners, and bars. This approach avoids several drawbacks of numerical smoothing methods.


Author[s]: Stephen L. Chiu

Generating Compliant Motion of Objects with an Articulated Hand

June 1985



The flexibility of the robot is the key to its success as a viable aid to production. The flexibility of a robot can be increased in two directions. The first is to increase the physical generality of the robot such that it can be easily reconfigured to handle a wide variety of tasks. The second is to increase the ability of the robot to interact with its environment such that tasks can still be successfully completed in the presence of uncertainties. Articulated hands are capable of adapting to a wide variety of grasp shapes, hence reducing the need for special tooling. The availability of low-mass, high-bandwidth points close to the manipulated object also offers significant improvements in the control of fine motions. This thesis provides a framework for using articulated hands to perform local manipulation of objects. In particular, it addresses the issues in effecting compliant motions of objects in Cartesian space. The Stanford/JPL hand is used as an example to illustrate a number of concepts. The examples provide a unified methodology for controlling articulated hands grasping with point contacts. We also present a high-level hand programming system based on the methodologies developed in this thesis. Compliant motion of grasped objects and dexterous manipulations can be easily described in the LISP-based hand programming language.


Author[s]: Neil C. Singer

Residual Vibration Reduction in Computer Controlled Machines

February 1989



Control of machines that exhibit flexibility becomes important when designers attempt to push the state of the art with faster, lighter machines. Three steps are necessary for the control of a flexible plant. First, a good model of the plant must exist. Second, a good controller must be designed. Third, inputs to the controller must be constructed using knowledge of the system dynamic response. There is a great deal of literature pertaining to modeling and control, but little dealing with the shaping of system inputs. Chapter 2 examines two input shaping techniques based on frequency domain analysis. The first involves the use of the first derivative of a Gaussian exponential as a driving function template. The second, acausal filtering, involves removal of energy from the driving functions at the resonant frequencies of the system. Chapter 3 presents a linear programming technique for generating vibration-reducing driving functions for systems. Chapter 4 extends the results of the previous chapter by developing a direct solution to the new class of driving functions. A detailed analysis of the new technique is presented from five different perspectives and several extensions are presented. Chapter 5 verifies the theories of the previous two chapters with hardware experiments. Because the new technique resembles common signal filtering, chapter 6 compares the new approach to eleven standard filters. The new technique is shown to result in less residual vibration, have better robustness to system parameter uncertainty, and require less computation than other currently used shaping techniques.


Author[s]: Elisha Sacks

Automatic Qualitative Analysis of Ordinary Differential Equations Using Piecewise Linear Approximations

March 1988



This paper explores automating the qualitative analysis of physical systems. It describes a program, called PLR, that takes parameterized ordinary differential equations as input and produces a qualitative description of the solutions for all initial values. PLR approximates intractable nonlinear systems with piecewise linear ones, analyzes the approximations, and draws conclusions about the original systems. It chooses approximations that are accurate enough to reproduce the essential properties of their nonlinear prototypes, yet simple enough to be analyzed completely and efficiently. It derives additional properties, such as boundedness or periodicity, by theoretical methods. I demonstrate PLR on several common nonlinear systems and on published examples from mechanical engineering.


Author[s]: Steven J. Gordon and Warren P. Seering

Real-Time Part Position Sensing

May 1988



A light stripe vision system is used to measure the location of polyhedral features of parts from a single frame of video camera output. Issues such as accuracy in locating the line segments of intersection in the image and combining redundant information from multiple measurements and multiple sources are addressed. In 2.5 seconds, a prototype sensor was capable of locating a two inch cube to an accuracy (one standard deviation) of .002 inches (.055 mm) in translation and .1 degrees (.0015 radians) in rotation. When integrated with a manipulator, the system was capable of performing high precision assembly tasks.
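One standard way to combine redundant measurements of the same quantity is inverse-variance weighting; the report does not state its exact fusion rule, so the sketch below is an illustrative assumption, with hypothetical values.

```python
# Sketch of fusing redundant measurements by inverse-variance weighting,
# one standard choice; the report does not state its exact fusion rule,
# so this is an illustrative assumption. Values are hypothetical.

def fuse(measurements):
    """measurements: list of (value, std_dev). Returns (value, std_dev)."""
    weights = [1.0 / (s * s) for _, s in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return value, total ** -0.5

# two redundant cube-position estimates, each with 0.002 in std deviation
val, sigma = fuse([(2.001, 0.002), (1.999, 0.002)])
# fused estimate has smaller uncertainty than either measurement alone
```

With n equally precise measurements the fused standard deviation falls as 1/sqrt(n), which is why combining redundant line-segment measurements improves the quoted accuracy.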


Author[s]: Daniel S. Weld

Theories of Comparative Analysis

May 1988



Comparative analysis is the problem of predicting how a system will react to perturbations in its parameters, and why. For example, comparative analysis could be asked to explain why the period of an oscillating spring/block system would increase if the mass of the block were larger. This thesis formalizes the task of comparative analysis and presents two solution techniques: differential qualitative (DQ) analysis and exaggeration. Both techniques solve many comparative analysis problems, providing explanations suitable for use by design systems, automated diagnosis, intelligent tutoring systems, and explanation based generalization. This thesis explains the theoretical basis for each technique, describes how they are implemented, and discusses the difference between the two. DQ analysis is sound; it never generates an incorrect answer to a comparative analysis question. Although exaggeration does occasionally produce misleading answers, it solves a larger class of problems than DQ analysis and frequently results in simpler explanations.


Author[s]: Kenneth Alan Pasch

Heuristics for Job-Shop Scheduling

January 1988



Two methods of obtaining approximate solutions to the classic general job-shop scheduling problem are investigated. The first method is iterative. A sampling of the solution space is used to decide which of a collection of space-pruning constraints are consistent with “good” schedules. The selected space-pruning constraints are then used to reduce the search space and the sampling is repeated. This approach can be used either to verify whether some set of space-pruning constraints can prune with discrimination or to generate solutions directly. Schedules can be represented as trajectories through a Cartesian space. Under the objective criterion of minimum maximum lateness, a family of “good” schedules (trajectories) are geometric neighbors (reside within some “tube”) in this space. The second method of generating solutions takes advantage of this adjacency by pruning the space from the outside in, thus converging gradually upon this “tube.” On the average this method significantly outperforms an array of priority-dispatch rules when the objective criterion is minimum maximum lateness. It also compares favorably with a recent relaxation procedure.
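For context on the baseline mentioned above, here is a minimal sketch of one classical priority-dispatch rule: on a single machine, sequencing jobs by earliest due date (EDD) minimizes maximum lateness. The job data are made up, and a single machine is an illustrative simplification of the job-shop setting.

```python
# Sketch of a classical priority-dispatch baseline: on a single machine,
# sequencing jobs by earliest due date (EDD) minimizes maximum lateness.
# Job data are illustrative assumptions.

def edd_schedule(jobs):
    """jobs: list of (name, processing_time, due_date).
    Returns (sequence, maximum lateness)."""
    order = sorted(jobs, key=lambda j: j[2])   # earliest due date first
    t, worst = 0, float("-inf")
    for name, p, due in order:
        t += p                                 # completion time of this job
        worst = max(worst, t - due)            # lateness = completion - due
    return [j[0] for j in order], worst

jobs = [("A", 3, 9), ("B", 2, 4), ("C", 4, 10)]
seq, lmax = edd_schedule(jobs)                 # all jobs finish early here
```

In the full job-shop problem, machine-routing constraints break EDD's optimality, which is why the dispatch rules serve only as a comparison baseline for the two search methods.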


Author[s]: Joachim Heel

Dynamical Systems and Motion Vision

April 1988



In this paper we show how the theory of dynamical systems can be employed to solve problems in motion vision. In particular we develop algorithms for the recovery of dense depth maps and motion parameters using state space observers or filters. Four different dynamical models of the imaging situation are investigated and corresponding filters/observers derived. The most powerful of these algorithms recovers depth and motion of general nature using a brightness change constraint assumption. No feature-matching preprocessor is required.


Author[s]: Ellen C. Hildreth and Shimon Ullman

The Computational Study of Vision

April 1988



The computational approach to the study of vision inquires directly into the sort of information processing needed to extract important information from the changing visual image---information such as the three-dimensional structure and movement of objects in the scene, or the color and texture of object surfaces. An important contribution that computational studies have made is to show how difficult vision is to perform, and how complex are the processes needed to perform visual tasks successfully. This article reviews some computational studies of vision, focusing on edge detection, binocular stereo, motion analysis, intermediate vision, and object recognition.


Author[s]: Gerald Jay Sussman and Jack Wisdom

Numerical Evidence that the Motion of Pluto is Chaotic

April 1988



The Digital Orrery has been used to perform an integration of the motion of the outer planets for 845 million years. This integration indicates that the long-term motion of the planet Pluto is chaotic. Nearby trajectories diverge exponentially with an e-folding time of only about 20 million years.
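The divergence measurement can be illustrated with a toy computation: integrate two nearby trajectories of a simple chaotic system and fit the slope of the log separation, whose inverse is the e-folding time. The standard map below is only an illustrative stand-in for the solar-system integration, not the Orrery's computation.

```python
import math

# Illustrative toy, NOT the Digital Orrery computation: measure exponential
# divergence of two nearby trajectories of the (strongly chaotic) standard
# map, then fit the slope of log(separation) against iteration number.

K = 5.0  # kick strength; the map is strongly chaotic at this value

def step(theta, p):
    p = p + K * math.sin(theta)
    theta = theta + p
    return theta, p

d0 = 1e-9
a, b = (2.0, 0.5), (2.0 + d0, 0.5)
logs = []
for _ in range(20):
    a, b = step(*a), step(*b)
    logs.append(math.log(math.hypot(a[0] - b[0], a[1] - b[1])))

# Least-squares slope of log(separation) vs. iteration number;
# 1 / slope is the e-folding time in map iterations.
n = len(logs)
xbar = (n - 1) / 2
ybar = sum(logs) / n
slope = sum((i - xbar) * (y - ybar) for i, y in enumerate(logs)) / \
        sum((i - xbar) ** 2 for i in range(n))
print(slope > 0)  # positive slope: trajectories diverge exponentially
```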


Author[s]: Andrew A. Berlin and Henry M. Wu

Scheme86: A System for Interpreting Scheme

April 1988



Scheme86 is a computer system designed to interpret programs written in the Scheme dialect of Lisp. A specialized architecture, coupled with new techniques for optimizing register management in the interpreter, allows Scheme86 to execute interpreted Scheme at a speed comparable to that of compiled Lisp on conventional workstations.


Author[s]: Boris Katz and Beth Levin

Exploiting Lexical Regularities in Designing Natural Language Systems

April 1988



This paper presents the lexical component of the START Question Answering system developed at the MIT Artificial Intelligence Laboratory. START is able to interpret correctly a wide range of semantic relationships associated with alternate expressions of the arguments of verbs. The design of the system takes advantage of the results of recent linguistic research into the structure of the lexicon, allowing START to attain a broader range of coverage than many existing systems.


Author[s]: Jacob Katzenelson

Computational Structure of the N-body Problem

April 1988



This work considers the organization and performance of computations on parallel computers of tree algorithms for the N-body problem where the number of particles is on the order of a million. The N-body problem is formulated as a set of recursive equations based on a few elementary functions, which leads to a computational structure in the form of a pyramid-like graph, where each vertex is a process, and each arc a communication link. The pyramid is mapped to three different processor configurations: (1) A pyramid of processors corresponding to the process pyramid graph; (2) A hypercube of processors, e.g., a Connection Machine-like architecture; (3) A rather small array, e.g., $2 \times 2 \times 2$, of processors faster than the ones considered in (1) and (2) above. Simulations of this size can be performed on any of the three architectures in reasonable time.
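The bottom-up half of such a pyramid can be sketched in a few lines: leaves hold particles, and each interior vertex combines its children's total mass and center of mass. The 1-D binary pyramid below is only an illustrative reduction of the structure described above, assuming for simplicity a power-of-two particle count.

```python
# A minimal sketch of the pyramid structure: leaves hold particles, and each
# interior vertex aggregates its two children's mass and center of mass.
# This mirrors the bottom-up pass of tree codes for the N-body problem;
# a 1-D binary pyramid stands in for the full 3-D construction.

def build_pyramid(masses, positions):
    """Return a list of levels, finest first; each cell is (mass, center)."""
    level = list(zip(masses, positions))
    levels = [level]
    while len(level) > 1:
        parents = []
        for i in range(0, len(level), 2):  # assumes a power-of-two count
            (m1, x1), (m2, x2) = level[i], level[i + 1]
            m = m1 + m2
            parents.append((m, (m1 * x1 + m2 * x2) / m))
        level = parents
        levels.append(level)
    return levels

levels = build_pyramid([1.0, 1.0, 2.0, 4.0], [0.0, 1.0, 2.0, 3.0])
root_mass, root_center = levels[-1][0]
print(root_mass, root_center)  # 8.0 2.125 (total mass, mass-weighted mean)
```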


Author[s]: Karl T. Ulrich

Computation and Pre-Parametric Design

September 1988



My work is broadly concerned with the question "How can designs be synthesized computationally?" The project deals primarily with mechanical devices and focuses on pre-parametric design: design at the level of detail of a blackboard sketch rather than at the level of detail of an engineering drawing. I explore the project ideas in the domain of single-input single-output dynamic systems, like pressure gauges, accelerometers, and pneumatic cylinders. The problem solution consists of two steps: 1) generate a schematic description of the device in terms of idealized functional elements, and then 2) from the schematic description generate a physical description.


Author[s]: W. Eric L. Grimson and David Huttenlocher

On the Sensitivity of the Hough Transform for Object Recognition

May 1988



A common method for finding an object's pose is the generalized Hough transform, which accumulates evidence for possible coordinate transformations in a parameter space and takes large clusters of similar transformations as evidence of a correct solution. We analyze this approach by deriving theoretical bounds on the set of transformations consistent with each data-model feature pairing, and by deriving bounds on the likelihood of false peaks in the parameter space, as a function of noise, occlusion, and tessellation effects. We argue that blithely applying such methods to complex recognition tasks is a risky proposition, as the probability of false positives can be very high.


Author[s]: Daniel Peter Huttenlocher

Three-Dimensional Recognition of Solid Objects from a Two-Dimensional Image

October 1988



This thesis addresses the problem of recognizing solid objects in the three-dimensional world, using two-dimensional shape information extracted from a single image. Objects can be partly occluded and can occur in cluttered scenes. A model-based approach is taken, where stored models are matched to an image. The matching problem is separated into two stages, which employ different representations of objects. The first stage uses the smallest possible number of local features to find transformations from a model to an image. This minimizes the amount of search required in recognition. The second stage uses the entire edge contour of an object to verify each transformation. This reduces the chance of finding false matches.


Author[s]: Steven D. Eppinger and Warren P. Seering

Modeling Robot Flexibility for Endpoint Force Control

May 1988



Dynamic models have been developed in an attempt to match the response of a robot arm. The experimental data show a rigid-body mode and five resonant modes. The frequency response and pole-zero arrays for various models of structural flexibility are compared with the data to evaluate the characteristics of the models, and to provide insight into the nature of the flexibility in the robot. Certain models are better able to depict transmission flexibility while others describe types of structural flexibility.


Author[s]: Richard James Doyle

Hypothesizing Device Mechanisms: Opening Up the Black Box

June 1988



I describe an approach to forming hypotheses about hidden mechanism configurations within devices given external observations and a vocabulary of primitive mechanisms. An implemented causal modelling system called JACK constructs explanations for why a second piece of toast comes out lighter, why the slide in a tire gauge does not slip back inside when the gauge is removed from the tire, and how in a refrigerator a single substance can serve as a heat sink for the interior and a heat source for the exterior. I report the number of hypotheses admitted for each device example, and provide empirical results which isolate the pruning power due to different constraint sources.


Author[s]: Reid G. Simmons

Combining Associational and Causal Reasoning to Solve Interpretation and Planning Problems

August 1988



This report describes a paradigm for combining associational and causal reasoning to achieve efficient and robust problem-solving behavior. The Generate, Test and Debug (GTD) paradigm generates initial hypotheses using associational (heuristic) rules. The tester verifies hypotheses, supplying the debugger with causal explanations for bugs found if the test fails. The debugger uses domain-independent causal reasoning techniques to repair hypotheses, analyzing domain models and the causal explanations produced by the tester to determine how to replace faulty assumptions made by the generator. We analyze the strengths and weaknesses of associational and causal reasoning techniques, and present a theory of debugging plans and interpretations. The GTD paradigm has been implemented and tested in the domains of geologic interpretation, the blocks world, and Tower of Hanoi problems.


Author[s]: Alan Bawden and Jonathan Rees

Syntactic Closures

June 1988



In this paper we describe {\it syntactic closures}. Syntactic closures address the scoping problems that arise when writing macros. We discuss some issues raised by introducing syntactic closures into the macro expansion interface, and we compare syntactic closures with other approaches. Included is a complete implementation.


Author[s]: Philip E. Agre and David Chapman

What Are Plans For?

October 1989



What plans are like depends on how they're used. We contrast two views of plan use. On the plan-as-program view, plan use is the execution of an effective procedure. On the plan-as-communication view, plan use is like following natural language instructions. We have begun work on computational models of plans-as-communication, building on our previous work on improvised activity and on ideas from sociology.


Author[s]: Peng Wu

Test Generation Guided Design for Testability

July 1988



This thesis presents a new approach to building a design for testability (DFT) system. The system takes a digital circuit description, finds out the problems in testing it, and suggests circuit modifications to correct those problems. The key contributions of the thesis research are (1) setting design for testability in the context of test generation (TG), (2) using failures during TG to focus on testability problems, and (3) relating circuit modifications directly to the failures. A natural functionality set is used to represent the maximum functionalities that a component can have. The current implementation has only primitive domain knowledge and needs further work. However, armed with the knowledge of TG, it has already demonstrated its ability and produced some interesting results on a simple microprocessor.


Author[s]: Paul Resnick

Generalizing on Multiple Grounds: Performance Learning in Model-Based Technology

February 1989



This thesis explores ways to augment a model-based diagnostic program with a learning component, so that it speeds up as it solves problems. Several learning components are proposed, each exploiting a different kind of similarity between diagnostic examples. Through analysis and experiments, we explore the effect each learning component has on the performance of a model-based diagnostic program. We also analyze more abstractly the performance effects of Explanation-Based Generalization, a technology that is used in several of the proposed learning components.


Author[s]: Ron I. Kuper

Dependency-Directed Localization of Software Bugs

May 1989



Software bugs are violated specifications. Debugging is the process that culminates in repairing a program so that it satisfies its specification. An important part of debugging is localization, whereby the smallest region of the program that manifests the bug is found. The Debugging Assistant (DEBUSSI) localizes bugs by reasoning about logical dependencies. DEBUSSI manipulates the assumptions that underlie a bug manifestation, eventually localizing the bug to one particular assumption. At the same time, DEBUSSI acquires specification information, thereby extending its understanding of the buggy program. The techniques used for debugging fully implemented code are also appropriate for validating partial designs.


Author[s]: William T. Townsend

The Effect of Transmission Design on Force-Controlled Manipulator Performance

April 1988



Previous research in force control has focused on the choice of appropriate servo implementation without corresponding regard to the choice of mechanical hardware. This report analyzes the effect of mechanical properties such as contact compliance, actuator-to-joint compliance, torque ripple, and highly nonlinear dry friction in the transmission mechanisms of a manipulator. A set of requisites for high performance then guides the development of mechanical-design and servo strategies for improved performance. A single-degree-of-freedom transmission testbed was constructed that confirms the predicted effect of Coulomb friction on robustness; design and construction of a cable-driven, four-degree-of-freedom, "whole-arm" manipulator illustrates the recommended design strategies.


Author[s]: Panayotis S. Skordos

Multistep Methods for Integrating the Solar System

July 1988



High order multistep methods, run at constant stepsize, are very effective for integrating the Newtonian solar system for extended periods of time. I have studied the stability and error growth of these methods when applied to harmonic oscillators and two-body systems like the Sun-Jupiter pair. I have also tried to design better multistep integrators than the traditional Stormer and Cowell methods, and I have found a few interesting ones.
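As a point of reference for the methods studied, the classical second-order Störmer recurrence (the base case that the higher-order methods extend with a longer history of accelerations) can be sketched on a harmonic oscillator at constant stepsize. This is only a minimal illustration of the flavor of such constant-stepsize schemes, not the integrators designed in the report.

```python
import math

# The classical second-order Stormer recurrence for x'' = a(x):
#     x_{n+1} = 2 x_n - x_{n-1} + h^2 a(x_n)
# run at constant stepsize h.

def stormer(accel, x0, v0, h, n_steps):
    # Bootstrap x_1 with a Taylor step, then iterate the recurrence.
    x_prev, x = x0, x0 + h * v0 + 0.5 * h * h * accel(x0)
    for _ in range(n_steps - 1):
        x_prev, x = x, 2 * x - x_prev + h * h * accel(x)
    return x

# Harmonic oscillator x'' = -x with x(0) = 1, x'(0) = 0; exact x(t) = cos(t).
x_final = stormer(lambda x: -x, 1.0, 0.0, h=0.001, n_steps=1000)  # t = 1
print(abs(x_final - math.cos(1.0)) < 1e-4)  # True: close to the exact value
```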


Author[s]: Sundar Narasimhan

Dexterous Robotic Hands: Kinematics and Control

November 1988



This report presents issues relating to the kinematics and control of dexterous robotic hands using the Utah-MIT hand as an illustrative example. The emphasis throughout is on the actual implementation and testing of the theoretical concepts presented. The kinematics of such hands is interesting and complicated owing to the large number of degrees of freedom involved. The implementation of position and force control algorithms on such tendon-driven hands has previously suffered from inefficient formulations and a lack of sophisticated computer hardware. Both these problems are addressed in this report. A multiprocessor architecture has been built with high performance microcomputers on which real-time algorithms can be efficiently implemented. A large software library has also been built to facilitate flexible software development on this architecture. The position and force control algorithms described herein have been implemented and tested on this hardware.


Author[s]: Randall Davis and Walter C. Hamscher

Model-Based Reasoning: Troubleshooting

July 1988



To determine why something has stopped working, it is useful to know how it was supposed to work in the first place. That simple observation underlies some of the considerable interest generated in recent years on the topic of model-based reasoning, particularly its application to diagnosis and troubleshooting. This paper surveys the current state of the art, reviewing areas that are well understood and exploring areas that present challenging research topics. It views the fundamental paradigm as the interaction of prediction and observation, and explores it by examining three fundamental subproblems: Generating hypotheses by reasoning from a symptom to a collection of components whose misbehavior may plausibly have caused that symptom; testing each hypothesis to see whether it can account for all available observations of device behavior; then discriminating among the ones that survive testing. We analyze each of these independently at the knowledge level, i.e., attempting to understand what reasoning capabilities arise from the different varieties of knowledge available to the program. We find that while a wide range of apparently diverse model-based systems have been built for diagnosis and troubleshooting, they are for the most part variations on the central theme outlined here. Their diversity lies primarily in the varying amounts and kinds of knowledge they bring to bear at each stage of the process; the underlying paradigm is fundamentally the same. Our survey of this familiar territory leads to a second major conclusion of the paper: Diagnostic reasoning from a model is reasonably understood. Given a model of behavior and structure, we know how to use it in a variety of ways to produce a diagnosis. There is, by contrast, a rich supply of open research issues in the modeling process itself. 
In a sense we know how to do model-based reasoning; we do not know how to model the behavior of complex devices, how to create models, and how to select the "right" model for the task at hand.


Author[s]: Shimon Ullman and Ronen Basri

The Alignment of Objects with Smooth Surfaces

July 1988



This paper examines the recognition of rigid objects bounded by smooth surfaces using an alignment approach. The projected image of such an object changes during rotation in a manner that is difficult to predict. A method to approach this problem is suggested, using the 3D surface curvature at the points along the silhouette. The curvature information requires a single number for each point along the object's silhouette, the magnitude of the curvature vector at the point. We have implemented and tested this method on images of complex 3D objects; it was found to give accurate predictions of the objects' appearances for large transformations. A small number of models can be used to predict the new appearance of an object from any viewpoint.


Author[s]: Shimon Ullman and Amnon Sha'ashua

Structural Saliency: The Detection of Globally Salient Structures Using a Locally Connected Network

July 1988



Certain salient structures in images attract our immediate attention without requiring a systematic scan. We present a method for computing saliency by a simple iterative scheme, using a uniform network of locally connected processing elements. The network uses an optimization approach to produce a "saliency map," a representation of the image emphasizing salient locations. The main properties of the network are: (i) the computations are simple and local, (ii) globally salient structures emerge with a small number of iterations, and (iii) as a by-product of the computations, contours are smoothed and gaps are filled in.
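A drastically simplified 1-D version of such a network conveys the flavor of the iteration: each element combines its own local edge response with an attenuated copy of its best neighbor, so long chains accumulate saliency over a few iterations. The update rule below is an illustrative reduction, not the network's actual form.

```python
# Toy 1-D saliency network: E[i] is updated locally from the element's own
# edge response sigma[i] plus the best neighboring saliency, attenuated by
# rho. Long smooth chains accumulate high saliency; isolated fragments and
# gaps attenuate it. A simplified sketch of the scheme described above.

def iterate_saliency(sigma, rho=0.9, n_iters=20):
    E = sigma[:]
    for _ in range(n_iters):
        new_E = []
        for i, s in enumerate(sigma):
            left = E[i - 1] if i > 0 else 0.0
            right = E[i + 1] if i + 1 < len(E) else 0.0
            new_E.append(s + rho * max(left, right))
        E = new_E
    return E

# A long contour (run of 1s) versus an isolated fragment after a gap.
sigma = [1.0] * 10 + [0.0] * 5 + [1.0]
E = iterate_saliency(sigma)
print(max(E[:10]) > E[15])  # True: the long chain emerges as more salient
```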


Author[s]: Allen C. Ward and Warren Seering

Quantitative Inference in a Mechanical Design Compiler

January 1989



This paper presents the ideas underlying a program that takes as input a schematic of a mechanical or hydraulic power transmission system, plus specifications and a utility function, and returns catalog numbers from predefined catalogs for the optimal selection of components implementing the design. It thus provides the designer with a high level "language" in which to compose new designs, then performs some of the detailed design process for him. The program is based on a formalization of quantitative inferences about hierarchically organized sets of artifacts and operating conditions, which allows design compilation without the exhaustive enumeration of alternatives.


Author[s]: Margaret Morrison Fleck

Boundaries and Topological Algorithms

September 1988



This thesis develops a model for the topological structure of situations. In this model, the topological structure of space is altered by the presence or absence of boundaries, such as those at the edges of objects. This allows the intuitive meaning of topological concepts such as region connectivity, function continuity, and preservation of topological structure to be modeled using the standard mathematical definitions. The thesis shows that these concepts are important in a wide range of artificial intelligence problems, including low-level vision, high-level vision, natural language semantics, and high-level reasoning.


Author[s]: James J. Little and Alessandro Verri

Analysis of Differential and Matching Methods for Optical Flow

August 1988



Several algorithms for optical flow are studied theoretically and experimentally. Differential and matching methods are examined; these two methods have differing domains of application: differential methods are best when displacements in the image are small (<2 pixels), while matching methods work well for moderate displacements but do not handle sub-pixel motions. Both types of optical flow algorithm can use either local or global constraints, such as spatial smoothness. Local matching and differential techniques and global differential techniques will be examined. Most algorithms for optical flow utilize weak assumptions on the local variation of the flow and on the variation of image brightness. Strengthening these assumptions improves the flow computation. The computational consequence of this is a need for larger spatial and temporal support. Global differential approaches can be extended to local (patchwise) differential methods and local differential methods using higher derivatives. Using larger support is valid when constraints on the local shape of the flow are satisfied. We show that a simple constraint on the local shape of the optical flow, that there is slow spatial variation in the image plane, is often satisfied. We show how local differential methods imply the constraints for related methods using higher derivatives. Experiments show the behavior of these optical flow methods on velocity fields which do not obey the assumptions. Implementation of these methods highlights the importance of numerical differentiation. Numerical approximation of derivatives requires care in two respects: first, it is important that the temporal and spatial derivatives be matched, because of the significant scale differences in space and time, and, second, the derivative estimates improve with larger support.
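A minimal local differential estimate in the spirit of the methods above can be sketched as a least-squares solution of the brightness-constancy constraint Ix·u + Iy·v + It = 0 over a patch (here the whole of a small synthetic image pair). This is a generic Lucas-Kanade-style illustration, not the paper's specific implementations.

```python
import numpy as np

# Local differential flow on a synthetic frame pair: solve
# Ix*u + Iy*v + It = 0 in the least-squares sense over the image.
# The sub-pixel translation used here is exactly the regime in which
# differential methods are preferable to matching.

def make_frame(shift_x, shift_y, size=32):
    y, x = np.mgrid[0:size, 0:size].astype(float)
    return np.exp(-((x - 16 - shift_x) ** 2 + (y - 16 - shift_y) ** 2) / 18.0)

f0 = make_frame(0.0, 0.0)
f1 = make_frame(0.3, 0.1)  # true flow: u = 0.3, v = 0.1 pixels

# Matched derivatives: spatial gradients of the frame average, forward
# temporal difference -- matching spatial and temporal estimates as the
# abstract's last sentence advises.
Iy, Ix = np.gradient(0.5 * (f0 + f1))
It = f1 - f0

A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
(u, v), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
print(u, v)  # close to the true flow (0.3, 0.1)
```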


Author[s]: Hsien-Che Lee

Estimating the Illuminant Color from the Shading of a Smooth Surface

August 1988



A uniform wall illuminated by a spotlight often gives a strong impression of the illuminant color. How is it possible to know whether it is a white wall illuminated by yellow light or a yellow wall illuminated by white light? If the wall is a Lambertian reflector, it would not be possible to tell the difference. However, in the real world, some amount of specular reflection is almost always present. In this memo, it is shown that the computation is possible in most practical cases.


Author[s]: William Dally, Andrew Chien, Stuart Fiske, Waldemar Horwat, John Keen, Peter Nuth, Jerry Larivee and Brian Totty

Message-Driven Processor Architecture: Version 11

August 1988



The Message-Driven Processor is a node of a large-scale multiprocessor being developed by the Concurrent VLSI Architecture Group. It is intended to support fine-grained, message passing, parallel computation. It contains several novel architectural features, such as a low-latency network interface, extensive type-checking hardware, and on-chip memory that can be used as an associative lookup table. This document is a programmer's guide to the MDP. It describes the processor's register architecture, instruction set, and the data types supported by the processor. It also details the MDP's message sending and exception handling facilities.


Author[s]: Brian K. Totty

An Operating Environment for the Jellybean Machine

May 1988



The Jellybean Machine is a scalable MIMD concurrent processor consisting of special purpose RISC processors loosely coupled into a low latency network. I have developed an operating system to provide the supportive environment required to efficiently coordinate the collective power of the distributed processing elements. The system services are developed in detail, and may be of interest to other designers of fine grain, distributed memory processing networks.


Author[s]: Berthold K.P. Horn

Parallel Networks for Machine Vision

December 1988



The amount of computation required to solve many early vision problems is prodigious, and so it has long been thought that systems that operate in a reasonable amount of time will only become feasible when parallel systems become available. Such systems now exist in digital form, but most are large and expensive. These machines constitute an invaluable test-bed for the development of new algorithms, but they can probably not be scaled down rapidly in both physical size and cost, despite continued advances in semiconductor technology and machine architecture. Simple analog networks can perform interesting computations, as has been known for a long time. We have reached the point where it is feasible to experiment with implementation of these ideas in VLSI form, particularly if we focus on networks composed of locally interconnected passive elements, linear amplifiers, and simple nonlinear components. While there have been excursions into the development of ideas in this area since the very beginnings of work on machine vision, much work remains to be done. Progress will depend on careful attention to matching of the capabilities of simple networks to the needs of early vision. Note that this is not at all intended to be anything like a review of the field, but merely a collection of some ideas that seem to be interesting.


Author[s]: Steven D. Eppinger

Modeling Robot Dynamic Performance for Endpoint Force Control

September 1988



This research aims to understand the fundamental dynamic behavior of servo-controlled machinery in response to various types of sensory feedback. As an example of such a system, we study robot force control, a scheme which promises to greatly expand the capabilities of industrial robots by allowing manipulators to interact with uncertain and dynamic tasks. Dynamic models are developed which allow the effects of actuator dynamics, structural flexibility, and workpiece interaction to be explored in the frequency and time domains. The models are used first to explain the causes of robot force control instability, and then to find methods of improving this performance.


Author[s]: Daphna Weinshall

Seeing 'Ghost' Solutions in Stereo Vision

September 1988



A unique matching is a stated objective of most computational theories of stereo vision. This report describes situations where humans perceive a small number of surfaces carried by non-unique matching of random dot patterns, although a unique solution exists and is observed unambiguously in the perception of isolated features. We find both cases where non-unique matchings compete and suppress each other and cases where they are all perceived as transparent surfaces. The circumstances under which each behavior occurs are discussed and a possible explanation is sketched. It appears that matching reduces many false targets to a few, but may still yield multiple solutions in some cases through a (possibly different) process of surface interpolation.


Author[s]: Walter Charles Hamscher

Model-Based Troubleshooting of Digital Systems

August 1988



This thesis describes a methodology, a representation, and an implemented program for troubleshooting digital circuit boards at roughly the level of expertise one might expect in a human novice. Existing methods for model-based troubleshooting have not scaled up to deal with complex circuits, in part because traditional circuit models do not explicitly represent aspects of the device that troubleshooters would consider important. For complex devices the model of the target device should be constructed with the goal of troubleshooting explicitly in mind. Given that methodology, the principal contributions of the thesis are ways of representing complex circuits to help make troubleshooting feasible. Temporally coarse behavior descriptions are a particularly powerful simplification. Instantiating this idea for the circuit domain produces a vocabulary for describing digital signals. The vocabulary has a level of temporal detail sufficient to make useful predictions about the response of the circuit while it remains coarse enough to make those predictions computationally tractable. Other contributions are principles for using these representations. Although not embodied in a program, these principles are sufficiently concrete that models can be constructed manually from existing circuit descriptions such as schematics, part specifications, and state diagrams. One such principle is that if there are components with particularly likely failure modes or failure modes in which their behavior is drastically simplified, this knowledge should be incorporated into the model. Further contributions include the solution of technical problems resulting from the use of explicit temporal representations and design descriptions with tangled hierarchies.


Author[s]: Davi Geiger and Tomaso Poggio

An Optimal Scale for Edge Detection

September 1988



Many problems in early vision are ill-posed. Edge detection is a typical example. This paper applies regularization techniques to the problem of edge detection. We derive an optimal filter for edge detection with a size controlled by the regularization parameter $\lambda$ and compare it to the Gaussian filter. A formula relating the signal-to-noise ratio to the parameter $\lambda$ is derived from regularization analysis for the case of small values of $\lambda$. We also discuss the method of Generalized Cross Validation for obtaining the optimal filter scale. Finally, we use our framework to explain two perceptual phenomena: coarsely quantized images becoming recognizable by either blurring or adding noise.
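The role of the scale parameter can be illustrated with a derivative-of-Gaussian filter standing in for the regularization-derived filter (the paper compares the two): the filter width plays the role controlled by $\lambda$, trading localization against noise suppression.

```python
import numpy as np

# Sketch of scale-controlled edge detection: convolve a noisy step edge
# with a derivative-of-Gaussian filter and locate the extremum of the
# response. The Gaussian derivative is a stand-in for the paper's
# regularization-derived filter; larger scale suppresses noise at the
# cost of localization.

rng = np.random.default_rng(0)
n = 200
signal = np.where(np.arange(n) < 100, 0.0, 1.0) + 0.05 * rng.standard_normal(n)

def edge_response(signal, scale):
    x = np.arange(-4 * scale, 4 * scale + 1, dtype=float)
    d_gauss = -x * np.exp(-x * x / (2 * scale ** 2))  # derivative of Gaussian
    return np.convolve(signal, d_gauss, mode="same")

response = edge_response(signal, scale=3.0)
edge_at = int(np.argmax(np.abs(response)))
print(edge_at)  # near the true edge location at index 100
```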


Author[s]: Eric W. Aboaf

Task-Level Robot Learning

August 1988



We are investigating how to program robots so that they learn from experience. Our goal is to develop principled methods of learning that can improve a robot's performance of a wide range of dynamic tasks. We have developed task-level learning that successfully improves a robot's performance of two complex tasks, ball-throwing and juggling. With task-level learning, a robot practices a task, monitors its own performance, and uses that experience to adjust its task-level commands. This learning method serves to complement other approaches, such as model calibration, for improving robot performance.


Author[s]: Waldemar Horwat

A Concurrent Smalltalk Compiler for the Message-Driven Processor

May 1988



This thesis describes Optimist, an optimizing compiler for the Concurrent Smalltalk language developed by the Concurrent VLSI Architecture Group. Optimist compiles Concurrent Smalltalk to the assembly language of the Message-Driven Processor (MDP). The compiler includes numerous optimization techniques such as dead code elimination, dataflow analysis, constant folding, move elimination, concurrency analysis, duplicate code merging, tail forwarding, use of register variables, as well as various MDP-specific optimizations in the code generator. The MDP presents some unique challenges and opportunities for compilation. Due to the MDP's small memory size, it is critical that the size of the generated code be as small as possible. The MDP is an inherently concurrent processor with efficient mechanisms for sending and receiving messages; the compiler takes advantage of these mechanisms. The MDP's tagged architecture allows very efficient support of object-oriented languages such as Concurrent Smalltalk. The initial goals for the MDP were to have the MDP execute about twenty instructions per method and contain 4096 words of memory. This compiler shows that these goals are too optimistic -- most methods are longer, both in terms of code size and running time. Thus, the memory size of the MDP should be increased.


Author[s]: Benjamin J. Paul

A Systems Approach to the Torque Control of a Permanent Magnet Brushless Motor

August 1987



Many approaches to force control have assumed the ability to command torques accurately. Concurrently, much research has been devoted to developing accurate torque actuation schemes. Often, torque sensors have been utilized to close a feedback loop around output torque. In this paper, the torque control of a brushless motor is investigated through: the design, construction, and utilization of a joint torque sensor for feedback control; and the development and implementation of techniques for phase current based feedforward torque control. It is concluded that simply closing a torque loop is no longer necessarily the best alternative since reasonably accurate current based torque control is achievable.


Author[s]: Richard C. Waters

Optimization of Series Expressions: Part I: User's Manual for the Series Macro Package

January 1989



The benefits of programming in a functional style are well known. In particular, algorithms that are expressed as compositions of functions operating on series/vectors/streams of data elements are much easier to understand and modify than equivalent algorithms expressed as loops. Unfortunately, many programmers hesitate to use series expressions, because they are typically implemented very inefficiently. A Common Lisp macro package (OSS) has been implemented which supports a restricted class of series expressions, called obviously synchronizable series expressions, which can be evaluated very efficiently by automatically converting them into loops. Using this macro package, programmers can obtain the advantages of expressing computations as series expressions without incurring any run-time overhead.
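The transformation the macro package performs can be illustrated (in Python rather than Common Lisp) by a pipeline written as a composition of series operations and the single loop it can be converted into. The example is a generic illustration of loop fusion, not the OSS package's own algorithm; note that the package's target, Lisp, would otherwise allocate intermediate series objects at each step:

```python
# Illustration of converting a composed series expression into one loop.
# Hypothetical example, not code from the OSS macro package.

def series_style(xs):
    """Composed series operations: map, then filter, then collect."""
    squared  = (x * x for x in xs)                  # map step
    evens    = (x for x in squared if x % 2 == 0)   # filter step
    return sum(evens)                               # collecting step

def fused_loop(xs):
    """The single loop the series expression converts into: no
    intermediate series objects are ever created."""
    total = 0
    for x in xs:
        sq = x * x
        if sq % 2 == 0:
            total += sq
    return total
```

Both compute the same value; the point of the conversion is that the fused form runs with no per-step allocation while the programmer still writes the composed form.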


Author[s]: Richard C. Waters

Optimization of Series Expressions: Part II: Overview of the Theory and Implementation

January 1989



The benefits of programming in a functional style are well known. In particular, algorithms that are expressed as compositions of functions operating on series/vectors/streams of data elements are much easier to understand and modify than equivalent algorithms expressed as loops. Unfortunately, many programmers hesitate to use series expressions, because they are typically implemented very inefficiently---the prime source of inefficiency being the creation of intermediate series objects. A restricted class of series expressions, called obviously synchronizable series expressions, is defined which can be evaluated very efficiently. The restrictions place modest limits on the series expressions that can be written, but in exchange they guarantee that the creation of intermediate series objects is never necessary. This makes it possible to automatically convert obviously synchronizable series expressions into highly efficient loops using straightforward algorithms.


Author[s]: Allen C. Ward and Warren Seering

The Performance of a Mechanical Design 'Compiler'

January 1989



A mechanical design "compiler" has been developed which, given an appropriate schematic, specifications, and utility function for a mechanical design, returns catalog numbers for an optimal implementation. The compiler has been successfully tested on a variety of mechanical and hydraulic power transmission designs and a few temperature sensing designs. Times required have been at worst proportional to the logarithm of the number of possible combinations of catalog numbers.


Author[s]: Philip E. Agre

The Dynamic Structures of Everyday Life

October 1988



Computational theories of action have generally understood the organized nature of human activity through the construction and execution of plans. By consigning the phenomena of contingency and improvisation to peripheral roles, this view has led to impractical technical proposals. As an alternative, I suggest that contingency is a central feature of everyday activity and that improvisation is the central kind of human activity. I also offer a computational model of certain aspects of everyday routine activity based on an account of improvised activity called running arguments and an account of representation for situated agents called deictic representation.


Author[s]: Terence D. Sanger

Optimal Unsupervised Learning in Feedforward Neural Networks

January 1989



We investigate the properties of feedforward neural networks trained with Hebbian learning algorithms. A new unsupervised algorithm is proposed which produces statistically uncorrelated outputs. The algorithm causes the weights of the network to converge to the eigenvectors of the input correlation matrix with the largest eigenvalues. The algorithm is closely related to the technique of Self-supervised Backpropagation, as well as other algorithms for unsupervised learning. Applications of the algorithm to texture processing, image coding, and stereo depth edge detection are given. We show that the algorithm can lead to the development of filters qualitatively similar to those found in primate visual cortex.
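The single-output case of a Hebbian rule with this convergence property can be sketched concretely. The update below is the classical Oja-style rule, shown only to illustrate the kind of convergence the abstract describes; the data, learning rate, and two-dimensional setting are illustrative choices, not the thesis's experiments:

```python
# Sketch of a single-output Hebbian unit whose weight vector converges to
# the unit principal eigenvector of the input correlation matrix.
# Update rule (Oja-style):  w <- w + eta * y * (x - y * w),  y = w . x.

def train(samples, eta=0.02, epochs=500):
    w = [1.0, 0.0]                                   # arbitrary initial weights
    for _ in range(epochs):
        for x in samples:
            y = w[0] * x[0] + w[1] * x[1]            # linear unit output
            w = [w[i] + eta * y * (x[i] - y * w[i])  # Hebbian term minus decay
                 for i in range(2)]
    return w

# Inputs concentrated along the direction (1, 1): the principal eigenvector
# of their correlation matrix is (1, 1) / sqrt(2).
data = [(1.0, 1.0), (-1.0, -1.0), (1.2, 0.8), (-0.8, -1.2)]
w = train(data)
```

After training, the weight vector is approximately unit length and aligned with the dominant eigenvector, which is the behavior the full multi-output algorithm generalizes to several uncorrelated components.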


Author[s]: Allen C. Ward

A Theory of Quantitative Inference Applied to a Mechanical Design Compiler

January 1989



This thesis presents the ideas underlying a computer program that takes as input a schematic of a mechanical or hydraulic power transmission system, plus specifications and a utility function, and returns catalog numbers from predefined catalogs for the optimal selection of components implementing the design. Unlike programs for designing single components or systems, the program provides the designer with a high level "language" in which to compose new designs. It then performs some of the detailed design process. The process of "compilation" is based on a formalization of quantitative inferences about hierarchically organized sets of artifacts and operating conditions. This allows design compilation without exhaustive enumeration of alternatives.


Author[s]: Rodney A. Brooks

A Robot that Walks: Emergent Behaviors from a Carefully Evolved Network

February 1989



Most animals have significant behavioral expertise built in without having to explicitly learn it all from scratch. This expertise is a product of evolution of the organism; it can be viewed as a very long term form of learning which provides a structured system within which individuals might learn more specialized skills or abilities. This paper suggests one possible mechanism for analogous robot evolution by describing a carefully designed series of networks, each one being a strict augmentation of the previous one, which control a six-legged walking machine capable of walking over rough terrain and following a person passively sensed in the infrared spectrum. As the completely decentralized networks are augmented, the robot's performance and behavior repertoire demonstrably improve. The rationale for such demonstrations is that they may provide a hint as to the requirements for automatically building massive networks to carry out complex sensory-motor tasks. The experiments with an actual robot ensure that an essence of reality is maintained and that no critical problems have been ignored.
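The layering idea, where each network strictly augments the one beneath it and higher layers can override lower-level behavior, can be sketched abstractly. The layer names and arbitration scheme below are hypothetical illustrations, not the actual networks controlling the walking machine:

```python
# Hedged sketch of layered control in the spirit described above: each
# layer augments the ones beneath it, and a higher-priority layer may
# subsume (override) a lower layer's output.  All names are illustrative.

def walk_layer(sensors):
    """Base competence: always proposes basic locomotion."""
    return "step-forward"

def rough_terrain_layer(sensors):
    """Augmentation: overrides basic stepping when an obstacle is felt."""
    return "lift-leg-higher" if sensors.get("obstacle") else None

def follow_layer(sensors):
    """Further augmentation: steers toward a passively sensed IR target."""
    return "turn-toward-target" if sensors.get("infrared_target") else None

def control(sensors, layers=(follow_layer, rough_terrain_layer, walk_layer)):
    """Highest active layer wins; lower layers remain intact beneath it."""
    for layer in layers:
        action = layer(sensors)
        if action is not None:
            return action
```

Removing a higher layer leaves a complete, working lower-level system, which mirrors the "strict augmentation" property the abstract emphasizes.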


Author[s]: Eric Saund

The Role of Knowledge in Visual Shape Representation

October 1988



This report shows how knowledge about the visual world can be built into a shape representation in the form of a descriptive vocabulary making explicit the important geometrical relationships comprising objects' shapes. Two computational tools are offered: (1) shape tokens are placed on a Scale-Space Blackboard; (2) dimensionality reduction captures deformation classes in configurations of tokens. Knowledge lies in the token types and deformation classes tailored to the constraints and regularities of particular shape worlds. A hierarchical shape vocabulary has been implemented supporting several later visual tasks in the two-dimensional shape domain of the dorsal fins of fishes.


Author[s]: Harold Abelson, Michael Eisenberg, Matthew Halfant, Jacob Katzenelson, Elisha Sacks, Gerald Jay Sussman, Jack Wisdom and Ken Yip

Intelligence in Scientific Computing

November 1988



Combining numerical techniques with ideas from symbolic computation and with methods incorporating knowledge of science and mathematics leads to a new category of intelligent computational tools for scientists and engineers. These tools autonomously prepare simulation experiments from high-level specifications of physical models. For computationally intensive experiments, they automatically design special-purpose numerical engines optimized to perform the necessary computations. They actively monitor numerical and physical experiments. They interpret experimental data and formulate numerical results in qualitative terms. They enable their human users to control computational experiments in terms of high-level behavioral descriptions.


Author[s]: Boris Katz

Using English For Indexing and Retrieving

October 1988



This paper describes START, a natural language system. The system analyzes English text and automatically transforms it into an appropriate representation, the knowledge base, which incorporates the information found in the text. The user gains access to information stored in the knowledge base by querying it in English. The system analyzes the query and decides through a matching process what information in the knowledge base is relevant to the question. It then retrieves this information and formulates its response, also in English.
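The index-then-match cycle described above can be sketched with a toy knowledge base of subject-relation-object triples. This is a deliberately simplified illustration; START's actual analysis and representation are richer than the hypothetical functions shown here:

```python
# Hypothetical sketch of indexing English-derived facts and matching a
# query against them -- not START's actual representation or code.

knowledge_base = []

def index(subject, relation, obj):
    """Store an analyzed sentence as a subject-relation-object triple."""
    knowledge_base.append((subject, relation, obj))

def retrieve(subject, relation):
    """Match a question pattern against stored triples and return the
    objects that complete it (the raw material for an English answer)."""
    return [o for s, r, o in knowledge_base if s == subject and r == relation]

# "Bill surprised Hillary."  ->  indexed triple
index("Bill", "surprised", "Hillary")
index("START", "analyzes", "English text")

# "What does START analyze?"  ->  matched pattern
answers = retrieve("START", "analyzes")
```

A final generation step, omitted here, would render the matched material back into an English sentence for the user.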


Author[s]: Mark Harper Shirley

Generating Circuit Tests by Exploiting Designed Behavior

December 1988



This thesis describes two programs for generating tests for digital circuits that exploit several kinds of expert knowledge not used by previous approaches. First, many test generation problems can be solved efficiently using operation relations, a novel representation of circuit behavior that connects internal component operations with directly executable circuit operations. Operation relations can be computed efficiently by searching traces of simulated circuit behavior. Second, experts write test programs rather than test vectors because programs are more readable and compact. Test programs can be constructed automatically by merging program fragments using expert-supplied goal-refinement rules and domain-independent planning techniques.

horizontal line

MIT logo Computer Science and Artificial Intelligence Laboratory (CSAIL)
The Stata Center, Building 32 - 32 Vassar Street - Cambridge, MA 02139 - USA
tel:+1-617-253-0073 - publications@csail.mit.edu