CSAIL Digital Archive - Artificial Intelligence Laboratory Series
AIM800 Author[s]: Demetri Terzopoulos Computing Visible-Surface Representations March 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM800.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM800.pdf The low-level interpretation of images provides constraints on 3D surface shape at multiple resolutions, but typically only at scattered locations over the visual field. Subsequent visual processing can be facilitated substantially if the scattered shape constraints are immediately transformed into visible-surface representations that unambiguously specify surface shape at every image point. The required transformation is shown to lead to an ill-posed surface reconstruction problem. A well-posed variational principle formulation is obtained by invoking 'controlled continuity,' a physically nonrestrictive (generic) assumption about surfaces which is nonetheless strong enough to guarantee unique solutions. The variational principle, which admits an appealing physical interpretation, is locally discretized by applying the finite element method to a piecewise finite element representation of surfaces. This forms the mathematical basis of a unified and general framework for computing visible-surface representations. The computational framework unifies formal solutions to the key problems of (i) integrating multiscale constraints on surface depth and orientation from multiple visual sources, (ii) interpolating these scattered constraints into dense, piecewise smooth surfaces, (iii) discovering surface depth and orientation discontinuities and allowing them to restrict interpolation appropriately, and (iv) overcoming the immense computational burden of fine-resolution surface reconstruction. An efficient surface reconstruction algorithm is developed. It exploits multiresolution hierarchies of cooperative relaxation processes and is suitable for implementation on massively parallel networks of simple, locally interconnected processors. The algorithm is evaluated empirically in a diversity of applications.
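The core idea of the abstract above, propagating scattered depth constraints into a dense surface by local relaxation, can be illustrated with a much simpler stand-in. This is a membrane-style Jacobi iteration, not the memo's controlled-continuity finite-element scheme; the function name, grid size, and iteration count are illustrative.

```python
import numpy as np

def interpolate_surface(shape, constraints, iterations=2000):
    """Fill a dense height map from scattered depth constraints by
    iterative local averaging (a membrane-style relaxation, far simpler
    than the controlled-continuity finite-element formulation)."""
    z = np.zeros(shape)
    fixed = np.zeros(shape, dtype=bool)
    for (i, j), depth in constraints.items():
        z[i, j] = depth
        fixed[i, j] = True
    for _ in range(iterations):
        # Replace each free point by the average of its four neighbours
        # (replicated at the border); constrained points stay clamped.
        padded = np.pad(z, 1, mode='edge')
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        z = np.where(fixed, z, avg)
    return z
```

Because each sweep moves information only one grid cell, convergence is slow at fine resolutions, which is exactly the drawback the memo's multiresolution hierarchy addresses.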
AIM801 Author[s]: Kent Pitman The Description of Large Systems September 1984 ftp://publications.ai.mit.edu/aipublications/500999/AIM801.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM801.pdf In this paper we discuss the problems associated with the description and manipulation of large systems when their sources are not maintained as single files. We show why and how tools that address these issues, such as Unix MAKE and Lisp Machine DEFSYSTEM, have evolved. Existing formalisms suffer from the problem that their syntax is not easily separable from their functionality. In programming languages, standard “calling conventions” exist to insulate the caller of a function from the syntactic details of how that function was defined, but until now no such conventions have existed to insulate consumers of program systems from the details of how those systems were specified. We propose a low-level data abstraction which can support notations such as those used by MAKE and DEFSYSTEM without requiring that the introduction of a new notation be accompanied by a completely different set of tools for instantiating or otherwise manipulating the resulting system. Lisp is used for presentation, but the issues are not idiosyncratic to Lisp.
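The shared core that MAKE-like and DEFSYSTEM-like notations could compile into is essentially a dependency graph plus a traversal. The sketch below is a hypothetical illustration of that low-level abstraction, not the memo's proposal; the data layout and function name are invented for the example.

```python
def build_order(system):
    """system: mapping module -> list of modules it depends on.
    Returns a build order with dependencies first -- a toy stand-in for
    the kind of notation-independent system description that front ends
    like MAKE or DEFSYSTEM syntax could both be translated into."""
    order, seen = [], set()

    def visit(m):
        if m in seen:
            return
        seen.add(m)
        for dep in system.get(m, []):
            visit(dep)          # build prerequisites before m
        order.append(m)

    for m in system:
        visit(m)
    return order
```

A consumer of the system description calls `build_order` without knowing which surface notation produced the dependency mapping, which is the insulation the memo argues for.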
AITR802 Author[s]: David Chapman Planning for Conjunctive Goals November 1985 ftp://publications.ai.mit.edu/aipublications/500999/AITR802.ps ftp://publications.ai.mit.edu/aipublications/pdf/AITR802.pdf The problem of achieving conjunctive goals has been central to domain-independent planning research; the nonlinear constraint posting approach has been most successful. Previous planners of this type have been complicated, heuristic, and ill-defined. I have combined and distilled the state of the art into a simple, precise, implemented algorithm (TWEAK) which I have proved correct and complete. I analyze previous work on domain-independent conjunctive planning; in retrospect it becomes clear that all conjunctive planners, linear and nonlinear, work the same way. The efficiency of these planners depends on the traditional add/delete-list representation for actions, which drastically limits their usefulness. I present theorems that suggest that efficient general purpose planning with more expressive action representations is impossible, and suggest ways to avoid this problem.
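The "traditional add/delete-list representation" mentioned above can be made concrete in a few lines. This is the generic STRIPS-style action model the thesis analyzes, not the TWEAK algorithm itself; the dictionary layout is an assumption made for the example.

```python
def apply_action(state, action):
    """Apply an add/delete-list action to a state, where the state is a
    set of ground propositions and the action is a dict with
    'preconditions', 'adds', and 'deletes' sets of propositions."""
    if not action['preconditions'] <= state:
        raise ValueError('preconditions unsatisfied')
    # The classical semantics: remove the delete list, add the add list.
    return (state - action['deletes']) | action['adds']
```

The simplicity of this semantics is what makes the completeness proofs tractable, and its inexpressiveness (no conditional or derived effects) is what the thesis's negative theorems are about.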
AIM803 Author[s]: Demetri Terzopoulos Multigrid Relaxation Methods and the Analysis of Lightness, Shading and Flow October 1984 ftp://publications.ai.mit.edu/aipublications/500999/AIM803.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM803.pdf Image analysis problems, posed mathematically as variational principles or as partial differential equations, are amenable to numerical solution by relaxation algorithms that are local, iterative, and often parallel. Although they are well suited structurally for implementation on massively parallel, locally interconnected computational architectures, such distributed algorithms are seriously handicapped by an inherent inefficiency at propagating constraints between widely separated processing elements. Hence, they converge extremely slowly when confronted by the large representations necessary for low-level vision. Application of multigrid methods can overcome this drawback, as we established in previous work on 3D surface reconstruction. In this paper, we develop efficient multiresolution iterative algorithms for computing lightness, shape-from-shading, and optical flow, and we evaluate the performance of these algorithms on synthetic images. The multigrid methodology that we describe is broadly applicable in low-level vision. Notably, it is an appealing strategy to use in conjunction with regularization analysis for the efficient solution of a wide range of ill-posed visual reconstruction problems.
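The multigrid idea the abstract relies on, using a coarser grid to propagate constraints that plain relaxation moves only one cell per sweep, can be sketched for a 1-D Poisson equation. This is a minimal two-grid cycle under stated assumptions (damped Jacobi smoothing, injection restriction, linear prolongation); a real solver would recurse into a V-cycle, and all sweep counts here are illustrative.

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=2/3):
    """Damped Jacobi smoothing for -u'' = f on a uniform 1-D grid with
    fixed (Dirichlet) endpoints."""
    for _ in range(sweeps):
        u[1:-1] = ((1 - omega) * u[1:-1]
                   + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
    return u

def two_grid(u, f, h):
    """One two-grid cycle: smooth, transfer the residual equation to a
    grid of half the resolution, solve it there cheaply, interpolate the
    correction back, and smooth again."""
    u = jacobi(u, f, h, sweeps=3)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)  # residual
    rc = r[::2].copy()                        # restrict by injection
    ec = jacobi(np.zeros_like(rc), rc, 2*h, sweeps=50)  # coarse solve
    e = np.interp(np.arange(len(u)),
                  np.arange(0, len(u), 2), ec)  # prolong correction
    return jacobi(u + e, f, h, sweeps=3)
```

Smoothing kills the high-frequency error cheaply on the fine grid, while the smooth remainder, which would take plain relaxation many sweeps to remove, is handled on the coarse grid where each sweep covers twice the distance.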
AIM804 Author[s]: Gideon Sahar and John M. Hollerbach Planning of Minimum-Time Trajectories for Robot Arms November 1984 ftp://publications.ai.mit.edu/aipublications/500999/AIM804.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM804.pdf Planning minimum-time trajectories for a robot arm has been a long-standing and unsolved problem of considerable interest. We present a general solution to this problem that involves joint space tesselation, a dynamic time-scaling algorithm, and graph search. The solution incorporates full dynamics of movement and actuator constraints, and can be easily extended for joint limits and work space obstacles, but is subject to the particular tesselation scheme used. The results presented show that, in general, the optimal paths are not straight lines, but rather curves in joint space that utilize the dynamics of the arm and gravity to help in moving the arm faster to its destination. Implementation difficulties due to the tesselation and to combinatorial proliferation of paths are discussed.
AIM805 Author[s]: Michael A. Gennert Any Dimensional Reconstruction from Hyperplanar Projections October 1984 ftp://publications.ai.mit.edu/aipublications/500999/AIM805.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM805.pdf In this paper we examine the reconstruction of functions of any dimension from hyperplanar projections. This is a generalization of a problem that has generated much interest recently, especially in the field of medical imaging. Computed Axial Tomography (CAT) and Nuclear Magnetic Resonance (NMR) are two medical techniques that fall in this framework. NMR scans measure the hydrogen density along planes through the body. Here we will examine reconstruction methods that involve backprojecting the projection data and summing this over the entire region of interest. There are two methods for doing this. One method is to filter the projection data first, and then backproject this filtered data and sum over all projection directions. The other method is to backproject and sum the projection data first, and then filter. The two methods are mathematically equivalent, producing very similar equations. We will derive the reconstruction formulas for both methods for any number of dimensions. We will examine the cases of two and three dimensions, since these are the only ones encountered in practice. The equations are very different for these cases. In general, the equations are very different for even and odd dimensionality. We will discuss why this is so, and show that the equations for even and odd dimensionality are related by the Hilbert Transform.
AIM806 Author[s]: John Canny Collision Detection for Moving Polyhedra October 1984 ftp://publications.ai.mit.edu/aipublications/500999/AIM806.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM806.pdf We consider the problem of moving a three-dimensional solid object among polyhedral obstacles. The traditional formulation of configuration space for this problem uses three translational parameters and three angles (typically Euler angles), and the constraints between the object and obstacles involve transcendental functions. We show that a quaternion representation of rotation yields constraints which are purely algebraic in a higher-dimensional space. By simple manipulation, the constraints may be projected down into a six-dimensional space with no increase in complexity. Using this formulation, we derive an efficient exact intersection test for an object which is translating and rotating among obstacles.
AITR807 Author[s]: Andrew Lewis Ressler A Circuit Grammar For Operational Amplifier Design January 1984 ftp://publications.ai.mit.edu/aipublications/500999/AITR807.ps ftp://publications.ai.mit.edu/aipublications/pdf/AITR807.pdf Electrical circuit designers seldom create really new topologies or use old ones in a novel way. Most designs are known combinations of common configurations tailored for the particular problem at hand. In this thesis I show that much of the behavior of a designer engaged in such ordinary design can be modelled by a clearly defined computational mechanism executing a set of stylized rules. Each of my rules embodies a particular piece of the designer’s knowledge. A circuit is represented as a hierarchy of abstract objects, each of which is composed of other objects. The leaves of this tree represent the physical devices from which physical circuits are fabricated. By analogy with context-free languages, a class of circuits is generated by a phrase-structure grammar of which each rule describes how one type of abstract object can be expanded into a combination of more concrete parts. Circuits are designed by first postulating an abstract object which meets the particular design requirements. This object is then expanded into a concrete circuit by successive refinement using rules of my grammar. There are in general many rules which can be used to expand a given abstract component. Analysis must be done at each level of the expansion to constrain the search to a reasonable set. Thus the rules of my circuit grammar provide constraints which allow the approximate qualitative analysis of partially instantiated circuits. Later, more careful analysis in terms of more concrete components may lead to the rejection of a line of expansion which at first looked promising. I provide special failure rules to direct the repair in this case.
AIM809 Author[s]: Ronald S. Fearing Simplified Grasping and Manipulation with Dextrous Robot Hands November 1984 ftp://publications.ai.mit.edu/aipublications/500999/AIM809.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM809.pdf A method is presented for stably grasping two-dimensional polygonal objects with a dextrous hand when object models are not available. Basic constraints on object vertex angles are found for feasible grasping with two fingers. Local tactile information can be used to determine the finger motion that will reach feasible grasping locations. With an appropriate choice of finger stiffness, a hand can automatically grasp these objects with two fingers. The bounded slip of a part in a hand is shown to be valuable for adapting the fingers and object to a stable situation. Examples are given to show the ability of this grasping method to accommodate disturbance forces and to perform simple part reorientations and regrasping operations.
AITR810 Author[s]: Michael Andreas Erdmann On Motion Planning with Uncertainty August 1984 ftp://publications.ai.mit.edu/aipublications/500999/AITR810.ps ftp://publications.ai.mit.edu/aipublications/pdf/AITR810.pdf Robots must successfully plan and execute tasks in the presence of uncertainty. Uncertainty arises from errors in modeling, sensing, and control. Planning in the presence of uncertainty constitutes one facet of the general motion planning problem in robotics. This problem is concerned with the automatic synthesis of motion strategies from high level task specification and geometric models of environments. In order to develop successful motion strategies, it is necessary to understand the effect of uncertainty on the geometry of object interactions. Object interactions, both static and dynamic, may be represented in geometrical terms. This thesis investigates geometrical tools for modeling and overcoming uncertainty. The thesis describes an algorithm for computing backprojections of desired task configurations. Task goals and motion states are specified in terms of a moving object’s configuration space. Backprojections specify regions in configuration space from which particular motions are guaranteed to accomplish a desired task. The backprojection algorithm considers surfaces in configuration space that facilitate sliding towards the goal, while avoiding surfaces on which motions may prematurely halt. In executing a motion from a backprojection region, a plan executor must be able to recognize that a desired task has been accomplished. Since sensors are subject to uncertainty, recognition of task success is not always possible. The thesis considers the structure of backprojection regions and of task goals that ensures goal recognizability. The thesis also develops a representation of friction in configuration space, in terms of a friction cone analogous to the real space friction cone. The friction cone provides the backprojection algorithm with a geometrical tool for determining points at which motions may halt.
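The real-space friction cone that the thesis generalizes has a simple computational form: a contact can resist a force only if the force's angle from the surface normal is within atan(mu). The sketch below is a real-space illustration of that cone test, not the thesis's configuration-space construction; the function name and tolerance handling are assumptions of this example.

```python
import math

def in_friction_cone(force, normal, mu):
    """True if the applied force lies inside the friction cone at a
    contact: its angle from the surface normal is at most atan(mu),
    where mu is the coefficient of friction."""
    dot = sum(f * n for f, n in zip(force, normal))
    norm_f = math.sqrt(sum(f * f for f in force))
    norm_n = math.sqrt(sum(n * n for n in normal))
    if norm_f == 0.0:
        return True                      # no force, nothing to resist
    cos_angle = max(-1.0, min(1.0, dot / (norm_f * norm_n)))
    return math.acos(cos_angle) <= math.atan(mu)
```

A motion can prematurely halt at a contact exactly when the reaction force it generates stays inside this cone, which is the geometric condition the backprojection algorithm checks.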
AIM811 Author[s]: Richard J. Doyle Hypothesizing and Refining Causal Models December 1984 ftp://publications.ai.mit.edu/aipublications/500999/AIM811.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM811.pdf An important common sense competence is the ability to hypothesize causal relations. This paper presents a set of constraints which make the problem of formulating causal hypotheses about simple physical systems a tractable one. The constraints include: (1) a temporal and physical proximity requirement, (2) a set of abstract causal explanations for changes in physical systems in terms of dependences between quantities, and (3) a teleological assumption that dependences in designed physical systems are functions. These constraints were embedded in a learning system which was tested in two domains: a sink and a toaster. The learning system successfully generated and refined naïve causal models of these simple physical systems. The causal models which emerge from the learning process support causal reasoning: explanation, prediction, and planning. Inaccurate predictions and failed plans in turn indicate deficiencies in the causal models and the need to re-hypothesize. Thus learning supports reasoning which leads to further learning. The learning system makes use of standard inductive rules of inference as well as the constraints on causal hypotheses to generalize its causal models. Finally, a simple example involving an analogy illustrates another way to repair incomplete causal models.
AIM812 Author[s]: G. Edward Barton, Jr. On the Complexity of ID/LP Parsing December 1984 ftp://publications.ai.mit.edu/aipublications/500999/AIM812.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM812.pdf Recent linguistic theories cast surface complexity as the result of interacting subsystems of constraints. For instance, the ID/LP grammar formalism separates constraints on immediate dominance from those on linear order. Shieber (1983) has shown how to carry out direct parsing of ID/LP grammars. His algorithm uses ID and LP constraints directly in language processing, without expanding them into a context-free “object grammar.” This report examines the computational difficulty of ID/LP parsing. Shieber’s purported O(|G|²n³) runtime bound underestimated the difficulty of ID/LP parsing; the worst-case runtime of his algorithm is exponential in grammar size. A reduction from the vertex-cover problem proves that ID/LP parsing is NP-complete. The growth of the internal data structures is the source of difficulty in Shieber’s algorithm. The computational and linguistic implications of these results are discussed. Despite the potential for combinatorial explosion, Shieber’s algorithm remains better than the alternative of parsing an expanded object grammar.
AIM813 Author[s]: Berthold K.P. Horn The Variational Approach to Shape from Shading March 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM813.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM813.pdf We develop a systematic approach to the discovery of parallel iterative schemes for solving the shape-from-shading problem on a grid. A standard procedure for finding such schemes is outlined, and subsequently used to derive several new ones. The shape-from-shading problem is known to be mathematically equivalent to a nonlinear first-order partial differential equation in surface elevation. To avoid the problems inherent in methods used to solve such equations, we follow previous work in reformulating the problem as one of finding a surface orientation field that minimizes the integral of the brightness error. The calculus of variations is then employed to derive the appropriate Euler equations on which iterative schemes can be based. The problem of minimizing the integral of the brightness error term is ill-posed, since it has an infinite number of solutions in terms of surface orientation fields. A previous method used a regularization technique to overcome this difficulty. An extra term was added to the integral to obtain an approximation to a solution that was as smooth as possible. We point out here that surface orientation has to obey an integrability constraint if it is to correspond to an underlying smooth surface. Regularization methods do not guarantee that the surface orientation recovered satisfies this constraint. Consequently, we attempt to develop a method that enforces integrability, but fail to find a convergent iterative scheme based on the resulting Euler equations. We show, however, that such a scheme can be derived if, instead of strictly enforcing the constraint, a penalty term derived from the constraint is adopted. This new scheme, while it can be expressed simply and elegantly using the surface gradient, unfortunately cannot deal with constraints imposed by occluding boundaries. These constraints are crucial if ambiguities in the solution of the shape-from-shading problem are to be avoided. Different schemes result if one uses different parameters to describe surface orientation. We derive two new schemes, using unit surface normals, that facilitate the incorporation of the occluding boundary information. These schemes, while more complex, have several advantages over previous ones.
AIM815 Author[s]: Kenneth ManKam Yip Tense, Aspect and the Cognitive Representation of Time December 1984 ftp://publications.ai.mit.edu/aipublications/500999/AIM815.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM815.pdf This paper explores the relationships between a computational theory of temporal representation (as developed by James Allen) and a formal linguistic theory of tense (as developed by Norbert Hornstein) and aspect. It aims to provide explicit answers to four fundamental questions: (1) what is the computational justification for the primitives of a linguistic theory; (2) what is the computational explanation of the formal grammatical constraints; (3) what are the processing constraints imposed on the learnability and markedness of these theoretical constructs; and (4) what are the constraints that a linguistic theory imposes on representations. We show that one can effectively exploit the interface between the language faculty and the cognitive faculties by using linguistic constraints to determine restrictions on the cognitive representation and vice versa. Three main results are obtained: (1) We derive an explanation of an observed grammatical constraint on tense—the Linear Order Constraint—from the information monotonicity property of the constraint propagation algorithm of Allen’s temporal system; (2) We formulate a principle of markedness for the basic tense structures based on the computational efficiency of the temporal representations; and (3) We show Allen’s interval-based temporal system is not arbitrary, but it can be used to explain independently motivated linguistic constraints on tense and aspect interpretations. We also claim that the methodology of research developed in this study—“cross level” investigation of independently motivated formal grammatical theory and computational models—is a powerful paradigm with which to attack representational problems in basic cognitive domains, e.g., space, time, causality, etc.
AIM816 Author[s]: Richard C. Waters PP: A LISP Pretty Printing System December 1984 ftp://publications.ai.mit.edu/aipublications/500999/AIM816.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM816.pdf The PP system provides an efficient implementation of the Common Lisp pretty printing function PPRINT. In addition, PP goes beyond ordinary pretty printers by providing mechanisms which allow the user to control the exact form of pretty printed output. This is done by extending LISP in two ways. First, several new FORMAT directives are provided which support dynamic decisions about the placement of newlines based on the line width available for output. Second, the concept of print-self methods is extended so that it can be applied to lists as well as to objects which can receive messages. Together, these extensions support pretty printing of both programs and data structures. The PP system also modifies the way that the Lisp printer handles the abbreviation of output. The traditional mechanisms for abbreviating lists based on nesting depth and length are extended so that they automatically apply to every kind of structure without the user having to take any explicit action when writing print-self methods. A new abbreviation mechanism is introduced which can be used to limit the total number of lines printed.
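The "dynamic decisions about the placement of newlines based on the line width" that PP's FORMAT directives support reduce, in the simplest case, to a fit-or-break rule. The toy below illustrates only that rule, in Python rather than Lisp, and is in no way the PPRINT machinery; the layout policy is an assumption of this example.

```python
def pprint_list(items, width):
    """Render a parenthesized list of tokens on one line if it fits
    within `width` columns, otherwise one token per line, aligned one
    space past the opening parenthesis -- the kind of width-driven
    newline decision a pretty printer makes."""
    one_line = '(' + ' '.join(items) + ')'
    if len(one_line) <= width:
        return one_line
    return '(' + '\n '.join(items) + ')'
```

Real pretty printers make this decision recursively at every nesting level, which is why the choice is exposed to users as directives rather than hard-coded.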
AIM817 Author[s]: A. Hurlbert and T. Poggio Spotlight on Attention April 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM817.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM817.pdf We review some recent psychophysical, psychological and anatomical data which highlight the important role of attention in visual information processing, and discuss the evidence for a serial spotlight of attention. We point out the connections between the questions raised by the spotlight model and computational results on the intrinsic parallelism of several tasks in vision.
AIM820 Author[s]: Michael J. Brooks and Berthold K.P. Horn Shape and Source from Shading January 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM820.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM820.pdf Well-known methods for solving the shape-from-shading problem require knowledge of the reflectance map. Here we show how the shape-from-shading problem can be solved when the reflectance map is not available, but is known to have a given form with some unknown parameters. This happens, for example, when the surface is known to be Lambertian, but the direction to the light source is not known. We give an iterative algorithm that alternately estimates the surface shape and the light source direction. Use of the unit normal in parameterizing the reflectance map, rather than the gradient or stereographic coordinates, simplifies the analysis. Our approach also leads to an iterative scheme for computing shape from shading that adjusts the current estimates of the local normals toward or away from the direction of the light source. The amount of adjustment is proportional to the current difference between the predicted and the observed brightness. We also develop generalizations to less constrained forms of reflectance maps.
AIM821 Author[s]: Shahriar Negahdaripour and Berthold K.P. Horn Direct Passive Navigation February 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM821.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM821.pdf In this paper, we show how to recover the motion of an observer relative to a planar surface directly from image brightness derivatives. We do not compute the optical flow as an intermediate step. We derive a set of nine nonlinear equations using a least squares formulation. A simple iterative scheme allows us to find either of two possible solutions of these equations. An initial pass over the relevant image region is used to accumulate a number of moments of the image brightness derivatives. All of the quantities used in the iteration can be efficiently computed from these totals, without the need to refer back to the image. A new, compact notation allows us to show easily that there are at most two planar solutions. Key words: Passive Navigation, Optical Flow, Structure and Motion, Least Squares, Planar Surface, Nonlinear Equations, Dual Solution, Planar Motion Field Equation.
AIM822 Author[s]: Michael Brady, Jean Ponce, Alan Yuille and Haruo Asada Describing Surfaces January 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM822.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM822.pdf This paper continues our work on visual representations of three-dimensional surfaces [Brady and Yuille 1984b]. The theoretical component of our work is a study of classes of surface curves as a source of constraint on the surface on which they lie, and as a basis for describing it. We analyze bounding contours, surface intersections, lines of curvature, and asymptotes. Our experimental work investigates whether the information suggested by our theoretical study can be computed reliably and efficiently. We demonstrate algorithms that compute lines of curvature of a (Gaussian smoothed) surface; determine planar patches and umbilic regions; extract axes of surfaces of revolution and tube surfaces. We report preliminary results on adapting the curvature primal sketch algorithms of Asada and Brady [1984] to detect and describe surface intersections.
AIM823 Author[s]: Jonathan H. Connell and Michael Brady Generating and Generalizing Models of Visual Objects July 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM823.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM823.pdf We report on initial experiments with an implemented learning system whose inputs are images of two-dimensional shapes. The system first builds semantic network descriptions of shapes based on Brady’s smoothed local symmetry representation. It learns shape models from them using a substantially modified version of Winston’s ANALOGY program. A generalization of Gray coding enables the representation to be extended and also allows a single operation, called ablation, to achieve the effects of many standard induction heuristics. The program can learn disjunctions, and can learn concepts using only positive examples. We discuss learnability and the pervasive importance of representational hierarchies.
AIM824 Author[s]: Jean Ponce and Michael Brady Toward a Surface Primal Sketch April 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM824.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM824.pdf This paper reports progress toward the development of a representation of significant surface changes in dense depth maps. We call the representation the Surface Primal Sketch by analogy with representations of intensity changes, image structure, and changes in curvature of planar curves. We describe an implemented program that detects, localizes, and symbolically describes: steps, where the surface height function is discontinuous; roofs, where the surface is continuous but the surface normal is discontinuous; smooth joins, where the surface normal is continuous but a principal curvature is discontinuous and changes sign; and shoulders, which consist of two roofs and correspond to a step viewed obliquely. We illustrate the performance of the program on range maps of objects of varying complexity.
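The taxonomy above, steps versus roofs versus smooth surface, can be caricatured in one dimension with finite differences: a step is a jump in height, a roof is a jump in slope with continuous height. This crude thresholding sketch is only an illustration of the event definitions, not the paper's detection and localization machinery; the sampling spacing and tolerance are invented for the example.

```python
def classify(z, i, h=1.0, tol=0.25):
    """Classify the surface event between samples i and i+1 of a 1-D
    height profile z sampled at spacing h: 'step' if the height jumps,
    'roof' if the height is continuous but the slope jumps, else
    'smooth'.  Requires valid samples at i-1 .. i+2."""
    if abs(z[i+1] - z[i]) > tol:
        return 'step'                      # height discontinuity
    slope_left = (z[i] - z[i-1]) / h
    slope_right = (z[i+2] - z[i+1]) / h
    if abs(slope_right - slope_left) > tol:
        return 'roof'                      # surface-normal discontinuity
    return 'smooth'
```

Shoulders, in this 1-D caricature, would show up as two nearby roof events of opposite sense.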
AIM825 Author[s]: S. Murray Sherman and Christof Koch The Anatomy and Physiology of Gating Retinal Signals in the Mammalian Lateral Geniculate Nucleus June 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM825.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM825.pdf In the mammalian visual system, the lateral geniculate nucleus is commonly thought to act merely as a relay for the transmission of visual information from the retina to the visual cortex, a relay without significant elaboration in receptive field properties or signal strength. However, many morphological and electrophysiological observations are at odds with this view. In this paper, we will review the different anatomical pathways and biophysical mechanisms possibly implementing a selective gating of visual information flow from the retina to the visual cortex. We will argue that the lateral geniculate nucleus in mammals is one of the earliest sites where selective visual attention operates and where general changes in neuronal excitability as a function of the behavioral states of the animal, for instance, sleep, paradoxical sleep, arousal, etc., occur.
AIM826 Author[s]: Michael Drumheller Mobile Robot Localization Using Sonar January 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM826.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM826.pdf This paper describes a method by which range data from a sonar or other type of rangefinder can be used to determine the two-dimensional position and orientation of a mobile robot inside a room. The plan of the room is modeled as a list of segments indicating the positions of walls. The method works by extracting straight segments from the range data and examining all hypotheses about pairings between the segments and walls in the model of the room. Inconsistent pairings are discarded efficiently by using local constraints based on distances between walls, angles between walls, and ranges between walls along their normal vectors. These constraints are used to obtain a small set of possible positions, which is further pruned using a test for physical consistency. The approach is extremely tolerant of noise and clutter. Transient objects such as furniture and people need not be included in the room model, and very noisy, low-resolution sensors can be used. The algorithm’s performance is demonstrated using a Polaroid Ultrasonic Rangefinder, which is a low-resolution, high-noise sensor.
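One of the local constraints described above, that the angle between two sensed segments must match the angle between the walls they are paired with, is easy to state computationally. The sketch below checks only that single constraint on a candidate pairing; it is a hypothetical simplification of the paper's pruning stage (segments reduced to direction angles, a fixed tolerance, no distance or range constraints).

```python
import math

def consistent(pairing, data_segs, model_walls, tol=0.05):
    """pairing: list of (segment index, wall index) hypotheses.
    data_segs / model_walls: direction angles in radians.  For every
    pair of assignments, the angle between the two sensed segments must
    match the angle between their paired walls within `tol`."""
    def angle_between(a, b):
        d = abs(a - b) % math.pi          # directions are modulo 180 deg
        return min(d, math.pi - d)

    for i in range(len(pairing)):
        for j in range(i + 1, len(pairing)):
            si, wi = pairing[i]
            sj, wj = pairing[j]
            if abs(angle_between(data_segs[si], data_segs[sj]) -
                   angle_between(model_walls[wi], model_walls[wj])) > tol:
                return False
    return True
```

Because a single violated pairwise constraint kills a hypothesis, most of the combinatorial space of pairings is discarded cheaply before any position is ever computed.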
AIM828 Author[s]: Philip E. Agre Routines May 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM828.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM828.pdf Regularities in the world give rise to regularities in the way in which we deal with the world. That is to say, we fall into routines. I have been studying the phenomenon of routinization, the process by which institutionalized patterns of interaction with the world arise and evolve in everyday life. Underlying this evolution is a dialectical process of internalization. First you build a model of some previously unarticulated emergent aspect of an existing routine. Armed with an incrementally more global view of interaction, you can often formulate an incrementally better informed plan of attack. A routine is not a plan in the sense of the classical planning literature, except in the theoretical limit of this process. I am implementing this theory using running arguments, a technique for writing rule-based programs for intelligent agents. Because a running argument is compiled into TMS networks as it proceeds, incremental changes in the world require only incremental recomputation of the reasoning about what actions to take next. The system supports a style of programming, dialectical argumentation, that has many important properties that recommend it as a substrate for large AI systems. One of these might be called additivity: an agent can modify its reasoning in a class of situations by adducing arguments as to why its previous arguments were incorrect in those cases. Because no side-effects are ever required, reflexive systems based on dialectical argumentation ought to be less fragile than intuition and experience suggest. I outline the remaining implementation problems.
AIM829 Author[s]: Kent M. Pitman CREF: An Editing Facility for Managing Structured Text February 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM829.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM829.pdf This paper reports work in progress on an experimental text editor called CREF, the Cross Referenced Editing Facility. CREF deals with chunks of text, called segments, which may have associated features such as keywords or various kinds of links to other segments. Text in CREF is organized into linear collections for normal browsing. The use of summary and cross-reference links in CREF allows the imposition of an auxiliary network structure upon the text which can be useful for “zooming in and out” or “nonlocal transitions.” Although it was designed as a tool for use in complex protocol analysis by a “Knowledge Engineer’s Assistant,” CREF has many interesting features which should make it suitable for a wide variety of applications, including browsing, program editing, document preparation, and mail reading.
AIM832 Author[s]: Alessandro Verri and Alan Yuille Perspective Projection Invariants February 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM832.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM832.pdf An important part of stereo vision consists of finding and matching points in two images which correspond to the same physical element in the scene. We show that zeros of curvature of curves are perspective projection invariants and can therefore be used to find corresponding points. They can be used to help solve the registration problem (Longuet-Higgins, 1982) and to obtain the correct depth when a curve enters the forbidden zone (Krol and van de Grind, 1982). They are also relevant to theories for representing image curves. We consider the stability of these zeros of curvature.
AIM833 Author[s]: Tomaso Poggio, Harry Voorhees and Alan Yuille A Regularized Solution to Edge Detection April 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM833.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM833.pdf We consider edge detection as the problem of measuring and localizing changes of light intensity in the image. As discussed by Torre and Poggio (1984), edge detection, when defined in this way, is an ill-posed problem in the sense of Hadamard. The regularized solution that arises is then the solution to a variational principle. In the case of exact data, one of the standard regularization methods (see Poggio and Torre, 1984) leads to cubic spline interpolation before differentiation. We show that in the case of regularly-spaced data this solution corresponds to a convolution filter, to be applied to the signal before differentiation, which is a cubic spline. In the case of non-exact data, we use another regularization method that leads to a different variational principle. We prove (1) that this variational principle leads to a convolution filter for the problem of one-dimensional edge detection, (2) that the form of this filter is very similar to the Gaussian filter, and (3) that the regularizing parameter $\lambda$ in the variational principle effectively controls the scale of the filter.
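As a loose illustration of the filtering claim (a sketch under our own assumptions: we stand in the cubic B-spline for the spline-like, Gaussian-like filter and locate the edge at the maximum derivative magnitude; nothing here reproduces the memo's derivation), one-dimensional edge detection then amounts to convolution followed by differentiation:

```python
def cubic_bspline(x):
    """Cubic B-spline kernel: smooth, compactly supported, Gaussian-like."""
    x = abs(x)
    if x < 1:
        return (4 - 6 * x**2 + 3 * x**3) / 6
    if x < 2:
        return (2 - x) ** 3 / 6
    return 0.0

def detect_edge(signal, scale=1.0):
    """Smooth `signal` by convolution with a scaled cubic B-spline, then
    return the index of the largest central-difference gradient."""
    n = len(signal)
    radius = int(2 * scale) + 1
    kernel = [cubic_bspline(k / scale) / scale for k in range(-radius, radius + 1)]
    norm = sum(kernel)
    smooth = [
        sum(kernel[k + radius] * signal[min(max(i + k, 0), n - 1)]  # clamp ends
            for k in range(-radius, radius + 1)) / norm
        for i in range(n)
    ]
    grads = [abs(smooth[i + 1] - smooth[i - 1]) / 2 for i in range(1, n - 1)]
    return 1 + max(range(len(grads)), key=grads.__getitem__)
```

Increasing `scale` widens the kernel, which is the role the memo assigns to the regularizing parameter $\lambda$.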
AITR834 Author[s]: Peter Merrett Andreae Justified Generalization: Acquiring Procedures from Examples January 1985 ftp://publications.ai.mit.edu/aipublications/500999/AITR834.ps ftp://publications.ai.mit.edu/aipublications/pdf/AITR834.pdf This thesis describes an implemented system called NODDY for acquiring procedures from examples presented by a teacher. Acquiring procedures from examples involves several different generalization tasks. Generalization is an underconstrained task, and the main issue of machine learning is how to deal with this lack of constraint. The thesis presents two principles for constraining generalization on which NODDY is based. The first principle is to exploit domain-based constraints. NODDY demonstrates both how such constraints can be used to reduce the space of possible generalizations to a manageable size, and how to generate negative examples out of positive examples to further constrain the generalization. The second principle is to avoid spurious generalizations by requiring justification before adopting a generalization. NODDY demonstrates several different ways of justifying a generalization and proposes a way of ordering and searching a space of candidate generalizations based on how much evidence would be required to justify each generalization. Acquiring procedures also involves three types of constructive generalization: inferring loops (a kind of group), inferring complex relations and state variables, and inferring predicates. NODDY demonstrates three constructive generalization methods for these kinds of generalization.
AIM835 Author[s]: John M. Rubin and W.A. Richards Boundaries of Visual Motion April 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM835.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM835.pdf A representation of visual motion convenient for recognition should make prominent the qualitative differences among simple motions. We argue that the first stage in such a motion representation is to make explicit boundaries that we define as starts, stops, and force discontinuities. When one of these boundaries occurs in motion, human observers have the subjective impression that some fleeting, significant event has occurred. We go farther and hypothesize that one of the subjective motion boundaries is seen if and only if one of our defined boundaries occurs. We enumerate all possible motion boundaries and provide evidence that they are psychologically real.
AIM836 Author[s]: Robert C. Berwick and Amy S. Weinberg Parsing and Linguistic Explanation April 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM836.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM836.pdf This article summarizes and extends recent results linking deterministic parsing to observed “locality principles” in syntax. It also argues that grammatical theories based on explicit phrase structure rules are unlikely to provide comparable explanations of why natural languages are built the way they are.
AIM837 Author[s]: Eric Sven Ristad GPSG Recognition is NP-Hard March 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM837.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM837.pdf Proponents of generalized phrase structure grammar (GPSG) cite its weak context-free generative power as proof of the computational tractability of GPSG recognition. Since context-free languages (CFLs) can be parsed in time proportional to the cube of the sentence length, and GPSGs only generate CFLs, it seems plausible that GPSGs can also be parsed in cubic time. This longstanding, widely assumed GPSG “efficient parsability” result is misleading: parsing the sentences of an arbitrary GPSG is likely to be intractable, because a reduction from 3SAT proves that the universal recognition problem for the GPSGs of Gazdar (1981) is NP-hard. Crucially, the time to parse a sentence of a CFL can be the product of sentence length cubed and context-free grammar size squared, and a GPSG grammar can result in an exponentially large set of derived context-free rules. A central object in the 1981 GPSG theory, the metarule, inherently results in an intractable parsing problem, even when severely constrained. The implications for linguistics and natural language parsing are discussed.
AIM838 Author[s]: Jean Ponce Prism Trees: An Efficient Representation for Manipulating and Displaying Polyhedra with Many Faces April 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM838.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM838.pdf Computing surface and/or object intersections is a cornerstone of many algorithms in Geometric Modeling and Computer Graphics, for example Set Operations between solids, or surface Ray Casting display. We present an object-centered, information-preserving, hierarchical representation for polyhedra called the Prism Tree. We use the representation to decompose the intersection algorithms into two steps: the localization of intersections, and their processing. When dealing with polyhedra with many faces (typically more than one thousand), the first step is by far the most expensive. The Prism Tree structure is used to compute this localization step efficiently. A preliminary implementation of the Set Operations and Ray Casting algorithms has been constructed.
AIM839 Author[s]: Jose L. Marroquin Optimal Bayesian Estimators for Image Segmentation and Surface Reconstruction April 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM839.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM839.pdf A very fruitful approach to the solution of image segmentation and surface reconstruction tasks is their formulation as estimation problems via the use of Markov random field models and Bayes theory. However, the Maximum a Posteriori (MAP) estimate, which is the one most frequently used, is suboptimal in these cases. We show that for segmentation problems the optimal Bayesian estimator is the maximizer of the posterior marginals, while for reconstruction tasks, the thresholded posterior mean has the best possible performance. We present efficient distributed algorithms for approximating these estimates in the general case. Based on these results, we develop a maximum likelihood method that leads to a parameter-free distributed algorithm for restoring piecewise constant images. To illustrate these ideas, the reconstruction of binary patterns is discussed in detail.
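The estimator the memo favors for segmentation can be demonstrated exactly on a toy problem. A hedged sketch (the Ising-style prior, the parameter values, and all names are ours, not the memo's): on a short binary signal we enumerate every configuration, accumulate posterior marginals by brute force, and threshold them, which is the maximizer-of-posterior-marginals (MPM) estimator discussed above.

```python
import math
from itertools import product

def posterior_weight(x, y, beta=1.0, lam=0.7):
    """Unnormalized posterior of labels x given noisy observations y:
    an Ising-style smoothness prior times a pixelwise data term."""
    prior = sum(1 if a == b else -1 for a, b in zip(x, x[1:]))
    data = sum(1 if a == b else -1 for a, b in zip(x, y))
    return math.exp(beta * prior + lam * data)

def mpm_estimate(y, **kw):
    """Maximizer of posterior marginals: threshold each site's marginal
    P(x_i = 1 | y), computed here by exhaustive enumeration."""
    n = len(y)
    marg = [0.0] * n
    total = 0.0
    for x in product((0, 1), repeat=n):
        w = posterior_weight(x, y, **kw)
        total += w
        for i, xi in enumerate(x):
            if xi == 1:
                marg[i] += w
    return [1 if m / total > 0.5 else 0 for m in marg]
```

An isolated flipped bit is voted away by the marginals, while a constant signal passes through unchanged; enumeration is only feasible at this toy size, which is why the memo develops distributed approximation algorithms.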
AIM840 Author[s]: Whitman Richards, Jan J. Koenderink and D.D. Hoffman Inferring 3D Shapes from 2D Codons April 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM840.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM840.pdf All plane curves can be described at an abstract level by a sequence of five primitive elemental shapes, called “codons”, which capture the sequential relations between the singular points of curvature. The codon description provides a basis for enumerating all smooth 2D curves. Let each of these smooth plane curves be considered as the silhouette of an opaque 3D object. Clearly an infinity of 3D objects can generate any one of our “codon” silhouettes. How then can we predict which 3D object corresponds to a given 2D silhouette? To restrict the infinity of choices, we impose three mathematical properties of smooth surfaces plus one simple viewing constraint. The constraint is an extension of the notion of general position, and seems to drive our preferred inferences of 3D shapes, given only the 2D contour.
AIM841 Author[s]: W. Eric L. Grimson and Tomas Lozano-Perez Recognition and Localization of Overlapping Parts from Sparse Data June 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM841.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM841.pdf This paper discusses how sparse local measurements of positions and surface normals may be used to identify and locate overlapping objects. The objects are modeled as polyhedra (or polygons) having up to six degrees of positional freedom relative to the sensors. The approach operates by examining all hypotheses about pairings between sensed data and object surfaces and efficiently discarding inconsistent ones by using local constraints on: distances between faces, angles between face normals, and angles (relative to the surface normals) of vectors between sensed points. The method described here is an extension of a method for recognition and localization of non-overlapping parts previously described in [Grimson and Lozano-Perez 84] and [Gaston and Lozano-Perez 84].
AIM842 Author[s]: Tomas Lozano-Perez and Rodney A. Brooks An Approach to Automatic Robot Programming April 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM842.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM842.pdf In this paper we propose an architecture for a new task-level system, which we call TWAIN. Task-level programming attempts to simplify the robot programming process by requiring that the user specify only goals for the physical relationships among objects, rather than the motions needed to achieve those goals. A task-level specification is meant to be completely robot independent; no positions or paths that depend on the robot geometry or kinematics are specified by the user. We have two goals for this paper. The first is to present a more unified treatment of some individual pieces of research in task planning, whose relationship has not previously been described. The second is to provide a new framework for further research in task planning. This is a slightly modified version of a paper that appeared in Proceedings of Solid Modeling by Computers: from Theory to Applications, Research Laboratories Symposium Series, sponsored by General Motors, Warren, Michigan, September 1983.
AITR843 Author[s]: Peter J. Sterpe TEMPEST: A Template Editor for Structured Text June 1985 ftp://publications.ai.mit.edu/aipublications/500999/AITR843.ps ftp://publications.ai.mit.edu/aipublications/pdf/AITR843.pdf TEMPEST is a full-screen text editor that incorporates a structural paradigm in addition to the more traditional textual paradigm provided by most editors. While the textual paradigm treats the text as a sequence of characters, the structural paradigm treats it as a collection of named blocks which the user can define, group, and manipulate. Blocks can be defined to correspond to the structural features of the text, thereby providing more meaningful objects to operate on than characters or lines. The structural representation of the text is kept in the background, giving TEMPEST the appearance of a typical text editor. The structural and textual interfaces coexist equally, however, so one can always operate on the text from either point of view. TEMPEST’s representation scheme provides no semantic understanding of structure. This approach sacrifices depth, but affords a broad range of applicability and requires very little computational overhead. A prototype has been implemented to illustrate the feasibility and potential areas of application of the central ideas. It was developed and runs on an IBM Personal Computer.
AITR844 Author[s]: Gul Abdulnabi Agha ACTORS: A Model of Concurrent Computation in Distributed Systems June 1985 ftp://publications.ai.mit.edu/aipublications/500999/AITR844.ps ftp://publications.ai.mit.edu/aipublications/pdf/AITR844.pdf A foundational model of concurrency is developed in this thesis. We examine issues in the design of parallel systems and show why the actor model is suitable for exploiting large-scale parallelism. Concurrency in actors is constrained only by the availability of hardware resources and by the logical dependence inherent in the computation. Unlike dataflow and functional programming, however, actors are dynamically reconfigurable and can model shared resources with changing local state. Concurrency is spawned in actors using asynchronous message-passing, pipelining, and the dynamic creation of actors. This thesis deals with some central issues in distributed computing. Specifically, problems of divergence and deadlock are addressed. For example, actors permit dynamic deadlock detection and removal. The problem of divergence is contained because independent transactions can execute concurrently and potentially infinite processes are nevertheless available for interaction.
AIM845 Author[s]: Norberto M. Grzywacz and Ellen C. Hildreth The Incremental Rigidity Scheme for Recovering Structure from Motion: Position vs. Velocity Based Formulations October 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM845.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM845.pdf Perceptual studies suggest that the visual system uses the “rigidity” assumption to recover three-dimensional structure from motion. Ullman (1984) recently proposed a computational scheme, the incremental rigidity scheme, which uses the rigidity assumption to recover the structure of rigid and nonrigid objects in motion. The scheme assumes the input to be discrete positions of elements in motion, under orthographic projection. We present formulations of Ullman’s method that use velocity information and perspective projection in the recovery of structure. Theoretical and computer analyses show that the velocity-based formulations provide a rough estimate of structure quickly, but are not robust over an extended time period. The stable long-term recovery of structure requires disparate views of moving objects. Our analysis raises interesting questions regarding the recovery of structure from motion in the human visual system.
AIM846 Author[s]: Ellen C. Hildreth and John M. Hollerbach The Computational Approach to Vision and Motor Control August 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM846.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM846.pdf Over the past decade it has become increasingly clear that to understand the brain, we must study not only its biochemical and biophysical mechanisms and its outward perceptual and physical behavior. We must also study the brain at a theoretical level that investigates the computations necessary to perform its functions. The control of movements such as reaching, grasping and manipulating objects requires complex mechanisms that elaborate information from many sensors and control the forces generated by a large number of muscles. The act of seeing, which intuitively seems so simple and effortless, requires information processing whose complexity we are just beginning to grasp. This paper discusses a particular view of the computational approach to the study of vision and motor control, and its relevance to experimental neuroscience.
AIM848 Author[s]: William Clinger (editor) The Revised Revised Report on Scheme or An Uncommon Lisp August 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM848.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM848.pdf Data and procedures and the values they amass, Higher-order functions to combine and mix and match, Objects with their local state, the messages they pass, A property, a package, the control point for a catch, In the Lambda Order they are all first-class. One Thing to name them all, One Thing to define them, One Thing to place them in environments and bind them, In the Lambda Order they are all first-class. Keywords: SCHEME, LISP, functional programming, computer languages.
AIM848B Author[s]: William Clinger and Jonathan Rees (editors) Revised Report On The Algorithmic Language Scheme November 1991 ftp://publications.ai.mit.edu/aipublications/500999/AIM848b.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM848b.pdf Data and procedures and the values they amass, Higher-order functions to combine and mix and match, Objects with their local state, the messages they pass, A property, a package, the control point for a catch, In the Lambda Order they are all first-class. One Thing to name them all, One Thing to define them, One Thing to place them in environments and bind them, In the Lambda Order they are all first-class. Keywords: Scheme, Lisp, functional programming, computer languages.
AIM848A Author[s]: Jonathan Rees and William Clinger (editors) Revised Report on the Algorithmic Language Scheme September 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM848a.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM848a.pdf Data and procedures and the values they amass, Higher-order functions to combine and mix and match, Objects with their local state, the messages they pass, A property, a package, the control point for a catch, In the Lambda Order they are all first-class. One Thing to name them all, One Thing to define them, One Thing to place them in environments and bind them, In the Lambda Order they are all first-class. Keywords: Scheme, Lisp, functional programming, computer languages.
AIM849 Author[s]: John M. Hollerbach and Christopher G. Atkeson Characterization of JointInterpolated Arm Movements June 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM849.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM849.pdf Two possible sets of planning variables for human arm movement are joint angles and hand position. Although one might expect these possibilities to be mutually exclusive, recently an apparently contradictory set of data has appeared that indicates straight-line trajectories in both hand space and joint space at the same time. To assist in distinguishing between these viewpoints applied to the same data, we have theoretically characterized the set of trajectories derivable from a joint-based planning strategy and have compared them to experimental measurements. We conclude that the apparent straight lines in joint space happen to be artifacts of movement kinematics near the workspace boundary.
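The joint-based planning strategy at issue above is easy to simulate (a sketch; the planar two-link arm, link lengths, and endpoint configurations are illustrative choices of ours): interpolate the joint angles linearly and measure how far the resulting hand path bows away from the straight hand-space chord.

```python
import math

L1, L2 = 1.0, 1.0  # illustrative link lengths

def hand(theta1, theta2):
    """Forward kinematics of a planar two-link arm (theta2 is relative to link 1)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def joint_interpolated_path(q0, q1, steps=20):
    """Hand positions along a straight-line trajectory in joint space."""
    return [
        hand(q0[0] + (q1[0] - q0[0]) * t, q0[1] + (q1[1] - q0[1]) * t)
        for t in (i / steps for i in range(steps + 1))
    ]

def deviation_from_straight_line(path):
    """Maximum distance of the path points from the start-to-end chord."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    dx, dy = x1 - x0, y1 - y0
    chord = math.hypot(dx, dy)
    return max(abs(dy * (x - x0) - dx * (y - y0)) / chord for x, y in path)
```

For a typical mid-workspace movement the hand path deviates visibly from a straight line, which is why simultaneous straightness in both spaces calls for explanation.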
AITR852 Author[s]: Margaret Morrison Fleck Local Rotational Symmetries August 1985 ftp://publications.ai.mit.edu/aipublications/500999/AITR852.ps ftp://publications.ai.mit.edu/aipublications/pdf/AITR852.pdf This thesis describes a new representation for two-dimensional round regions called Local Rotational Symmetries. Local Rotational Symmetries are intended as a companion to Brady’s Smoothed Local Symmetry Representation for elongated shapes. An algorithm for computing Local Rotational Symmetry representations at multiple scales of resolution has been implemented and results of this implementation are presented. These results suggest that Local Rotational Symmetries provide a more robustly computable and perceptually accurate description of round regions than previously proposed representations. In the course of developing this representation, it has been necessary to modify the way both Smoothed Local Symmetries and Local Rotational Symmetries are computed. First, greyscale image smoothing proves to be better than boundary smoothing for creating representations at multiple scales of resolution, because it is more robust and it allows qualitative changes in representations between scales. Secondly, it is proposed that shape representations at different scales of resolution be explicitly related, so that information can be passed between scales and computation at each scale can be kept local. Such a model for multiscale computation is desirable both to allow efficient computation and to accurately model human perception.
AITR853 Author[s]: Jonathan Hudson Connell Learning Shape Descriptions: Generating and Generalizing Models of Visual Objects September 1985 ftp://publications.ai.mit.edu/aipublications/500999/AITR853.ps ftp://publications.ai.mit.edu/aipublications/pdf/AITR853.pdf We present the results of an implemented system for learning structural prototypes from greyscale images. We show how to divide an object into subparts and how to encode the properties of these subparts and the relations between them. We discuss the importance of hierarchy and grouping in representing objects and show how a notion of visual similarities can be embedded in the description language. Finally we exhibit a learning algorithm that forms class models from the descriptions produced and uses these models to recognize new members of the class.
AIM854 Author[s]: Pyung H. Chang A Closed Form Solution for Inverse Kinematics of Robot Manipulator with Redundancy March 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM854.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM854.pdf A closed-form equation for the inverse kinematics of a manipulator with redundancy is derived, using the Lagrangian multiplier method. The proposed equation is proved to provide the exact equilibrium state for the resolved motion method, and is shown to be a general expression that yields the extended Jacobian method. The repeatability problem in the resolved motion method does not exist in the proposed equation. The equation is demonstrated to give more accurate trajectories than the resolved motion method.
AIM855 Author[s]: W. Eric L. Grimson Sensing Strategies for Disambiguating Among Multiple Objects in Known Poses August 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM855.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM855.pdf The need for intelligent interaction of a robot with its environment frequently requires sensing of the environment. Further, the need for rapid execution requires that the interaction between sensing and action take place using as little sensory data as possible, while still being reliable. Previous work has developed a technique for rapidly determining the feasible poses of an object from sparse, noisy, occluded sensory data. In this paper, we examine techniques for acquiring position and surface orientation data about points on the surfaces of objects, with the intent of selecting sensory points that will force a unique interpretation of the pose of the object with as few data points as possible. Under some simple assumptions about the sensing geometry, we derive a technique for predicting optimal sensing positions. The technique has been implemented and tested. To fully specify the algorithm, we need estimates of the error in estimating the position and orientation of the object, and we derive analytic expressions for such error for the case of one particular approach to object recognition.
AIM856 Author[s]: G. Edward Barton, Jr. The Computational Complexity of Two-Level Morphology November 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM856.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM856.pdf Morphological analysis requires knowledge of the stems, affixes, combinatory patterns, and spelling-change processes of a language. The computational difficulty of the task can be clarified by investigating the computational characteristics of specific models of morphological processing. The use of finite-state machinery in the “two-level” model of Kimmo Koskenniemi does not guarantee efficient processing. Reductions of the satisfiability problem show that finding the proper lexical-surface correspondence in a two-level generation or recognition problem can be computationally difficult. However, another source of complexity in the existing algorithms can be sharply reduced by changing the implementation of the dictionary component. A merged dictionary with bit vectors reduces the number of choices among alternative dictionary subdivisions by allowing several subdivisions to be searched at once.
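The dictionary-merging idea lends itself to a small sketch (all class and method names are hypothetical, and the bit-vector layout is our own choice, not the memo's design): a single trie serves several sublexicons at once, with each edge carrying a bit mask of the sublexicons that can still match, so one traversal searches all permitted subdivisions simultaneously.

```python
class MergedDictionary:
    """A shared trie over several sublexicons, merged via bit vectors."""

    def __init__(self):
        self.root = {}   # char -> (child_node, edge_mask)
        self.words = {}  # word -> bit mask of sublexicons containing it

    def add(self, word, lexicon):
        """Insert `word` into sublexicon number `lexicon` (a bit position)."""
        node = self.root
        for ch in word:
            child, mask = node.get(ch, ({}, 0))
            node[ch] = (child, mask | (1 << lexicon))
            node = child
        self.words[word] = self.words.get(word, 0) | (1 << lexicon)

    def lookup(self, word, allowed_mask):
        """Return the mask of allowed sublexicons containing `word`,
        pruning as soon as no allowed subdivision can still match."""
        node, mask = self.root, allowed_mask
        for ch in word:
            if ch not in node:
                return 0
            child, edge_mask = node[ch]
            mask &= edge_mask  # intersect surviving sublexicons
            if mask == 0:
                return 0       # prune: no subdivision can match this prefix
            node = child
        return self.words.get(word, 0) & mask
```

One mask intersection per character replaces a separate search of each dictionary subdivision, which is the reduction in choice points described above.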
AIM857 Author[s]: Ryszard S. Michalski and Patrick H. Winston Variable Precision Logic August 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM857.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM857.pdf Variable precision logic is concerned with problems of reasoning with incomplete information and under time constraints. It offers mechanisms for handling tradeoffs between the precision of inferences and the computational efficiency of deriving them. Of the two aspects of precision, the specificity of conclusions and the certainty of belief in them, we address here primarily the latter, and employ censored production rules as an underlying representational and computational mechanism. Such rules are created by augmenting ordinary production rules with an exception condition and are written in the form if A then B unless C, where C is the exception condition. From a control viewpoint, censored production rules are intended for situations in which the implication A → B holds frequently and the assertion C holds rarely. Systems using censored production rules are free to ignore the exception conditions when time is at a premium. Given more time, the exception conditions are examined, lending credibility to initial, high-speed answers, or changing them. Such logical systems therefore exhibit variable certainty of conclusions, reflecting variable investments of computational resources in conducting reasoning. From a logical viewpoint, the unless operator between B and C acts as the exclusive-or operator. From an expository viewpoint, the if A then B part of the censored production rule expresses important information (e.g., a causal relationship), while the unless C part acts only as a switch that changes the polarity of B to not-B when C holds. Expository properties are captured quantitatively by augmenting censored rules with two parameters that indicate the certainty of the implication if A then B: one parameter gives the certainty when the truth value of C is unknown, and the other gives the certainty when C is known to be false.
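As a hedged illustration of the rule format described above (the certainty values and all identifiers are invented for this sketch, not taken from the memo), a censored production rule can be evaluated cheaply by ignoring its censor, or more carefully by checking it:

```python
class CensoredRule:
    """A censored production rule: if premise then conclusion unless censor,
    with one certainty when the censor is unknown and a higher certainty
    once the censor is known to be false."""

    def __init__(self, premise, conclusion, censor,
                 certainty_censor_unknown=0.9, certainty_censor_false=1.0):
        self.premise, self.conclusion, self.censor = premise, conclusion, censor
        self.c_unknown = certainty_censor_unknown
        self.c_false = certainty_censor_false

    def fire(self, facts, check_censor):
        """Return (conclusion, certainty) or None if the premise fails.
        `facts` maps proposition name -> True/False; absence means unknown."""
        if not facts.get(self.premise, False):
            return None
        if not check_censor:                 # fast answer: time at a premium
            return (self.conclusion, self.c_unknown)
        if facts.get(self.censor) is True:   # exception holds: flip polarity
            return ("not " + self.conclusion, self.c_false)
        if facts.get(self.censor) is False:
            return (self.conclusion, self.c_false)
        return (self.conclusion, self.c_unknown)
```

The same rule thus yields a quick, moderately certain answer or, with more time spent on the censor, a fully certain (possibly reversed) one.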
AIM858 Author[s]: Ellen C. Hildreth Edge Detection September 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM858.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM858.pdf The goal of vision is to recover physical properties of objects in a scene, such as the location of object boundaries and the structure, color, and texture of object surfaces, from the two-dimensional image that is projected onto the eye or camera. The first clues about the physical properties of the scene are provided by the changes of intensity in the image. The importance of intensity changes and edges in early visual processing has led to extensive research on their detection, description, and use, both in computer and biological vision systems. This article reviews some of the theory that underlies the detection of edges and the methods used to carry out this analysis.
AITR859 Author[s]: Anita M. Flynn Redundant Sensors for Mobile Robot Navigation September 1985 ftp://publications.ai.mit.edu/aipublications/500999/AITR859.ps ftp://publications.ai.mit.edu/aipublications/pdf/AITR859.pdf Redundant sensors are needed on a mobile robot so that the accuracy with which it perceives its surroundings can be increased. Sonar and infrared sensors are used here in tandem, each compensating for deficiencies in the other. The robot combines the data from both sensors to build a representation which is more accurate than if either sensor were used alone. Another representation, the curvature primal sketch, is extracted from this perceived workspace and is used as the input to two path planning programs: one based on configuration space and one based on a generalized cone formulation of free space.
AITR860 Author[s]: Jose Luis Marroquin Probabilistic Solution of Inverse Problems September 1985 ftp://publications.ai.mit.edu/aipublications/500999/AITR860.ps ftp://publications.ai.mit.edu/aipublications/pdf/AITR860.pdf In this thesis we study the general problem of reconstructing a function, defined on a finite lattice, from a set of incomplete, noisy and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over existing schemes, both in the quality of the results (particularly at low signal-to-noise ratios) and in computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, with no free parameters, for the restoration of piecewise uniform images. (5) We analyze the implementation of the algorithms we develop on nonconventional hardware, such as massively parallel digital machines, and analog and hybrid networks.
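The fast deterministic approximation in contribution (3) can be illustrated with a minimal sketch (my own toy construction, not Marroquin's actual algorithm): iterated conditional modes (ICM) greedily minimizing a Gibbs-posterior energy to restore a piecewise-constant 1-D signal. The label set, weight, and data below are invented for illustration.

```python
def icm_restore(obs, labels, beta=1.0, iters=10):
    """Iterated conditional modes: greedily minimize the posterior energy
    E(x) = sum_i (x_i - obs_i)^2 + beta * sum_i [x_i != x_{i+1}]
    (quadratic data term plus a Potts-style smoothness prior)."""
    # Initialize with the pointwise (prior-free) estimate.
    x = [min(labels, key=lambda l: (l - o) ** 2) for o in obs]
    for _ in range(iters):
        changed = False
        for i in range(len(x)):
            def energy(l):
                e = (l - obs[i]) ** 2
                if i > 0:
                    e += beta * (l != x[i - 1])
                if i < len(x) - 1:
                    e += beta * (l != x[i + 1])
                return e
            best = min(labels, key=energy)
            if best != x[i]:
                x[i] = best
                changed = True
        if not changed:  # a local minimum of the energy has been reached
            break
    return x
```

On a noisy step signal with one outlier, the prior removes the isolated mislabel: `icm_restore([0.1, 0.9, -0.2, 0.2, 1.1, 1.0, 1.2], [0, 1], beta=0.5)` yields `[0, 0, 0, 0, 1, 1, 1]`.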
AIM861 Author[s]: Van-Duc Nguyen The Synthesis of Force-Closure Grasps September 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM861.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM861.pdf This paper addresses the problem of synthesizing planar grasps that have force closure. A grasp on an object is a force-closure grasp if and only if we can exert, through the set of contacts, arbitrary force and moment on this object. Equivalently, any motion of the object is resisted by a contact force; that is, the object cannot break contact with the fingertips without some nonzero external work. The force-closure constraint is addressed from three different points of view: mathematics, physics, and computational geometry. The last formulation results in fast and simple polynomial-time algorithms for directly constructing force-closure grasps. We can also find grasps where each finger has an independent region of contact on the set of edges.
AIM862 Author[s]: Van-Duc Nguyen The Synthesis of Stable Grasps in the Plane October 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM862.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM862.pdf This paper addresses the problem of synthesizing stable grasps on arbitrary planar polygons. Each finger is a virtual spring whose stiffness and compression can be programmed. The contacts between the fingertips and the object are point contacts without friction. We prove that all force-closure grasps can be made stable, and that it costs O(n) time to synthesize a set of n virtual springs such that a given force-closure grasp is stable. We can also choose the compliance center and the stiffness matrix of the grasp, and thereby choose the compliant behavior of the grasped object about its equilibrium. The planning and execution of grasps and assembly operations become easier and less sensitive to errors.
AIM863 Author[s]: Shahriar Negahdaripour Direct Passive Navigation: Analytical Solution for Planes August 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM863.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM863.pdf In this paper, we derive a closed-form solution for recovering the motion of an observer relative to a planar surface directly from image brightness derivatives. We do not compute the optical flow as an intermediate step; we use only the spatial and temporal intensity gradients at a minimum of eight points. We solve a linear matrix equation for the elements of a 3x3 matrix. The eigenvalue decomposition of its symmetric part is then used to compute the motion parameters and the plane orientation.
AIM864 Author[s]: Rodney A. Brooks A Robust Layered Control System for a Mobile Robot September 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM864.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM864.pdf We describe a new architecture for controlling mobile robots. Layers of a control system are built to let the robot operate at increasing levels of competence. Layers are made up of asynchronous modules which communicate over low bandwidth channels. Each module is an instance of a fairly simple computational machine. Higher level layers can subsume the roles of lower levels by suppressing their outputs. However, lower levels continue to function as higher levels are added. The result is a robust and flexible robot control system. The system is intended to control a robot that wanders the office areas of our laboratory, building maps of its surroundings. In this paper we demonstrate the system controlling a detailed simulation of the robot.
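The layering idea can be sketched in a drastically simplified form, assuming invented behaviors ("avoid" and "wander") and a centralized arbiter; the real architecture wires suppression into individual module connections and runs modules asynchronously, so this is only a coarse approximation of the scheme.

```python
class Module:
    """An asynchronous behavior module, reduced here to a function from
    sensor readings to an optional motor command (None = inactive)."""
    def __init__(self, name, behavior):
        self.name = name
        self.behavior = behavior  # sensors -> command or None

def arbitrate(layers, sensors):
    """The highest-priority layer that produces a command suppresses the
    outputs of all layers beneath it; lower layers keep running regardless."""
    for module in reversed(layers):  # highest priority last in the list
        command = module.behavior(sensors)
        if command is not None:
            return module.name, command
    return "idle", "stop"

# Invented toy behaviors: obstacle avoidance outranks wandering.
wander = Module("wander", lambda s: "forward")
avoid = Module("avoid", lambda s: "turn-left" if s["obstacle"] else None)
```

With an obstacle present, `arbitrate([wander, avoid], {"obstacle": True})` returns `("avoid", "turn-left")`; otherwise the wander layer's `"forward"` command passes through.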
AIM865 Author[s]: Gul Agha and Carl Hewitt Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism October 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM865.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM865.pdf We argue that the ability to model shared objects with changing local states, dynamic reconfigurability, and inherent parallelism are desirable properties of any model of concurrency. The actor model addresses these issues in a uniform framework. This paper briefly describes the concurrent programming language Act3 and the principles that have guided its development. Act3 advances the state of the art in programming languages by combining the advantages of object-oriented programming with those of functional programming. We also discuss considerations relevant to large-scale parallelism in the context of open systems, and define an abstract model which establishes the equivalence of systems defined by actor programs.
AIM867 Author[s]: Daniel P. Huttenlocher Exploiting Sequential Phonetic Constraints in Recognizing Spoken Words October 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM867.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM867.pdf Machine recognition of spoken language requires developing more robust recognition algorithms. The current paper extends the work of Shipman and Zue by investigating the power of partial phonetic descriptions. First we demonstrate that sequences of manner of articulation classes are more reliable and provide more constraint than other classes. Alone these are of limited utility, due to the high degree of variability in natural speech. This variability is not uniform, however, as most modifications and deletions occur in unstressed syllables. The stressed syllables provide substantially more constraint. This indicates that recognition algorithms can be made more robust by exploiting the manner of articulation information in stressed syllables.
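The constraint carried by a partial phonetic description can be sketched as follows, using a toy lexicon and manner-class table of my own (not Huttenlocher's data): grouping words by their manner-of-articulation sequence shows how even coarse classes partition a lexicon into small cohorts.

```python
# Hypothetical manner-of-articulation classes for a few phonemes.
MANNER = {
    "p": "stop", "b": "stop", "t": "stop", "d": "stop",
    "s": "fricative", "z": "fricative", "f": "fricative",
    "m": "nasal", "n": "nasal",
    "a": "vowel", "e": "vowel", "i": "vowel",
}

def manner_signature(phonemes):
    """The partial description: phonemes reduced to manner classes only."""
    return tuple(MANNER[p] for p in phonemes)

def cohorts(lexicon):
    """Group words by manner-class sequence; small cohorts mean the
    partial description still carries strong constraint."""
    groups = {}
    for word, phones in lexicon.items():
        groups.setdefault(manner_signature(phones), []).append(word)
    return groups

# Invented miniature lexicon (word -> phoneme sequence).
lexicon = {
    "pat": ["p", "a", "t"],
    "bad": ["b", "a", "d"],
    "sat": ["s", "a", "t"],
    "man": ["m", "a", "n"],
}
```

Here "pat" and "bad" share the cohort (stop, vowel, stop), while "sat" and "man" are each uniquely identified by their manner sequences alone.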
AIM868 Author[s]: Brian C. Williams Circumscribing Circumscription: A Guide to Relevance and Incompleteness October 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM868.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM868.pdf Intelligent agents in the physical world must work from incomplete information due to partial knowledge and limited resources. An agent copes with these limitations by applying rules of conjecture to make reasonable assumptions about what is known. Circumscription, proposed by McCarthy, is the formalization of a particularly important rule of conjecture likened to Occam’s razor. That is, the set of all objects satisfying a certain property is the smallest set of objects that is consistent with what is known. This paper examines closely the properties and the semantics underlying circumscription, considering both its expressive power and limitations. In addition we study circumscription’s relationship to several related formalisms, such as negation by failure, the closed world assumption, default reasoning and Planner’s THNOT. In the discussion a number of extensions to circumscription are proposed, allowing one to tightly focus its scope of applicability. In addition, several new rules of conjecture are proposed based on the notions of relevance and minimality. Finally a synthesis between the approaches of McCarthy and Konolige is used to extend circumscription, as well as several other rules of conjecture, to account for resource limitations.
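The minimization behind circumscription can be sketched propositionally (a toy of my own, not McCarthy's first-order formulation): prefer exactly those models whose extension of the circumscribed predicate is set-inclusion minimal, so that nothing is assumed abnormal unless the theory forces it.

```python
from itertools import product

def models(atoms, theory):
    """All truth assignments over `atoms` satisfying every constraint in
    `theory` (each constraint is a predicate over the assignment dict)."""
    out = []
    for values in product([False, True], repeat=len(atoms)):
        m = dict(zip(atoms, values))
        if all(c(m) for c in theory):
            out.append(m)
    return out

def circumscribe(atoms, theory, minimized):
    """Keep only models whose extension of the `minimized` atoms is
    set-inclusion minimal among all models of the theory."""
    ms = models(atoms, theory)
    def ext(m):
        return {a for a in minimized if m[a]}
    return [m for m in ms if not any(ext(n) < ext(m) for n in ms)]

# Toy theory: bird(tweety) is given; "bird & not abnormal -> flies"
# becomes the clause (abnormal or flies). Circumscribe 'abnormal'.
atoms = ["flies", "abnormal"]
theory = [lambda m: m["abnormal"] or m["flies"]]
minimal = circumscribe(atoms, theory, ["abnormal"])
```

The theory has three models, but only the one with an empty extension of `abnormal` survives circumscription, so the conjecture "tweety flies" follows.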
AIM869 Author[s]: John Batali A Vision Chip May 1981 ftp://publications.ai.mit.edu/aipublications/500999/AIM869.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM869.pdf Some well understood and well justified algorithms for early visual processing must be implemented in hardware for later visual processing to be studied. This paper describes the design and hardware implementation of a particular operator of visual processing. I constructed an NMOS VLSI circuit that computes the gradient, and detects zerocrossings, in a digital video image in real time. The algorithms employed by the chip, the design process that led to it, and its capabilities and limitations are discussed. For hardware to be a useful tool for AI, designing it must be as much like programming as possible. This paper concludes with some discussion of how such a goal can be met.
AIM870 Author[s]: Shimon Ullman The Optical Flow of Planar Surfaces December 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM870.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM870.pdf The human visual system can recover the 3D shape of moving objects on the basis of motion information alone. Computational studies of this capacity have considered primarily nonplanar rigid objects. With respect to moving planar surfaces, previous studies by Hay (1966), Tsai and Huang (1981), and Longuet-Higgins (1984) have shown that the planar velocity field has in general a twofold ambiguity: there are two different planes engaged in different motions that can induce the same velocity field. The current analysis extends the analysis of the planar velocity field in four directions: (1) the use of flow parameters of the type suggested by Koenderink and van Doorn (1975), (2) the exclusion of confusable nonplanar solutions, (3) a new proof and a new method for computing the 3D motion and surface orientation, and (4) a comparison with the information available in orthographic velocity fields, which is important for determining the stability of the 3D recovery process.
AIM871 Author[s]: John C. Mallery, Roger Hurwitz and Gavan Duffy Hermeneutics: From Textual Explication to Computer Understanding? May 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM871.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM871.pdf Hermeneutics, a branch of continental European philosophy concerned with human understanding and the interpretation of written texts, offers insights that may contribute to the understanding of meaning, translation, architectures for natural language understanding, and even to the methods suitable for scientific inquiry in AI. After briefly reviewing the historical development of hermeneutics as a method of interpretation, this article examines the contributions of hermeneutics to the human sciences. This background provides perspective for a review of recent hermeneutically oriented AI research, including the Alker, Lehnert and Schneider computer-assisted techniques for coding the affective structure of narratives, the earlier positive proposal by Winograd and Bateman, the later pessimism of Winograd and Flores on the possibility of AI, as well as the system-building efforts of Duffy and Mallery.
AIM873 Author[s]: Fanya S. Montalvo Diagram Understanding: The Intersection of Computer Vision and Graphics November 1985 ftp://publications.ai.mit.edu/aipublications/500999/AIM873.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM873.pdf A problem common to Computer Vision and Computer Graphics is identified. It is the problem of representing, acquiring and validating symbolic descriptions of visual properties. The intersection of Computer Vision and Computer Graphics provides a basis for diagrammatic conversations between users and systems. I call this problem domain Diagram Understanding because of its analogy with Natural Language Understanding. The recognition and generation of visual objects from symbolic descriptions are two sides of the same coin. A paradigm for the discovery and validation of higher-level visual properties is introduced. The paradigm involves two aspects. One is the notion of denotation: the map between symbolic descriptions and visual properties. The denotation map can be validated by focusing on the conversation between users and a system. The second aspect involves a method for discovering a natural, rich set of visual primitives. The notion of visual property is expanded, and the paradigm is further illustrated with a traditional business graphics example.
AITR874 Author[s]: Richard Elliot Robbins BUILD: A Tool for Maintaining Consistency in Modular Systems November 1985 ftp://publications.ai.mit.edu/aipublications/500999/AITR874.ps ftp://publications.ai.mit.edu/aipublications/pdf/AITR874.pdf BUILD is a tool for keeping modular systems in a consistent state by managing the construction tasks (e.g. compilation, linking, etc.) associated with such systems. It employs a user-supplied system model and a procedural description of the task to be performed. This differs from existing tools, which do not explicitly separate knowledge about systems from knowledge about how systems are manipulated. BUILD provides a static framework for modeling systems and handling construction requests that makes use of programming-environment-specific definitions. By altering the set of definitions, BUILD can be extended to work with new programming environments and to perform new tasks.
AIM875 Author[s]: Raul E. ValdesPerez Spatio-Temporal Reasoning and Linear Inequalities May 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM875.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM875.pdf Time and space are sufficiently similar to warrant in certain cases a common representation in AI problem-solving systems. What is represented is often the constraints that hold between objects, and a concern is the overall consistency of a set of constraints. This paper scrutinizes two current approaches to spatio-temporal reasoning. The suitability of Allen's temporal algebra for constraint networks is influenced directly by the mathematical properties of the algebra. These properties are extracted by a formulation as a network of set-theoretic relations, such that some previous theorems due to Montanari apply. Some new theorems concerning consistency of these temporal constraint networks are also presented.
AIM876 Author[s]: Shahriar Negahdaripour and Alan Yuille Direct Passive Navigation: Analytical Solution for Quadratic Patches March 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM876.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM876.pdf In this paper, we solve the problem of recovering the motion of an observer relative to a surface which can be locally approximated by a quadratic patch directly from image brightness values. We do not compute the optical flow as an intermediate step. We use the coefficients of the Taylor series expansion of the intensity function in two frames to determine 15 intermediate parameters, termed the essential parameters, from a set of linear equations. We then solve analytically for the motion and structure parameters from a set of nonlinear equations in terms of these intermediate parameters. We show that the solution is always unique, unlike some earlier results that reported twofold ambiguities in some special cases.
AIM877 Author[s]: James H. Applegate, Michael R. Douglas, Yekta Gursel, Gerald Jay Sussman and Jack Wisdom The Outer Solar System for 200 Million Years February 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM877.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM877.pdf We used a special purpose computer to integrate the orbits of the outer five planets for 100 Myr into the future and 100 Myr into the past. The strongest features in the Fourier transforms of the orbital elements of the Jovian planets can be identified with the frequencies predicted by linear secular theory. Many of the weaker features in the Fourier spectra are identified as linear combinations of the basic frequencies. We note serious differences between our measurements and the predictions of Bretagnon (1974). The amplitude of the 3.796 Myr period libration of Pluto's longitude of perihelion is modulated with a period of 34 Myr. Very long periods, on the order of 137 million years, are also seen.
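The spectral-analysis step can be sketched in miniature (synthetic data, not the memo's integrations): a naive DFT locates the dominant period of a sampled oscillation, which is how the strongest features of an orbital-element time series would be identified.

```python
import math

def dft_peak(samples, dt):
    """Return the frequency (cycles per unit time) of the largest-amplitude
    DFT bin, ignoring the DC term. O(n^2), fine for a sketch."""
    n = len(samples)
    best_k, best_power = 1, -1.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * j / n) for j, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * j / n) for j, s in enumerate(samples))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k / (n * dt)

# Synthetic 'orbital element': an oscillation with a 25-time-unit period,
# sampled 100 times at dt = 1 (invented numbers, not the memo's data).
signal = [math.cos(2 * math.pi * j / 25.0) for j in range(100)]
```

For this signal `dft_peak(signal, 1.0)` recovers the frequency 0.04, i.e. the injected 25-unit period.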
AIM879 Author[s]: Aaron Bobick and Whitman Richards Classifying Objects from Visual Information June 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM879.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM879.pdf Consider a world of 'objects.' Our goal is to place these objects into categories that are useful to the observer using sensory data. One criterion for utility is that the categories allow the observer to infer the object's potential behaviors, which are often not observable. Under what conditions can such useful categories be created? We propose a solution which requires (1) that modes or clusters of natural structures are present in the world, and (2) that the physical properties of these structures are reflected in the sensory data used by the observer for classification. Given these two constraints, we explore the type of additional knowledge sufficient for the observer to generate an internal representation that makes explicit the natural modes. Finally we develop a formal expression of the object classification problem.
AIM882 Author[s]: John M. Hollerbach and Ki C. Suh Redundancy Resolution of Manipulators through Torque Optimization January 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM882.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM882.pdf Methods for resolving kinematic redundancies of manipulators by the effect on joint torque are examined. When the generalized inverse is formulated in terms of accelerations and incorporated into the dynamics, the effect of redundancy resolution on joint torque can be directly reflected. One method chooses the joint acceleration nullspace vector to minimize joint torque in a least squares sense; when the least squares is weighted by allowable torque range, the joint torques tend to be kept within their limits. Contrasting methods employing only the pseudoinverse with and without weighting by the inertia matrix are presented. The results show an unexpected stability problem during long trajectories for the nullspace methods and for the inertiaweighted pseudoinverse method, but rarely for the unweighted pseudoinverse method. Evidently a whiplash action develops over time that thrusts the endpoint off the intended path, and extremely high torques are required to overcome these natural movement dynamics.
AIM883 Author[s]: Michael Erdmann and Tomas Lozano-Perez On Multiple Moving Objects May 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM883.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM883.pdf This paper explores the motion planning problem for multiple moving objects. The approach taken consists of assigning priorities to the objects, then planning motions one object at a time. For each moving object, the planner constructs a configuration space-time that represents the time-varying constraints imposed on the moving object by the other moving and stationary objects. The planner represents this space-time approximately, using two-dimensional slices. The space-time is then searched for a collision-free path. The paper demonstrates this approach in two domains. One domain consists of translating planar objects; the other domain consists of two-link planar articulated arms.
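The prioritized space-time search can be sketched on a toy 1-D world (invented grid and obstacle trajectory, far simpler than the paper's domains): breadth-first search over (cell, time) states finds a collision-free path around a previously planned moving obstacle.

```python
from collections import deque

def plan_spacetime(start, goal, n_cells, obstacle_traj, max_time):
    """BFS in configuration space-time: states are (cell, t); at each step
    the robot stays put or moves one cell, and may never occupy the cell
    the obstacle holds at that time (obstacle_traj[t])."""
    start_state = (start, 0)
    parents = {start_state: None}
    queue = deque([start_state])
    while queue:
        cell, t = queue.popleft()
        if cell == goal:  # reconstruct the path of cells back to the start
            path, s = [], (cell, t)
            while s is not None:
                path.append(s[0])
                s = parents[s]
            return path[::-1]
        if t == max_time:
            continue
        for nxt in (cell - 1, cell, cell + 1):
            state = (nxt, t + 1)
            if 0 <= nxt < n_cells and nxt != obstacle_traj[t + 1] and state not in parents:
                parents[state] = (cell, t)
                queue.append(state)
    return None  # no collision-free path within the time horizon
```

With an obstacle sweeping left through a five-cell corridor, the planner must wait for it to pass: the robot reaches the far end one step later than the obstacle-free minimum.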
AIM885 Author[s]: A.L. Yuille Shape from Shading, Occlusion and Texture May 1987 ftp://publications.ai.mit.edu/aipublications/500999/AIM885.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM885.pdf Shape from Shading, Occlusion and Texture are three important sources of depth information. We review and summarize work done on these modules.
AIM887 Author[s]: Chae H. An, Christopher G. Atkeson and John M. Hollerbach Estimation of Inertial Parameters of Rigid Body Links of Manipulators February 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM887.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM887.pdf A method of estimating the mass, the location of the center of mass, and the moments of inertia of each rigid body link of a robot during general manipulator movement is presented. The algorithm is derived from the Newton-Euler equations, and uses measurements of the joint torques as well as the measurement and calculation of the kinematics of the manipulator while it is moving. The identification equations are linear in the desired unknown parameters, and a modified least squares algorithm is used to obtain estimates of these parameters. Some of the parameters, however, are not identifiable due to restricted motion of proximal links and the lack of full force/torque sensing. The algorithm was implemented on the MIT Serial Link Direct Drive Arm. A good match was obtained between joint torques predicted from the estimated parameters and the joint torques computed from motor currents.
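Because the identification equations are linear in the unknowns, the estimation step reduces to least squares. A single-joint toy version (invented model and numbers, not the paper's Newton-Euler formulation) fits tau = I*qdd + c*sin(q) by solving the 2x2 normal equations directly.

```python
import math

def estimate_params(samples):
    """Least-squares fit of tau = I*qdd + c*sin(q): the identification
    equation is linear in theta = (I, c), so solve the 2x2 normal
    equations A^T A theta = A^T tau in closed form."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for q, qdd, tau in samples:
        r1, r2 = qdd, math.sin(q)  # regressor row for this sample
        a11 += r1 * r1
        a12 += r1 * r2
        a22 += r2 * r2
        b1 += r1 * tau
        b2 += r2 * tau
    det = a11 * a22 - a12 * a12  # nonzero when the motion is exciting enough
    inertia = (a22 * b1 - a12 * b2) / det
    c = (a11 * b2 - a12 * b1) / det
    return inertia, c

# Synthetic, noise-free data generated with I = 2.0 and c = 3.0.
true_I, true_c = 2.0, 3.0
samples = [(q, qdd, true_I * qdd + true_c * math.sin(q))
           for q, qdd in [(0.1, 1.0), (0.5, -0.5), (1.0, 2.0), (1.5, 0.3)]]
```

On this noise-free data the fit recovers the generating parameters; the paper's non-identifiability caveat corresponds here to a singular normal matrix when the trajectory does not excite both regressors.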
AIM888 Author[s]: Norberto Grzywacz and Alan Yuille Massively Parallel Implementations of Theories for Apparent Motion June 1987 ftp://publications.ai.mit.edu/aipublications/500999/AIM888.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM888.pdf We investigate two ways of solving the correspondence problem for motion using the assumptions of minimal mapping and rigidity. Massively parallel analog networks are designed to implement these theories. Their effectiveness is demonstrated with mathematical proofs and computer simulations. We discuss relevant psychophysical experiments.
AIM890 Author[s]: Gary L. Drescher Genetic AI: Translating Piaget into Lisp February 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM890.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM890.pdf This paper presents a constructivist model of human cognitive development during infancy. According to constructivism, the elements of mental representation  even such basic elements as the concept of physical object  are constructed afresh by each individual, rather than being innately supplied. Here I propose a (partially specified, not yet implemented) mechanism, the Schema Mechanism; this mechanism is intended to achieve a series of cognitive constructions characteristic of infants' sensorimotor-stage development, primarily as described by Piaget. In reference to Piaget's 'genetic epistemology', I call this approach genetic AI  'genetic' not in the sense of genes, but in the sense of genesis: development from the point of origin.
AIM893 Author[s]: Walter Hamscher and Randall Davis Issues in Model Based Troubleshooting March 1987 ftp://publications.ai.mit.edu/aipublications/500999/AIM893.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM893.pdf To determine why something has stopped working, it's helpful to know how it was supposed to work in the first place. This simple fact underlies recent work on a number of systems that do diagnosis from knowledge about the internal structure and behavior of components of the malfunctioning device. Recently much work has been done in this vein in many domains with an apparent diversity of techniques. But the variety of domains and the variety of computational mechanisms used to implement these systems tend to obscure two important facts. First, existing programs have similar mechanisms for generating and testing fault hypotheses. Second, most of these systems have similar built-in assumptions about both the devices being diagnosed and their failure modes; these assumptions in turn limit the generality of the programs. The purpose of this paper is to identify the problems and nonproblems in model-based troubleshooting. The nonproblems are in generating and testing fault hypotheses about misbehaving components in simple static devices; a small core of largely equivalent techniques covers the apparent profusion of existing approaches. The problems occur with devices that aren't static, aren't simple, and whose components fail in ways current programs don't hypothesize and hence can't diagnose.
AIM894 Author[s]: Eric Sven Ristad Computational Complexity of Current GPSG Theory April 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM894.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM894.pdf An important goal of computational linguistics has been to use linguistic theory to guide the construction of computationally efficient real-world natural language processing systems. At first glance, the entirely new generalized phrase structure grammar (GPSG) theory of Gazdar, Klein, Pullum, and Sag (1985) appears to be a blessing on two counts. First, their precise formal system and the broad empirical coverage of their published English grammar might be a direct guide for a transparent parser design and implementation. Second, since GPSG has weak context-free generative power and context-free languages can be parsed in O(n^3) time by a wide range of algorithms, GPSG parsers would appear to run in polynomial time. This widely assumed GPSG “efficient parsability” result is misleading: here we prove that the universal recognition problem for the new GPSG theory is exponential-polynomial time hard, and assuredly intractable. The paper pinpoints sources of intractability (e.g. metarules and syntactic features) in the GPSG formal system and concludes with some linguistically and computationally motivated restrictions on GPSG.
AIM895 Author[s]: Eric Sven Ristad Defining Natural Language Grammars in GPSG April 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM895.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM895.pdf This paper is a formal analysis of whether generalized phrase structure grammar's (GPSG) weak context-free generative power will allow it to achieve three of its central goals: (1) to characterize all and only the natural language grammars, (2) to algorithmically determine membership and generative power consequences of GPSGs, and (3) to embody the universalism of natural language entirely in the formal system. I prove that “L(G) = Σ*?” is undecidable for GPSGs and, on the basis of this result and the unnaturalness of Σ*, I argue that GPSG's three goals and its weak context-free generative power conflict with each other: there is no algorithmic way of knowing whether any given GPSG generates a natural language or an unnatural one. The paper concludes with a diagnosis of the result and suggests that the problem might be met by abandoning the weak context-free framework and assuming substantive constraints.
AIM896 Author[s]: Tomas Lozano-Perez A Simple Motion Planning Algorithm for General Robot Manipulators June 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM896.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM896.pdf This paper presents a simple and efficient algorithm, using configuration space, to plan collision-free motions for general manipulators. We describe an implementation of the algorithm for manipulators made up of revolute joints. The configuration-space obstacles for an n degree-of-freedom manipulator are approximated by sets of n-1 dimensional slices, recursively built up from one-dimensional slices. This obstacle representation leads to an efficient approximation of the free space outside of the configuration-space obstacles.
AIM897 Author[s]: J. Marroquin, S. Mitter and T. Poggio Probabilistic Solution of Ill-Posed Problems in Computational Vision March 1987 ftp://publications.ai.mit.edu/aipublications/500999/AIM897.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM897.pdf We formulate several problems in early vision as inverse problems. Among the solution methods we review standard regularization theory, discuss its limitations, and present new stochastic (in particular, Bayesian) techniques based on Markov Random Field models for their solution. We derive efficient algorithms and describe parallel implementations on digital parallel SIMD architectures, as well as a new class of parallel hybrid computers that mix digital with analog components.
AIM898 Author[s]: Kenneth W. Haase, Jr. Discovery Systems April 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM898.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM898.pdf Cyrano is a thoughtful reimplementation of Lenat's controversial Eurisko program, designed to perform automated discovery and concept formation in a variety of technical fields. The 'thought' in the reimplementation has come from several directions: an appeal to basic principles, which led to identifying constraints of modularity and consistency on the design of discovery systems; an appeal to transparency, which led to collapsing more and more of the control structure into the representation; and an appeal to accountability, which led to the explicit specification of dependencies in the concept formation process. The process of reimplementing Lenat's work has already revealed several insights into the nature of Eurisko-like systems in general; these insights are incorporated into the design of Cyrano. Foremost among these new insights is the characterization of Eurisko-like systems (which I call inquisitive systems) as search processes which dynamically reconfigure their search space by the formation of new concepts and representations. This insight reveals requirements for modularity and 'consistency' in the definition of new concepts and representations.
AIM899 Author[s]: Rodney A. Brooks Achieving Artificial Intelligence through Building Robots May 1986 ftp://publications.ai.mit.edu/aipublications/500999/AIM899.ps ftp://publications.ai.mit.edu/aipublications/pdf/AIM899.pdf We argue that generally accepted methodologies of Artificial Intelligence research are limited in the proportion of human-level intelligence they can be expected to emulate. We argue that the currently accepted decompositions and static representations used in such research are wrong. We argue for a shift to a process-based model, with a decomposition based on task-achieving behaviors as the organizational principle. In particular we advocate building robotic insects.

