

CSAIL Digital Archive - Artificial Intelligence Laboratory Series
Publications 400 through 499

AI Publications. Last update: Sun Mar 19 05:05:02 2006


AIM-400

Author[s]: Vaughan R. Pratt

The Competence/Performance Dichotomy in Programming

January 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-400.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-400.pdf

We consider the problem of automating some of the duties of programmers. We take as our point of departure the claim that data management has been automated to the point where the programmer concerned only about the correctness (as opposed to the efficiency) of his program need not involve himself in any aspect of the storage allocation problem. We focus on what we feel is a sensible next step, the problem of automating aspects of control. To accomplish this we propose a definition of control based on a fact/heuristic dichotomy, a variation of Chomsky's competence/performance dichotomy. The dichotomy formalizes an idea originating with McCarthy and developed by Green, Hewitt, McDermott, Sussman, Hayes, Kowalski and others. It allows one to operate arbitrarily on the control component of a program without affecting the program's correctness, which is entirely the responsibility of the fact component. The immediate objectives of our research are to learn how to program keeping fact and control separate, and to identify those aspects of control amenable to automation.


AIM-401

Author[s]: Jeanne Bamberger

Development of Musical Intelligence II: Children's Representation of Pitch Relations

December 1976

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-401.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-401.pdf

The work reported here is an outgrowth of studies in the development of musical intelligence and learning that have been underway for about four years. Beginning as one of the activities in the LOGO Lab (a part of the MIT Artificial Intelligence Laboratory), the research has expanded to include more theoretical work in the MIT Division for Study and Research in Education.


AITR-402

Author[s]: Drew Vincent McDermott

Flexibility and Efficiency in a Computer Program for Designing Circuits

June 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-402.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-402.pdf

This report is concerned with the problem of achieving flexibility (additivity, modularity) and efficiency (performance, expertise) simultaneously in one AI program. It deals with the domain of elementary electronic circuit design. The proposed solution is to provide a deduction-driven problem solver with built-in control-structure concepts. This problem solver and its knowledge base in the application areas of design and electronics are described. The program embodying it is being used to explore the solution of some modest problems in circuit design. It is concluded that shallow reasoning about problem-solver plans is necessary for flexibility, and can be implemented with reasonable efficiency.


AITR-403

Author[s]: Richard Brown

Use of Analogy to Achieve New Expertise

April 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-403.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-403.pdf

We will take the view that the end result of problem solving in some world should be increased expertness. In the context of computers, increasing expertness means writing programs. This thesis is about a process, reasoning by analogy, that writes programs. Analogy relates one problem world to another. We will call the world in which we have an expert problem solver the IMAGE world, and the other world the DOMAIN world. Analogy will construct an expert problem solver in the domain world using the image world expert for inspiration.


AIM-405

Author[s]: Ira P. Goldstein and R. Bruce Roberts

NUDGE, A Knowledge-Based Scheduling Program

February 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-405.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-405.pdf

Traditional scheduling algorithms (using the techniques of PERT charts, decision analysis or operations research) require well-defined, quantitative, and complete sets of constraints. They are insufficient for scheduling situations where the problem description is ill-defined, involving incomplete, possibly inconsistent and generally qualitative constraints. The NUDGE program uses an extensive knowledge base to debug scheduling requests by supplying missing details and resolving minor inconsistencies. The result is that an informal request is converted to a complete description suitable for a traditional scheduler.


AIM-406

Author[s]: Brian P. Carr and Ira P. Goldstein

Overlays: A Theory of Modelling for Computer Aided Instruction

February 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-406.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-406.pdf

Overlay modelling is a technique for describing a student's problem solving skills in terms of a modular program designed to be an expert for the given domain. The model is an overlay on the expert program in that it consists of a set of hypotheses regarding the student's familiarity with the skills employed by the expert. The modelling is performed by a set of P rules that are triggered by different sources of evidence, and whose effect is to modify these hypotheses. A P critic monitors these rules to detect discontinuities and inconsistencies in their predictions. A first implementation of overlay modelling exists as a component of WUSOR-II, a CAI program based on artificial intelligence techniques. WUSOR-II coaches a student in the logical and probability skills required to play the computer game WUMPUS. Preliminary evidence indicates that overlay modelling significantly improves the appropriateness of the tutoring program's explanations.


AIM-407

Author[s]: Ira P. Goldstein and Eric Grimson

Annotated Production Systems: A Model for Skill Acquisition

February 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-407.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-407.pdf

Annotated Production Systems provide a procedural model for skill acquisition by augmenting a production model of the skill with formal commentary describing plans, bugs, and interrelationships between various productions. This commentary supports processes of efficient interpretation, self-debugging and self-improvement. The theory of annotated productions is developed by analyzing the skill of attitude instrument flying. An annotated production interpreter has been written that executes skill models which control a flight simulator. Preliminary evidence indicates that annotated productions effectively model certain bugs and certain learning behaviors characteristic of student pilots.


AIM-408

Author[s]: R. Bruce Roberts and Ira P. Goldstein

The FRL Primer

July 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-408.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-408.pdf

The Frame Representation Language (FRL) is an experimental language written to explore the use of frames as a knowledge representation technique. The term 'frame' as used in FRL was inspired by Minsky's [75] development of frame theory. FRL extends the traditional Property List representation scheme by allowing properties to have comments, defaults and constraints, to inherit information from abstract forms of the same type, and to have attached procedures triggered by adding or deleting values, or if a value is needed. We introduce FRL with the aid of a simple example: WHOSIS, a database of AI persons' names, addresses, interests and publications. A second section contains an abridged manual describing FRL's most-used commands and conventions.
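
The frame machinery sketched in this abstract (defaults, inheritance from more abstract frames, and attached if-added/if-needed procedures) can be illustrated with a minimal sketch in Python. The class, slot, and facet names below are hypothetical stand-ins for exposition, not FRL's actual commands or conventions.

# Minimal sketch of the frame ideas described above (defaults, inheritance,
# attached procedures).  Names such as Frame, ako, and if_added are
# hypothetical illustrations, not FRL's interface.

class Frame:
    def __init__(self, name, ako=None):
        self.name = name    # frame name
        self.ako = ako      # parent frame ("a kind of") used for inheritance
        self.slots = {}     # slot -> facets: value, default, if_added, if_needed

    def slot(self, slot_name):
        return self.slots.setdefault(slot_name, {})

    def put(self, slot_name, value):
        facet = self.slot(slot_name)
        facet['value'] = value
        if 'if_added' in facet:        # attached procedure triggered by adding a value
            facet['if_added'](self, value)

    def get(self, slot_name):
        facet = self.slots.get(slot_name, {})
        if 'value' in facet:
            return facet['value']
        if 'if_needed' in facet:       # attached procedure triggered when a value is needed
            return facet['if_needed'](self)
        if 'default' in facet:         # default facet
            return facet['default']
        if self.ako is not None:       # inherit from the more abstract frame
            return self.ako.get(slot_name)
        return None

# WHOSIS-style example: a generic AI-person frame and one instance.
person = Frame('ai-person')
person.slot('interests')['default'] = ['artificial intelligence']
minsky = Frame('minsky', ako=person)
minsky.put('address', '545 Technology Square')
print(minsky.get('interests'))   # inherited default
print(minsky.get('address'))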


AIM-409

Author[s]: R. Bruce Roberts and Ira P. Goldstein

The FRL Manual

September 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-409.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-409.pdf

The Frame Representation Language (FRL) is described. FRL is an adjunct to LISP which implements several representation techniques suggested by Minsky's [75] concept of a frame: defaults, constraints, inheritance, procedural attachment and annotation.


AIM-410

Author[s]: Carl Hewitt

Viewing Control Structures as Patterns of Passing Messages

December 1976

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-410.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-410.pdf

The purpose of this paper is to discuss some organizational aspects of programs using the actor model of computation. In this paper we present an approach to modelling intelligence in terms of a society of communicating knowledge-based problem-solving experts. In turn each of the experts can be viewed as a society that can be further decomposed in the same way until the primitive actors of the system are reached. We are investigating the nature of the communication mechanisms needed for effective problem-solving by a society of experts and the conventions of discourse that make this possible. In this way we hope eventually to develop a framework adequate for the discussion of the central issues of problem-solving involving parallel versus serial processing and centralization versus decentralization of control and information storage.


AIM-412

Author[s]: Marc Raibert

Control and Learning by the State Space Model: Experimental Findings

April 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-412.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-412.pdf

This is the second of a two part presentation of a model for motor control and learning. The model was implemented using a small computer and the MIT-Scheinman manipulator. Experiments were conducted which demonstrate the controller's ability to learn new movements, adapt to mechanical changes caused by inertial and elastic loading, and generalize its behavior among similar movements. A second-generation model, based on improvements suggested by these experiments, is proposed.


AIM-413

Author[s]: Candace Bullwinkle

Levels of Complexity in Discourse for Reference Disambiguation and Speech Act Interpretation

May 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-413.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-413.pdf

This paper presents a discussion of means of describing the discourse and its components which make speech act interpretation and reference disambiguation possible with minimal search of the knowledge in the database. A portion of this paper will consider how a frame representation of sentences and common sense knowledge provides a mechanism for representing the postulated discourse components. Finally, some discussion of the use of the discourse model and of frames in a discourse understanding program for a personal assistant will be presented.


AIM-414

Author[s]: Patrick H. Winston

Learning by Creating and Justifying Transfer Frames

January 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-414.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-414.pdf

Learning is defined to be the computation done by a student when there is a transfer of information to him from a teacher. In the particular kind of learning discussed, the teacher names a source and destination. In the sentence, "Robbie is like a fox," fox is the source and Robbie is the destination. The student, on analyzing the teacher's instruction, computes a kind of filter called a transfer frame. It stands between the source and the destination and determines what information is allowed to pass from one to the other.


AIM-414A

Author[s]: Patrick H. Winston

Learning by Creating and Justifying Transfer Frames

January 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-414a.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-414a.pdf

In the particular kind of learning discussed in this paper, the teacher names a destination and a source. In the sentence, “Robbie is like a fox,” Robbie is the destination and fox is the source. The student, on analyzing the teacher’s instruction, computes a filter called a transfer frame. The transfer frame stands between the source and the destination and determines what information is allowed to pass from one to the other.


AIM-415

Author[s]: D. Marr

Representing Visual Information

May 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-415.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-415.pdf

Vision is the construction of efficient symbolic descriptions from images of the world. An important aspect of vision is the choice of representations for the different kinds of information in a visual scene. In the early stages of the analysis of an image, the representations used depend more on what it is possible to compute from an image than on what is ultimately desirable, but later representations can be more sensitive to the specific needs of recognition. This essay surveys recent work in vision at M.I.T. from a perspective in which the representational problems assume a primary importance. An overall framework is suggested for visual information processing, in which the analysis proceeds through three representations: (1) the primal sketch, which makes explicit the intensity changes and local two-dimensional geometry of an image; (2) the 2 1/2-D sketch, which is a viewer-centered representation of the depth, orientation and discontinuities of the visible surfaces; and (3) the 3-D model representation, which allows an object-centered description of the three-dimensional structure and organization of a viewed shape. Recent results concerning processes for constructing and maintaining these representations are summarized and discussed.


AIM-416

Author[s]: D. Marr and H.K. Nishihara

Representation and Recognition of the Spatial Organization of Three Dimensional Shapes

May 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-416.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-416.pdf

The human visual process can be studied by examining the computational problems associated with deriving useful information from retinal images. In this paper, we apply this approach to the problem of representing three-dimensional shapes for the purpose of recognition.


AIM-417

Author[s]: Brian Carr

Wusor II: A Computer Aided Instruction Program with Student Modelling Capabilities

May 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-417.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-417.pdf

Wusor II is the second program that has been developed to tutor students in the game of Wumpus. From the earlier efforts with Wusor I, it was possible to produce a rule-based expert which possessed a relatively complete mastery of the game. Wusor II endeavors to teach the knowledge embodied in the rules used by the Expert. The Student Model represents Wusor’s estimation of the student’s knowledge of said rules, and this estimation is based primarily on analyses of the player’s moves. The Student Model allows Wusor to personalize its explanations to the student according to the student’s current knowledge of the game. The result is a system which, according to preliminary results, is highly effective at tutoring students of varied abilities.


AITR-418

Author[s]: Benjamin J. Kuipers

Representing Knowledge of Large-Scale Space

July 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-418.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-418.pdf

This dissertation presents a model of the knowledge a person has about the spatial structure of a large-scale environment: the "cognitive map". The functions of the cognitive map are to assimilate new information about the environment, to represent the current position, and to answer route-finding and relative-position problems. This model (called the TOUR model) analyzes the cognitive map in terms of symbolic descriptions of the environment and operations on those descriptions. Knowledge about a particular environment is represented in terms of route descriptions, a topological network of paths and places, multiple frames of reference for relative positions, dividing boundaries, and a structure of containing regions. The current position is described by the "You Are Here" pointer, which acts as a working memory and a focus of attention. Operations on the cognitive map are performed by inference rules which act to transfer information among different descriptions and the "You Are Here" pointer. The TOUR model shows how the particular descriptions chosen to represent spatial knowledge support assimilation of new information from local observations into the cognitive map, and how the cognitive map solves route-finding and relative-position problems. A central theme of this research is that the states of partial knowledge supported by a representation are responsible for its ability to function with limited information or computational resources. The representations in the TOUR model provide a rich collection of states of partial knowledge, and therefore exhibit flexible, "common-sense" behavior.


AITR-419

Author[s]: Jon Doyle

Truth Maintenance Systems for Problem Solving

January 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-419.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-419.pdf

The thesis developed here is that reasoning programs which take care to record the logical justifications for program beliefs can apply several powerful, but simple, domain-independent algorithms to (1) maintain the consistency of program beliefs, (2) realize substantial search efficiencies, and (3) automatically summarize explanations of program beliefs. These algorithms use the recorded justifications to maintain the consistency and well-founded basis of the set of beliefs. The set of beliefs can be efficiently updated in an incremental manner when hypotheses are retracted and when new information is discovered. The recorded justifications also enable the pinpointing of exactly those assumptions which support any particular belief. The ability to pinpoint the underlying assumptions is the basis for an extremely powerful domain-independent backtracking method. This method, called Dependency-Directed Backtracking, offers vastly improved performance over traditional backtracking algorithms.
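
A rough sketch of the justification-recording idea, in Python, may help make the abstract concrete: each belief records sets of antecedent beliefs, and retracting an assumption causes dependent beliefs to drop out when the well-founded set is recomputed. The data structures and names are assumptions of this illustration, not Doyle's implementation, and dependency-directed backtracking itself is not shown.

# Rough sketch of justification recording, assuming a simplified scheme in
# which each belief is supported by one or more justifications (sets of
# antecedent beliefs).  This is an illustration, not Doyle's TMS.

class BeliefBase:
    def __init__(self):
        self.justifications = {}   # belief -> list of antecedent sets
        self.assumptions = set()   # beliefs held without antecedents

    def assume(self, belief):
        self.assumptions.add(belief)

    def justify(self, belief, antecedents):
        self.justifications.setdefault(belief, []).append(set(antecedents))

    def retract(self, belief):
        self.assumptions.discard(belief)

    def believed(self):
        # A belief is "in" if it is an assumption or has a justification all of
        # whose antecedents are themselves in; iterate to a fixed point.
        current = set(self.assumptions)
        changed = True
        while changed:
            changed = False
            for belief, justs in self.justifications.items():
                if belief not in current and any(j <= current for j in justs):
                    current.add(belief)
                    changed = True
        return current

tms = BeliefBase()
tms.assume('battery-ok')
tms.justify('lights-work', ['battery-ok'])
print('lights-work' in tms.believed())   # True
tms.retract('battery-ok')                # retracting the assumption...
print('lights-work' in tms.believed())   # ...removes the dependent belief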


AIM-420

Author[s]: Guy Lewis Steele, Jr.

Data Representations in PDP-10 MACLISP

September 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-420.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-420.pdf

The internal representations of the various MacLISP data types are presented and discussed. Certain implementation tradeoffs are considered. The ultimate decisions on these tradeoffs are discussed in the light of MacLISP’s prime objective of being an efficient high-level language for the implementation of large systems such as MACSYMA. The basic strategy of garbage collection is outlined, with reference to the specific representations involved. Certain “clever tricks” are explained and justified. The “address space crunch” is explained and some alternative solutions explored.


AIM-421

Author[s]: Guy Lewis Steele, Jr.

Fast Arithmetic in MACLISP

September 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-421.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-421.pdf

MacLISP provides a compiler which produces numerical code competitive in speed with some FORTRAN implementations and yet compatible with the rest of the MacLISP system. All numerical programs can be run under the MacLISP interpreter. Additional declarations to the compiler specify type information which allows the generation of optimized numerical code which generally does not require the garbage collection of temporary numerical results. Array accesses are almost as fast as in FORTRAN, and permit the use of dynamically allocated arrays of varying dimensions. Here we discuss the implementation decisions regarding user interface, data representations, and interfacing conventions which allow the generation of fast numerical LISP code.


AIM-422

Author[s]: K. Forbus

Light Source Effects

May 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-422.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-422.pdf

The perception of surface luster in achromatic single view images seems to depend on the existence of regions with source-like properties. These regions are due to the interaction of the specular component of the surface’s reflectance with the illumination. Light source effects are broken down into three categories according to gross aspects of the physical situation in which they occur, and criteria for detecting the regions they cause are suggested.


AIM-423

Author[s]: James L. Stansfield

COMEX: A Support System for a Commodities Expert

August 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-423.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-423.pdf

The intelligent support system project is developing a program (COMEX) to assist a commodities expert in tasks such as interpreting data, predicting trends and intelligent noticing. Large amounts of qualitative and quantitative information about factors such as weather, trade and crop conditions need to be managed. This memo presents COMEX-0, a prototype system written in FRL, a frame-based language (Goldstein & Roberts, 1977). COMEX-0 has a complaint handling system, frame structure matching and simple reasoning. By conversing with a user, it builds groupings of frame structures to represent events. These are called CLUSTERS and are proposed as a new representation method. New CLUSTERS are built from previously defined ones using INSTANTIATION and AGGREGATION, two methods which combine with frame inheritance and constraints to make up a general event representation mechanism. CLUSTERS capture the idea of generic patterns of relationships between frames and raise an issue named the GENERIC CONSTRAINT PROBLEM concerning constraints between the parts of a cluster. The final section presents plans for future work on qualitative reasoning within COMEX and includes a hypothetical scenario.


AIM-424

Author[s]: John M. Hollerbach

The Minimum Energy Movement for a Spring Muscle Model

September 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-424.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-424.pdf

There are many ways of programming an actuator or effector for movement between the same two points. In the interest of efficiency it is sometimes desirable to program that trajectory which requires the least amount of energy. This paper considers the minimum energy movement for a spring-like actuator abstracted from muscle mechanics and energetics. It is proved that for this actuator a bang-coast-bang actuation pattern minimizes the energy expenditure. For some parameter values this pattern is modified by a singular arc at the first switching point. A surprising limitation on the duration of coast is demonstrated. Some relaxations of the restrictions underlying the spring model are shown to preserve the bang-coast-bang solution.


AIM-425

Author[s]: Gerald Jay Sussman

Electrical Design: A Problem for Artificial Intelligence Research

June 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-425.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-425.pdf

This report outlines the problem of intelligent failure recovery in a problem-solver for electrical design. We want our problem solver to learn as much as it can from its mistakes. Thus we cast the engineering design process in terms of Problem Solving by Debugging Almost-Right Plans, a paradigm for automatic problem solving based on the belief that creation and removal of “bugs” is an unavoidable part of the process of solving a complex problem. The process of localization and removal of bugs called for by the PSBDARP theory requires an approach to engineering analysis in which every result has a justification which describes the exact set of assumptions it depends upon. We have developed a program based on Analysis by Propagation of Constraints which can explain the basis of its deductions. In addition to being useful to a PSBDARP designer, these justifications are used in Dependency-Directed Backtracking to limit the combinatorial search in the analysis routines. Although the research we will describe is explicitly about electrical circuits, we believe that similar principles and methods are employed by other kinds of engineers, including computer programmers.
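
A toy sketch of analysis by propagation of constraints, with each deduced value carrying the premises it rests on, is given below in Python. The single Ohm's-law constraint and the cell interface are hypothetical simplifications for illustration; the report's program handles full circuits and uses its justifications for dependency-directed backtracking.

# Minimal sketch of constraint propagation with recorded premises.  The Cell
# class and the single Ohm's-law constraint are invented for this example.

class Cell:
    def __init__(self, name):
        self.name, self.value, self.premises = name, None, set()

    def set(self, value, premises, reason):
        if self.value is None:
            self.value, self.premises = value, set(premises)
            print(f"{self.name} = {value}  because {reason}, given {sorted(self.premises)}")

def ohms_law(v, i, r):
    # Whenever two of the three cells are known, deduce the third.
    if i.value is not None and r.value is not None:
        v.set(i.value * r.value, i.premises | r.premises, "V = I*R")
    elif v.value is not None and r.value is not None:
        i.set(v.value / r.value, v.premises | r.premises, "I = V/R")
    elif v.value is not None and i.value is not None:
        r.set(v.value / i.value, v.premises | i.premises, "R = V/I")

v, i, r = Cell("V"), Cell("I"), Cell("R")
v.set(10.0, {"given-V"}, "premise")
r.set(2.0, {"given-R"}, "premise")
ohms_law(v, i, r)   # deduces I = 5.0, justified by {given-R, given-V}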


AIM-426

Author[s]: Bruce R. Schatz

The Computation of Immediate Texture Discrimination

August 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-426.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-426.pdf

The computation of immediate texture discrimination involves finding boundaries between regions of differing texture. Various textures are examined to investigate the factors determining discrimination in the limited domain of line-and-point images. Two operators embodying necessary properties are proposed: length and orientation of actual lines and of local virtual lines between terminators. It is conjectured that these are sufficient as well. Relations between this theory and those of Julesz and of Marr are discussed. Supporting psychological evidence is introduced and an implementation strategy outlined.


AIM-427

Author[s]: Johan de Kleer, Jon Doyle, Guy L. Steele, Jr. and Gerald Jay Sussman

Explicit Control of Reasoning

June 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-427.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-427.pdf

The construction of expert problem-solving systems requires the development of techniques for using modular representations of knowledge without encountering combinatorial explosions in the solution effort. This report describes an approach to dealing with this problem based on making some knowledge which is usually implicitly part of an expert problem solver explicit, thus allowing this knowledge about control to be manipulated and reasoned about. The basic components of this approach involve using explicit representations of the control structure of the problem solver, and linking this and other knowledge manipulated by the expert by means of explicit data dependencies.


AIM-428

Author[s]: Akinori Yonezawa and Carl Hewitt

Modelling Distributed Systems

June 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-428.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-428.pdf

Distributed systems are multi-processor information processing systems which do not rely on a central shared memory for communication. This paper presents ideas and techniques in modelling distributed systems and their application to Artificial Intelligence. In Sections 2 and 3, we discuss a model of distributed systems and its specification and verification techniques. We introduce a simple example of airline reservation systems in Section 4 and illustrate our specification and verification techniques for this example in the subsequent sections. Then we discuss our further work.


AIM-429

Author[s]: S.D. Litvintchouk and V.R. Pratt

A Proof-Checker for Dynamic Logic

June 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-429.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-429.pdf

We consider the problem of getting a computer to follow reasoning conducted in dynamic logic. This is a recently developed logic of programs that subsumes most existing first-order logics of programs that manipulate their environment, including Floyd’s and Hoare’s logics of partial correctness and Manna and Waldinger’s logic of total correctness. Dynamic logic is more closely related to classical first-order logic than any other proposed logic of programs. This simplifies the design of a proof-checker for dynamic logic. Work in progress on the implementation of such a program is reported on, and an example machine-checked proof is exhibited.


AIM-430

Author[s]: Marvin Minsky

Plain Talk About Neurodevelopmental Epistemology

June 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-430.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-430.pdf

This paper is based on a theory being developed in collaboration with Seymour Papert in which we view the mind as an organized society of intercommunicating “agents”. Each such agent is, by itself, very simple. The subject of this paper is how that simplicity affects communication between different parts of a single mind and, indirectly, how it may affect inter-personal communications.


AIM-431

Author[s]: Steven T. Rosenberg

Frame-based Text Processing

November 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-431.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-431.pdf

This paper presents an overview of a theory of discourse structure, and discusses a model for assimilating text into a frame-based data structure. The model has been applied to the analysis of news articles. The theory assumes sentences contain links to the database which are relatively easy to compute. These links point to prior themes which contain expectations and procedural knowledge. This knowledge is used to assimilate new sentences to these themes. At any given time, only procedural knowledge from the indicated theme is active in processing new sentences.


AIM-432

Author[s]: Hal Abelson and Paul Goldenberg

Teacher's Guide for Computational Models of Animal Behavior

April 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-432.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-432.pdf

This is an experimental curriculum unit which suggests how the computational perspective can be integrated into a subject such as elementary school biology. In order to illustrate the interplay of computer and non- computer activities, we have prepared the unit as a companion to the Elementary School Science Study “Teacher’s Guide to Behavior of Mealworms.” This material is based on use of the Logo computer language.


AIM-433

Author[s]: Gerald Jay Sussman

SLICES: At the Boundary Between Analysis and Synthesis

July 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-433.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-433.pdf

The algebraic difficulty of determining the component values in a circuit of known topology and specifications is large. Expert circuit designers use terminal equivalence and power arguments to reduce the apparent synergy in a circuit so that their computational power can be focussed. A new descriptive mechanism, called slices, is introduced. Slices combine the notion of equivalence with identification of parameters. Armed with appropriate slices, an automatic analysis procedure, Analysis by Propagation of Constraints can be used to assign the component values in a circuit. Techniques of formation, notation, and use of slices are described. The origin of slices in the topological design process is indicated. Slices are shown to be of wider interest in scientific thought than just in circuit analysis.


AIM-435

Author[s]: Johan de Kleer, Jon Doyle, Charles Rich, Guy L. Steele, Jr. and Gerald Jay Sussman

AMORD: A Deductive Procedure System

January 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-435.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-435.pdf

We have implemented an interpreter for a rule-based system, AMORD, based on a non-chronological control structure and a system of automatically maintained data dependencies. The purpose of this paper is to serve as a reference manual and as an implementation tutorial. We wish to illustrate: (1) The discipline of explicit control and dependencies, (2) How to use AMORD, and (3) One way to implement the mechanisms provided by AMORD. This paper is organized into sections. The first section is a short “reference manual” describing the major features of AMORD. Next, we present some examples which illustrate the style of expression encouraged by AMORD. This style makes control information explicit in a rule-manipulable form, and depends on an understanding of the use of non-chronological justifications for program beliefs as a means for determining the current set of beliefs. The third section is a brief description of the Truth Maintenance System employed by AMORD for maintaining these justifications and program beliefs. The fourth section presents a complete annotated interpreter for AMORD, written in MacLISP.


AIM-436A

Author[s]: Carl Hewitt and Henry Baker

Actors and Continuous Functionals

July 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-436a.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-436a.pdf

This paper presents precise versions of some “laws” that must be satisfied by computations involving communicating parallel processes. The laws take the form of stating plausible restrictions on the histories of computations that are physically realizable. The laws are very general in that they are obeyed by parallel processes executing on a time varying number of distributed physical processors.


AIM-437

Author[s]: Berthold K.P. Horn and Brett L. Bachman

Using Synthetic Images to Register Real Images with Surface Models

August 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-437.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-437.pdf

A number of image analysis tasks can benefit from registration of the image with a model of the surface being imaged. Automatic navigation using visible light or radar images requires exact alignment of such images with digital terrain models. In addition, automatic classification of terrain, using satellite imagery, requires such alignment to deal correctly with the effects of varying sun angle and surface slope. Even inspection techniques for certain industrial parts may be improved by this means.


AIM-438

Author[s]: Russell Atkinson and Carl Hewitt

Specification and Proof Techniques for Serializers

August 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-438.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-438.pdf

This paper presents an implementation mechanism, specification language, and proof techniques for problems involving the arbitration of concurrent requests to shared resources. This mechanism is the serializer which may be described as a kind of protection mechanism, in that it prevents improper orders of access to a protected resource. Serializers are a generalization and improvement of the monitor mechanism of Brinch-Hansen and Hoare.


AITR-439

Author[s]: Marc H. Raibert

Motor Control and Learning by the State Space Model

September 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-439.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-439.pdf

A model is presented that deals with problems of motor control, motor learning, and sensorimotor integration. The equations of motion for a limb are parameterized and used in conjunction with a quantized, multi-dimensional memory organized by state variables. Descriptions of desired trajectories are translated into motor commands which will replicate the specified motions. The initial specification of a movement is free of information regarding the mechanics of the effector system. Learning occurs without the use of error correction when practice data are collected and analyzed.


AIM-440

Author[s]: Berthold K.P. Horn

Density Reconstruction Using Arbitrary Ray Sampling Schemes

September 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-440.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-440.pdf

Methods for calculating the distribution of absorption densities in a cross section through an object from density integrals along rays in the plane of the cross section are well known, but are restricted to particular geometries of data collection. So-called convolutional-backprojection-summation methods, used now for parallel ray data, have recently been extended to special cases of the fan-beam reconstruction problem by the addition of pre- and post-multiplication steps. In this paper, I present a technique for deriving reconstruction algorithms for arbitrary ray-sampling schemes: the resulting algorithms entail the use of a general linear operator, but require little more computation than the convolutional methods, which represent special cases.
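
The backprojection-summation step that the convolutional methods share can be illustrated in a few lines of Python for the simplest parallel-ray case (two orthogonal projections, no filtering and no general linear operator); this toy sketch is only meant to show what "backprojection" means, not the paper's method for arbitrary sampling schemes.

import numpy as np

# Toy illustration of backprojection-summation for parallel rays at 0 and 90
# degrees only, with no convolution step; an assumption-laden simplification.

phantom = np.zeros((8, 8))
phantom[3:5, 2:6] = 1.0                 # a small dense block in the cross section

proj_rows = phantom.sum(axis=1)         # ray integrals along horizontal rays
proj_cols = phantom.sum(axis=0)         # ray integrals along vertical rays

# Backprojection: smear each projection back across the image and sum.
backproj = proj_rows[:, None] + proj_cols[None, :]
print(np.round(backproj / backproj.max(), 2))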


AIM-441

Author[s]: Andrea A. diSessa

On "Learnable" Representations of Knowledge: A Meaning for the Computational Metaphor

September 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-441.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-441.pdf

The computational metaphor which proposes the comparison of processes of mind to realizable or imaginable computer activities suggests a number of educational concerns. This paper discusses some of those concerns including procedural modes of knowledge representation and control knowledge – knowing what to do. I develop a collection of heuristics for education researchers and curriculum developers which are intended to address the issues raised. Finally, an extensive section of examples is given to concretize those heuristics.


AIM-442

Author[s]: Harold Abelson

Towards a Theory of Local and Global in Computation

September 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-442.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-442.pdf

We formulate the rudiments of a method for assessing the difficulty of dividing a computational problem into “independent simpler parts.” This work illustrates measures of complexity which attempt to capture the distinction between “local” and “global” computational problems. One such measure is the covering multiplicity, or average number of partial computations which take account of a given piece of data. Another measure reflects the intuitive notion of a “highly interconnected” computational problem, for which subsets of the data cannot be processed “in isolation.” These ideas are applied in the setting of computational geometry to show that the connectivity predicate has unbounded covering multiplicity and is highly interconnected; and in the setting of numerical computations to measure the complexity of evaluating polynomials and solving systems of linear equations.


AIM-443

Author[s]: Guy Lewis Steele, Jr.

Debunking the 'Expensive Procedure Call' Myth, or, Procedure Call Implementations Considered Harmful, or, Lambda: The Ultimate GOTO

October 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-443.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-443.pdf

Folklore states that GOTO statements are 'cheap', while procedure calls are 'expensive'. This myth is largely a result of poorly designed language implementations. The historical growth of this myth is considered. Both theoretical ideas and an existing implementation are discussed which debunk this myth. It is shown that the unrestricted use of procedure calls permits great stylistic freedom. In particular, any flowchart can be written as a 'structured' program without introducing extra variables. The difficulty with the GOTO statement and the procedure call is characterized as a conflict between abstract programming concepts and concrete language constructs.
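
The point that a procedure call in tail position can act as a GOTO can be sketched in Python by writing a small state machine as procedures that return the next "state" procedure to call. Since Python does not perform the tail-call optimization the memo argues for, the sketch drives the calls with a trampoline; the state names are invented for the example.

# Sketch of a flowchart written as procedures that "GOTO" one another by
# naming the next procedure; a trampoline supplies the constant-stack jumps
# that a tail-call-optimizing implementation would provide directly.

def even_state(n):
    # Each "state" either halts or names the next state to jump to.
    return ('done', 'even') if n == 0 else (odd_state, n - 1)

def odd_state(n):
    return ('done', 'odd') if n == 0 else (even_state, n - 1)

def trampoline(state, arg):
    # Drives the tail calls iteratively, so the jumps use constant stack space.
    while state != 'done':
        state, arg = state(arg)
    return arg

print(trampoline(even_state, 100001))   # 'odd', with no deep recursion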


AIM-444

Author[s]: Alan Bawden, Richard Greenblatt, Jack Holloway, Thomas Knight, David Moon and Daniel Weinreb

LISP Machine Progress Report

August 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-444.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-444.pdf

This informal paper introduces the LISP Machine, describes the goals and current status of the project, and explicates some of the key ideas. It covers the LISP machine implementation, LISP as a system language, input/output, representation of data, representation of programs, control structures, storage organization, garbage collection, the editor, and the current status of the work.


AIM-445

Author[s]: Stephen C. Purcell

Understanding Hand-Printed Algebra for Computer Tutoring

February 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-445.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-445.pdf

This thesis demonstrates how the use of a global context can improve the power of a local character recognizer. The global context considered is a computer tutor of high school algebra that observes a student working algebra problems on a graphics tablet. The tutoring system is integrated with a character recognizer to understand the student's pen strokes, and an algebra tutoring system is designed and implemented. This thesis joins together two uses of a computer, intelligent tutoring and tablet communication. Natural communication with computers has been pursued through speech understanding, English text understanding, special purpose languages, hand printing and graphics. This work extends the power of hand-printing understanders by using more varied and higher level sources of knowledge than have been used previously.


AIM-446

Author[s]: D. Marr, G. Palm, and T. Poggio

Analysis of a Cooperative Stereo Algorithm

October 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-446.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-446.pdf

Marr & Poggio (1976) recently described a cooperative algorithm that solves the correspondence problem for stereopsis. This article uses a probabilistic technique to analyze the convergence of that algorithm, and derives the conditions governing the stability of the solution state. The actual results of applying the algorithm to random-dot stereograms are compared with the probabilistic analysis. A satisfactory mathematical analysis of the asymptotic behaviour of the algorithm is possible for a suitable choice of the parameter values and loading rules, and again the actual performance of the algorithm under these conditions is compared with the theoretical predictions. Finally, some problems raised by the analysis of this type of “cooperative” algorithm are briefly discussed.
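
A one-dimensional sketch of a cooperative update rule in the spirit of the algorithm analyzed here is given below in Python: cells indexed by position and disparity are excited by neighbours at the same disparity and inhibited by cells along the same line of sight. The neighbourhood sizes, inhibition weight, and threshold are assumptions chosen for the illustration, not the parameter values studied in the memo.

import numpy as np

# 1-D cooperative disparity update: same-disparity neighbours excite, other
# disparities at the same position inhibit.  Parameter values are assumptions.

rng = np.random.default_rng(0)
n_x, n_d = 64, 7
epsilon, theta = 1.0, 3.0
C0 = rng.random((n_x, n_d)) < 0.3   # initial (noisy) candidate matches
C0[:, 3] = True                      # the true matches all lie at disparity 3
C = C0.astype(float)

for _ in range(10):
    excit = np.zeros_like(C)
    for dx in (-2, -1, 0, 1, 2):     # support from same-disparity neighbours
        excit += np.roll(C, dx, axis=0)
    inhib = C.sum(axis=1, keepdims=True) - C   # other disparities, same position
    C = ((excit - epsilon * inhib + C0) >= theta).astype(float)

print(C.sum(axis=0))                 # survivors concentrate at disparity 3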


AIM-447

Author[s]: Eugene Ciccarelli

An Introduction to the EMACS Editor

January 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-447.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-447.pdf

EMACS is a real-time editor primarily intended for display terminals. The intent of this memo is to describe EMACS in enough detail to allow a user to edit comfortably in most circumstances, knowing how to get more information if needed. Basic commands described cover buffer editing, file handling, and getting help. Two sections cover commands especially useful for editing LISP code, and text (word- and paragraph-commands). A brief “cultural interest” section describes the environment that supports EMACS commands.


AIM-448

Author[s]: Berthold K. P. Horn

Fan-beam Reconstruction Methods

November 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-448.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-448.pdf

In a previous paper a technique was developed for finding reconstruction algorithms for arbitrary ray-sampling schemes. The resulting algorithms use a general linear operator, the kernel of which depends on the details of the scanning geometry. Here this method is applied to the problem of reconstructing density distributions from arbitrary fan-beam data. The general fan-beam method is then specialized to a number of scanning geometries of practical importance. Included are two cases where the kernel of the general linear operator can be factored and rewritten as a function of the difference of coordinates only and the superposition integral consequently simplifies into a convolution integral. Algorithms for these special cases of the fan-beam problem have been developed previously by others. In the general case, however, Fourier transforms and convolutions do not apply, and linear space-variant operators must be used. As a demonstration, details of a fan-beam method for data obtained with uniform ray-sampling density are developed.


AIM-449

Author[s]: Ira P. Goldstein

The Genetic Epistemology of Rule Systems

January 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-449.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-449.pdf

I shall describe a model of the evolution of the rule-structured knowledge that serves as a cornerstone of our development of computer-based coaches. The key idea is a graph structure whose nodes represent rules, and whose links represent various evolutionary relationships such as generalization, correction, and refinement. This graph guides both student modelling and tutoring as follows: the coach models the student in terms of nodes in this graph, and selects tutoring strategies for a given rule on the basis of its genetic links. It also suggests a framework for a theory of learning in which the graph serves as a memory structure constructed by the student by means of processes corresponding to the various links. Given this framework, a learning complexity measure can be defined in terms of the topology of the graph.


AITR-450

Author[s]: Scott E. Fahlman

A System for Representing and Using Real-World Knowledge

December 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-450.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-450.pdf

This report describes a knowledge-base system in which the information is stored in a network of small parallel processing elements – node and link units – which are controlled by an external serial computer. This network is similar to the semantic network system of Quillian, but is much more tightly controlled. Such a network can perform certain critical deductions and searches very quickly; it avoids many of the problems of current systems, which must use complex heuristics to limit and guide their searches. It is argued (with examples) that the key operation in a knowledge-base system is the intersection of large explicit and semi-explicit sets. The parallel network system does this in a small, essentially constant number of cycles; a serial machine takes time proportional to the size of the sets, except in special cases.


AIM-451

Author[s]: D. Marr and T. Poggio

A Theory of Human Stereo Vision

November 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-451.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-451.pdf

An algorithm is proposed for solving the stereoscopic matching problem. The algorithm consists of five steps: 1.) Each image is filtered with bar masks of four sizes that vary with eccentricity; the equivalent filters are about one octave wide. 2.) Zero-crossings of the mask values are localized, and positions that correspond to terminations are found. 3.) For each mask size, matching takes place between pairs of zero crossings or terminations of the same sign in the two images, for a range of disparities up to about the width of the mask's central region. 4.) Wide masks can control vergence movements, thus causing small masks to come into correspondence. 5.) When a correspondence is achieved, it is written into a dynamic buffer, called the 2-1/2-D sketch. It is shown that this proposal provides a theoretical framework for most existing psychophysical and neurophysiological data about stereopsis. Several critical experimental predictions are also made, for instance about the size of Panum's area under various conditions. The results of such experiments would tell us whether, for example, cooperativity is necessary for the fusion process.


AIM-452

Author[s]: Guy Lewis Steele, Jr. and Gerald Jay Sussman

The Revised Report on SCHEME: A Dialect of LISP

January 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-452.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-452.pdf

SCHEME is a dialect of LISP. It is an expression-oriented, applicative order, interpreter-based language which allows one to manipulate programs as data. It differs from most current dialects of LISP in that it closes all lambda-expressions in the environment of their definition or declaration, rather than in the execution environment. This has the consequence that variables are normally lexically scoped, as in ALGOL. However, in contrast with ALGOL, SCHEME treats procedures as a first-class data type. They can be the values of variables, the returned values of procedures, and components of data structures. Another difference from LISP is that SCHEME is implemented in such a way that tail-recursions execute without net growth of the interpreter stack. The effect of this is that a procedure call behaves like a GOTO and thus procedure calls can be used to implement iterations, as in PLASMA.
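
Two of the properties emphasized above, procedures as first-class values and lambda-expressions closed in the environment of their definition, can be illustrated in Python (which is also lexically scoped); the example below is only an analogy, and it cannot demonstrate SCHEME's tail-call behavior.

# Sketch of first-class, lexically closed procedures, by analogy in Python.

def make_adder(n):
    # The returned procedure is closed over n in its definition environment.
    return lambda x: x + n

add3 = make_adder(3)            # procedures are the values of variables...
add7 = make_adder(7)
print(add3(10), add7(10))       # 13 17

def compose(f, g):
    # ...and can be passed as arguments and returned as results.
    return lambda x: f(g(x))

print(compose(add3, add7)(0))   # 10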


AIM-453

Author[s]: Guy Lewis Steele Jr. and Gerald Jay Sussman

The Art of the Interpreter or, The Modularity Complex (Parts Zero, One, and Two)

May 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-453.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-453.pdf

We examine the effects of various language design decisions on the programming styles available to a user of the language, with particular emphasis on the ability to incrementally construct modular systems. At each step we exhibit an interactive meta-circular interpreter for the language under consideration. Each new interpreter is the result of an incremental change to a previous interpreter. We explore the consequences of various variable binding disciplines and the introduction of side effects. We find that dynamic scoping is unsuitable for constructing procedural abstractions, but has another role as an agent of modularity, being a structured form of side effect. More general side effects are also found to be necessary to promote modular style. We find that the notion of side effect and the notion of equality (object identity) are mutually constraining; to define one is to define the other. The interpreters we exhibit are all written in a simple dialect of LISP, and all implement LISP-like languages. A subset of these interpreters constitute a partial historical reconstruction of the actual evolution of LISP.
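
A very small environment-passing evaluator in Python can illustrate the kind of binding-discipline comparison the memo makes: the same program yields different answers depending on whether a procedure is closed over its definition environment (lexical scoping) or looks variables up in its caller's environment (dynamic scoping). The tuple representation and the evaluator itself are assumptions of this sketch, not the memo's interpreters.

# Tiny evaluator for a LISP-like subset, showing how one binding-discipline
# decision (lexical vs. dynamic scoping) changes a program's meaning.

def evaluate(expr, env, dynamic=False):
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):                       # variable reference
        return lookup(expr, env)
    op = expr[0]
    if op == 'lambda':                              # ('lambda', (params...), body)
        return ('closure', expr[1], expr[2], env)
    if op == '+':
        return evaluate(expr[1], env, dynamic) + evaluate(expr[2], env, dynamic)
    # application: (fn arg ...)
    fn = evaluate(op, env, dynamic)
    args = [evaluate(a, env, dynamic) for a in expr[1:]]
    _, params, body, def_env = fn
    call_env = (dict(zip(params, args)), env if dynamic else def_env)
    return evaluate(body, call_env, dynamic)

def lookup(name, env):
    while env is not None:
        frame, parent = env
        if name in frame:
            return frame[name]
        env = parent
    raise NameError(name)

# (let ((x 1)) (let ((f (lambda (y) (+ x y)))) (let ((x 100)) (f 10))))
program = (('lambda', ('x',),
            (('lambda', ('f',),
              (('lambda', ('x',), ('f', 10)), 100)),
             ('lambda', ('y',), ('+', 'x', 'y')))),
           1)

print(evaluate(program, None, dynamic=False))  # 11: f sees the x of its definition
print(evaluate(program, None, dynamic=True))   # 110: f sees the caller's x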


AIM-454

Author[s]: Henry G. Baker and Carl Hewitt

The Incremental Garbage Collection of Processes

December 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-454.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-454.pdf

This paper investigates some problems associated with an expression evaluation order that we call "future" order, which is different from call-by-name, call-by-value, and call-by-need. In future order evaluation, an object called a "future" is created to serve as the value of each expression that is to be evaluated, and a separate process is dedicated to its evaluation. This mechanism allows the fully parallel evaluation of the expressions in a programming language. We discuss an approach to a problem that arises in this context: futures which were thought to be relevant when they were created become irrelevant through not being needed later in the computation. The problem of irrelevant processes also appears in multiprocessing problem-solving systems which start several processors working on the same problem but with different methods, and return with the solution which finishes first. This parallel method strategy has the drawback that the processes which are investigating the losing methods must be identified, cleanly stopped, and the processors they are using reassigned to more useful tasks. The solution we propose is that of incremental garbage collection. The goal structure of the solution plan should be explicitly represented in memory as part of the graph memory (like Lisp's heap) so that a garbage collection algorithm can discover which processes are performing useful work, and which can be recycled for a new task. An incremental algorithm for the unified garbage collection of storage and processes is described.
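
The flavor of future-order evaluation can be suggested with Python's standard concurrent.futures module: a future object stands in for the value of each expression while a separate worker evaluates it. The sketch below simply cancels futures that turn out to be irrelevant, whereas the paper proposes discovering and reclaiming such processes by an incremental garbage collection of the goal structure.

from concurrent.futures import ThreadPoolExecutor

# Sketch only: futures stand in for expression values; irrelevant ones are
# cancelled here, rather than reclaimed by garbage collection as proposed above.

def square(x):
    # stands in for the evaluation of one subexpression
    return x * x

with ThreadPoolExecutor() as pool:
    # A future is created for each expression; workers evaluate them in parallel.
    futures = [pool.submit(square, n) for n in range(4)]
    needed = futures[0].result()    # only this result turns out to be relevant
    for f in futures[1:]:
        f.cancel()                  # irrelevant futures are dropped (if not yet started)
    print(needed)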


AIM-456

Author[s]: Howard E. Shrobe

Floyd-Hoare Verifiers “Considered Harmful”

January 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-456.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-456.pdf

The Floyd-Hoare methodology completely dominates the field of program verification and has contributed much to our understanding of how programs might be analyzed. Useful but limited verifiers have been developed using Floyd-Hoare techniques. However, it has long been known that it is difficult to handle side effects on shared data structures within the Floyd-Hoare framework. Axioms for assignment to complex data structures have been proposed within this framework, and similar statements have been used by London. This paper demonstrates an error in these formalizations and suggests a different style of verification.


AITR-457

Author[s]: Robert J. Woodham

Reflectance Map Techniques for Analyzing Surface Defects in Metal Castings

June 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-457.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-457.pdf

This report explores the relation between image intensity and object shape. It is shown that image intensity is related to surface orientation and that a variation in image intensity is related to surface curvature. Computational methods are developed which use the measured intensity variation across surfaces of smooth objects to determine surface orientation. In general, surface orientation is not determined locally by the intensity value recorded at each image point. Tools are needed to explore the problem of determining surface orientation from image intensity. The notion of gradient space, popularized by Huffman and Mackworth, is used to represent surface orientation. The notion of a reflectance map, originated by Horn, is used to represent the relation between surface orientation and image intensity. The image Hessian is defined and used to represent surface curvature. Properties of surface curvature are expressed as constraints on possible surface orientations corresponding to a given image point. Methods are presented which embed assumptions about surface curvature in algorithms for determining surface orientation from the intensities recorded in a single view. If additional images of the same object are obtained by varying the direction of incident illumination, then surface orientation is determined locally by the intensity values recorded at each image point. This fact is exploited in a new technique called photometric stereo. The visual inspection of surface defects in metal castings is considered. Two casting applications are discussed. The first is the precision investment casting of turbine blades and vanes for aircraft jet engines. In this application, grain size is an important process variable. The existing industry standard for estimating the average grain size of metals is implemented and demonstrated on a sample turbine vane. Grain size can be computed from the measurements obtained in an image, once the foreshortening effects of surface curvature are accounted for. The second is the green sand mold casting of shuttle eyes for textile looms. Here, physical constraints inherent to the casting process translate into constraints on object shape; to exploit these constraints, it is necessary to interpret features of intensity as features of object shape. Both applications demonstrate that successful visual inspection requires the ability to interpret observed changes in intensity in the context of surface topography. The theoretical tools developed in this report provide a framework for this interpretation.


AIM-458

Author[s]: Berthold K.P. Horn and Marc H. Raibert

Configuration Space Control

December 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-458.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-458.pdf

Complicated systems with non-linear time-varying behavior are difficult to control using classical linear feedback methods applied separately to individual degrees of freedom. At present, mechanical manipulators, for example, are limited in their rate of movement by the inability of traditional feedback systems to deal with time-varying inertia, torque coupling effects between links and Coriolis forces. Analysis of the dynamics of such systems, however, provides the basic information needed to achieve adequate control.


AIM-459

Author[s]: Charles Rich, Howard E. Shrobe, Richard C. Waters, Gerald J. Sussman and Carl E. Hewitt

Programming Viewed as an Engineering Activity

January 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-459.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-459.pdf

It is profitable to view the process of writing programs as an engineering activity. A program is a deliberately contrived mechanism constructed from parts whose behaviors are combined to produce the behavior of the whole. We propose to develop a notion of understanding a program which is analogous to similar notions in other engineering subjects. Understanding is a rich notion in engineering domains. It includes the ability to identify the parts of a mechanism and assign a purpose to each part. Understanding also entails being able to explain to someone how a mechanism works and rationalize its behavior under unusual circumstances.


AIM-460

Author[s]: Seymour Papert and Daniel H. Watt

Assessment and Documentation of a Children's Computer Laboratory

September 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-460.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-460.pdf

This research will thoroughly document the experiences of a small number of 5th grade children in an elementary school computer laboratory, using LOGO, an advanced computer language designed for children. Four groups of four children will be taught a 10-week LOGO course. Detailed anecdotal records will be kept, and observers will note the development of the children’s computer programming skills, and the acquisition of knowledge in the areas of mathematics, science, and language, and of cognitive strategies and attitudinal changes which transfer beyond the specific subject matter studied.


AIM-461

Author[s]: Jon Doyle

A Glimpse of Truth Maintenance

February 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-461.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-461.pdf

Many procedurally-oriented problem solving systems can be viewed as performing a mixture of computation and deduction, with much of the computation serving to decide what deductions should be made. This results in bits and pieces of deductions being strewn throughout the program text and execution. This paper describes a problem solver subsystem called a truth maintenance system which collects and maintains these bits of deductions. Automatic functions of the truth maintenance system then use these pieces of “proofs” to consistently update a data base of program beliefs and to perform a powerful form of backtracking called dependency-directed backtracking.
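
To make the idea of recorded justifications concrete, the sketch below (in Python, with invented names; it illustrates the general mechanism rather than the system described in this memo) stores a justification for each derived belief and recomputes the believed set when a premise is withdrawn:

    # A deliberately tiny, illustrative justification-recording store.
    class TinyTMS:
        def __init__(self):
            self.premises = set()          # beliefs asserted outright
            self.justifications = {}       # belief -> list of supporting-belief tuples

        def assert_premise(self, p):
            self.premises.add(p)

        def retract_premise(self, p):
            self.premises.discard(p)

        def justify(self, conclusion, *supports):
            """Record that `conclusion` is believed whenever all of `supports` are."""
            self.justifications.setdefault(conclusion, []).append(tuple(supports))

        def beliefs(self):
            """Recompute the believed set from premises and recorded justifications."""
            believed = set(self.premises)
            changed = True
            while changed:
                changed = False
                for conclusion, supports_list in self.justifications.items():
                    if conclusion not in believed and any(
                            all(s in believed for s in supports) for supports in supports_list):
                        believed.add(conclusion)
                        changed = True
            return believed

    tms = TinyTMS()
    tms.assert_premise("switch-closed")
    tms.assert_premise("battery-ok")
    tms.justify("current-flows", "switch-closed", "battery-ok")
    print("current-flows" in tms.beliefs())   # True
    tms.retract_premise("battery-ok")
    print("current-flows" in tms.beliefs())   # False: the belief loses its support

Dependency-directed backtracking goes further, using the recorded justifications to identify exactly which assumptions participate in a contradiction and to rule out only those combinations.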


AIM-461A

Author[s]: Jon Doyle

A Glimpse of Truth Maintenance

November 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-461a.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-461a.pdf

To choose their actions, reasoning programs must be able to draw conclusions from limited information and subsequently revise their beliefs when discoveries invalidate previous assumptions. A truth maintenance system is a problem solver subsystem for performing these functions by recording and maintaining the reasons for program beliefs. These recorded reasons are useful in constructing explanations of program actions in “responsible” programs, and in guiding the course of action of a problem solver. This paper describes the structure of a truth maintenance system, methods for encoding control structures in patterns of reasons for beliefs, and the method of dependency-directed backtracking.


AIM-462

Author[s]: William R. Swartout

A Comparison of PARSIFAL with Augmented Transition Networks

March 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-462.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-462.pdf

This paper compares Marcus’ parser, PARSIFAL with Woods’ Augmented Transition Network (ATN) parser. In particular, the paper examines the two parsers in light of Marcus’ Determinism Hypothesis. An overview of each parser is presented. Following that, the Determinism Hypothesis is examined in detail. A method for transforming the PARSIFAL grammar rules into the ATN formalism is outlined. This transformation shows some of the fundamental differences between PARSIFAL and ATN parsers, and the nature of the hypotheses used in PARSIFAL. Finally, the principle of least commitment is proposed as an alternative to the Determinism Hypothesis.


AIM-463

Author[s]: Thomas M. Strat

Shaded Perspective Images of Terrain

March 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-463.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-463.pdf

In order to perform image analysis, one must have a thorough understanding of how images are formed. This memo presents an algorithm that produces shaded perspective images of terrain as a vehicle to understanding the fundamentals of image formation. The image is constructed using standard projection equations along with an efficient hidden-surface removal technique. The image intensity is calculated using the reflectance map, a convenient way of describing the surface reflection as a function of surface gradient. Aside from its use as a tool toward understanding image analysis, the algorithm has several applications of its own, including providing video input to a flight simulator.
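
The shading step alone is easy to illustrate. The fragment below (an illustrative sketch, not code from the memo; it omits the perspective projection and hidden-surface removal entirely) estimates the gradient of a terrain height grid and looks up intensity in a Lambertian reflectance map with an assumed source direction:

    import numpy as np

    def lambertian_reflectance_map(p, q, ps, qs):
        """R(p,q) for a Lambertian surface lit from gradient direction (ps, qs)."""
        num = 1.0 + p * ps + q * qs
        den = np.sqrt(1.0 + p**2 + q**2) * np.sqrt(1.0 + ps**2 + qs**2)
        return np.clip(num / den, 0.0, None)   # clip self-shadowed points to zero

    def shade_terrain(z, ps=0.5, qs=0.3):
        """Shade a height grid z[y, x] using finite-difference gradients."""
        q, p = np.gradient(z)                  # axis 0 varies with y, axis 1 with x
        return lambertian_reflectance_map(p, q, ps, qs)

    z = np.fromfunction(lambda y, x: np.sin(x / 8.0) + np.cos(y / 11.0), (64, 64))
    image = shade_terrain(z)
    print(image.shape, image.min(), image.max())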


AIM-464

Author[s]: Michael S. Paterson and Carl E. Hewitt

Comparative Schematology

May 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-464.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-464.pdf

While we may have the intuitive idea of one programming language having greater power than another, or of some subset of a language being an adequate “core” for that language, we find when we try to formalize this notion that there is a serious theoretical difficulty. This lies in the fact that even quite rudimentary languages are nevertheless “universal” in the following sense. If the language allows us to program with simple arithmetic or list processing functions, then any effective control structure can be simulated, traditionally by encoding a Turing machine computation in some way. In particular, a simple language with some basic arithmetic can express programs for any partial recursive function. Such an encoding is usually quite unnatural and impossibly inefficient. Thus in order to carry on a practical study of the comparative power of different languages we are led to banish explicit functions and deal instead with abstract, uninterpreted programs, or schemas. What follows is a brief report on some preliminary exploration in this area.


AIM-465

Author[s]: Berthold K.P. Horn and Robert J. Woodham

LANDSAT MSS Coordinate Transformations

February 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-465.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-465.pdf

A number of image analysis tasks require the registration of a surface model with an image. In the case of satellite images, the surface model may be a map or digital terrain model in the form of surface elevations on a grid of points. We develop here an affine transformation between coordinates of Multi-Spectral Scanner (MSS) images produced by the LANDSAT satellites, and coordinates of a system lying in a plane tangent to the earth’s surface near the sub-satellite (Nadir) point.
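
As a reminder of what such a transformation looks like in use, the sketch below fits a planar affine map to a few control points by least squares and applies it. The points and values are invented, and the memo itself derives the transformation from the satellite geometry rather than fitting it to data:

    import numpy as np

    def fit_affine(src, dst):
        """Least-squares affine map: dst ~= src @ A.T + b, for Nx2 arrays."""
        n = len(src)
        design = np.hstack([src, np.ones((n, 1))])              # rows of [x, y, 1]
        coeffs, *_ = np.linalg.lstsq(design, dst, rcond=None)   # 3x2 coefficient matrix
        return coeffs[:2].T, coeffs[2]                           # A (2x2), b (2,)

    def apply_affine(A, b, pts):
        return pts @ A.T + b

    src = np.array([[10.0, 20.0], [400.0, 35.0], [60.0, 700.0], [420.0, 710.0]])
    dst = np.array([[1.2, 3.4], [31.8, 2.1], [4.9, 55.0], [33.0, 54.1]])
    A, b = fit_affine(src, dst)
    print(apply_affine(A, b, src) - dst)   # residuals of the fit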


AIM-466

Author[s]: Steven Rosenberg and Herbert A. Simon

Modeling Semantic Memory: Effects of Presenting Semantic Information in Different Modalities

April 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-466.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-466.pdf

How is semantic information from different modalities integrated and stored? If related ideas are encountered in French and English, or in pictures and sentences, is the result a single representation in memory or two modality-dependent ones? Subjects were presented with items in different modalities, then were asked whether or not subsequently presented items were identical with the former ones. Subjects frequently accepted translations and items semantically consistent with those presented earlier as identical, although not as often as they accepted items actually seen previously. The same pattern of results was found when the items were French and English sentences, and when they were pictures and sentences. The results can be explained by the hypothesis that subjects integrate information across modalities into a single underlying semantic representation. A computer model, embodying this hypothesis, made predictions in close agreement with the data.


AIM-467

Author[s]: B.K.P. Horn and R.J. Woodham

Destriping Satellite Images

March 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-467.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-467.pdf

Before satellite images obtained with multiple image sensors can be used in image analysis, corrections must be introduced for the differences in the transfer functions of these sensors. Methods are presented here for obtaining the required information directly from the statistics of the sensor outputs. The assumption is made that the probability distribution of the scene radiance seen by each image sensor is the same. Successful destriping of LANDSAT images is demonstrated.
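
The underlying statistical argument is simple to illustrate: if every detector sees the same distribution of scene radiance over a whole image, then differences in the per-detector output statistics must be due to the detectors and can be normalized away. The sketch below applies an illustrative linear (gain and offset) correction per detector, assuming each of six detectors scans every sixth row; it shows the flavor of the method rather than the memo's exact procedure:

    import numpy as np

    def destripe(image, num_detectors=6):
        out = image.astype(float).copy()
        ref_mean, ref_std = out.mean(), out.std()
        for d in range(num_detectors):
            rows = out[d::num_detectors]                  # rows scanned by detector d
            gain = ref_std / rows.std() if rows.std() > 0 else 1.0
            out[d::num_detectors] = (rows - rows.mean()) * gain + ref_mean
        return out

    striped = np.random.default_rng(0).normal(100, 10, size=(60, 80))
    striped[::6] += 15.0                                   # detector 0 has an offset
    clean = destripe(striped)
    print(striped[::6].mean() - striped[1::6].mean())      # large before correction
    print(clean[::6].mean() - clean[1::6].mean())          # near zero afterwards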


AIM-468

Author[s]: Candace Sidner

A Progress Report on the Discourse and Reference Components of PAL

April 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-468.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-468.pdf

This paper reports on research being conducted on a computer assistant, called PAL. PAL is being designed to arrange various kinds of events with concern for the who, what, when, where and why of that event. The goal for PAL is to permit a speaker to interact with it in English and to use extended discourse to state the speaker’s requirements. The portion of the language system discussed in this report disambiguates references from discourse and interprets the purpose of sentences of the discourse. PAL uses the focus of discourse to direct its attention to a portion of the discourse and to the database to which the discourse refers. The focus makes it possible to disambiguate references with minimal search. Focus and a frames representation of the discourse make it possible to interpret discourse purposes. The focus and representation of the discourse are explained, and the computational components of PAL which implement reference disambiguation and discourse interpretation are presented in detail.


AITR-472

Author[s]: Edwina Rissland Michener

The Structure of Mathematical Knowledge

August 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-472.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-472.pdf

This report develops a conceptual framework in which to talk about mathematical knowledge. There are several broad categories of mathematical knowledge: results which contain the traditional logical aspects of mathematics; examples which contain illustrative material; and concepts which include formal and informal ideas, that is, definitions and heuristics.


AIM-473

Author[s]: David A. McAllester

A Three Valued Truth Maintenance System

May 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-473.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-473.pdf

Truth maintenance systems have been used in recently developed problem solving systems. A truth maintenance system (TMS) is designed to be used by deductive systems to maintain the logical relations among the beliefs which those systems manipulate. These relations are used to incrementally modify the belief structure when premises are changed, giving a more flexible context mechanism than has been present in earlier artificial intelligence systems. The relations among beliefs can also be used to directly trace the source of contradictions or failures, resulting in far more efficient backtracking.


AITR-474

Author[s]: Guy Lewis Steele, Jr.

RABBIT: A Compiler for SCHEME

May 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-474.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-474.pdf

We have developed a compiler for the lexically-scoped dialect of LISP known as SCHEME. The compiler knows relatively little about specific data manipulation primitives such as arithmetic operators, but concentrates on general issues of environment and control. Rather than having specialized knowledge about a large variety of control and environment constructs, the compiler handles only a small basis set which reflects the semantics of lambda-calculus. All of the traditional imperative constructs, such as sequencing, assignment, looping, GOTO, as well as many standard LISP constructs such as AND, OR, and COND, are expressed in macros in terms of the applicative basis set. A small number of optimization techniques, coupled with the treatment of function calls as GOTO statements, serve to produce code as good as that produced by more traditional compilers. The macro approach enables speedy implementation of new constructs as desired without sacrificing efficiency in the generated code. A fair amount of analysis is devoted to determining whether environments may be stack-allocated or must be heap-allocated. Heap-allocated environments are necessary in general because SCHEME (unlike Algol 60 and Algol 68, for example) allows procedures with free lexically scoped variables to be returned as the values of other procedures; the Algol stack-allocation environment strategy does not suffice. The methods used here indicate that a heap-allocating generalization of the "display" technique leads to an efficient implementation of such "upward funargs". Moreover, compile-time optimization and analysis can eliminate many "funargs" entirely, and so far fewer environment structures need be allocated at run time than might be expected. A subset of SCHEME (rather than triples, for example) serves as the representation intermediate between the optimized SCHEME code and the final output code; code is expressed in this subset in the so-called continuation-passing style. As a subset of SCHEME, it enjoys the same theoretical properties; one could even apply the same optimizer used on the input code to the intermediate code. However, the subset is so chosen that all temporary quantities are made manifest as variables, and no control stack is needed to evaluate it. As a result, this apparently applicative representation admits an imperative interpretation which permits easy transcription to final imperative machine code. These qualities suggest that an applicative language like SCHEME is a better candidate for an UNCOL than the more imperative candidates proposed to date.
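
The continuation-passing intermediate form is easiest to see on a toy example. The fragment below (written in Python rather than SCHEME, purely to illustrate the style, not RABBIT's actual representation) names every intermediate quantity and passes it to an explicit continuation, so that every call is a tail call:

    def fact_cps(n, k):
        # k is the continuation: "what to do with the answer"
        if n == 0:
            return k(1)
        return fact_cps(n - 1, lambda result: k(n * result))

    print(fact_cps(5, lambda answer: answer))   # 120

In such a form no control stack is needed to give the program a meaning, which is what makes transcription to imperative machine code straightforward once tail calls are treated as GOTO statements.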


AIM-475

Author[s]: Steven Rosenberg

Understanding in Incomplete Worlds

May 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-475.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-475.pdf

Most real world domains differ from the micro-worlds traditionally used in A.I. in that they have an incomplete factual database which changes over time. Understanding in these domains can be thought of as the generation of plausible inferences which are able to use the facts available, and respond to changes in them. A traditional rule interpreter such as Planner can be extended to construct plausible inferences in these domains by A) allowing assumptions to be made in applying rules, resulting in simplifications of rules which can be used in an incomplete database; B) monitoring the antecedents and consequents of a rule so that inferences can be maintained over a changing database. The resulting chains of inference can provide a dynamic description of an event. This allows general reasoning processes to be used to understand in domains for which large numbers of Schema-like templates have been proposed as the best model.


AIM-476

Author[s]: S. Ullman

The Interpretation of Structure From Motion

October 1976

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-476.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-476.pdf

The interpretation of structure from motion is examined from a computational point of view. The question addressed is how the 3-D structure and motion of objects can be inferred from the 2-D transformations of their projected images when no 3-D information is conveyed by the individual projections.


AIM-477

Author[s]: David Wayne Ihrie

Analysis of Synthetic Students as a Model of Human Behavior

May 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-477.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-477.pdf

The research described in this report is an attempt to evaluate the educational effects of a computer game known as Wumpus. A set of five synthetic computer students was taken as a model of the progress of real students playing a sequence of twenty Wumpus “warrens”. Using a combination of observations made of the students, representations drawn by the students and protocols kept by the computer of each session, it was found that the synthetic students are a reasonable static model of real students, but miss completely many of the important dynamic factors which affect a student’s play. In spite of this, the Wumpus game was found to be an effective educational tool.


AIM-478

Author[s]: Berthold K.P. Horn, Ken-Ichi Hirokawa and Vijay Vazirani

Dynamics of a Three Degree of Freedom Kinematic Chain

October 1977

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-478.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-478.pdf

In order to be able to design a control system for high-speed control of mechanical manipulators, it is necessary to understand properly their dynamics. Here we present an analysis of a detailed model of a three-link device which may be viewed as either a “leg” in a locomotory system, or the first three degrees of freedom of an “arm” providing for its gross motions. The equations of motion are shown to be non-trivial, yet manageable.


AIM-479

Author[s]: Robert J. Woodham

Photometric Stereo

June 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-479.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-479.pdf

Traditional stereo techniques determine range by relating two images of an object viewed from different directions. If the correspondence between picture elements is known, then distance to the object can be calculated by triangulation. Unfortunately, it is difficult to determine this correspondence. This paper introduces a novel technique called photometric stereo. The idea of photometric stereo is to vary the direction of the incident illumination between successive views while holding the viewing direction constant. This provides enough information to determine surface orientation at each picture element. Since the imaging geometry does not change, the correspondence between picture elements is known a priori. This stereo technique is photometric because it uses the intensity values recorded in a single picture element, in successive views, rather than the relative positions of features.
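
For a Lambertian surface the technique reduces to solving a small linear system at each picture element. The sketch below (illustrative only; the light directions and intensity values are invented, and the memo treats more general reflectance maps) recovers a scaled surface normal from three intensity values taken under three known, non-coplanar illumination directions:

    import numpy as np

    L = np.array([[0.0, 0.0, 1.0],     # light direction for image 1
                  [1.0, 0.0, 1.0],     # image 2
                  [0.0, 1.0, 1.0]])    # image 3
    L = L / np.linalg.norm(L, axis=1, keepdims=True)   # unit light directions

    def recover_orientation(i1, i2, i3):
        """Return (unit normal, albedo) for one pixel from three intensity values."""
        intensities = np.array([i1, i2, i3])
        g = np.linalg.solve(L, intensities)   # g = albedo * unit normal
        albedo = np.linalg.norm(g)
        return g / albedo, albedo

    normal, albedo = recover_orientation(0.9, 0.75, 0.6)
    print(normal, albedo)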


AIM-480

Author[s]: Kenneth M. Kahn and Carl Hewitt

Dynamic Graphics Using Quasi Parallelism

June 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-480.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-480.pdf

Dynamic computer graphics is best represented as several processes operating in parallel. Full parallel processing, however, entails much complex mechanism making it difficult to write simple, intuitive programs for generating computer animation. What is presented in this paper is a simple means of attaining the appearance of parallelism and the ability to program the graphics in a conceptually parallel fashion without the complexity of a more general parallel mechanism. Each entity on the display screen can be independently programmed to move, turn, change size, color or shape and to interact with other entities.


AIM-482B

Author[s]: Kenneth M. Kahn

Director Guide

December 1979

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-482b.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-482b.pdf

Director is a programming language designed for dynamic graphics, artificial intelligence, and use by computer-naïve people. It is based upon the actor or object oriented approach to programming and resembles Act 1 and SmallTalk. Director extends MacLisp by adding a small set of primitive actors and the ability to create new ones. Its graphical features include an interface to the TV turtle, quasi-parallelism, many animation primitives, a parts/whole hierarchy and a primitive actor for making and recording “movies”. For artificial intelligence programming Director provides a pattern-directed data base associated with each actor, an inheritance hierarchy, and a means of conveniently creating non-standard control structures. For use by naïve programmers Director is appropriate because of its stress upon very powerful, yet conceptually simple primitives and its verbose, simple syntax based upon pattern matching. Director code can be turned into optimized Lisp which in turn can be compiled into machine code.


AIM-482

Author[s]: Kenneth M. Kahn

Director Guide

June 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-482.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-482.pdf

Director is a programming language designed for dynamic graphics, artificial intelligence, and naïve users. It is based upon the actor or object oriented approach to programming and resembles Act 1 and SmallTalk. Director extends MacLisp by adding a small set of primitive actors and the ability to create new ones. Its graphical features include an interface to the TV turtle, pseudo-parallelism, many animation primitives, and a primitive actor for making and recording “movies”. For artificial intelligence programming Director provides a pattern-directed data base associated with each actor, an inheritance hierarchy, pseudo-parallelism, and a means of conveniently creating non-standard control structures. For use by relatively naïve programmers Director is appropriate because of its stress upon very powerful, yet conceptually simple primitives and its verbose, simple syntax based upon pattern matching. Director code can be turned into optimized Lisp which in turn can be compiled into machine code.


AITR-483

Author[s]: Kurt A. Vanlehn

Determining the Scope of English Quantifiers

June 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-483.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-483.pdf

How can one represent the meaning of English sentences in a formal logical notation such that the translation of English into this logical form is simple and general? This report answers this question for a particular kind of meaning, namely quantifier scope, and for a particular part of the translation, namely the syntactic influence on the translation. Rules are presented which predict, for example, that the sentence “Everyone in this room speaks at least two languages” has the quantifier scope AE in standard predicate calculus, while the sentence “At least two languages are spoken by everyone in this room” has the quantifier scope EA. Three different logical forms are presented, and their translation rules are examined. One of the logical forms is predicate calculus. The translation rules for it were developed by Robert May (May 1977). The other two logical forms are Skolem form and a simple computer programming language. The translation rules for these two logical forms are new. All three sets of translation rules are shown to be general, in the sense that the same rules express the constraints that syntax imposes on certain other linguistic phenomena. For example, the rules that constrain the translation into Skolem form are shown to constrain definite NP anaphora as well. A large body of carefully collected data is presented, and used to assess the empirical accuracy of each of the theories. None of the three theories is vastly superior to the others. However, the report concludes by suggesting that a combination of the two newer theories would have the greatest generality and the highest empirical accuracy.
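
For readers unfamiliar with the AE/EA notation, the two readings can be written out in ordinary predicate calculus (suppressing the "at least two" cardinality for brevity; these are the standard textbook forms, not necessarily the exact formulas of the report):

    AE: \forall x\, \exists y\, \mathit{speaks}(x, y)   (each person has some language he or she speaks)
    EA: \exists y\, \forall x\, \mathit{speaks}(x, y)   (some one language is spoken by every person)

The syntactic question the report addresses is which English surface forms permit which of these orderings.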


AIM-484

Author[s]: Members of the LOGO Project

Interim Report of the LOGO Project in the Brookline Public Schools

June 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-484.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-484.pdf

The LOGO activities of a group of 16 sixth-grade students, representing a full spectrum of ability, are being documented with a view to developing ways of capturing the learning possibilities of such an environment. The first group of eight subjects has completed 25 closely observed hours, extending over 7 weeks, in a LOGO classroom situated in a Brookline school. This is an interim report on these observations, designed to exhibit the content of what has been learned, and to give insights into both the variety of cognitive styles of the pupils and the variety of learning situations available to a teacher with which to respond to different pupil styles and abilities. We have a large amount of data available for analysis, and we are interested in looking at this material from several points of view. The current state of our various analyses is presented here, without any effort to prune the considerable redundancy which has been generated in the process of doing this multiple-cut exercise.


AIM-485

Author[s]: Johan de Kleer and Gerald Jay Sussman

Propagation of Constraints Applied to Circuit Synthesis

September 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-485.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-485.pdf

A major component in the process of design is synthesis, the determination of the parameters of the parts of a network given desiderata for the behavior of the network as a whole. Traditional automated synthesis techniques are either restricted to small, precisely defined classes of circuit functions for which exact mathematical methods exist or they depend upon numerical optimization methods in which it is difficult to determine the basis for any of the answers generated and their relations to the design desiderata and constraints. We are developing a symbolic computer-aided design tool, SYN, which can be of assistance to an engineer in the synthesis of a large class of circuits. The symbolic methods produce solutions which are clear and insightful. The dependence of each parameter on the individual design desiderata and circuit constraints can be easily traced.
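
The propagation mechanism itself is easy to caricature. In the sketch below (a toy in Python with invented names; SYN works symbolically and records the dependence of each deduced value on the design desiderata), a constraint fills in whichever of its quantities is still unknown once the others are given, and propagation simply reruns the constraints until nothing changes:

    class Cell:
        def __init__(self, name):
            self.name, self.value = name, None

    class Constraint:
        """Relation among cells; whenever enough values are known, deduce the rest."""
        def __init__(self, cells, rule):
            self.cells, self.rule = cells, rule

        def propagate(self):
            return self.rule(*self.cells)

    def ohm(v, i, r):
        """V = I * R: deduce whichever of the three is still unknown."""
        if v.value is None and i.value is not None and r.value is not None:
            v.value = i.value * r.value; return True
        if i.value is None and v.value is not None and r.value is not None:
            i.value = v.value / r.value; return True
        if r.value is None and v.value is not None and i.value is not None:
            r.value = v.value / i.value; return True
        return False

    def run(constraints):
        changed = True
        while changed:
            changed = False
            for c in constraints:
                if c.propagate():
                    changed = True

    V, I, R = Cell("V"), Cell("I"), Cell("R")
    net = [Constraint((V, I, R), ohm)]
    V.value, R.value = 10.0, 2000.0       # design desiderata
    run(net)
    print(I.value)                        # 0.005, deduced by propagation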


AIM-486

Author[s]: Drew McDermott and Jon Doyle

Non-Monotonic Logic I

August 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-486.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-486.pdf

“Non-monotonic” logical systems are logics in which the introduction of new axioms can invalidate old theorems. Such logics are very important in modeling the beliefs of active processes which, acting in the presence of incomplete information, must make and subsequently revise predictions in light of new observations. We present the motivation and history of such logics. We develop model and proof theories, a proof procedure, and applications for one important non-monotonic logic. In particular, we prove the completeness of the non-monotonic predicate calculus and the decidability of the non-monotonic sentential calculus. We also discuss characteristic properties of this logic and its relationship to stronger logics, logics of incomplete information, and truth maintenance systems.
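
A standard illustration of the flavor of such logics (the usual textbook example, not one taken from the memo) uses a consistency operator M, read "is consistent with everything believed":

    \mathit{bird}(x) \wedge M\,\mathit{flies}(x) \rightarrow \mathit{flies}(x)

From bird(Tweety) alone one can conclude flies(Tweety); adding the new axiom \neg flies(Tweety) later makes that conclusion underivable, so the set of theorems does not grow monotonically with the set of axioms.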


AIM-486A

Author[s]: Drew McDermott and Jon Doyle

Non-Monotonic Logic I

January 1979

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-486a.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-486a.pdf

“Non-monotonic” logical systems are logics in which the introduction of new axioms can invalidate old theorems. Such logics are very important in modeling the beliefs of active processes which, acting in the presence of incomplete information, must make and subsequently revise predictions in light of new observations. We present the motivation and history of such logics. We develop model and proof theories, a proof procedure, and applications for one important non-monotonic logic. In particular, we prove the completeness of the non-monotonic predicate calculus and the decidability of the non-monotonic sentential calculus. We also discuss characteristic properties of this logic and its relationship to stronger logics, logics of incomplete information, and truth maintenance systems.


AIM-488

Author[s]: Edwina Rissland Michener

Understanding Understanding Mathematics

August 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-488.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-488.pdf

In this paper we look at some of the ingredients and processes involved in the understanding of mathematics. We analyze elements of mathematical knowledge, organize them in a coherent way and take note of certain classes of items that share noteworthy roles in understanding. We thus build a conceptual framework in which to talk about mathematical knowledge. We then use this representation to describe the acquisition of understanding. We also report on classroom experience with these ideas.


AIM-490

Author[s]: Berthold K.P. Horn, Robert J. Woodham and William M. Silver

Determining Shape and Reflectance Using Multiple Images

August 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-490.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-490.pdf

Distributions of surface orientation and reflectance factor on the surface of an object can be determined from scene radiances observed by a fixed sensor under varying lighting conditions. Such techniques have potential application to the automatic inspection of industrial parts, the determination of the attitude of a rigid body in space and the analysis of images returned from planetary explorers. A comparison is made of this method with techniques based on images obtained from different viewpoints with fixed lighting.


AIM-491

Author[s]: D. Marr, T. Poggio and S. Ullman

Bandpass Channels, Zero-Crossings, and Early Visual Information Processing

September 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-491.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-491.pdf

A recent advance by B.F. Logan in the theory of one-octave bandpass signals may throw new light on spatial-frequency-tuned channels in early visual information processing.
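
The representation at issue is simple to demonstrate on a one-dimensional signal. The sketch below (illustrative parameter choices throughout; a difference of Gaussians is used as an approximately one-octave bandpass filter) filters a signal and records the positions where the output changes sign:

    import numpy as np

    def dog_kernel(sigma, ratio=1.6, radius=12):
        # difference of two Gaussians: an approximately bandpass filter
        x = np.arange(-radius, radius + 1, dtype=float)
        gauss = lambda s: np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
        return gauss(sigma) - gauss(sigma * ratio)

    def zero_crossings(signal, sigma=2.0):
        filtered = np.convolve(signal, dog_kernel(sigma), mode="same")
        sign = np.signbit(filtered)
        return np.where(sign[1:] != sign[:-1])[0]   # indices where the sign flips

    t = np.linspace(0, 4 * np.pi, 400)
    signal = np.sin(t) + 0.5 * np.sin(3 * t + 1.0)
    print(zero_crossings(signal))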


AITR-492

Author[s]: Richard C. Waters

Automatic Analysis of the Logical Structure of Programs

December 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-492.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-492.pdf

This report presents a method for viewing complex programs as built up out of simpler ones. The central idea is that typical programs are built up in a small number of stereotyped ways. The method is designed to make it easier for an automatic system to work with programs. It focuses on how the primitive operations performed by a program are combined together in order to produce the actions of the program as a whole. It does not address the issue of how complex data structures are built up from simpler ones, nor the relationships between data structures and the operations performed on them.


AIM-493

Author[s]: Brian Cantwell Smith

A Proposal for a Computational Model of Anatomical and Physiological Reasoning

November 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-493.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-493.pdf

The studies of anatomy and physiology are fundamental ingredients of medical education. This paper identifies six ways in which such functional knowledge serves as the underpinnings for general medical reasoning, and outlines the design of a computational model of common sense reasoning about human physiology. The design of the proposed model is grounded in a set of declarative representational ideas sometimes called “frame theory”: representational structures constructed from multiple-perspective, potentially redundant, descriptions, organized into structured collections, and associated with the objects and classes being described.


AIM-495

Author[s]: Ira Goldstein

Developing a Computational Representation for Problem Solving Skills

October 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-495.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-495.pdf

This paper describes the evolution of a problem solving model over several generations of computer coaches. Computer coaching is a type of computer assisted instruction in which the coaching program observes the performance of a student engaged in some intellectual game. The coach’s function is to intervene occasionally in student generated situations to discuss appropriate skills that might improve the student’s play. Coaching is a natural context in which to investigate the teaching and learning processes, but it is a demanding task. The computer must be able to analyze the student’s performance in terms of a model of the underlying problem solving skills. This model must represent not only expertise for the task but also intermediate stages of problem solving skill and typical difficulties encountered by the learner. Implementing several generations of computer coaches to meet these demands has resulted in a model that represents problem solving skills as an evolving set of rules for a domain acting on an evolving representation of the problem and executed by a resource-limited problem solver. This paper describes this evolution from its starting point as a simple rule-based approach to its current form.


AIM-496

Author[s]: Seymour A. Papert and Sylvia Weir

Information Prosthetics for the Handicapped

September 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-496.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-496.pdf

In this proposal we describe a technological step towards the realization of INFORMATION PROSTHETICS. Our primary focus is on using rather than making the technology. Specifically, our goal is to transpose for the use of cerebral-palsied children a computer-based learning environment we have developed, and to study in this environment a series of issues in developmental psychology, in the psychology of learning, in psycho-diagnostic techniques and in methods of instruction.


AIM-498

Author[s]: Berthold K.P. Horn and Robert W. Sjoberg

Calculating the Reflectance Map

October 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-498.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-498.pdf

It appears that the development of machine vision may benefit from a detailed understanding of the imaging process. The reflectance map, showing scene radiance as a function of surface gradient, has proved to be helpful in this endeavor. The reflectance map depends both on the nature of the surface layers of the objects being imaged and on the distribution of light sources. Recently, a unified approach to the specification of surface reflectance in terms of both incident and reflected beam geometry has been proposed: the bidirectional reflectance-distribution function (BRDF). Here we derive the reflectance map in terms of the BRDF and the distribution of source radiance. A number of special cases of practical importance are developed in detail. The significance of this approach to the understanding of image formation is briefly indicated.
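
For a single distant point source the derivation specializes in a particularly simple way. In standard radiometric notation (the usual definitions, not formulas quoted from the memo), the radiance reflected toward the viewer is

    L(\theta_e, \phi_e) = f_r(\theta_i, \phi_i; \theta_e, \phi_e)\, E_0 \cos\theta_i

where f_r is the BRDF, E_0 the irradiance the source would deliver on a surface facing it directly, and \theta_i the angle between the surface normal and the source direction. Re-expressing \theta_i and the viewing geometry in terms of the surface gradient (p, q) turns this into the reflectance map R(p, q); for an ideal Lambertian surface, f_r = \rho / \pi and R(p, q) is simply proportional to \cos\theta_i.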


AIM-499

Author[s]: Johan de Kleer

Causal Reasoning and Rationalization in Electronics

September 1978

ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-499.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-499.pdf

This research attempts to formalize the type of causal arguments engineers employ to understand circuit behavior. A causal argument consists of a sequence of changes to circuit quantities (called events), each of which is caused by previous events. The set of events that an individual event can directly cause is largely an artifact of the point of view taken to analyze the circuit. A particular causal argument does not rule out other possibly conflicting causal arguments for the same circuit. If the actual behavior of the circuit is known or determined by measurements, the correct argument can be identified. The selected argument is a rationalization for the observed behavior since it explains but does not guarantee the observed behavior. A causal analysis program QUAL has been implemented which determines the response of a circuit to changes in input signals. It operates with a simple four-valued arithmetic of unknown, unchanging, increasing and decreasing. This program is used to illustrate the applicability of causal reasoning to circuit recognition, algebraic analysis, troubleshooting and design.
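
The four-valued arithmetic can be sketched directly. The fragment below (value names and combination rules are illustrative guesses, not QUAL's actual definitions) shows qualitative addition of changes, as one might use to propagate a disturbance through a current-summing node:

    UNKNOWN, STEADY, UP, DOWN = "?", "0", "+", "-"

    def q_add(a, b):
        """Qualitative sum of two changes."""
        if a == STEADY:
            return b
        if b == STEADY:
            return a
        if a == b:
            return a                  # two increases sum to an increase, etc.
        return UNKNOWN                # opposite or unknown changes: result unknown

    print(q_add(UP, STEADY))   # '+': an increasing plus an unchanging quantity increases
    print(q_add(UP, DOWN))     # '?': cannot tell without magnitudes

The '?' results are the price of abstracting away magnitudes: when an increasing and a decreasing quantity are summed, the sign of the net change cannot be determined without more information.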


 

Computer Science and Artificial Intelligence Laboratory (CSAIL)
The Stata Center, Building 32 - 32 Vassar Street - Cambridge, MA 02139 - USA
tel:+1-617-253-0073 - publications@csail.mit.edu