

CSAIL Digital Archive - Artificial Intelligence Laboratory Series
Publications 300 through 399

AI Publications. Last update: Sun Mar 19 05:05:02 2006


Author[s]: Carl R. Flatau

Design Outline for Mini-Arms Based on Manipulator Technology

May 1973



The design of small manipulators is an art requiring proficiency in diverse disciplines. This paper documents some of the general ideas illustrated by a particular design for an arm roughly one quarter human size. The material is divided into the following sections: A. General design constraints. B. Features of existing manipulator technology. C. Scaling relationships for major arm components. D. Design of a particular small manipulator. E. Comments on future possibilities.


Author[s]: Richard C. Waters

A Mechanical Arm Control System

January 1974



This paper describes a proposed mechanical arm control system and some of the lines of thought which led to this design. In particular, the paper discusses the basic system required in order for the arm to control its environment and deal with error situations which arise. In addition, the paper discusses the system needed to control the motion of the arm using the computed torque drive method and force feedback.


Author[s]: Arthur J. Nevins

A Relaxation Approach to Splitting in an Automatic Theorem Prover

January 1974



The splitting of a problem into subproblems often involves the same variable appearing in more than one of the subproblems. This makes these subproblems dependent upon one another, since a solution to one may not qualify as a solution to another. A two stage method of splitting is described which first obtains solutions by relaxing the dependency requirement and then attempts to reconcile solutions to different subproblems. The method has been realized as part of an automatic theorem prover programmed in LISP which takes advantage of the procedural power that LISP provides. The program has had success with cryptarithmetic problems and problems from the blocks world, and has been used as a subroutine in a plane geometry theorem prover.
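The two-stage idea can be illustrated with a toy constraint problem. The sketch below is hypothetical Python (Nevins' program was in LISP), and the subproblems, domain, and variable are invented for illustration only.

```python
# Toy sketch of two-stage splitting with a shared variable x.
# Stage 1 relaxes the dependency and solves each subproblem in isolation;
# stage 2 reconciles the solutions by intersecting the candidate bindings.

def solve(constraint, domain):
    """Stage 1: all values of x satisfying one subproblem on its own."""
    return {x for x in domain if constraint(x)}

domain = range(1, 21)
sub_a = lambda x: x % 3 == 0        # subproblem A: x divisible by 3
sub_b = lambda x: x % 4 == 0        # subproblem B: x divisible by 4

independent = solve(sub_a, domain), solve(sub_b, domain)
reconciled = independent[0] & independent[1]   # stage 2: reconcile shared x
print(sorted(reconciled))                      # only x = 12 survives both
```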


Author[s]: Arthur J. Nevins

Plane Geometry Theorem Proving Using Forward Chaining

January 1974



A computer program is described which operates on a subset of plane geometry. Its performance not only compares favorably with previous computer programs, but within its limited problem domain (e.g. no curved lines nor introduction of new points), it also invites comparison with the best human theorem provers. The program employs a combination of forward and backward chaining with the forward component playing the more important role. This, together with a deeper use of diagrammatic information, allows the program to dispense with the diagram filter in contrast with its central role in previous programs. An important aspect of human problem solving may be the ability to structure a problem space so that forward chaining techniques can be used effectively.


Author[s]: R.W. Gosper

Acceleration of Series

March 1974



The rate of convergence of infinite series can be accelerated by a suitable splitting of each term into two parts and then combining the second part of the n-th term with the first part of the (n+1)-th term to get a new series, leaving the first part of the first term as an "orphan". Repeating this process an infinite number of times, the series will often approach zero, and we obtain the series of orphans, which may converge faster than the original series. Heuristics for determining the splits are given. Various mathematical constants, originally defined as series having a term ratio which approaches 1, are accelerated into series having a term ratio less than 1. This is done with the constants of Euler and Catalan. The series for pi/4 = arctan 1 is transformed into a variety of series, among which is one having a term ratio of 1/27 and another having a term ratio of 54/3125. A series for 1/pi is found which has a term ratio of 1/64 and each term of which is an integer divided by a power of 2, thus making it easy to evaluate the sum in binary arithmetic. We express zeta(3) in terms of pi^3 and a series having a term ratio of 1/16. Various hypergeometric function identities are found, as well as a series for (arcsin y)^2 curiously related to a series for y arcsin y. Convergence can also be accelerated for finite sums, as is shown for the harmonic numbers. The sum of the reciprocals of the Fibonacci numbers has been expressed as a series having the convergence rate of a theta function. Finally, it is shown that a series whose n-th term ratio is (n+p)(n+q)/(n+r)(n+s), where p, q, r, s are integers, is equal to c + d pi^2, where c and d are rational.
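One concrete instance of the splitting process can be sketched in Python. The half-and-half split used here is the simplest possible choice (applied to the alternating series for ln 2) and is not one of the memo's own splittings; it serves only to show the orphan mechanism at work.

```python
import math

# Split each term a_n = a_n/2 + a_n/2, combine the second half of term n
# with the first half of term n+1, and keep a_1/2 as the "orphan".
a = lambda n: (-1) ** (n + 1) / n          # ln 2 = a(1) + a(2) + ...

N = 100
original = sum(a(n) for n in range(1, N + 1))
orphan = a(1) / 2
accelerated = orphan + sum((a(n) + a(n + 1)) / 2 for n in range(1, N + 1))

# The recombined terms are (-1)^(n+1) / (2n(n+1)), so the truncation error
# drops from O(1/N) to O(1/N^2) for the same number of terms.
print(abs(original - math.log(2)), abs(accelerated - math.log(2)))
```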


Author[s]: Ira P. Goldstein

Summary of MYCROFT: A System for Understanding Simple Picture Programs

May 1974



A collection of powerful ideas—description, plans, linearity, insertions, global knowledge and imperative semantics—are explored which are fundamental to debugging skill. To make these concepts precise, a computer monitor called MYCROFT is described that can debug elementary programs for drawing pictures. The programs are those written for LOGO turtles.


Author[s]: Marvin Minsky

A Framework for Representing Knowledge

June 1974



This is a partial theory of thinking, combining a number of classical and modern concepts from psychology, linguistics, and AI. Whenever one encounters a new situation (or makes a substantial change in one's viewpoint) he selects from memory a structure called a frame, a remembered framework to be adapted to fit reality by changing details as necessary. A frame is a data-structure for representing a stereotyped situation, like being in a certain kind of living room, or going to a child's birthday party. Attached to each frame are several kinds of information. Some of this information is about how to use the frame. Some is about what one can expect to happen next. Some is about what to do if these expectations are not confirmed. The "top levels" of a frame are fixed, and represent things that are always true about the supposed situation. The lower levels have many "slots" that must be filled by specific instances or data. Collections of related frames are linked together into frame-systems. The effects of important actions are mirrored by transformations between the frames of a system. These are used to make certain kinds of calculations economical, to represent changes of emphasis and attention, and to account for the effectiveness of "imagery". In vision, the different frames of a system describe the scene from different viewpoints, and the transformations between one frame and another represent the effects of moving from place to place. Other kinds of frame-systems can represent actions, cause-effect relations, or changes in conceptual viewpoint. The paper applies the frame-system idea also to problems of linguistic understanding: memory, acquisition and retrieval of knowledge, and a variety of ways to reason by analogy and jump to conclusions based on partial similarity matching.
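A minimal sketch of a frame as a data structure, in Python. The class design and slot names are illustrative only, not the memo's notation; it shows the two levels the abstract describes: fixed top-level facts and default-filled lower slots that specific data can displace.

```python
# Hypothetical sketch of a frame: fixed "top levels" plus default-filled slots.

class Frame:
    def __init__(self, name, fixed, defaults):
        self.name = name
        self.fixed = fixed           # top levels: always true of the situation
        self.slots = dict(defaults)  # lower levels, pre-filled with defaults

    def fill(self, slot, value):
        self.slots[slot] = value     # a default is displaced by specific data

party = Frame("birthday-party",
              fixed={"occasion": "birthday"},
              defaults={"food": "cake", "activity": "games"})
party.fill("activity", "magician")   # an expectation revised by observation
print(party.slots)                   # {'food': 'cake', 'activity': 'magician'}
```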


Author[s]: Ira Goldstein, Henry Lieberman, Harry Bochner and Mark Miller

LLOGO: An Implementation of LOGO in LISP

June 1974



This paper describes LLOGO, an implementation of the LOGO language written in MACLISP for the ITS, TEN50 and TENEX PDP-10 systems, and MULTICS. The relative merits of LOGO and LISP as educational languages are discussed. Design decisions in the LISP implementation of LOGO are contrasted with those of two other implementations: CLOGO for the PDP-10 and 11LOGO for the PDP-11, both written in assembler language. LLOGO’s special facilities for character-oriented display terminals, graphic display ‘turtles’, and music generation are also described.


Author[s]: Ira Goldstein, Henry Lieberman, Harry Bochner and Mark Miller

LLOGO: An Implementation of LOGO in LISP

March 1975



This paper describes LLOGO, an implementation of the LOGO language written in MACLISP for the ITS, TEN50 and TENEX PDP-10 systems, and MULTICS. The relative merits of LOGO and LISP as educational languages are discussed. Design decisions in the LISP implementation of LOGO are contrasted with those of two other implementations: CLOGO for the PDP-10 and 11LOGO for the PDP-11, both written in assembler language. LLOGO’s special facilities for character-oriented display terminals, graphic display ‘turtles’, and music generation are also described.


Author[s]: Hirochika Inoue

Force Feedback in Precise Assembly Tasks

August 1974



This paper describes the execution of precise assembly tasks by a robot. The level of performance of the experimental system allows such basic actions as putting a peg into a hole, screwing a nut on a bolt, and picking up a thin piece from a flat table. The tolerance achieved in experiments was 0.001 inch. The experiments proved that force feedback enabled a reliable assembly of a bearing complex consisting of eight parts with close tolerances. A movie of the demonstration is available.


Author[s]: James R. Geiser

Commenting Proofs

May 1974



This paper constitutes a summary of a seminar entitled “Commenting Proofs” given at the Artificial Intelligence Laboratory during the spring of 1974. The work is concerned with new syntactic structures in formal proofs which derive from their pragmatic and semantic aspects. It is a synthesis of elements from Yessenin-Volpin’s foundational studies and developments in Artificial Intelligence concerned with commenting programs and the use of this idea in automatic debugging procedures.


Author[s]: Patrick H. Winston

New Progress in Artificial Intelligence

September 1974



This report concentrates on progress during the last two years at the M.I.T. Artificial Intelligence Laboratory. Topics covered include the representation of knowledge, understanding English, learning and debugging, understanding vision and productivity technology. It is stressed that these various areas are tied closely together through certain fundamental issues and problems.


Author[s]: Radia Perlman

TORTIS: Toddler's Own Recursive Turtle Interpreter System

December 1974



TORTIS is a device for preschool children to communicate with and program the turtle. It consists of several boxes (currently three button boxes and two blox boxes) designed so that only a few new concepts are introduced at a time, but more can be added when the child becomes familiar with what he has. Hopefully transitions are gradual enough so that the child never thinks talking to the turtle is too hard or that he is “too dumb”. And hopefully playing with the system should teach such concepts as numbers, breaking large problems into small solvable steps, writing and debugging procedures, recursion, variables, and conditionals. Most important of all, it should teach that learning is fun.


Author[s]: Jeanne Bamberger

The Luxury of Necessity

December 1974



This paper was originally written as an address to a conference of the National Association of Schools of Music on “The Music Consumer”. Posing a series of questions which point to fundamental issues underlying the LOGO music project, the paper goes on to describe some of the specific projects with which students have been working in an effort to probe these issues. Emphasis is placed on “modes of representation” as a significant realm of enquiry: just how does an individual represent a tune to himself, what are the differences between formal and informal modes of representation, what features and relations of a melody does a representation capture, and what does it leave out? What is the influence of such modes of “perception”: how do they affect strategies of problem solving, notions of “same” and “different”, or even influence musical “taste”? Finally, there are some hints at what might constitute “sufficiently powerful representations” of musical design, with examples from both simple and complex pieces of music as well as a probe into what might distinguish “simple” from “complex” musical designs.


Author[s]: Hal Abelson, Nat Goodman and Lee Rudolph

LOGO Manual

December 1974



This document describes the LOGO system implemented for the PDP 11/45 at the M.I.T. Artificial Intelligence Laboratory. The “system” includes not only the LOGO evaluator, but also a dedicated time-sharing system which services about a dozen users. There are also various special devices such as robot turtles, tone generators, and CRT displays.


Author[s]: Jeanne Bamberger

What's in a Tune

November 1974



The work reported here began with two fundamental assumptions: 1) The perception of music is an active process; it involves the individual in selecting, sorting, and grouping the features of the phenomena before her. 2) Individual differences in response to a potentially sensible melody rest heavily on just which features the individual has access to or is able to focus on.


Author[s]: Hal Abelson and Jim Adams

A Glossary of LOGO Primitives

December 1974



This is a brief description of the primitives in PDP 11 LOGO. It is intended to provide a quick reference for users who are already familiar with LOGO basics. For a more detailed and comprehensive description of LOGO, consult the LOGO Manual (A.I. Memo 313, LOGO Memo 7).


Author[s]: E. Paul Goldenberg

A Glossary of PDP11 LOGO Primitives

March 1975



This glossary was written for the purpose of providing a quick and concise yet accurate description of the primitives and special words and characters of the March 18, 1975 PDP 11 implementation of the LOGO language. Many entries include references to other related words and/or examples of the use of the primitive being described, but this is not intended to replace the functions of a good manual. For a more detailed and comprehensive description of the language, see the LOGO MANUAL, LOGO MEMO 7. The description of each LOGO word includes the word itself, any arguments that the word may require, the “type” of word it is, abbreviated and alternate forms of the word, if any, and a definition correct as of the date of this glossary. Word type is described on the first page and an example of the format of the entries is given below. In the appendix to this glossary are sections about 1) LOGO words that take a variable number of inputs, 2) infix operators, 3) editing characters, 4) special characters, 5) special names, 6) decimal ASCII codes and corresponding characters.


Author[s]: Ann D. Rubin

Hypothesis Formation and Evaluation in Medical Diagnosis

January 1975



This thesis describes some aspects of a computer system for doing medical diagnosis in the specialized field of kidney disease. Because such a system faces the spectre of combinatorial explosion, this discussion concentrates on heuristics which control the number of concurrent hypotheses and efficient "compiled" representations of medical knowledge. In particular, the differential diagnosis of hematuria (blood in the urine) is discussed in detail. A protocol of a simulated doctor/patient interaction is presented and analyzed to determine the crucial structures and processes involved in the diagnosis procedure. The data structure proposed for representing medical information revolves around elementary hypotheses which are activated when certain findings are present. Each of the processes of disposing of findings, activating hypotheses, evaluating hypotheses locally, and combining hypotheses globally is examined for its heuristic implications. The thesis attempts to fit the problem of medical diagnosis into the framework of other Artificial Intelligence problems and paradigms, and in particular explores the notions of pure search vs. heuristic methods, linearity and interaction, local vs. global knowledge, and the structure of hypotheses within the world of kidney disease.


Author[s]: Gerald Jay Sussman and Allen L. Brown

Localization of Failures in Radio Circuits: A Study in Causal and Teleological Reasoning

December 1974



This paper examines some methodologies for diagnosing correctly designed radio circuits which are failing to perform in the intended way because of some faulty component. Particular emphasis is placed on the utility and necessity of good teleological descriptions in successfully executing the task of isolating failing components.


Author[s]: Harold Abelson, Andrea diSessa and Lee Rudolph

Velocity Space and the Geometry of Planetary Orbits

December 1974



We develop a theory of orbits for the inverse-square central force law which differs considerably from the usual deductive approach. In particular, we make no explicit use of calculus. By beginning with qualitative aspects of solutions, we are led to a number of geometrically realizable physical invariants of the orbits. Consequently most of our theorems rely only on simple geometrical relationships. Despite its simplicity, our planetary geometry is powerful enough to treat a wide range of perturbations with relative ease. Furthermore, without introducing any more machinery, we obtain full quantitative results. The paper concludes with suggestions for further research into the geometry of planetary orbits.
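The central invariant exploited by a velocity-space ("hodograph") treatment can be stated compactly: for the inverse-square law, the velocity vector traces a circle. The standard derivation below does use calculus (which the memo deliberately avoids) and is included only as a cross-check on that fact.

```latex
With conserved angular momentum $L = r^2\dot\theta$ and acceleration
$\dot{\mathbf v} = -\frac{GM}{r^2}\,\hat{\mathbf r}$, we have
\[
  \frac{d\mathbf v}{d\theta}
  = \frac{\dot{\mathbf v}}{\dot\theta}
  = -\frac{GM}{L}\,\hat{\mathbf r},
\]
a vector of constant magnitude $GM/L$ that rotates with $\hat{\mathbf r}$.
Hence $\mathbf v(\theta)$ traces a circle of radius $GM/L$ in velocity space.
```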


Author[s]: Shimon Ullman

Model-Driven Geometry Theorem Prover

May 1975



This paper describes a new Geometry Theorem Prover, which was implemented to illuminate some issues related to the use of models in theorem proving. The paper is divided into three parts: Part 1 describes G.T.P. and presents the ideas embedded in it. It concentrates on the forward search method, and gives two examples of proofs produced that way. Part 2 describes the backward search mechanism and presents proofs to a sequence of successively harder problems. The last section of the work addresses the notion of similarity in a problem, defines a notion of semantic symmetry, and compares it to Gelernter’s concept of syntactic symmetry.


Author[s]: Benjamin J. Kuipers

A Frame for Frames: Representing Knowledge for Recognition

March 1975



This paper presents a version of frames suitable for representing knowledge for a class of recognition problems. An initial section gives an intuitive model of frames, and illustrates a number of desirable features of such a representation. A more technical example describes a small recognition program for the Blocks World which implements some of these features. The final section discusses the more general significance of the representation and the recognition process used in the example.


Author[s]: Berthold K. P. Horn

Orienting Silicon Integrated Circuit Chips for Lead Bonding

January 1975



Will computers that see and understand what they see revolutionize industry by automating the part orientation and part inspection processes? There are two obstacles: the expense of computing and our feeble understanding of images. We believe these obstacles are fast ending. To illustrate what can be done we describe a working program that visually determines the position and orientation of silicon chips used in integrated circuits.


Author[s]: David Marr

On the Purpose of Low-level Vision

December 1974



This article advances the thesis that the purpose of low-level vision is to encode symbolically all of the useful information contained in an intensity array, using a vocabulary of very low-level symbols; subsequent processes should have access only to this symbolic description. The reason is one of computational expediency: it allows the low-level processes to run almost autonomously, and it greatly simplifies the application of criteria to an image, whose representation in terms of conditions on the initial intensities, or on simple measurements made from them, is very cumbersome. The implications of this thesis for physiological and for computational approaches to vision are discussed. A list is given of several computational problems in low-level vision; some of these are dealt with in the accompanying articles.


Author[s]: David Marr

The Low-level Symbolic Representation of Intensity Changes in an Image

December 1974



A family of symbols is defined by which much of the useful information in an image may be represented, and its choice is justified. The family includes symbols for the various commonly occurring intensity profiles that are associated with the edges of objects, and symbols for the gradual luminance changes that provide clues about a surface's shape. It is shown that these descriptors may readily be computed from measurements similar to those made by simple cells in the visual cortex of the cat. The methods that are described have been implemented, and examples are shown of their application to natural images.


Author[s]: David Marr

The Recognition of Sharp, Closely Spaced Edges

December 1974



The recognition of sharp edges from edge- and bar-mask convolutions with an image is studied for the special case where the separation of the edges is of the order of the masks' panel-widths. De-smearing techniques are employed to separate the items in the image. Attention is also given to parsing de-smeared mask convolutions into edges and bars; to detecting edge and bar terminations; and to the detection of small blobs.


Author[s]: David Marr

A Note on the Computation of Binocular Disparity in a Symbolic, Low-Level Visual Processor

December 1974



The goals of the computation that extracts disparity from pairs of pictures of a scene are defined, and the constraints imposed upon that computation by the three-dimensional structure of the world are determined. Expressing the computation as a grey-level correlation is shown to be inadequate. A precise expression of the goals of the computation is possible in a low-level symbolic visual processor: the constraints translate in this environment to pre-requisites on the binding of disparity values to low-level symbols. The outline of a method based on this is given.


Author[s]: Gerald Jay Sussman and Richard Matthew Stallman

Heuristic Techniques in Computer Aided Circuit Analysis

March 1975



We present EL, a new kind of circuit analysis program. Whereas other circuit analysis systems rely on classical, formal analysis techniques, EL employs heuristic “inspection” methods to solve rather complex DC bias circuits. These techniques also give EL the ability to explain any result in terms of its own qualitative reasoning processes. EL’s reasoning is based on the concept of a “local one-step deduction” augmented by various “teleological” principles and by the concept of a “macro-element”. We present several annotated examples of EL in operation and an explanation of how it works. We also show how EL can be extended in several directions, including sinusoidal steady state analysis. Finally, we touch on possible implications for engineering education. We feel that EL is significant not only as a novel approach to circuit analysis but also as an application of Artificial Intelligence techniques to a new and interesting domain.
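The flavor of a "local one-step deduction" can be sketched as constraint propagation on a two-resistor series divider. The rules and circuit below are invented for illustration in Python and are not EL's actual rule set or notation; each rule fires when its local prerequisites are known, and firing one rule enables the next.

```python
# Hypothetical sketch of inspection-style analysis: propagate Ohm's law / KCL
# facts one local step at a time until nothing new can be deduced.

known = {"Vtop": 10.0, "Vbot": 0.0, "R1": 1000.0, "R2": 1000.0}

def deduce_one_step(k):
    """Try each local rule once; return True if a new fact was deduced."""
    # Series circuit: the same current flows through R1 and R2 (KCL).
    if "I" not in k and {"Vtop", "Vbot", "R1", "R2"} <= k.keys():
        k["I"] = (k["Vtop"] - k["Vbot"]) / (k["R1"] + k["R2"])
        return True
    # Ohm's law across R1 gives the midpoint voltage.
    if "Vmid" not in k and {"I", "Vtop", "R1"} <= k.keys():
        k["Vmid"] = k["Vtop"] - k["I"] * k["R1"]
        return True
    return False

while deduce_one_step(known):
    pass
print(known["I"], known["Vmid"])   # current, then midpoint voltage
```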


Author[s]: Tomas Lozano-Perez

Parsing Intensity Profiles

May 1975



Much low-level vision work in AI deals with one-dimensional intensity profiles. This paper describes PROPAR, a system that provides a convenient and uniform mechanism for recognizing such profiles. PROPAR is a modified Augmented Transition Network parser. The grammar used by the parser serves to describe and label the set of acceptable profiles. The inputs to the parser are descriptions of segments of a piecewise linear approximation to an intensity profile. A sample grammar is presented and the results discussed.


Author[s]: Howard Austin

A Computational View of the Skill of Juggling

December 1974



This research has as its basic premise the belief that physical and mental skills are highly similar, enough so in fact that computation paradigms such as the ones used in Artificial Intelligence research about predominantly mental skills can be usefully extended to include physical skills. This thesis is pursued experimentally by categorization of “juggling bugs” via detailed video observations. A descriptive language for juggling movements is developed and a taxonomy of bugs is presented. The remainder of the paper is concerned with an empirical determination of the characteristics of an ultimate theory of juggling movements. The data presented is relevant to the computational issues of control structure, naming, addressing and subprocedurization.


Author[s]: Scott E. Fahlman

Thesis Progress Report: A System for Representing and Using Real-World Knowledge

May 1975



This paper describes progress to date in the development of a system for representing various forms of real-world knowledge. The knowledge is stored in the form of a net of simple parallel processing elements, which allow certain types of deduction and set-intersection to be performed very quickly and easily. It is claimed that this approach offers definite advantages for recognition and many other data-accessing tasks. Suggestions are included for the application of this system as a tool in vision, natural-language processing, speech recognition, and other problem domains.
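The set-intersection style of deduction described here can be simulated serially. The tiny semantic net below is invented for illustration in Python; the real proposal performs the propagation in parallel hardware elements.

```python
# Hypothetical sketch: marker propagation through a small IS-A net, with set
# intersection answering "what do two concepts have in common?"

links = {"Clyde": ["elephant"], "elephant": ["gray-thing", "animal"],
         "Bessie": ["cow"], "cow": ["animal"]}

def mark(start):
    """Propagate a marker outward from start, collecting every node reached."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node not in seen:
            seen.add(node)
            frontier.extend(links.get(node, []))
    return seen

# Intersecting two marker sets finds the shared superordinate concept.
print(mark("Clyde") & mark("Bessie"))   # {'animal'}
```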


Author[s]: Erik Sandewall

Ideas About Management of LISP Data Bases

May 1975



The paper advocates the need for systems which support maintenance of LISP-type data bases, and describes an experimental system of this kind, called DABA. In this system, a description of the data base’s structure is kept in the data base itself. A number of utility programs use the description for operations on the data base. The description must minimally include syntactic information reminiscent of data structure declarations in more conventional programming languages, and can be extended by the user. Two reasons for such systems are seen: (1) As A.I. programs develop from toy domains using toy data bases to more realistic exercises, the management of the knowledge base becomes non-trivial and requires program support. (2) A powerful way to organize LISP programs is to make them data-driven, whereby pieces of program are distributed throughout a data base. A data base management system facilitates the use of this programming style. The paper describes and discusses the basic ideas in the DABA system as well as the technique of data-driven programs.
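The data-driven style mentioned in reason (2) can be sketched in Python (the paper's setting is LISP, and the database layout below is invented): pieces of program are stored in the data base itself, keyed by the kind of item they handle, and the driver fetches the procedure from the data rather than branching on type.

```python
# Hypothetical sketch of data-driven programming: procedures live in the
# data base alongside the data they operate on.

database = {
    "circle":    {"radius": 2.0,
                  "area-fn": lambda d: 3.14159 * d["radius"] ** 2},
    "rectangle": {"w": 3.0, "h": 4.0,
                  "area-fn": lambda d: d["w"] * d["h"]},
}

def area(kind):
    entry = database[kind]
    return entry["area-fn"](entry)   # the procedure comes from the data base

print(area("rectangle"))             # 12.0
```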


Author[s]: Shimon Ullman

On Visual Detection of Light Sources

May 1975



The paper addresses the following problem: Given an array of light intensities obtained from some scene, find the light sources in the original scene. The following factors are discussed from the point of view of their relevance to light source detection: the highest intensity in the scene, absolute intensity value, local and global contrast, comparison with the average intensity, and lightness computation. They are shown to be insufficient for explaining humans’ ability to identify light sources in their visual field. Finally, a method for accomplishing the source detection task in the Mondrian world is presented.


Author[s]: D. Marr

Analyzing Natural Images: A Computational Theory of Texture Vision

June 1975



A theory of early and intermediate visual information processing is given, which extends to about the level of figure-ground separation. Its core is a computational theory of texture vision. Evidence obtained from perceptual and from computational experiments is adduced in its support. A consequence of the theory is that high-level knowledge about the world influences visual processing later and in a different way from that currently practiced in machine vision.


Author[s]: Berthold K.P. Horn

Image Intensity Understanding

August 1975



Image intensities have been processed traditionally without much regard to how they arise. Typically they are used only to segment an image into regions or to find edge-fragments. Image intensities do carry a great deal of useful information about three-dimensional aspects of objects, and some initial attempts are made here to exploit this. An understanding of how images are formed and what determines the amount of light reflected from a point on an object to the viewer is vital to such a development. The gradient-space, popularized by Huffman and Mackworth, is a helpful tool in this regard.


Author[s]: Howard Austin

Teaching Teachers LOGO: The Lesley Experiments

April 1976



This research is concerned with the question of whether or not teachers who lack specialized backgrounds can adapt to and become proficient in the technically complex, philosophically sophisticated LOGO learning environment. Excellent results were obtained and are illustrated through a series of examples of student work. The report then gives some brief observations about the thought styles observed and concludes with suggestions for further work.


Author[s]: Ira Goldstein and Seymour Papert

Artificial Intelligence, Language and the Study of Knowledge

July 1975 (Revised March 1976)



This paper studies the relationship of Artificial Intelligence to the study of language and the representation of the underlying knowledge which supports the comprehension process. It develops the view that intelligence is based on the ability to use large amounts of diverse kinds of knowledge in procedural ways, rather than on the possession of a few general and uniform principles. The paper also provides a unifying thread to a variety of recent approaches to natural language comprehension. We conclude with a brief discussion of how Artificial Intelligence may have a radical impact on education if the principles which it utilizes to explore the representation and use of knowledge are made available to the student to use in his own learning experiences. This paper is a revised version of an earlier document written with Marvin Minsky. Many of the ideas in this paper owe much to Minsky’s thoughtful critique; the authors, however, take responsibility for the organization and wording of this document.


Author[s]: Harvey A. Cohen

The Art of Snaring Dragons

November 1974 (Revised May 1975)



DRAGONs are formidable problems in elementary mechanics not amenable to solution by naïve formula cranking. What is the intellectual weaponry one needs to snare a Dragon? To snare a Dragon one brings to mind an heuristic frame – a specifically structured association of problem solving ideas. Data on the anatomy of heuristic frames – just how and what ideas are linked together – has been obtained from the protocols of many attacks on Dragons by students and physicists. In this paper various heuristic frames are delineated by detailing how they motivate attacks on two particular Dragons, Milko and Jugglo, from the writer’s compilation. This model of the evolution of problem solving skills has also been applied to the interpretation of the intellectual growth of children, and in an Appendix we use it to give a cogent interpretation for the protocols of Piagetian “Conservation” experiments. The model provides a sorely needed theoretical framework to discuss teaching stratagems calculated to promote problem solving skills.


Author[s]: Drew V. McDermott

Very Large Planner-Type Data Bases

September 1975



This paper describes the implementation of a typical data-base manager for an A.I. language like Planner, Conniver, or QA4, and some proposed extensions for applications involving greater quantities of data than usual. The extensions are concerned with data bases involving several active and potentially active sub-data-bases, or “contexts”. The major mechanisms discussed are the use of contexts as packets of data with free variables, and indexing data according to the contexts they appear in. The paper also defends the Planner approach to data representations against some more recent proposals.
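A toy sketch of contexts as layered sub-data-bases, in Python. The class design is invented for illustration and is far simpler than the indexing mechanisms the paper discusses: each context holds its own assertions, and a lookup searches the chain from the active context outward.

```python
# Hypothetical sketch of a Planner/Conniver-style context-layered data base.

class Context:
    def __init__(self, parent=None):
        self.parent = parent
        self.facts = set()

    def assert_(self, fact):
        self.facts.add(fact)         # additions are local to this context

    def present(self, fact):
        ctx = self                   # search the chain of enclosing contexts
        while ctx is not None:
            if fact in ctx.facts:
                return True
            ctx = ctx.parent
        return False

world = Context()
world.assert_(("on", "a", "b"))
hypo = Context(parent=world)         # a sub-data-base for hypothetical reasoning
hypo.assert_(("on", "b", "c"))

# Facts assumed in the sub-context are invisible to the outer data base.
print(hypo.present(("on", "a", "b")), world.present(("on", "b", "c")))
```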


Author[s]: D. Marr

Early Processing of Visual Information

December 1975



The article describes a symbolic approach to visual information processing, and sets out four principles that appear to govern the design of complex symbolic information processing systems. A computational theory of early visual information processing is presented, which extends to about the level of figure-ground separation. It includes a process-oriented theory of texture vision. Most of the theory has been implemented, and examples are shown of the analysis of several natural images. This replaces Memos 324 and 334.


Author[s]: D. Marr and H.K. Nishihara

Spatial Disposition of Axes in a Generalized Cylinder Representation of Objects

December 1975



It is proposed that the 3-D representation of an object is based primarily on a stick-figure configuration, where each stick represents one or more axes in the object’s generalized cylinder representation. The loosely hierarchical description of a stick figure is interpreted by a special-purpose processor, able to maintain two vectors and the gravitational vertical relative to a Cartesian space-frame. It delivers information about the appearance of these vectors, which helps the system to rotate its model into the correct 3-D orientation relative to the viewer during recognition.


Author[s]: Jeanne Bamberger

The Development of Musical Intelligence I: Strategies for Representing Simple Rhythms

November 1975



This paper is the first in a series of monographs which will describe various aspects of the development of musical intelligence.


Author[s]: Cynthia J. Solomon

Leading a Child to a Computer Culture

December 1975



“LOGO” is sometimes used as the name of a programming language. It is also used as the name of…what shall I call it?… an environment, a culture, a way of thinking about computers and about learning and about putting the two together. I shall try to convey to you how I bring a child into this environment.


Author[s]: Murray Elias Danofsky

How Near is Near?

February 1976



This paper presents a system for understanding the concept of near and far, weighing such factors as purpose of the judgement, dimensions of the objects, absolute size of the distance, and size of the distance relative to other objects, ranges, and standards. A further section discusses the meaning of phrases such as very near, much nearer than, and as near as. Although we will speak of near as a judgement about physical distance, most of the ideas developed will be applicable to any continuous measurable parameter, such as size or time. An adaptation for rows (discrete spaces) is made as well.


Author[s]: Eugene C. Freuder

Computer System for Visual Recognition Using Active Knowledge

June 1976



A system for visual recognition is described, with implications for the general problem of representation of knowledge to assist control. The immediate objective is a computer system that will recognize objects in a visual scene, specifically hammers. The computer receives an array of light intensities from a device like a television camera. It is to locate and identify the hammer if one is present. The computer must produce from the numerical “sensory data” a symbolic description that constitutes its perception of the scene. Of primary concern is the control of the recognition process. Control decisions should be guided by the partial results obtained on the scene. If a hammer handle is observed this should suggest that the handle is part of a hammer and advise where to look for the hammer head. The particular knowledge that a handle has been found combines with general knowledge about hammers to influence the recognition process. This use of knowledge to direct control is denoted here by the term “active knowledge”. A descriptive formalism is presented for visual knowledge which identifies the relationships relevant to the active use of the knowledge. A control structure is provided which can apply knowledge organized in this fashion actively to the processing of a given scene.


Author[s]: John M. Hollerbach

Hierarchical Shape Description of Objects by Selection and Modification of Prototypes

November 1975



An approach towards shape description, based on prototype modification and generalized cylinders, has been developed and applied to the object domains of pottery and polyhedra: (1) A program describes and identifies pottery from vase outlines entered as lists of points. The descriptions have been modeled after descriptions by archeologists, with the result that identifications made by the program are remarkably consistent with those of the archeologists. It has been possible to quantify their shape descriptors, which are everyday terms in our language applied to many sorts of objects besides pottery, so that the resulting descriptions seem very natural. (2) New parsing strategies for polyhedra overcome some limitations of previous work. A special feature is that the processes of parsing and identification are carried out simultaneously.


Author[s]: Robert Carter Moore

Reasoning from Incomplete Knowledge in a Procedural Deduction System

December 1975



One very useful idea in AI research has been the notion of an explicit model of a problem situation. Procedural deduction languages, such as PLANNER, have been valuable tools for building these models. But PLANNER and its relatives are very limited in their ability to describe situations which are only partially specified. This thesis explores methods of increasing the ability of procedural deduction systems to deal with incomplete knowledge. The thesis examines in detail problems involving negation, implication, disjunction, quantification, and equality. Control structure issues and the problem of modelling change under incomplete knowledge are also considered. Extensive comparisons are also made with systems for mechanical theorem proving.


Author[s]: Andy diSessa

Turtle Escapes the Plane: Some Advanced Turtle Geometry

December 1975



Since the LOGO Turtle took his first step he has been mathematically confined to running around on flat surfaces. Fortunately the physically intuitive, procedurally oriented nature of the Turtle which makes him a powerful explorer in the plane is equally, if not more, apparent when he is liberated to tread curved surfaces. This paper is aimed roughly at the High School level. Yet because it is built on intuition and physical action rather than formalism, it can reach such “graduate school” mathematical ideas as geodesics, Gaussian Curvature, and topological invariants as expressed in the Gauss-Bonnet Theorem.


Author[s]: Gerald J. Sussman and Guy L. Steele, Jr

An Interpreter for Extended Lambda Calculus

December 1975



Inspired by ACTORS [Greif and Hewitt] [Smith and Hewitt], we have implemented an interpreter for a LISP-like language, SCHEME, based on the lambda calculus [Church], but extended for side effects, multiprocessing, and process synchronization. The purpose of this implementation is tutorial. We wish to: (1) alleviate the confusion caused by Micro-PLANNER, CONNIVER, etc. by clarifying the embedding of non-recursive control structures in a recursive host language like LISP. (2) explain how to use these control structures, independent of such issues as pattern matching and data base manipulation. (3) have a simple concrete experimental domain for certain issues of programming semantics and style.
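The flavor of the embedding the authors describe — iterative control expressed inside a recursive host language — can be sketched in Python (a hypothetical illustration in continuation-passing style, not the SCHEME implementation itself):

```python
# A loop embedded in a recursive host language: each "iteration" is a
# tail call in continuation-passing style, carrying the updated state.
# No host-language while-statement or mutable loop variable is needed.
def count_down(n, k):
    """Call continuation k with the list [n, n-1, ..., 1]."""
    if n == 0:
        return k([])
    return count_down(n - 1, lambda rest: k([n] + rest))

print(count_down(5, lambda xs: xs))  # [5, 4, 3, 2, 1]
```

The explicit continuation `k` plays the role that control structure plays in an iterative language, which is the kind of clarification the interpreter was built to provide.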


Author[s]: Marc Raibert

A State Space Model for Sensorimotor Control and Learning

January 1976



This is the first of a two-part presentation which deals with certain computer controlled manipulator problems. This first part discusses a model which is designed to address problems of motor control, motor learning, adaptation, and sensorimotor integration. In this section the problems are outlined and a solution is given which makes use of a state-space memory and a piecewise linearization of the equations of motion. A forthcoming companion article will present the results of tests performed on an implementation of the model.


Author[s]: Johan De Kleer

Qualitative and Quantitative Knowledge in Classical Mechanics

December 1975



This thesis investigates what knowledge is necessary to solve mechanics problems. A program NEWTON is described which understands and solves problems in a mechanics mini-world of objects moving on surfaces. Facts and equations such as those given in a mechanics text need to be represented. However, this is far from sufficient to solve problems. Human problem solvers rely on "common sense" and "qualitative" knowledge which the physics text tacitly assumes to be present. A mechanics problem solver must embody such knowledge. Quantitative knowledge given by equations and more qualitative common sense knowledge are the major research points exposited in this thesis. The major issue in solving problems is planning. Planning involves tentatively outlining a possible path to the solution without actually solving the problem. Such a plan needs to be constructed and debugged in the process of solving the problem. Envisionment, or qualitative simulation of the event, plays a central role in this planning process.


Author[s]: Guy Lewis Steele, Jr. and Gerald Jay Sussman

Lambda: The Ultimate Imperative

March 1976



We demonstrate how to model the following common programming constructs in terms of an applicative order language similar to LISP: Simple Recursion, Iteration, Compound Statements and Expressions, GO TO and Assignment, Continuation-Passing, Escape Expressions, Fluid Variables, Call by Name, Call by Need, and Call by Reference. The models require only (possibly self-referent) lambda application, conditionals, and (rarely) assignment. No complex data structures such as stacks are used. The models are transparent, involving only local syntactic transformations. This paper is partly tutorial in intent, gathering all the models together for purposes of context.
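One of these models — iteration expressed as nothing but self-referent lambda application and conditionals — can be sketched in Python (an illustrative analogue, not the paper's own notation):

```python
# A DO-style loop modeled purely as function application: the "loop" is
# a function that either returns its accumulated state or tail-calls
# itself with updated state. No assignment occurs in the loop body.
def loop(i, acc):
    # conditional + (self-referent) application, nothing else
    return acc if i == 0 else loop(i - 1, acc * i)

def factorial(n):
    return loop(n, 1)

print(factorial(6))  # 720
```

The iteration variable and accumulator live as function parameters rather than assigned variables, which is exactly the local syntactic transformation the models rely on.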


Author[s]: Charles Rich and Howard E. Shrobe

Initial Report on a LISP Programmer's Apprentice

December 1976



This is an initial report on the design and partial implementation of a LISP programmer's apprentice, an interactive programming system to be used by an expert programmer in the design, coding, and maintenance of large, complex programs.


Author[s]: David Marr

Artificial Intelligence -- A Personal View

March 1976



The goal of A.I. is to identify and solve useful information processing problems. In so doing, two types of theory arise. Here, they are labelled Types 1 and 2, and their characteristics are outlined. This discussion creates a more than usually rigorous perspective on the subject, from which past work and future prospects are briefly reviewed.


Author[s]: H. Abelson, J. Bamberger, I. Goldstein, and S. Papert

Logo Progress Report 1973-1975

September 1975 (Revised March 1976)



Over the past two years, the Logo Project has grown along many dimensions. This document provides an overview in outline form of the main activities and accomplishments of the past as well as the major goals guiding our current research. Research on the design of learning environments, the corresponding development of a theory of learning and the exploration of teaching activities in these environments is presented.


Author[s]: D. Marr and T. Poggio

From Understanding Computation to Understanding Neural Circuitry

May 1976



The CNS needs to be understood at four nearly independent levels of description: (1) that at which the nature of computation is expressed; (2) that at which the algorithms that implement a computation are characterized; (3) that at which an algorithm is committed to particular mechanisms; and (4) that at which the mechanisms are realized in hardware. In general, the nature of a computation is determined by the problem to be solved, the mechanisms that are used depend upon the available hardware, and the particular algorithms chosen depend on the problem and on the available mechanisms. Examples are given of theories at each level.


Author[s]: Joseph D. Cohen

The Text-Justifier TJ6

May 1976



This memo, intended as both a reference and user’s manual, describes the text-justifying program TJ6, which compiles a neat output document from a sloppy input manuscript. TJ6 can justify and fill text; automatically number pages and figures; control page format and indentation; underline, superscript, and subscript; print a table of contents; etc.


Author[s]: Benjamin Kuipers

Spatial Knowledge

June 1976



This paper introduces a model of spatial cognition to describe the states of partial knowledge that people have about the spatial structure of a large-scale environment. Spatial knowledge has several different representations, each of which captures one aspect of the geography. With knowledge stored in multiple representations, we must examine the procedures for assimilating new information, for solving problems, and for communicating information between representations. The model centers on an abstract machine called the TOUR machine, which executes a description of the route to drive the “You Are Here” pointer (a small working memory) through a map that describes the geography. Representations for local and global spatial knowledge are discussed in detail. The model is compared with a survey of the psychological literature. Finally, the directions of necessary and desirable future research are outlined.


Author[s]: Radia Perlman

Using Computer Technology to Provide a Creative Learning Environment for Preschool Children

May 1976



TORTIS is a system of special terminals together with software which is designed to provide programming capability and be accessible for use by very young children. The system is designed to add capabilities in small increments so that the child is never overwhelmed by too much to learn at one time, and maintains a feeling of control over the environment. This system facilitates learning of various concepts such as relative size of numbers, frames of reference, procedures, conditionals, and recursion, but more importantly it teaches good problem solving techniques and a healthy approach to learning.


Author[s]: Henry Lieberman

The TV Turtle: A Logo Graphics System for Raster Displays

June 1976



Until recently, most computer graphics systems have been oriented toward the display of line drawings, continually refreshing the screen from a display list of vectors. Developments such as plasma panel displays and rapidly declining memory prices have now made feasible raster graphics systems, which instead associate some memory with each point on the screen, and display points according to the contents of the memory. This paper discusses the advantages and limitations of such systems. Raster systems permit operations which are not feasible on vector displays, such as reading directly from the screen as well as writing to it, and manipulating two dimensional areas as well as vectors. Conceptual differences between programming for raster and vector systems are illustrated with a description of the author’s TV Turtle, a graphics system for raster scan video display terminals. This system is embedded in Logo, a Lisp-like interactive programming language designed for use by kids, and is based on Logo’s turtle geometry approach to graphics. Logo provides powerful ideas for using graphics which are easy for kids to learn, yet generalize naturally when advanced capabilities such as primitives for animation and color are added to the system.


Author[s]: Allen Brown

Qualitative Knowledge, Causal Reasoning and the Localization of Failures

November 1976



This report investigates some techniques appropriate to representing the knowledge necessary for understanding a class of electronic machines -- radio receivers. A computational performance model - WATSON - is presented. WATSON's task is to isolate failures in radio receivers whose principles of operation have been appropriately described in his knowledge base. The thesis of the report is that hierarchically organized representational structures are essential to the understanding of complex mechanisms. Such structures lead not only to descriptions of machine operation at many levels of detail, but also offer a powerful means of organizing "specialist" knowledge for the repair of machines when they are broken.


Author[s]: Kent A. Stevens

Occlusion Clues and Subjective Contours

June 1976



The paper describes some experiments with a visual agnosia patient who has lost the ability to perceive subjective contours. The patient’s interpretations of simple examples of occlusion indicate that he fails to notice monocular occlusion clues, as well. The findings support the hypothesis that subjective contours are constructions that account for occluded figures, in the absence of objective edges. The patient’s ability to perceive contours by stereopsis demonstrates that stereopsis independently gives rise to disparity contours. Furthermore, the overall results strongly suggest that the detection of occlusion is modularized, and that the module for detecting


Author[s]: D. Marr and T. Poggio

Cooperative Computation of Stereo Disparity

June 1976



The extraction of stereo disparity information from two images depends upon establishing a correspondence between them. This article analyzes the nature of the correspondence computation, and derives a cooperative algorithm that implements it. We show that this algorithm successfully extracts information from random-dot stereograms, and its implications for the psychophysics and neurophysiology of the visual system are briefly discussed.


Author[s]: Berthold K.P. Horn and Patrick H. Winston

A Laboratory Environment for Applications Oriented Vision and Manipulation

May 1976



This report is a brief summary guide to work done in the M.I.T. Artificial Intelligence Laboratory directed at the production of tools for productivity technology research. For detailed coverage of the work, readers should use this summary as an introduction to the reports and papers listed in the bibliography.


Author[s]: Patrick H. Winston

Proposal to the Advanced Research Projects Agency

May 1976



This is the substance of a proposal submitted in June, 1975, for research in the areas of large data bases and intelligent terminals, applications of machine vision and manipulation, basic studies in Artificial Intelligence, and LISP machine development.


Author[s]: Shimon Ullman

Filling in the Gaps: The Shape of Subjective Contours and a Model for Their Generation

October 1976



The properties of isotropy, smoothness, minimum curvature and locality suggest the shape of filled-in contours between two boundary edges. The contours are composed of the arcs of two circles tangent to the given edges, meeting smoothly, and minimizing the total curvature. It is shown that shapes meeting all the above requirements can be generated by a network which performs simple, local computations. It is suggested that the filling-in process plays an important role in the early processing of visual information.


Author[s]: Richard C. Waters

A System for Understanding Mathematical FORTRAN Programs

August 1976



This paper proposes a system which, when implemented, will be able to understand mathematical FORTRAN programs such as those in the IBM Scientific Subroutine Package. The system takes, as input, a program and annotation of the program. In order to understand the program, the system develops a “plan” for it. The “plan” specifies the purpose of each feature of the program, and how these features cooperate in order to create the behavior exhibited by the program. The system can use its understanding of the program to answer questions about it including questions about the ramifications of a proposed modification. It is also able to aid in debugging the program by detecting errors in it, and by locating the features of the program which are responsible for an error. The system should be of significant assistance to a person who is writing a program.


Author[s]: David Taenzer

Physiology and Psychology of Color Vision -- A Review

August 1976



This paper is a review of the anatomy, physiology, and psychology of human color vision.


Author[s]: Eugene C. Freuder

Synthesizing Constraint Expressions

July 1976



An algorithm is presented for determining the values which simultaneously satisfy a set of relations, or constraints, involving different subsets of n variables. The relations are represented in a series of constraint networks, which ultimately contain a node for every subset of the n variables. Constraints may be propagated through such networks in (potentially) parallel fashion to determine the values which simultaneously satisfy all the constraints. The iterated constraint propagation serves to mitigate combinatorial explosion. Applications in scene analysis, graph theory, and backtrack search are provided.
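The flavor of constraint propagation — pruning values that cannot participate in any simultaneous solution — can be sketched with a simple arc-consistency pass (a minimal illustration over binary constraints only, not Freuder's full network-synthesis algorithm):

```python
# Prune each variable's domain so every remaining value has a supporting
# value in every constrained neighbour's domain. Iterate to a fixed
# point; surviving values are candidates for a global solution.
def revise(domains, constraints):
    changed = True
    while changed:
        changed = False
        for (x, y), ok in constraints.items():
            for vx in list(domains[x]):
                if not any(ok(vx, vy) for vy in domains[y]):
                    domains[x].discard(vx)
                    changed = True
    return domains

# x < y < z over {1,2,3}: propagation alone pins down the solution.
doms = {v: {1, 2, 3} for v in "xyz"}
cons = {("x", "y"): lambda a, b: a < b, ("y", "x"): lambda a, b: b < a,
        ("y", "z"): lambda a, b: a < b, ("z", "y"): lambda a, b: b < a}
print(revise(doms, cons))  # {'x': {1}, 'y': {2}, 'z': {3}}
```

Here propagation happens to leave singleton domains; in general it only narrows the search, which is the sense in which it mitigates combinatorial explosion.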


Author[s]: Seymour A. Papert

Proposal to NSF: An Evaluative Study of Modern Technology in Education

June 1976



This proposal to the NSF describes a new phase of research planned in LOGO. Previous phases have concentrated on developing a conceptual superstructure (theories and teaching methods) and a material infrastructure (hardware and software) for a new style of using computers in education. We now want to test, to prove and to disseminate the results of our work, which will, of course, continue along the lines of the early phases. Part 1 is an overview of where we are and what we have to do next in the historical framework of the uses of computers for education. Parts 2 and 3 focus more on the specific content of the work planned for the next three years (1976-79).


Author[s]: D. Marr

Analysis of Occluding Contour

October 1976



Almost nothing can be deduced about a general 3-D surface given only its occluding contours in an image, yet contour information is easily and effectively used by us to infer the shape of a surface. Therefore, implicit in the perceptual analysis of occluding contour must lie various assumptions about the viewed surfaces. The assumptions that seem most natural are (a) that the distinction between convex and concave segments reflects real properties of the viewed surface; and (b) that contiguous portions of contour arise from contiguous parts of the viewed surface – i.e. there are no invisible obscuring edges. It is proved that, for smooth surfaces, these assumptions are essentially equivalent to assuming that the viewed surface is a generalized cone. Methods are defined for finding the axis of such a cone, and for segmenting a surface constructed of several cones into its components, whose axes can then be found separately. These methods, together with the algorithms for implementing them devised by Vatan & Marr (1977), provide one link between an uninterpreted figure extracted from an image, and the 3-D representation theory of Marr and Nishihara (1977).


Author[s]: Seymour A. Papert

Some Poetic and Social Criteria for Education Design

June 1976



Ten years is in some ways a challenging and in some ways a very awkward period for predicting the impact of computers in education. If you asked me whether the practice of education will have undergone a fundamental change through the impact of computers in either five years or in twenty-five years, I could answer with complete confidence “NO” to the first question and “YES” to the second. But what happens in the ten years depends very sensitively on how hard we try; on when the people with the requisite financial, intellectual and moral resources recognize the opportunity and the urgency of action. If we act smartly it is still possible that by 1985 the existence of model schools and learning centers will have changed the ball-park in which society sets the sights of its educational ambitions.


Author[s]: Glen Speckert

A Computerized Look at Cat Locomotion or One Way to Scan a Cat

July 1976



This paper describes a three phase project concerning the watching, analyzing, and describing of motions of a cat in various gaits. All data is based on two 16mm films of an actual cat moving on a treadmill. In phase I, the low level issues of tracking key points on the cat from frame to frame are discussed. Phase II deals with building and using a graphics tool to analyze the data of phase I. Phase III is a high level discussion of cat locomotion based on the trajectories and movements explored by phase II.


Author[s]: Cynthia J. Solomon and Seymour Papert

A Case Study of a Young Child Doing Turtle Graphics in LOGO

July 1976



This paper explores some important issues with regard to using computers in education. It probes into the question of what programming ideas and projects will engage young children. In particular, a seven year old child’s involvement in turtle graphics is presented as a case study.


Author[s]: Harold Abelson

Computational Geometry of Linear Threshold Functions

July 1976



Linear threshold machines are defined to be those whose computations are based on the outputs of a set of linear threshold decision elements. The number of such elements is called the rank of the machine. An analysis of the computational geometry of finite-rank linear threshold machines, analogous to the analysis of finite-order perceptrons given by Minsky and Papert, reveals that the use of such machines as “general purpose pattern recognition systems” is severely limited. For example, these machines cannot recognize any topological invariant, nor can they recognize non-trivial figures “in context”.
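A single linear threshold decision element — the building block whose count defines a machine's rank — can be written down directly (an illustrative sketch with example weights, not drawn from the paper):

```python
# A linear threshold element fires iff a weighted sum of its inputs
# exceeds a threshold; a rank-k machine computes on the outputs of k
# such elements.
def threshold_unit(weights, theta):
    return lambda xs: int(sum(w * x for w, x in zip(weights, xs)) > theta)

# AND of two binary inputs as a single threshold element (example weights).
AND = threshold_unit([1, 1], 1.5)
print([AND((a, b)) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
```

The paper's limitative results concern what finitely many such elements can compute jointly, not any single element in isolation.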


Author[s]: D. Marr and H.K. Nishihara

Representation and Recognition of the Spatial Organization of Three-Dimensional Shapes

August 1976



A method is given for representing 3-D shapes. It is based on a hierarchy of stick figures (called 3-D models), where each stick corresponds to an axis in the shape’s generalized cone representation. Although the representation of a complete shape may contain many stick figures at different levels of detail, only one stick figure is examined at a time while the representation is being used to interpret an image. By thus balancing scope of description against detail, the complexity of the computations needed to support the representation is minimized. The method requires (a) a database of stored stick figures; (b) a simple device called the image-space processor for moving between object-centered and viewer-centered coordinate frames; and (c) a process for “relaxing” a stored model onto the image during recognition. The relation of the theory to “mental rotation” phenomena is discussed, and some critical experimental predictions are made.


Author[s]: Guy L. Steele Jr.

Arithmetic Shifting Considered Harmful

September 1976



For more than a decade there has been great confusion over the semantics of the standard "arithmetic right shift" instruction. This confusion particularly afflicts authors of computer reference handbooks and of optimizing compilers. The fact that shifting is not always equivalent to division has been rediscovered over and over again over the years, but has never been publicized. This paper quotes a large number of sources to prove the widespread extent of this confusion, and then proceeds to a short discussion of the problem itself and what to do about it.
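The discrepancy the paper documents can be demonstrated directly: an arithmetic right shift rounds toward negative infinity, while the usual divide instruction truncates toward zero, so the two disagree on negative odd operands. A minimal sketch in Python, whose `>>` behaves like a two's-complement arithmetic shift:

```python
# Arithmetic right shift by one halves with rounding toward negative
# infinity; truncating division rounds toward zero. They agree only for
# non-negative operands (and negative even ones).
def shift_halve(a):
    return a >> 1          # arithmetic right shift

def truncating_halve(a):
    return int(a / 2)      # rounds toward zero, as C's '/' does

for a in (7, -7):
    print(a, shift_halve(a), truncating_halve(a))
# 7: both give 3.  -7: the shift gives -4, truncating division gives -3.
```

A compiler that blindly rewrites `a / 2` as `a >> 1` therefore changes the result for negative odd `a`, which is exactly the error the paper catalogs.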


Author[s]: Guy Lewis Steele Jr.

LAMBDA: The Ultimate Declarative

November 1976



In this paper, a sequel to "LAMBDA: The Ultimate Imperative", a new view of LAMBDA as a renaming operator is presented and contrasted with the usual functional view taken by LISP. This view, combined with the view of function invocation as a kind of generalized GOTO, leads to several new insights into the nature of the LISP evaluation mechanism and the symmetry between form and function, evaluation and application, and control and environment. It also complements Hewitt's actors theory nicely, explaining the intent of environment manipulation as cleanly, generally, and intuitively as the actors theory explains control structures. The relationship between functional and continuation-passing styles of programming is also clarified. This view of LAMBDA leads directly to a number of specific techniques for use by an optimizing compiler: (1) temporary locations and user-declared variables may be allocated in a uniform manner; (2) procedurally defined data structures may compile into code as good as would be expected for data defined by the more usual declarative means; (3) lambda-calculus-theoretic models of such constructs as GOTO, DO loops, call-by-name, etc. may be used directly as macros, the expansion of which may then compile into code as good as that produced by compilers which are designed especially to handle GOTO, DO, etc. The necessary characteristics of such a compiler designed according to this philosophy are discussed. Such a compiler is to be built in the near future as a testing ground for these ideas.


Author[s]: Richard M. Stallman and Gerald Jay Sussman

Forward Reasoning and Dependency-Directed Backtracking in a System for Computer-Aided Circuit Analysis

September 1976



We present a rule-based system for computer-aided circuit analysis. The set of rules, called EL, is written in a rule language called ARS. Rules are implemented by ARS as pattern-directed invocation demons monitoring an associative data base. Deductions are performed in an antecedent manner, giving EL’s analysis a catch-as-catch-can flavor suggestive of the behavior of expert circuit analyzers. We call this style of circuit analysis propagation of constraints. The system threads deduced facts with justifications which mention the antecedent facts and the rule used. These justifications may be examined by the user to gain insight into the operation of the set of rules as they apply to a problem. The same justifications are used by the system to determine the currently active data-base context for reasoning in hypothetical situations. They are also used by the system in the analysis of failures to reduce the search space. This leads to effective control of combinatorial search which we call dependency-directed backtracking.


Author[s]: James L. Stansfield, Brian P. Carr and Ira P. Goldstein

Wumpus Advisor 1: A First Implementation Program that Tutors Logical and Probabilistic Reasoning Skills

October 1976



The Wumpus Advisor program offers advice to a player involved in choosing the best move in a game for which competence in dealing with incomplete and uncertain knowledge is required. The design and implementation of the advisor explores a new paradigm in Computer Assisted Instruction, in which the performance of computer-based tutors is greatly improved through the application of Artificial Intelligence techniques. This report describes the design of the Advisor and outlines directions for further work. Our experience with the tutor is informal, and psychological experimentation remains to be done.


Author[s]: Steven T. Rosenberg

Dual Coding and the Representation of Letter Strings

July 1977



Sub-strings derived from four-letter strings (e.g. ABCD) were presented to subjects using a variation on Bransford and Franks’ (1971) paradigm. Each string was in either upper or lower case. Subjects were then tested for recognition of the strings, false recognition of translations of the strings into the other case, and false recognitions of new but legal strings. Subjects accepted previously seen strings most frequently, followed by translations, with new strings accepted least often. This replicates Rosenberg and Simon’s (in press) findings with sentences and pictures that express the same concept. However, in the present experiment the two forms of a string were unbiased with respect to verbal or pictorial encoding. The forms in which a string could appear (upper or lower case) were not confounded with the two types of encoding (verbal and pictorial) hypothesized by a dual coding theory. The results supported the view that the previously reported difference between the original form and a translation is best explained by a model which uses a single representation that preserves some form distinctions.


Author[s]: Mark L. Miller and Ira P. Goldstein

Overview of a Linguistic Theory of Design

February 1977



SPADE is a theory of the design of computer programs in terms of complementary planning and debugging processes. An overview of the authors’ recent research on this theory is provided. SPADE borrows tools from computational linguistics – grammars, augmented transition networks (ATN’s), chart-based parsers – to formalize planning and debugging. The theory has been applied to parsing protocols of programming episodes, constructing a grammar-based editor in which programs are written in a structured fashion, and designing an automatic programming system based on the ATN formalism.


Author[s]: Mark L. Miller and Ira P. Goldstein

Overview of a Linguistic Theory of Design

February 1977



The SPADE theory uses linguistic formalisms to model the program planning and debugging processes. The theory has been applied to constructing a grammar-based editor in which programs are written in a structured fashion, designing an automatic programming system based on Augmented Transition Network, and parsing protocols of programming episodes.


Author[s]: Ira P. Goldstein and Mark L. Miller

AI Based Personal Learning Environments: Directions for Long Term Research

December 1976



The application of artificial intelligence (AI) techniques to the design of personal learning environments is an enterprise of both theoretical and practical interest. In the short term, the process of developing and testing intelligent tutoring programs serves as a new experimental vehicle for exploring alternative cognitive and pedagogical theories. In the long term, such programs should supplement the educational supervision and guidance provided by human teachers. This paper illustrates our long term perspective by a scenario with a hypothetical tutoring system for elementary graphics programming.


Author[s]: Mark L. Miller and Ira P. Goldstein

Parsing Protocols Using Problem Solving Grammars

December 1976



A theory of the planning and debugging of programs is formalized as a context-free grammar. The grammar is used to reveal the constituent structure of problem solving episodes, by parsing protocols in which programs are written, tested and debugged. This is illustrated by the detailed analysis of an actual session with a beginning student. The virtues and limitations of the context-free formalism are considered.
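The idea of treating a protocol as a sentence in a problem-solving language can be sketched concretely. The toy grammar and event vocabulary below are invented for illustration (they are not the report’s actual grammar): protocol events such as writing, running, and editing code are terminal tokens, and a naive recursive-descent parse recovers the episode’s constituent structure as a tree.

```python
# Hypothetical problem-solving grammar: an episode (SOLVE) is a plan,
# a test, and optionally a debugging cycle.  Lowercase symbols are
# terminal protocol events; uppercase symbols are nonterminals.
GRAMMAR = {
    "SOLVE": [["PLAN", "TEST", "DEBUG"], ["PLAN", "TEST"]],
    "PLAN":  [["write", "PLAN"], ["write"]],   # linear decomposition
    "DEBUG": [["edit", "TEST"]],               # repair, then retest
    "TEST":  [["run"]],
}

def parse(symbol, tokens, pos=0):
    """Return (tree, next_pos) if `symbol` derives a prefix of
    tokens[pos:], else None.  This naive parser commits to the first
    alternative that succeeds, so alternatives are listed longest-first."""
    if symbol not in GRAMMAR:                  # terminal protocol event
        if pos < len(tokens) and tokens[pos] == symbol:
            return symbol, pos + 1
        return None
    for production in GRAMMAR[symbol]:         # try alternatives in order
        children, p = [], pos
        for part in production:
            result = parse(part, tokens, p)
            if result is None:
                break
            child, p = result
            children.append(child)
        else:
            return (symbol, children), p
    return None

# A transcript: two writing events, a test, an edit, and a retest.
protocol = ["write", "write", "run", "edit", "run"]
tree, end = parse("SOLVE", protocol)
print(tree)
```

The resulting tree groups the two `write` events under a single `PLAN` constituent and the `edit`/`run` pair under `DEBUG`, which is the kind of constituent structure the report extracts from real sessions.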


Author[s]: Mark L. Miller and Ira P. Goldstein

SPADE: A Grammar Based Editor for Planning and Debugging Programs

December 1976



A grammar of plans is developed from a taxonomy of basic planning techniques. This grammar serves as the basis for the design of a new kind of interactive programming environment (SPADE), in which programs are generated by explicitly articulating planning decisions. The utility of this approach to program definition is that a record of these decisions, called the plan derivation, provides guidance for subsequent modification or debugging of the program.


Author[s]: Ira P. Goldstein and Mark L. Miller

Structured Planning and Debugging: A Linguistic Theory of Design

December 1976



A unified theory of planning and debugging is explored by designing a problem solving program called PATN. PATN uses an augmented transition network (ATN) to represent a broad range of planning techniques, including identification, decomposition, and reformulation. (The ATN [Woods 1970] is a simple yet powerful formalism which has been effectively utilized in computational linguistics.)


Author[s]: Mark L. Miller and Ira P. Goldstein

PAZATN: A Linguistic Approach to Automatic Analysis of Elementary Programming Protocols

December 1976



PATN is a design for a machine problem solver which uses an augmented transition network (ATN) to represent planning knowledge. In order to explore PATN’s potential as a theory of human problem solving, a linguistic approach to protocol analysis is presented. An interpretation of a protocol is taken to be a parse tree supplemented by semantic and pragmatic annotation attached to various nodes. This paradigm has implications for constructing a cognitive model of the individual and designing computerized tutors.


Author[s]: Ira Goldstein

The Computer as Coach: An Athletic Paradigm for Intellectual Education

December 1976



Over the next five years, computer games will find their way into a vast number of American homes, creating a unique educational opportunity: the development of “computer coaches” for the serious intellectual skills required by some of these games. From the player’s perspective, the coach will provide advice regarding strategy and tactics for better play. But, from the perspective of the coach, the request for help is an opportunity to tutor basic mathematical, scientific or other kinds of knowledge that the game exercises.


Author[s]: Neil Rowe

Grammar as a Programming Language

October 1976



This paper discusses some student projects involving generative grammars. While grammars are usually associated with linguistics, their usefulness goes far beyond just “language” to many different domains. Their application is general enough to make grammars a sort of programming language in their own right.
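A grammar run generatively really does behave like a small program. The hypothetical example below (not one of the students’ actual projects) treats a generative grammar as a nondeterministic procedure: each call expands the start symbol by randomly chosen productions until only terminal words remain.

```python
import random

# An invented toy grammar for illustration.  Uppercase symbols are
# nonterminals; anything not in the table is a terminal word.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["robot"], ["block"]],
    "V":  [["moves"], ["stacks"]],
}

def generate(symbol="S"):
    """Run the grammar as a program: expand `symbol` into a word list."""
    if symbol not in GRAMMAR:                 # terminal: emit the word
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    words = []
    for part in production:
        words.extend(generate(part))
    return words

print(" ".join(generate()))   # e.g. "the robot stacks the block"
```

The same table could just as easily describe block-stacking plans or melodies as English phrases, which is the sense in which a grammar is a domain-independent programming notation.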


Author[s]: Kent A. Stevens

Computation of Locally Parallel Structure

March 1977



A Moire-like effect can be observed in dot patterns consisting of two superimposed copies of a random dot pattern where one copy has been expanded, translated, or rotated. One perceives in these patterns a structure that is locally parallel. Our ability to perceive this structure is shown by experiment to be limited by the local geometry of the pattern, independent of the overall structure or the dot density. A simple representation of locally parallel structure is proposed, and it is found to be computable by a non-iterative, parallel algorithm. An implementation of this algorithm is demonstrated. Its performance parallels that observed experimentally, providing a potential explanation for human performance. Advantages are discussed for the early description of locally parallel structure in the course of visual processing.


Author[s]: Harold Abelson and Andy diSessa

Student Science Training Program in Mathematics, Physics and Computer Science

September 1976



During the summer of 1976, the Massachusetts Institute of Technology Artificial Intelligence Laboratory sponsored a Student Science Training Program in Mathematics, Physics and Computer Science for high-ability secondary school students. This report describes, in some detail, the style of the program, the curriculum and the projects the students undertook.


Author[s]: Johan de Kleer

Local Methods for Localizing Faults in Electronic Circuits

November 1976



The work described in this paper is part of an investigation of the issues involved in making expert problem solving programs for engineering design and for maintenance of engineered systems. In particular, the paper focuses on the troubleshooting of electronic circuits. Only the individual properties of the components are used, and not the collective properties of groups of components. The concept of propagation is introduced, which uses the voltage-current properties of components to determine additional information from given measurements. Two propagated values can be discovered for the same point. This is called a coincidence. In a faulted circuit, the assumptions made about components in the coinciding propagations can then be used to determine information about the faultiness of these components. In order for the program to deal with actual circuits, it handles errors in measurement readings and tolerances in component parameters. This is done by propagating ranges of numbers instead of single numbers. Unfortunately, the comparing of ranges introduces many complexities into the theory of coincidences. In conclusion, we show how such local deductions can be used as the basis for qualitative reasoning and troubleshooting.
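The range-propagation idea can be sketched in a few lines. The example below is an illustrative toy, not de Kleer’s program: a resistor’s tolerance and a measurement’s error bars are represented as intervals, Ohm’s law propagates one interval into another, and a coincidence between the propagated range and a directly measured range is checked for consistency.

```python
def interval_mul(a, b):
    """Multiply two intervals (lo, hi): the product range is bounded by
    the extreme pairwise products of the endpoints."""
    products = [x * y for x in a for y in b]
    return (min(products), max(products))

def intersects(a, b):
    """True if intervals a and b overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical numbers: a 100-ohm resistor with 5% tolerance, and a
# measured current of 0.10 A with +/- 0.01 A instrument error.
R = (95.0, 105.0)
I = (0.09, 0.11)
V_predicted = interval_mul(R, I)   # voltage range implied by Ohm's law
V_measured = (9.0, 10.5)           # direct voltage measurement at the node

# A coincidence: two ranges for the same point.  If they overlap, the
# data are consistent with the assumption that the resistor is good;
# disjoint ranges would implicate a component assumed in the propagation.
consistent = intersects(V_predicted, V_measured)
print(V_predicted, consistent)
```

Even this tiny case hints at the complexity the abstract mentions: overlap shows only that the "resistor is good" assumption has not been refuted, not that it is confirmed.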


Author[s]: Robert Lawler

Pre-Readers' Concepts of the English Word

November 1976



Pre-readers exhibit concepts of the English word different from those of literate adults. The inclusive word concept is primary: a word is what we call an utterance and any of its parts. Pre-readers suffer confusion between homophones at the syllabic level, e.g., the sound of the suffix in “PUPPY” is confused with the name of the letter. Conflicts between implicit judgments of wordhood (inferred from the child’s counting of the number of words in an utterance) and explicit judgments (responses to questions about whether an item is a word) vary from high, for pre-readers, to low, for beginning readers. The justifications pre-readers offer to support their judgments of wordhood are notable for not including any arguments based on immediate verbal context. A concept development theory is offered to interpret these data and their relation to learning to read.


Author[s]: Cynthia J. Solomon

Teaching the Computer to Add: An Example of Problem-Solving in an Anthropomorphic Computer Culture

December 1976



Computers open up new ways to think about knowledge and learning. Learning computer science should draw upon and feed these new approaches. In a previous paper called “Leading a Child to a Computer Culture” I discuss some ways to do so in a very elementary context. This paper is a contribution to extending such thinking to a more advanced project.


Author[s]: Tomas Lozano-Perez

The Design of a Mechanical Assembly System

December 1976



This thesis describes a mechanical assembly system called LAMA (Language for Automatic Mechanical Assembly). The goal of the work was to create a mechanical assembly system that transforms a high-level description of an automatic assembly operation into a program for execution by a computer controlled manipulator. This system allows the initial description of the assembly to be in terms of the desired effects on the parts being assembled. Languages such as WAVE [Bolles & Paul] and MINI [Silver] fail to meet this goal by requiring the assembly operation to be described in terms of manipulator motions. This research concentrates on the spatial complexity of mechanical assembly operations. The assembly problem is seen as the problem of achieving a certain set of geometrical constraints between basic objects while avoiding unwanted collisions. The thesis explores how these two facets, desired constraints and unwanted collisions, affect the primitive operations of the domain.


Author[s]: Jeanne Bamberger

Capturing Intuitive Knowledge in Procedural Description

December 1976



Trying to capture intuitive knowledge is a little like trying to capture the moment between what just happened and what is about to happen. Or to quote a famous philosopher, “You can’t put your foot in the same river once.” The problem is that you can only “capture” what stands still. Intuitive knowledge is not a static structure, but rather a continuing process of constructing coherence and meaning out of the sensory phenomena that come at you. To capture intuitive knowledge, then, means: given some phenomena, what are your spontaneous ways of selecting significant features or of choosing what constitutes an element; how do you determine what is the same and what is different; how do you aggregate or chunk the sensory data before you?


Author[s]: Akinori Yonezawa and Carl Hewitt

Symbolic Evaluation Using Conceptual Representations for Programs with Side-Effects

December 1976



Symbolic evaluation is a process which abstractly evaluates a program on abstract data. A formalism based on conceptual representations is proposed as a specification language for programs with side-effects. Relations between algebraic specifications and specifications based on conceptual representations are discussed, and limitations of current algebraic specification techniques are pointed out. Symbolic evaluation is carried out with explicit use of a notion of situations. Uses of situational tags in assertions make it possible to state relations about properties of objects in different situations. The proposed formalism can deal with problems of side-effects which have been beyond the scope of Floyd-Hoare proof rules, and gives a solution to McCarthy’s frame problem.

horizontal line

MIT logo Computer Science and Artificial Intelligence Laboratory (CSAIL)
The Stata Center, Building 32 - 32 Vassar Street - Cambridge, MA 02139 - USA
tel:+1-617-253-0073 - publications@csail.mit.edu