
CSAIL Digital Archive - Artificial Intelligence Laboratory Series
Publications 100 through 199

AI Publications. Last update: Sun Mar 19 05:05:02 2006


Author[s]: Seymour Papert

The Summer Vision Project

July 1966



The summer vision project is an attempt to use our summer workers effectively in the construction of a significant part of a visual system. The particular task was chosen partly because it can be segmented into sub-problems which allow individuals to work independently and yet participate in the construction of a system complex enough to be a real landmark in the development of "pattern recognition". The basic structure is fixed for the first phase of work extending to some point in July. Everyone is invited to contribute to the discussion of the second phase. Sussman is coordinator of "Vision Project" meetings and should be consulted by anyone who wishes to participate. The primary goal of the project is to construct a system of programs which will divide a vidisector picture into regions such as likely objects, likely background areas and chaos. We shall call this part of its operation FIGURE-GROUND analysis. It will be impossible to do this without considerable analysis of shape and surface properties, so FIGURE-GROUND analysis is really inseparable in practice from the second goal which is REGION DESCRIPTION. The final goal is OBJECT IDENTIFICATION which will actually name objects by matching them with a vocabulary of known objects.


Author[s]: Richard Greenblatt and Donald A. Sordillo

Sides 21

August 1966



SIDES 21 produces a graph consisting of the locations of lines which comprise the sides of either a geometric solid or a plane figure. The representation is in floating point mode, suitable for subsequent processing. The input is a picture intensity-function.


Author[s]: Gerald Jay Sussman and Adolfo Guzman

Summer Vision Group: A Quick Look at Some of Our Programs

July 1966



no abstract


Author[s]: John White

Additions to Vision Library

August 1966



Modified LAP: Additions have been made to LAP as described in the PDP-6 write-up.


Author[s]: Jack Holloway

Output to the PDP-6 Calcomp Plotter

August 1966



The plotter on the console of the PDP-6 is currently attached to device number 774, and accepts stepping pulses given under control of a CONO to that device. Its normal mode of operation is to CONO the desired bits on, wait one instruction, and CONO a zero.


Author[s]: Tom Knight

Modifications to PDP-6 Teletype Logic

August 1966



The existing teletype logic for the PDP-6 has been modified to accommodate up to four additional teletypes. These were added with a minimum of change to the existing logic, and are easily removable by taking out the cable in 4M2 and replacing the cable in 4M1 with the jumper module.


Author[s]: D. Eastlake

An Input Macro for TECO

September 1966



A macro has been written for TECO that enables one to insert characters into the buffer as they are typed, with the entire current page (if not greater than the display screen's height in length) always being displayed. This macro now exists on the MACDMP system tape as a file entitled "CTLP INP".


Author[s]: Donald Sordillo

Music Playing on the PDP-6

August 1966



This memo describes a process of converting coded music into auditory stimuli on the PDP-6. Attached is a copy of the original specifications for the coding (a PDP-1 memo by Peter Samson).


Author[s]: Donald E. Eastlake III

A Primitive Control P Feature

October 1966



A program, some TECO macros, and some small modifications to existing systems software have been written, called PRO, whose purpose is to reduce the large number of control languages and system programs it has been necessary to know about and the large amount of redundant typing it has been necessary to do to effectively use the MAC PDP-6 system. PRO allows a user knowing the command languages of only TECO, DDT, and PRO to effectively edit and debug absolute programs with a minimum of command typing overhead (systems of this sort are called control P features for historic reasons). The remainder of this memo, which describes PRO and its use in detail, assumes some knowledge of TECO, DDT, and the MAC PDP-6 system. (In this memo the symbol $ always stands for the character ALT MOD).


Author[s]: Donald Sordillo


October 1966



This program will take a list of display instructions and cause it to be plotted. For further or more detailed information consult with Michael Speciner.


Author[s]: John White

Figure Boundary Description Routines for the PDP-6 Vision Project

September 1966



As a step in the direction of "computer vision," several programs have been written which transform the output of a vidisector into some mathematical descriptions of the boundaries enclosing the objects in the field of view. Most of the discussion concerns the techniques used to transform a sequence of points, presumably representing a curve in the two-dimensional plane of view, into the best-fit conic-curve segment, or best-fit straight line. The resultant output of this stage is a list of such segments, one list for each boundary found.


Author[s]: Leslie Lamport

Summer Vision Programs

October 1966



We assume that we are given a square array that describes a scene. The name of the array will be "array." The number of points representing the side length of the array will be called "pts." (I.e., pts^2 is the total number of entries in the array.)


Author[s]: Donald Sordillo


October 1966



CHAR PLOT is a routine which enables one to use the Calcomp plotter as an output typewriter. This program is stored as CHPLOT BIN [English CHAR PLOT]. In use, a code representing a character or command as defined in Appendix I is placed into accumulator C. Upon calling the routine, the plotter will either print a character or set itself into one of several modes. The input to the routine is a word whose 8 low-order bits contain a code and whose sign bit must be 0. The routine is entered by MOVE C, [WORD], PUSHJ P, PLOTC. A word = 0 stops everything and initializes the system. Note: The program starts off in lower-case mode. While it is in this mode any attempt to issue a lower-case code causes the computer to hang up. It is suggested that the first call be used to set the routine to upper case and the 8th bit in the code used to shift between upper and lower cases. The symbols P, C and CRKCHN are global and user-defined. Other symbols are PLOTC (normal entry point), UCTAB (beginning of the upper-case table), LCTAB (beginning of the lower-case table), and CLNGTH (routine which returns the length of the character which was its argument in Acc. C).


Author[s]: Larry Krakauer


January 1968



The CNTOUR program plots an intensity relief map of an image which is read from tape, disc, or from either vidisector camera. It is used to examine vidisector images. It may also be used as a general purpose aiming, monitoring and focusing program, especially for high-contrast images, for which it produces something like a line drawing. The program is available both in a time sharing and a non time sharing version.


Author[s]: Larry Krakauer

A Description of the CNTOUR Program

November 1966



The CNTOUR program plots an intensity relief map of an image which is read from the vidisector camera (TV-B). It may be used as a general purpose aiming, monitoring and focusing program, especially for high-contrast images, for which it produces something like a line drawing.


Author[s]: William Martin

A Step by Step Computer Solution of Three Problems in Non-Numerical Analysis

July 1966



This memo describes the step by step solution of three problems from different fields of applied mathematics. These problems are solved by typing a series of computer commands for the manipulation of symbolic mathematical expressions. These commands are best typed at the PDP-6 console, so that the Type 30 display and the wider range of keyboard symbols can be used. The syntax of commands typed at the PDP-6 will be described. These commands are translated into a string of symbols which are sent to CTSS, where they are parsed into a LISP expression, which is then evaluated. The mathematical operators which are available in the system will be described and then the step by step solution of each of the problems will be given.


Author[s]: Peter Samson

Program Memo about EYE

December 1966



EYE is a program (on the Vision System tape with the name EYE BALL) which displays on the 340 the field of view of the vidisector. The program is controlled by the light pen, which selects various modes and options, and by the box with four pots, to locate the exact area examined.


Author[s]: None listed

PDP-6 LISP (LISP 1.5) Revised

April 1967



This is a mosaic description of PDP-6 LISP, intended for readers familiar with the LISP 1.5 Programmer’s Manual or who have used LISP on some other computer. Many of the features, such as the display, are subject to change. Thus, consult a PDP-6 system programmer for any differences which may exist between LISP of Oct. 14, 1966 and present LISP on the system tape.


Author[s]: Russell Noftsker

Hardware Memo - Input Multiplexer Status

October 1966



Note: Computer control of Input Multiplexer and Output Sample and Hold is available when clock and test switches on the I/O box are in "Computer Input" and "Computer Output" positions, respectively. Manual operation of the Input Multiplexer and Output Sample and Hold is available when the same switches are in "Clock Mode" and "Test Mode" respectively. In "Test Mode," output commands are derived from input channels 154 through 177 as noted in the current INPUT MULTIPLEXER STATUS. These channels are potentiometer readings from either Joy Stick Console where Pot No. 1 is at the top and No. 10 is consecutively at bottom. See OUTPUT SAMPLE AND HOLD for Output Channel numbers.


Author[s]: Donald E. Eastlake III

PDP-6 Software Update

January 1967



Conventions of this memo: Most numbers written in Arabic numerals are octal, while all those written out in English are decimal. Underlining a character and immediately preceding it with a vertical bar indicates the character produced by holding down the control key while striking that character, except in the case of $, which represents an ALT MODE. Characters not representable with the character set used in this memo, or control of such a character, are described between angle brackets. The string from the open to the close angle bracket should be considered as one character, which may be controlled by underlining and preceding with a vertical bar. Lower case letters in a command string usually indicate a possibly optional variable, while capital letters or special characters are constant. Note the special conventions involving [cents] in the MACDMP section.

Organization of PDP-6 Software: MACDMP is normally used to load system and user machine language programs. If, when one approaches the PDP-6, it is not in MACDMP (which is usually displaying a file directory), one should first try starting at location 177400, which is MACDMP's starting address. If this fails, be sure a system tape is mounted on drive number one and try reading in at location 0 (see appendix). If that loses, try locations 1 and 2. If still unsuccessful, try placing a paper tape of MACDMP in the paper tape reader, turning it on, and starting at location 20 (appendix). If all else fails you can conclude that most of memory is clobbered and load a paper tape of MACDMP according to the instructions on the inside of the left door of the first bay of the PDP-6 to the left of the console.


Author[s]: Adolfo Guzman

A Primitive Recognizer of Figures in a Scene

January 1967



Given a scene, as seen for instance from a T.V. camera or a picture, it is desired to analyze it to organize, differentiate and identify desired objects or classes of objects (i.e., patterns) in it. The present report describes a program, written in CONVERT, which partially achieves this goal. Two inputs to the program determine its behavior and response: 1. The scene to be analyzed, which is entered in a symbolic format (it may contain 3-dimensional and curved objects). 2. A symbolic description -- called the model -- of the class for the objects we want to identify in the scene (1): Given a set of models for the objects we want to locate, and a scene or picture, the program will identify in it all those objects or figures which are similar to one of the models, provided they appear complete in the picture (i.e., no partial occlusion or hidden parts). Recognition is independent of position, orientation, size, etc.; it strongly depends on the topology of the model. Important restrictions and suppositions are: (a) the input is assumed perfect --noiseless-- and highly organized; (b) more than one model is, in general, required for the description of one object and (c) only objects which appear unobstructed are recognized. Work is continuing in order to drop restriction (c) and to improve (a).


Author[s]: Marvin Minsky

Vision Memo

February 1967



This Memo proposes a set of systems programs for vision work. Please comment immediately as we should start on it at once. Values stored outside an array range should have no effect, but set an overflow flag: values read outside a range are zero and also should set a flag. Coordinates normally occur as a dotted pair (x . y) in half words. For display purposes, normally the 10 most significant bits are used, but higher resolution options will be available. To specify a sub-array we have to state its size, location and mesh. All sub-arrays will be square. (Generalizing to rectangle is unwise because the natural generalization for later systems will be projective).


Author[s]: Marvin Minsky

Estimating Stereo Disparities

February 1967



An interesting practical and theoretical problem is putting bounds on how much computation one needs to find the stereo-disparity between two narrow-angle stereo scenes. By narrow angle I mean situations wherein the angle subtended by the eyes is a very few degrees: the kind of correlation-disparity method discussed here probably isn't applicable to the wide-angle stereo we'll usually use for scene-analysis in the Project. The method we consider is to find the local maximum of local correlation between the left and right scenes, over a range of displacements along the eye-eye axis. Obviously this is a simple-minded method that will fail in certain situations: here we are not interested in bad cases so much as in getting estimates of the minimal computation in the favorable situations. A correlation can be considered as a properly-normalized sum of pairwise products of intensities (or other surface functions). The correlation, for each disparity d, is obtained by using pairs that are d units apart in visual angle, referred to a standard azimuth scale in each eye. One can imagine a scheme in which the pairs are all different in the retinas.


Author[s]: Marvin Minsky

Remarks on Correlation Tracking

March 1967



The problem is to track the motion of part of a field of view. Let us assume that the scene is a two-dimensional picture in a plane perpendicular to the roll axis. (These simplifying assumptions, of course, are a main problem in estimating how the system works in real life.) So we can think of the picture as a function f(x,y) in some plane. Now suppose that at time t0 the scene is f0(x,y) and at some time later it has moved, and is ft(x,y). Suppose also that the scene has not changed, but has only been moved rigidly in the plane. Then an elegant mathematical way to estimate this motion is to compute the cross-correlation of the original and current picture. First let us review a basic simple mathematical fact: given any function f(x) and any displacement Δ, it is true that ∫f(x)f(x) dx ≥ ∫f(x)f(x+Δ) dx.
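The inequality says a signal's correlation with itself peaks at zero displacement, so sliding one picture over the other and taking the correlation maximum recovers the motion. A one-dimensional sketch of the idea (illustrative only; the function names are not from the memo, and the normalization the memo calls for is omitted):

```python
def estimate_shift(f0, ft, max_shift):
    """Estimate the rigid displacement between two 1-D signals by
    maximizing cross-correlation over candidate shifts."""
    def corr(d):
        # correlate f0 against ft displaced by d, over the overlap only
        pairs = [(f0[i], ft[i + d]) for i in range(len(f0))
                 if 0 <= i + d < len(ft)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_shift, max_shift + 1), key=corr)
```

For a scene moved two samples to the right, the correlation is largest at d = 2, which is the displacement the tracker reports.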


Author[s]: Marvin Minsky and Seymour Papert

Computer Tracking of Eye Motions

March 1967



This memo is to explain why the Artificial Intelligence group of Project MAC is developing methods for on-line tracking of human eye movements. It also gives a brief resume of results to date and the next steps.


Author[s]: Donald Sordillo


March 1967



CHAR PLOT is a routine which enables one to use the CalComp plotter as a versatile output device. It is presently available as CHPLOT BIN (English CHAR PLOT) on tape MS 3. The program CHAR PLOT is normally called by a PUSHJ P, PLOTC with a code representing a command or character (as defined in Appendix I) in accumulator C. Upon calling, the routine will either plot a character or line, or perform an internal control function. A 0 code initializes the routine, erasing any unexecuted (buffered) commands.


Author[s]: Joel Moses

A Quick Fail-Safe Procedure for Determining Whether the GCD of 2 Polynomials is 1

March 1967



One of the most widely used routines in an algebraic manipulation system is a polynomial manipulation package (1,2,3). The crucial operation in such routines is the extraction of the Greatest Common Divisor (GCD) of two polynomials. This operation is crucial because of its frequent use and because it is an expensive operation in regard to time and space. Experiments by Collins (1) have shown that given two polynomials chosen at random, the GCD has a high probability of being 1. Taking into account this probability and the cost of obtaining a GCD (some GCDs of polynomials of degree 5 in two or three variables can take on the order of a minute on the 7094 (1)), it appears that a quick method of determining whether the GCD is exactly 1 would be profitable. While no such complete method is known to exist, a fail-safe procedure has been found and is described here. A fail-safe procedure is characterized by the fact that when it comes to a decision (in this case that the GCD is 1), then the decision is correct. However, the conclusion (i.e. that the GCD is 1) may be true, and yet the procedure may not arrive at a decision regarding it. It is believed that the fail-safe procedure presented here (and its extension to the linear case) will arrive at a decision quite frequently when the GCD is actually 1.
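The memo's own procedure is not reproduced in the abstract, but the flavor of such a fail-safe test can be sketched in modern terms: map both polynomials into a domain where GCDs are cheap (here, univariate polynomials over a large prime field) and check whether the images are coprime. If they are, the original GCD is certainly 1; if not, the test is inconclusive. All names below are illustrative, not Moses' code:

```python
def poly_mod_gcd(f, g, p):
    """Euclidean GCD of univariate polynomials over GF(p).
    Polynomials are coefficient lists, constant term first."""
    def trim(a):
        while a and a[-1] == 0:
            a.pop()
        return a
    f, g = trim([c % p for c in f]), trim([c % p for c in g])
    while g:
        inv = pow(g[-1], p - 2, p)          # inverse of leading coeff
        while len(f) >= len(g):             # polynomial remainder step
            coef = f[-1] * inv % p
            shift = len(f) - len(g)
            for i, c in enumerate(g):
                f[shift + i] = (f[shift + i] - coef * c) % p
            f = trim(f)
            if not f:
                break
        f, g = g, f
    return f

def probably_coprime(f, g, p=2147483647):
    """Fail-safe test: a degree-0 GCD of the images mod p proves
    gcd(f, g) = 1 over the integers (assuming the leading coefficients
    do not vanish mod p); anything else is inconclusive."""
    d = poly_mod_gcd(list(f), list(g), p)
    return len(d) == 1
```

Like the memo's procedure, a positive answer is always correct, while a negative answer merely fails to decide.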


Author[s]: Roland Silver

Incorporating MIDAS Routines into PDP-6 LISP

March 1967



Some PDP6 LISP users have felt a need for a way to incorporate MIDAS subroutines into LISP. LISP has been changed to let you do this, using files found on the LISP SYSTEM microtape. You write a routine for LISP in much the same way that you write any other MIDAS relocatable subroutine. You must, however, observe the constraints imposed by LISP's allocation and use of accumulators, and its methods of handling input, output, and interrupts. In addition, you require linkage to LISP before your routine can operate properly: The entry point(s) of the subroutine must be put on the property list(s) of the appropriate atom(s), and the address fields of the instructions pointing to other routines, to list structure, or the other LISP data structures must be set properly. This is done when LISP begins operation—after allocation, but before going into its listen loop. We provide eight macros to ease the job of creating such linkages: SUBR, FSUBR, LSUBR, MACRO, QUOTE, E, SPECIAL, and SYM. If you write "SUBR name" at a location a in your routine, LISP will subsequently ascribe the property SUBR to the atom name, with entry location a. Similar remarks apply to the use of FSUBR, LSUBR, and MACRO.


Author[s]: Roland Silver

LISP Linkage Feature: Incorporating MIDAS into PDP-6 LISP

October 1967 (Revised)



Some PDP6 LISP users have felt a need for a way to incorporate MIDAS subroutines into LISP. LISP has been changed to let you do this, using files found on the LISP SYSTEM microtape. You write a routine for LISP in much the same way that you write any other MIDAS relocatable subroutine. You must, however, observe the constraints imposed by LISP's allocation and use of accumulators, and its methods of handling input, output, and interrupts. In addition, you require linkage to LISP before your routine can operate properly: The entry point(s) of the subroutine must be put on the property list(s) of the appropriate atom(s), and the address fields of the instructions pointing to other routines, to list structure, or the other LISP data structures must be set properly. This is done when LISP begins operation—after allocation, but before going into its listen loop. We provide eight macros to ease the job of creating such linkages: SUBR, FSUBR, LSUBR, MACRO, QUOTE, E, SPECIAL, and SYM. If you write "SUBR name" at a location a in your routine, LISP will subsequently ascribe the property SUBR to the atom name, with entry location a. Similar remarks apply to the use of FSUBR, LSUBR, and MACRO. The significance and use of the other four macros is perhaps best communicated through examples: 1. An instruction like "MOVEI A,QUOTE(X Y Z)" will be assembled as "MOVEI A,0". At link time, however, LISP will insert the location of list (X Y Z) into the address field of the instruction. 2. Suppose that the atom FOO has the properties shown in Figure 1. Then the instructions "MOVEI A QUOTE FOO", "MOVEM B, SPECIAL FOO", "PUSHJ P, SYM FOO", and "CALL E FOO" will each be assembled with a zero address field, which will be modified at link time to be b, c, 106, and 101, respectively.


Author[s]: Michael Beeler

Hardware and Program Memo About SERVO

March 1967



SERVO is intended as an engineering and programming analyzing and debugging aid for use with devices connected through the input and output multiplexers to the PDP-6. Channel numbers and values to output, as well as some other numeric arguments, are in octal. Only the frequency of K, N, Q & W, the duration of I & U, and the argument of Z are decimal. Commands are single letters, as follows.


Author[s]: Stephen Smoliar

EUTERPE A Computer Language for the Expression of Musical Ideas

April 1967



The electronic medium has vastly increased the amount of material available to the contemporary composer. The various pieces of electronic equipment available today allow one to produce any conceivable sound; yet because of the complex nature of their output, these devices are generally difficult to control and the composer of electronic music may take several hours to prepare but a few minutes of his creation. EUTERPE was designed during the summer of 1966 by Marvin Minsky as a "real-time music program" to be used at a teletype which was a direct link with a digital computer. The program is an interpreter and compiler, basically a translation device to convert symbolic input into the internal machine language of a computer. The symbolic input consists of up to six "voice-programs" which are strings of words.


Author[s]: Harold V. McIntosh and Adolfo Guzman

A Miscellany of CONVERT Programming

April 1967



CONVERT shares with other programming languages the circumstance that it is easier to evaluate the language and to learn its uses if it is possible to scrutinize a representative sample of programs which effect typical but simple and easily understood calculations. Consequently we have gathered together several examples of varying degrees of difficulty in order to show CONVERT in action. In each case the CONVERT program, written as a LISP function ready for execution in CTSS, is shown, together with the results of its application to a small variety of arguments, and a general explanation of the program, its intent, the form of its arguments, and method of operation. When the notation CLOCK (()) … CLOCK (T) appears, the time of execution has been determined, and is shown, in tenths of seconds, immediately after the result has been printed. Since there is no particular organization to the selection of examples, we here give a brief catalogue of them.


Author[s]: Arnold K. Griffith


April 1967



POLYSEG takes as input a list of dotted pairs of numbers. These pairs are assumed to be the co-ordinates of adjacent points along a single closed line. It is further assumed that the x and y co-ordinates of successive points differ by 1, 0, or -1. The output of POLYSEG is a list of dotted pairs of numbers, representing vertices of a polygonal approximation to the figure whose boundary was input. The scale is increased by a factor of four over that of the input; and the output is in fixed or floating point mode, according to the input.
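POLYSEG's own algorithm is not spelled out in the abstract. A standard way to produce such a polygonal approximation from a chain of boundary points is recursive splitting at the point of maximum deviation (the Ramer-Douglas-Peucker method), sketched here purely as an illustration of the task:

```python
import math

def _perp_dist(p, a, b):
    # perpendicular distance from point p to the line through a and b
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * (x - x1) - dx * (y - y1)) / math.hypot(dx, dy)

def rdp(points, tol):
    """Simplify a polyline: keep a vertex wherever the boundary
    deviates from a straight chord by more than tol."""
    if len(points) < 3:
        return list(points)
    dists = [_perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tol:
        return [points[0], points[-1]]     # chord is close enough
    # split at the worst point and simplify each half
    return rdp(points[:i + 1], tol)[:-1] + rdp(points[i:], tol)
```

On a unit-step boundary like POLYSEG's input, straight runs collapse to single segments and only the true corners survive as vertices.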


Author[s]: John L. White

Additions to LAP

July 1967



In addition to the description on page 13 of AI Memo 116A, LAP has the following features: current assembly location reference, assembly-time arithmetic, constants, multiple entry routines, and defined machine operations in LAP. The atom "*" has as its SYM value during assembly an integer which is the current cell address being assembled into. Thus (JRST O *) is a well known infinite loop equivalent to A (JRST O A). When LAP encounters a non-atomic argument in the position normally occupied by the address part of an instruction, and it is not one of the recognizable forms (QUOTE atom), (E function) or (C constant), then the assembly-time values of the list members are summed and this is the quantity assigned as address. Thus (JRST O (* 1)) is a do-little instruction roughly equivalent to TRA *+1 in FAP.


Author[s]: Russ Abbott

A Glossary of Vision Terms

June 1967



Underlined terms are included in the glossary.


Author[s]: Jim Bowring

PSEG: Standardization of Data

June 1967



PSEG is a function of one argument--a region name which comes from REGIONLIST, as created by TOPOLOGIST. When it is done, the following data structure exists. * indicates that the data was already stored correctly when PSEG got it. REGIONLIST is a list of region names created by TOPOLOGIST. On the property list of each region are the following indicators: TYPE, OUTERBOUNDARY, NUCLEUS, HOLES, NEIGHBORS, SHAPE, VERTIS, and SEGS. VERTEXLIST and SEGMENTLISTs are also discussed.


Author[s]: M. Blum and C. Hewitt

Automata On a 2-Dimensional Tape

June 1967



This paper explains our approach to the problem of pattern recognition by serial computer. The rudimentary theory of vision presented here lies within the framework of automata theory. Our goal is to classify the types of patterns that can be recognized by an automaton that scans a finite 2-dimensional tape. For example, we would like to know if an automaton can decide whether or not a given pattern on a tape forms a connected region. This paper should be viewed as a Progress Report on work done to date. Our goal now is to generalize the theory presented here and make it applicable to a wide variety of pattern-recognizing machines.


Author[s]: John L. White

Matrix Inversion in LISP

July 1967



Very shortly there will appear on the vision library tape a file named @IAS which is a collection of compiled SUBRs for performing general matrix row reduction and inversion. For an array A a call (IAS A NEW N M) performs Gaussian row reduction on the first N rows of the array A (and in fact operates on only the first M columns), so that if M>N then the N+1st through the Mth columns of the output array contain the solutions to the implicit M-N systems of NxN simultaneous linear equations, while the first N columns contain the inverse matrix of A. If NEW is "T" then a new array of size NxM is declared to hold the answers; otherwise the answers are stored directly over the input array and no new array declarations are done. Currently, maximization of pivotal elements is not done; thus IAS will give wrong answers on certain numerically ill-conditioned matrices even though they be non-singular. It is possible to remedy this problem, at some expense, if necessary. IAS also uses a portion of binary program space for temporary storage and may give an error message if not enough space is available.
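The calling convention described above can be illustrated with a modern sketch (not the LISP source; the naive pivot choice deliberately mirrors the memo's warning that pivotal elements are not maximized):

```python
def ias(a, n, m):
    """IAS-style row reduction: rows of `a` hold an n-by-n matrix in
    the first n columns and right-hand sides in columns n..m-1.
    Returns a new n-by-m array whose first n columns hold the inverse
    and whose remaining columns hold the solutions."""
    # augment with the identity to accumulate the inverse
    work = [row[:m] + [1.0 if i == j else 0.0 for j in range(n)]
            for i, row in enumerate(a)]
    for col in range(n):
        piv = work[col][col]              # naive pivot; no maximization
        work[col] = [x / piv for x in work[col]]
        for r in range(n):
            if r != col:
                f = work[r][col]
                work[r] = [x - f * y for x, y in zip(work[r], work[col])]
    # inverse sits in the appended columns; solutions in columns n..m-1
    return [row[m:] + row[n:m] for row in work]
```

As the memo warns, a zero (or tiny) pivot makes this fail even for non-singular matrices; exchanging rows to maximize the pivot is the standard remedy.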


Author[s]: Carl Hewitt

PLANNER: A Language for Proving Theorems

July 1967



The following is a description of SCHEMATISE, a proposal for a program that proves very elementary theorems through the use of planning. The method is most easily explained through an example due to Black.


Author[s]: Michael Speciner

The Calcomp Plotter as an Output Device

July 1967



(1) CHAR PLOT (see AI Memo 125) has been modified for TS. [It may be found on MS4 with the non-TS version]. The following changes should be noted: CRKBRK (now called PLTBRK in the non-TS CHAR PLOT), SUBPLT (which is not needed since PLOTC can be called recursively), PP (ditto), LBUFF and LWBUFF (as the TS system does the buffering) do not exist in the TS version. CRKCHN, now called PLTCHN (in both TS and non-TS versions), does exist. The command 1110 … (go to the effective address at process time) still exists, but in TS return is with "POPJ P," rather than "JRST 12, @PLTBRK". The character codes 0 and 200 (lower case 0) respectively OPEN and CLOSE the plotter. (2) CHARPL SCOPE may soon also be so modified for TS. (3) SCOPE PLOT is unchanged. (4) None of the above TS routines can be used easily at present due to the lack of TS STINK.


Author[s]: Adolfo Guzman

Decomposition of a Visual Scene into Bodies

September 1967



This memorandum describes a program which finds bodies in a scene, presumably formed by 3-dimensional objects, with some of them perhaps not completely visible.


Author[s]: Marvin Minsky and Seymour Papert

Linearly Unrecognizable Patterns




The central theme of this study is the classification of certain geometrical properties according to the type of computation necessary to determine whether a given figure has them.


Author[s]: Stephen Smoliar

EUTERPE-LISP: A LISP System with Music Output

September 1967



EUTERPE (AI Memo No. 129) was designed as a "real-time music program" which would interpret music described as "voice-programs" in DDT. These voice-programs consisted of note words, descriptions of tones to be sounded, and control words which determined the parameters of pitch, tempo, articulation and wave form and allowed for a subroutine feature and transfer within the voice-program. It had been hoped that complex musical forms could be described in terms of a few collections of note words and sequences of control words. However, musical variation and development is more subtle than the developmental power of these control words. Any transformation of musical material may be expressed as a LISP function; therefore, the control words were abandoned and EUTERPE was linked to LISP. The voice-programs would be written and loaded by LISP and played by EUTERPE. The principal function in the system is LOAD, which takes two arguments: 1) an absolute location in core and 2) a list of note words. The note words are translated into EUTERPE-readable code and loaded into the proper voice program. The addresses of the first location of each of the six voice programs are SETQed by the system with the names VOICE1, …, VOICE6. The value of LOAD is the next free word in core, so a series of lists may be loaded by bootstrapping.


Author[s]: Peter Samson


September 1967



This document describes the STRING programming language which has been implemented on the MAC Artificial Intelligence Group's PDP-6 computer. In the STRING system, all objects--constants, variables, functions and programs--are stored and processed in the form of strings of characters. The STRING language is unusually concise, yet at the same time unusually rich in commands, including a strong arithmetic facility.


Author[s]: Marvin Minsky

Stereo and Perspective Calculations

September 1967



A brief introduction to the use of projective coordinates for hand-eye position computations. Some standard theorems. Appendix A reproduces parts of Roberts' thesis concerning homogeneous coordinates and matching of perspectively transformed objects. Appendix B by Arnold Griffith derives the stereo calibration formulae using just the invariance of cross-ratios on projections of lines, and describes a program that uses this.


Author[s]: Michael Beeler

I/O Test

October 1967



IO TEST is intended as a hardware testing and debugging aid for use with the PDP-6 and its associated input multiplexer (analog-to-digital converter) and output multiplexer (digital-to-analog converter). While all characters typed are echoed, only the following have any effect on the program's operation: F, Y, W, V, B, E, D, S, nT, P, A.


Author[s]: William A. Martin

A Fast Parsing Scheme for Hand-Printed Mathematical Expressions

October 19, 1967



A set of one-line text-book-style mathematical expressions is defined by a context free grammar. This grammar generates strings which describe the expressions in terms of mathematical symbols and some simple positional operators, such as vertical concatenation. The grammar rules are processed to abstract information used to drive the parsing scheme. This has been called syntax-controlled as opposed to syntax-directed analysis. The parsing scheme consists of two operations. First, the X-Y plane is searched in such a way that the mathematical characters are picked up in a unique order. Then, the resulting character string is parsed using a precedence algorithm with certain modifications for special cases. The search of the X-Y plane is directed by the particular characters encountered.


Author[s]: Roland Silver

PICPAC: A PDP-6 Picture Package

October 1967



PICPAC is a program to be used for manipulating pictures of real-world scenes. It operates under ITS (the Incompatible Time-Sharing System) under control of a simple on-line command language. It includes facilities for reading pictures from either vidisector, for reading and writing them on disk or microtape, and for displaying or plotting them. It also includes focusing and control functions.


Author[s]: Thomas Knight

A Multiple Procedure DDT

January 1968



This memo describes a version of DDT used as the command level of the A.I. Group PDP-6 Time Sharing System (ITS). Special features include the capability to handle multiple jobs, the ability to stop on read or write references to a given location, and the ability of system programs to return command strings to be executed by the DDT.


Author[s]: Eric Osman

DDT Reference Manual

September 1971



This memo describes the version of DDT used as the command level of the A.I. Laboratory Time Sharing System (ITS). Besides the usual program control, examination, and modification features, this DDT provides many special utility commands. It also has the capability to control several programs for a user, a single-instruction continue mode, and interrupts on read or write references to a given memory location. This memo was prepared with the assistance of Donald E. Eastlake and many others.


Author[s]: Harold V. McIntosh

SUBM: A CONVERT Program for Constructing the Subset Machine Defined by a Transition System

January 1968



SUBM is a CONVERT program, realized in the CTSS LISP of Project MAC, for constructing the subset machine with the same behaviour as a given transition system. The program interactively collects the six items defining a transition system: its state set, alphabet, transition function, initial states, accepting states and spontaneous transitions. It then computes the subset machine, producing its state set, transition function, initial state and accepting states.
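The computation SUBM performs is the classical subset (powerset) construction. The following is an illustrative modern sketch, not the CONVERT program; the data representation is assumed, and spontaneous (epsilon) transitions are omitted for brevity.

```python
# Sketch of the subset construction: from a nondeterministic transition
# system, build the deterministic machine whose states are sets of the
# original states. Representation (dicts of sets) is an assumption.

def subset_machine(states, alphabet, delta, initial, accepting):
    """delta maps (state, symbol) -> set of successor states;
    initial is a set of initial states. Spontaneous transitions omitted."""
    start = frozenset(initial)
    trans, seen, work = {}, {start}, [start]
    while work:
        s = work.pop()
        for a in alphabet:
            # Image of the subset s under input a.
            image = frozenset(q for p in s for q in delta.get((p, a), set()))
            trans[(s, a)] = image
            if image not in seen:
                seen.add(image)
                work.append(image)
    # A subset is accepting if it contains any accepting state.
    accept = {s for s in seen if s & set(accepting)}
    return seen, trans, start, accept

# Tiny example: a two-state system accepting strings ending in 'a'.
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}}
states, trans, start, accept = subset_machine({0, 1}, 'ab', delta, {0}, {1})
```

Only subsets reachable from the initial set are generated, which is what keeps the construction practical even though the full powerset is exponential.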


Author[s]: Harold V. McIntosh

REC/8: A CONVERT Compiler of REC for the PDP-8

January 1968



REC/8 is a CONVERT program, realized in the CTSS LISP of Project MAC, for compiling REC expressions into the machine language of the PDP-8 computer. Since the compilation consists mostly of subroutine calls (to be assembled, after removal of LISP parentheses, by MACRO-8), the technique is applicable with trivial modification to any other computer having subroutine jump and indirect transfer instructions. The purpose of the program is both to compile REC expressions and to illustrate the workings of the REC language, and accordingly a description of this language is given. It contains operators and predicates; flow of control is achieved by parentheses, which define subexpressions, colon, which implies iteration, and semicolon, which terminates the execution of an expression. Predicates pass control to the position following the next colon or semicolon, allowing the execution of alternative expression strings.


Author[s]: Harold V. McIntosh

CGRU and CONG: CONVERT and LISP Programs to Find the Congruence Relations of a Finite State Machine

January 1968



CGRU is a CONVERT program, and CONG its literal transcription into LISP, realized in the CTSS LISP of Project MAC, for finding all the congruence relations of a finite state machine whose transition table is given as an argument. Central to both programs is the hull construction, which forms the smallest congruence relation containing a given relation. This is done by examining all pairs of equivalent elements to see if their images are equivalent; otherwise the image classes are joined and the calculation repeated. With the hull program, one starts with the identity relation and proceeds by joining pairs of congruence classes in previously found partitions, forming the hull of each extension to see whether a new partition is produced. The process terminates when all such extensions have been tried without producing any new relations.
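The hull construction described above can be sketched directly. This is an illustrative Python rendering, not the CONVERT or LISP code; the partition-as-list-of-sets representation is an assumption, and the machine is taken to be deterministic.

```python
# Sketch of the hull construction: given a partition of the states,
# repeatedly check whether equivalent states have equivalent images
# under every input; if not, join the offending image classes and
# repeat until the partition is a congruence.

def hull(partition, delta, alphabet):
    """partition: list of sets of states; delta: (state, symbol) -> state."""
    blocks = [set(b) for b in partition]

    def block_of(q):
        return next(i for i, b in enumerate(blocks) if q in b)

    changed = True
    while changed:
        changed = False
        for b in list(blocks):
            for a in alphabet:
                # Classes containing the images of b's members under a.
                images = {block_of(delta[(q, a)]) for q in b}
                if len(images) > 1:
                    # Equivalent states with inequivalent images:
                    # join the image classes and restart the scan.
                    merged = set().union(*(blocks[i] for i in images))
                    blocks = [x for i, x in enumerate(blocks)
                              if i not in images] + [merged]
                    changed = True
                    break
            if changed:
                break
    return blocks

# Example: a three-state cycle under input 'a'. Merging states 0 and 1
# forces the whole cycle into one congruence class.
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 0}
print(hull([{0, 1}, {2}], delta, 'a'))  # -> [{0, 1, 2}]
```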


Author[s]: Carl Hewitt

Functional Abstraction in LISP and PLANNER

January 1968



Presented here is part of the graduate work that I am doing in the much broader area of protocol analysis (see A.I. Memo 137). The goal of functional abstraction is to find a procedure that satisfies a given set of fragmentary protocols; thus functional abstraction is the inverse of taking a set of protocols of a routine. The basic technique in functional abstraction (which we shall call IMAGE) is to find a minimal homomorphic image of a set of fragmentary protocols. It is interesting to note that the technique of finding a minimal homomorphic image is the same one used to compute the schematized goal tree in A.I. Memo 137. We define (a less than b) to mean that a is erased and b is written in its place. We shall use (a:b) to mean that the value of b is a.


Author[s]: John L. White


January 1968



LAP is a LISP FEXPR (or FSUBR when compiled) which is executed primarily for its side effect—namely, assembling a symbolic listing into core as a machine language subroutine. As such, it is about the most convenient and rapid way for a LISP user to add machine language primitives to the LISP system, especially if the functions in question are in a developmental stage and are reasonably small (e.g., 1-500 instructions). Also, the LISP compiler currently gives its results as a file of LAP code, which may then be loaded into core by LAP. Virtually any function definition, whether by DEFPROP, LABEL, or LAP, is an extension of LISP's primitives; and as in any actual programming language, the side-effects and global interactions are often of primary importance. Because of this, and because of the inherently broader range of machine instructions and data formats, a function quite easily described and written in PDP-6 machine language may accomplish what is only most painfully and artificially written in LISP. One must, then, consider the total amount of code in each language to accomplish a given task, the amount of commentary necessary to clarify the intent of the task given the program (in this sense, LISP code rates very high—a major benefit of the confines of LISP is that a good program serves as its own comment, and usually needs no further elucidation), and other considerations of programming convenience.


Author[s]: Harold V. McIntosh

REEX: A CONVERT Program to Realize the McNaughton-Yamada Analysis Algorithm

January 1968



REEX is a CONVERT program, realized in the CTSS LISP of Project MAC, for carrying out the McNaughton-Yamada analysis algorithm, whereby a regular expression is found describing the words accepted by a finite state machine whose transition table is given. Unmodified, the algorithm will produce 4^n terms for an n-state machine. This number could be reduced by eliminating duplicate calculations and by rejecting, at a high level, expressions corresponding to no possible path in the state diagram. The remaining expressions present a serious simplification problem, since empty expressions and null words are generated liberally by the algorithm. REEX treats only the third of these problems, and at that makes simplifications mainly oriented toward removing null words, empty expressions, and expressions of the form XuX*, AuB*A, and others closely similar. REEX is primarily useful for understanding the algorithm, but is hardly usable for machines with six or more states.
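The recurrence behind the algorithm can be sketched briefly. This is an illustrative Python rendering of the McNaughton-Yamada construction without any of REEX's simplifications, so its output grows explosively, which is exactly the behavior the abstract describes; the string notation ('u' for union, 'e' for the null word) is an assumption.

```python
# Sketch of the McNaughton-Yamada recurrence: R_k[i][j] describes the
# words leading from state i to state j passing only through states
# numbered below k, built up as
#   R_k[i][j] = R_{k-1}[i][j] u R_{k-1}[i][k] (R_{k-1}[k][k])* R_{k-1}[k][j].

def mcnaughton_yamada(n, delta, start, final):
    """delta: (i, symbol) -> j over states 0..n-1; returns an expression
    string, with 'u' for union, 'e' for the null word, None for the
    empty expression."""
    R = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            syms = [a for (p, a), q in delta.items() if p == i and q == j]
            if i == j:
                syms.append('e')          # null word on the diagonal
            R[i][j] = 'u'.join(syms) if syms else None

    def cat(x, y):
        return None if x is None or y is None else '(%s)(%s)' % (x, y)

    def union(x, y):
        return y if x is None else x if y is None else '(%s)u(%s)' % (x, y)

    for k in range(n):
        # R[k][k] is never None: the diagonal always contains 'e'.
        R = [[union(R[i][j],
                    cat(cat(R[i][k], '(%s)*' % R[k][k]), R[k][j]))
              for j in range(n)] for i in range(n)]
    return R[start][final]

expr = mcnaughton_yamada(1, {(0, 'a'): 0}, 0, 0)
```

Even this one-state example yields an expression cluttered with null words and redundant unions, which is the simplification burden REEX attacks.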


Author[s]: Seymour Papert

The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies

January 1968



In December 1965 a paper by Hubert Dreyfus revived the old game of generating curious arguments for and against Artificial Intelligence. Dreyfus hit top form in September 1967 with an explanation in the Review of Metaphysics of the philosophically interesting difficulties encountered in constructing robots. The best of these is that a mechanical arm controlled by a digital computer could not reasonably be expected to move fast enough to play ping-pong.


Author[s]: William A. Martin

A Left to Right then Right to Left Parsing Algorithm

February 1968



Determination of the minimum resources required to parse a language generated by a given context free grammar is an intriguing and as yet unsolved problem. It seems plausible that any unambiguous context free grammar could be parsed in time proportional to the length, n, of each input string. Earley (2) has presented an algorithm which parses "many" grammars in time proportional to n, but requires n^2 on some. His work is an extension of Knuth's method. Knuth's method fails when more than one alternative must be examined by a push-down automaton making a left to right scan of the input string. Earley's extension takes all possible alternatives simultaneously, without duplication of effort, at any given step. The method presented here continues through the string in order to gain information for a second, right to left pass, which is made on the symbols accumulated on the stack of the automaton. The algorithm is probably more efficient than Earley's on certain grammars; it will fail completely on others. The essential idea may be interesting to those attacking the general problem.


Author[s]: Marvin L. Minsky

Linear Decision and Learning Models

March 1968



This memorandum is a first draft of an essay on the simplest "learning" process. Comments are invited. Subsequent sections will treat, among other things: the "stimulus-sampling" model of Estes, relations between Perceptron-type error reinforcement and Bayesian-type correlation reinforcement, and some other statistical methods viewed in the same way.


Author[s]: John White

Time-Sharing LISP for the PDP-6

March 1968



This memo, written in the style and conventions of A.I. Memo No. 116A, may be considered an addendum thereto. It should prove to be a welcome update on the LISP system.


Author[s]: Joel Moses

SARGE: A Program for Drilling Students in Freshman Calculus Integration Problems

March 1968



The SARGE program is a prototype of a program intended to be used as an adjunct to regular classroom work in freshman calculus. Using SARGE, students can type their step-by-step solution to an indefinite integration problem and have the correctness of their solution determined by the system. The syntax for these steps comes quite close to normal mathematical notation, given the limitations of typewriter input. The method of solution is pretty much unrestricted as long as no mistakes are made along the way. If a mistake is made, SARGE will catch it and yield an error message. The student may modify the incorrect step, or he may ask the program for advice on how the mistake arose by typing "help". At present the program is weak in generating explanations for mistakes; sometimes the "help" mechanism will just yield a response indicating the way in which the erroneous step can be corrected. In order to improve the explanation mechanism one would need a sophisticated analysis of students' solutions to homework or quiz problems. Experience with the behavior of students using SARGE, which is nil at present, should also help in accomplishing this goal. SARGE is available as SARGE SAVED in T302 2517.


Author[s]: Jayant M. Shah

Numerical Solution of Elliptic Boundary Value Problems by Spline Functions

April 1968



A numerical method for solving linear, two-dimensional elliptic boundary value problems is presented. The method is essentially the Ritz procedure, which uses polynomial spline functions to approximate the exact solution. The spline functions are constructed by defining a polynomial function over each of a set of disjoint subdomains and imposing certain compatibility conditions along the common boundaries between subdomains. The main advantage of the method is that it does not even require the continuity of the spline functions across the boundaries between subdomains. Therefore it is easy to construct classes of spline functions which will produce any specified rate of convergence.


Author[s]: B.K.P. Horn


May 1968



This memo describes a method of automatically focusing the new vidisector (TVC). The same method can be used for distance measuring. Included are instructions describing the use of a special LISP and the required LISP functions. A discussion of the use of the vidisectors, as well as estimates of their physical characteristics, is also included, since a collection of such data has not previously been available.


Author[s]: D. Eastlake, R. Greenblatt, J. Holloway, T. Knight and S. Nelson

ITS 1.5 Reference Manual

July 1969



This reference manual consists of two parts. The first (sections 1 through 6) is intended for those who are either interested in the ITS 1.5 time sharing monitor for its own sake or who wish to write machine language programs to run under it. Some knowledge of PDP-6 (or PDP-10) machine language is useful in reading this part. The second part (sections 7, 8, and 9) describes three programs that run under ITS. The first program (DDT) is a modified machine language debugging program that also replaces the "monitor command" level (where the user is typing directly at the monitor) present in most time-sharing systems. The remaining two (PEEK and LOCK) are a status display and a miscellaneous utility program. It should be remembered that the McCulloch Laboratory PDP-6 and PDP-10 installation is undergoing continuous software and hardware development which may rapidly outdate this manual.


Author[s]: D. Eastlake, R. Greenblatt, J. Holloway, T. Knight and S. Nelson

ITS 1.5 Reference Manual

June 1968



This reference manual consists of two parts. The first (sections 1 through 6) is intended for those who are either interested in the ITS 1.5 time sharing monitor for its own sake or who wish to write machine language programs to run under it. Some knowledge of PDP-6 (or PDP-10) machine language is useful in reading this part. The second part (sections 7, 8, and 9) describes three programs that run under ITS. The first program (DDT) is a modified machine language debugging program that also replaces the "monitor command" level (where the user is typing directly at the monitor) present in most time-sharing systems. The remaining two (PEEK and LOCK) are a status display and a miscellaneous utility program. It should be remembered that the McCulloch Laboratory PDP-6 and PDP-10 installation is undergoing continuous software and hardware development which may rapidly outdate this manual.


Author[s]: Marvin Minsky

Remarks on Visual Display and Console Systems

June 1968



This serves as a preliminary draft of the Deluxe Picture Maintenance System, June 1963. It is Technical Memorandum No. 1.


Author[s]: Patrick H. Winston


August 1968 (revised April 1970)



This memo originally had two parts. The first dealt with certain deficiencies in an early version of Guzman's program, SEE. The problems have been fixed, and the corresponding discussion has been dropped from this memo. The part remaining deals with line drawings of objects with holes.


Author[s]: R. Greenblatt, B.K.P. Horn and L.J. Krakauer

The Text-Justifier TJ6

June 1970



This memo describes the TJ6 type-justifying program, which can be used in the production of memos such as this one. In addition, Appendices 1, 2, and 3 of this memo contain related information about TECO, the "Selectric" and the type 37 teletype, thus gathering most of the information needed for producing write-ups into one location. A sample of input to TJ6 is given in section IV and is in fact the very input used to produce this page of output. The output from TJ6 may be either justified text, with the right margin exactly aligned, as in this introduction, or it may be "filled" text, with the right margin only approximately aligned. The remainder of this memo will be justified. The sections of this memo are: Introduction, Using TJ6, Console operation of TJ6, and Sample TJ6 input. Appendix 1 relates to inserting lower case letters into the TECO buffer, Appendix 2 relates to the "Selectric" output device, and Appendix 3 describes how to use a type 37 Teletype.


Author[s]: Larry Krakauer

Producing Memos, Using TJ6, TECO and the Type 37 Teletype

September 1968



This memo describes the TJ6 type-justifying program, which can be used in the production of memos such as this one. In addition, sections III and IV of this memo contain related information about TECO and the type 37 teletype, thus gathering most of the information needed for producing write-ups into one location. A sample of input to TJ6 is given in section V, and is in fact the very input used to produce this page of output. The output from TJ6 may be either justified text, with the right margin exactly aligned, as in this introduction, or it may be "filled" text, with the right margin only approximately aligned. Since I do not personally like the appearance of justified text, the remainder of this memo will not be justified, but this decision, of course, rests with each particular user. The sections of this report are: Introduction, Using TJ6, Inserting lower case letters into the TECO buffer, How to use a type 37 teletype, and Sample TJ6 input.


Author[s]: Jean-Yves Gresser

Description and Control of Manipulation by Computer-Controlled Arm

September 1968



The immediate purpose of the research on Intelligent Automata is to have an autonomous machine able to understand uncomplicated commands and to manipulate simple objects without human intervention. This thesis is concerned with the programming of a special output device of the present machine existing at Project MAC: an arm with eight degrees of freedom, made of four identical segments. Classical approaches through hill-climbing and optimal control techniques are discussed. However, a new method is proposed to decompose the problem, in an eight-dimensional space, into a sequence of subproblems in spaces with fewer dimensions. Each subproblem can then be solved with simple analytical geometry. A simulation program, which applies this method, is able to propose several configurations for a given goal (expressed as a point in a five-dimensional space).


Author[s]: Terry Beyer

Recognition of Topological Invariants by Modular Arrays

September 1968



In this paper we study the recognition of topologically invariant properties of patterns by use of finite, rectangular, 2-dimensional, iterative arrays of finite state automata (hereafter called modular arrays). The use of modular arrays as pattern recognition devices has been studied by Atrubin [1] and by Unger [2]. Our aim is to show that modular arrays can not only recognize a large variety of topological invariants, but can do so in times that are almost minimal for a certain class of machines. We begin by describing our model of the modular array as a pattern recognition device. Next, we introduce a fundamental transformation of patterns and prove several interesting properties of the transformation. Finally, we apply the transformation to modular arrays to obtain fast methods of recognizing a wide variety of topological invariants.


Author[s]: Marvin Minsky and Seymour Papert

Linear Separation and Learning

October 1968



This is a reprint of page proofs of Chapter 12 of Perceptrons, M. Minsky and S. Papert, MIT Press 1968, (we hope). It replaces A.I. Memo No. 156, dated March 1968. The perceptron and convergence theorems of Chapter 11 are related to many other procedures that are studied in an extensive and disorderly literature under such titles as LEARNING MACHINES, MODELS OF LEARNING, INFORMATION RETRIEVAL, STATISTICAL DECISION THEORY, PATTERN RECOGNITION and many more. In this chapter we will study a few of these to indicate points of contact with the perceptron and to reveal deep differences. We can give neither a fully rigorous account nor a unifying theory of these topics: this would go as far beyond our knowledge as beyond the scope of this book. The chapter is written more in the spirit of inciting students to research than of offering solutions to problems.


Author[s]: Carl Hewitt

PLANNER: A Language for Manipulating Models and Proving Theorems in a Robot

August 1970 (Revised)



PLANNER is a language for proving theorems and manipulating models in a robot. The language is built out of a number of problem-solving primitives together with a hierarchical control structure. Statements can be asserted and perhaps later withdrawn as the state of the world changes. Conclusions can be drawn from these various changes in state. Goals can be established and dismissed when they are satisfied. The deductive system of PLANNER is subordinate to the hierarchical control structure in order to make the language efficient. The use of a general-purpose matching language makes the deductive system more powerful. The language is being applied to solve problems faced by a robot and as a semantic base for English.


Author[s]: Donald Eastlake III


November 1968



This memo describes two small utility programs that are of assistance in using the ITS 1.4 (see A.I. 161, MAC-M-377) time sharing system. LOCK performs miscellaneous utility functions while PEEK displays, with periodic updates, various aspects of the time sharing system's status.


Author[s]: John Holloway


January 1969



This memo describes a design aid used for the automatic production of wirelists for machine or hand wiring of wire-cards.


Author[s]: Adolfo Guzman

Decomposition of a Visual Scene into Three-Dimensional Bodies

January 1969



The program described here takes as its input a collection of lines, vertices and surfaces describing a scene, and analyzes the scene into a composition of three-dimensional objects. The program does not need to know the form (model, or pattern) of the objects which are likely to appear: the scene is not searched for cubes, wedges, or houses, with an a-priori knowledge of the form of these objects; rather, the program pays attention to configurations of surfaces and lines which would make plausible three-dimensional solids, and in this way "bodies" are identified. Partially occluded bodies are handled correctly. The program is restricted to scenes formed by straight lines, where no shadows or noise are present. It has been tested in rather complicated scenes composed by rather simple objects. Examples are given.


Author[s]: Stewart Nelson and Michael Levitt

Robot Utility Functions

February 1969



This document describes a set of routines which have been provided at both the monitor and user level to facilitate the following operations: 1) Vidisector input; 2) Pot Box input; 3) Arm motion; and 4) Display list generation. This program was developed under contract with Systems Concepts, Incorporated.


Author[s]: Patrick Winston

A Heuristic Program that Constructs Decision Trees

March 1969



Suppose there is a set of objects, {A, B, …, E}, and a set of tests, {T1, T2, …, TN}. When a test is applied to an object, the result is either T or F. Assume the tests may vary in cost and the objects may vary in probability of occurrence. One then hopes that an unknown object may be identified by applying a sequence of tests. The appropriate test at any point in the sequence in general should depend on the results of the previous tests. The problem is to construct a good test scheme using the test costs, the probabilities of occurrence, and a table of test outcomes.
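The problem as stated can be sketched as a small exhaustive search. This is an illustrative modern rendering, not Winston's heuristic program (which avoids full enumeration); the data layout and names are assumptions.

```python
# Sketch: exhaustive construction of a minimum expected-cost
# identification tree. Tests have costs, objects have probabilities of
# occurrence, and table[obj][t] gives each test's T/F outcome.

def best_tree(objects, tests, table, cost, prob):
    """Return (expected_cost, tree); a tree is either an object name
    or a triple (test, true_subtree, false_subtree)."""
    if len(objects) <= 1:
        return 0.0, objects[0] if objects else None
    best = (float('inf'), None)
    mass = sum(prob[o] for o in objects)   # probability of reaching here
    for t in tests:
        yes = [o for o in objects if table[o][t]]
        no = [o for o in objects if not table[o][t]]
        if not yes or not no:
            continue                       # test does not split this set
        remaining = [x for x in tests if x != t]
        cy, ty = best_tree(yes, remaining, table, cost, prob)
        cn, tn = best_tree(no, remaining, table, cost, prob)
        # Every object reaching this node pays for test t.
        c = mass * cost[t] + cy + cn
        if c < best[0]:
            best = (c, (t, ty, tn))
    return best

table = {'A': {'T1': True}, 'B': {'T1': False}}
c, tree = best_tree(['A', 'B'], ['T1'], table,
                    {'T1': 2.0}, {'A': 0.5, 'B': 0.5})
# -> c == 2.0, tree == ('T1', 'A', 'B')
```

The heuristic program's point of departure is precisely that this enumeration blows up combinatorially, so good trees must be found without examining every ordering of tests.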


Author[s]: Richard D. Greenblatt, Donald E. Eastlake III and Stephen D. Crocker

The Greenblatt Chess Program

April 1969



Since mid-November 1966 a chess program has been under development at the Artificial Intelligence Laboratory of Project MAC at M.I.T. This paper describes the state of the program as of August 1967 and gives some of the details of the heuristics and algorithms employed.


Author[s]: C.K. Chow

On Optimum Recognition Error and Reject Tradeoff

April 1969



The performance of a pattern recognition system is characterized by its error and reject tradeoff. This paper describes an optimum rejection rule and presents a general relation between the error and reject probabilities and some simple properties of the tradeoff in the optimum recognition system. The error rate can be directly evaluated from the reject function. Some practical implications of the results are discussed. Examples in normal distributions and uniform distributions are given.
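The optimum rejection rule has a simple standard form that can be illustrated in a few lines. This sketch states the rule as it is usually given (reject when the largest posterior falls below a threshold), not the paper's derivation; the names and sample numbers are assumptions.

```python
# Sketch of Chow's rejection rule: given class posteriors P(class | x),
# reject whenever the best posterior is below 1 - t, so the conditional
# probability of error on the samples actually accepted stays below t.

def chow_decision(posteriors, t):
    """posteriors: dict class -> P(class | x); t: error threshold in [0, 1)."""
    best = max(posteriors, key=posteriors.get)
    if posteriors[best] < 1.0 - t:
        return 'reject'
    return best

print(chow_decision({'cat': 0.55, 'dog': 0.45}, t=0.2))  # -> 'reject'
print(chow_decision({'cat': 0.90, 'dog': 0.10}, t=0.2))  # -> 'cat'
```

Sweeping the threshold t traces out exactly the error and reject tradeoff the abstract refers to: lowering t rejects more samples but drives the error rate on accepted samples down.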


Author[s]: Patrick Winston

Discovering Good Regions for Teitelman's Character Recognition Scheme

May 1969



Warren Teitelman presented a novel scheme for real-time character recognition in his master's thesis, submitted in June of 1963. A rectangle, in which a character is to be drawn, is divided into two parts, one shaded and the other unshaded. Using this division, a computer converts characters into ternary vectors in the following way: if the pen enters the shaded region, a 1 is added to the vector; when the unshaded region is entered, a 0 is appended. Figure 1 illustrates the basic idea he used. Thus, with the shading shown, the character V is converted to 1 0 x 1 0.* A V drawn without lifting the pen would yield a 1 0 1. A t gives 1 0 w 1, and so on. Notice that each character may yield several vectors, depending upon the style of the user as well as the division of the rectangle into shaded and unshaded regions. In order to conserve storage space and reduce search time, the character vectors of Teitelman's scheme are stored in a tree-like structure like that shown in Figure 2.


Author[s]: H.N. Mahabala

Preprocessor for Programs which Recognize Scenes

August 1969



A visual scene is transformed from a very simple and convenient format into an internal format which describes the same scene but is more amenable to complex manipulations. This format is compatible with programs like "SEE". The entire analysis is done using a basic primitive which gives the orientation of a point with respect to a directed line. A novel handling of inaccuracies in the scene is achieved by considering the lines to be stripes of small but non-negligible width. The criterion is very general and easy to modify.


Author[s]: B.K.P. Horn

The Image Dissector "Eyes"

August 1969



This is a collection of data on the construction, operation, and performance of the two image dissector cameras. Some of this data is useful in deciding whether certain shortcomings are significant for a given application, and if so, how to compensate for them.


Author[s]: B.K.P. Horn

The Arithmetic-Statement Pseudo-Ops: .I and .F

August 1969



This is a feature of MIDAS which facilitates the rapid writing and debugging of programs involving much numerical calculation. The statements used are ALGOL-like and easy to interpret.


Author[s]: Joel Moses

The Integration of a Class of Special Functions with the Risch Algorithm

September 1969



We indicate how to extend the Risch algorithm to handle a class of special functions defined in terms of integrals. Most of the integration machinery for this class of functions is similar to the machinery in the algorithm which handles logarithms. A program embodying much of the extended integration algorithm has been written. It was used to check a table of integrals and it succeeded in finding some misprints in it.


Author[s]: Terry Winograd

PROGRAMMER: A Language for Writing Grammars

November 1969



This memo describes PROGRAMMER, a parser for natural language. It consists of a language for writing grammars in the form of programs, and an interpreter which can use these grammars to parse sentences. PROGRAMMER is one part of an integrated system being written for the computer comprehension of natural language. The system will carry on a discourse in English, accepting data statements, answering questions, and carrying out commands. It has an integrated structure, designed to perform parsing, semantic analysis, and deduction concurrently, and to use the results of each to guide the course of the entire process. This interaction is possible because all three aspects are written in the form of programs. This will allow the system to make full use of its "intelligence" (including non-linguistic knowledge about the subject being discussed) in interpreting the meaning of sentences.


Author[s]: Thomas O. Binford

Display Functions in LISP

March 1970



This note describes a system which compiles various forms of LISP lists and arrays into display commands for the DEC 340 display, and provides supporting functions for scaling, for moving elements in a display, for pot control of certain displays, and for adding elements to and removing elements from the display.


Author[s]: Annette Herskovits and Thomas O. Binford

On Boundary Detection

July 1970



A description is given of how edges of prismatic objects appear through a television camera serving as visual input to a computer. Two types of edge-finding predicates are proposed and compared, one linear in intensity, the other non-linear. A statistical analysis of both is carried out, assuming input data distorted by Gaussian noise. Both predicates have been implemented as edge-verifying procedures, i.e., procedures aiming at high sensitivity and limited to looking for edges when approximate locations and directions are given. Both procedures have been tried on actual scenes. Of the two, the non-linear procedure emerged as a satisfactory solution to line-verification because it performs well in spite of surface irregularities.


Author[s]: William Martin

Parsing Key Word Grammars

March 1969



Key word grammars are defined to be the same as context free grammars, except that a production may specify a string of arbitrary symbols. These grammars define languages similar to those used in the programs CARPS and ELIZA. We show a method of implementing the LR(k) parsing algorithm for context free grammars which can be modified slightly in order to parse key word grammars. When this is done, the algorithm can use many of the techniques used in the ELIZA parser. Therefore, the algorithm helps to show the relation between classical parsers and key word parsers.


Author[s]: Marvin Minsky and Seymour Papert

Proposal to ARPA for Research on Artificial Intelligence at MIT, 1970-1971

December 1970



The MIT Artificial Intelligence Project has a variety of goals, all bound together by the search for principles of intelligent behavior. Among our immediate goals are to develop systems with practical applications for: visually controlled automatic manipulation and physical-world problem solving, machine understanding of natural language text and narrative, and advanced applied mathematics. The long-range goals are concerned with simplifying, unifying and extending the techniques of heuristic programming. We expect the results of our work to: make it easier to write and debug large heuristic programs; develop packaged collections of knowledge about many different kinds of things, leading to programs with more resourcefulness, understanding and "common sense"; and identify and sharpen certain principles for programming intelligence.


Author[s]: Marvin Minsky

Form and Content in Computer Science

December 1969



The trouble with computer science today is an obsessive concern with form instead of content. This essay has three parts, suggesting form-content displacements in the Theory of Computation, in Programming Languages, and in Education.


Author[s]: Manuel Blum, Arnold Griffith and Bernard Neumann

A Stability Test for Configurations of Blocks

February 1970



This work is based on notes provided by Manuel Blum, which are paraphrased in section I, and which contain the examples used in the appendix. The main portion of this report was written by Bernard Neumann, who generalized Blum's results to situations involving friction. The program performing the relevant computation, which appears in the appendix, was written by Arnold Griffith, who compiled this memo.


Author[s]: Edwin Roger Banks

Construction of Decision Trees

February 1970



The construction of optimal decision trees for the problem stated within can be accomplished by an exhaustive enumeration. This paper discusses two approaches. The section on heuristic methods gives mostly negative results (e.g., there is no merit factor that will always yield the optimal tests), but most of these methods do give good results. The section entitled "Exhaustive Enumeration Revisited" indicates some powerful shortcuts that can be applied to an exhaustive enumeration, extending the range of this method.


Author[s]: John L. White

An Interim LISP User's Guide

March 1970



The purpose of this memo is to initiate the naïve LISP user into the intricacies of the system at the Project MAC A.I. Lab. It is composed, at this time, of a progress report on the development of the LISP system and a few appendices, but as such it should be nearly adequate to start out a person who understands the basic ideas of LISP and has absorbed a minimal part of the LISP 1.5 Primer.


Author[s]: Richard Orban

Removing Shadows in a Scene

August 1970



This paper describes a LISP function, ERASER, to be used in the process of recognizing objects by a computer. It is a pre-processor to a program called SEE, which finds whole bodies in a scene. A short description of SEE and the required data form for a scene is given. SEE is simulated for five different scenes to demonstrate the effects of shadows on its operation. The function ERASER is explained through its sequence of operations, the heuristics used, and detailed results for test cases. Finally, a "how to use it" section describes the data required to be on the property lists of the vertices in the scene, and the cruft that ERASER puts on these p-lists as it operates.


Author[s]: Michael Beeler

Movie Memo

April 1970



This is intended as a brief explanation of how to use the Kodak movie camera in sync with a display.


Author[s]: Thomas L. Jones

INSIM1: A Computer Model of Simple Forms of Learning

April 1970



INSIM1 is a computer program, written in LISP, which models simple forms of learning analogous to the learning of a human infant during the first few weeks of life, such as learning to suck the thumb and learning to perform elementary hand-eye coordination. The program operates by discovering cause-effect relationships and arranging them in a goal tree. For example, if A causes B, and the program wants B, it will set up A as a subgoal, working backward along the chain of causation until it reaches a subgoal which can be achieved directly, i.e., a muscle pull. Various stages of the simulated infant's learning are described.
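
The backward-chaining step the abstract describes can be sketched in a few lines (a minimal illustration with invented goal names, not INSIM1's LISP code):

```python
def achieve(goal, causes, primitives):
    # causes: dict mapping effect -> cause ("A causes B" is causes[B] == A).
    # primitives: goals achievable directly, e.g. a muscle pull.
    if goal in primitives:
        return [goal]
    if goal in causes:
        # Set up the cause as a subgoal, working backward along the chain.
        return achieve(causes[goal], causes, primitives) + [goal]
    raise ValueError(f"no known way to achieve {goal!r}")

# Hypothetical cause-effect chain for thumb-sucking:
causes = {"thumb-in-mouth": "hand-at-mouth", "hand-at-mouth": "flex-arm"}
plan = achieve("thumb-in-mouth", causes, {"flex-arm"})
```

Here the plan comes out in execution order, with the directly achievable muscle pull first.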


Author[s]: Lewis Wilson

Hypergeometric Functions in MATHLAB

June 1970



This memo describes some of the important properties and manipulations of hypergeometric functions which may be useful in MATHLAB. A convention for representing the function is adopted which is readily adaptable to LISP operation. The most general type of HGF with which we will be concerned is a function of a single variable, x, parameterized by an "A" list of length p and a "B" list of length q. These lists consist, in general, of atoms; the argument is usually x, but may also be a simple function of x.
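
The standard series behind this representation is the generalized hypergeometric function pFq, whose n-th term is a ratio of rising factorials over the A and B lists. A minimal numerical sketch (function names are hypothetical; the memo's LISP convention is only mirrored by the two parameter lists):

```python
import math

def rising(a, n):
    # Pochhammer symbol (a)_n = a (a+1) ... (a+n-1), with (a)_0 = 1.
    r = 1
    for k in range(n):
        r *= a + k
    return r

def hgf_partial_sum(a_list, b_list, x, terms=25):
    # Partial sum of pFq(a_list; b_list; x) = sum_n prod(a_i)_n / prod(b_j)_n * x^n / n!
    total = 0.0
    for n in range(terms):
        num = 1
        for a in a_list:
            num *= rising(a, n)
        den = 1
        for b in b_list:
            den *= rising(b, n)
        total += num / den * x ** n / math.factorial(n)
    return total
```

As a sanity check, 1F0(1;;x) is the geometric series 1/(1-x) and 0F0(;;x) is e^x.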


Author[s]: Terry Winograd

A Simple Algorithm for Self-Replication

May 1970



A recurrent topic of interest in the theory of automata has been the possibility of self-reproducing automata, particularly those which could reproduce globally through the application of an algorithm. In such a device, the "growth" at any point would depend at any time only on the local environment, but the overall effect would be the reproduction of complex structures. This paper gives an algorithm of this type (an extension of an algorithm brought to my attention by Professor Fredkin) and examines the conditions under which such replication will occur. The system on which it operates will be defined, and the main theorem on its operation will follow several lemmas.


Author[s]: Edwin Roger Banks

Cellular Automata

June 1970



This paper presents, in order: 1) a brief description of the results, 2) a definition of cellular automata, 3) a discussion of previous work in this area by von Neumann and Codd, and 4) details of how the prescribed behaviors are achieved (with computer simulations included in the appendices). The results include showing that a two-state cell with five neighbors is sufficient for universality.
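
The "two states, five neighbors" setting means each cell sees itself plus its four orthogonal (von Neumann) neighbors. A minimal update-step sketch follows; the toy rule shown is invented for illustration and is not Banks's actual transition function.

```python
def step(grid, rule):
    # grid: list of lists of 0/1 cells, with toroidal wrap-around.
    # rule: function of (center, north, east, south, west) -> new state,
    # i.e. the five-neighbor, two-state setting of the paper.
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            n = grid[(y - 1) % h][x]
            s = grid[(y + 1) % h][x]
            e = grid[y][(x + 1) % w]
            wv = grid[y][(x - 1) % w]
            new[y][x] = rule(grid[y][x], n, e, s, wv)
    return new

# Hypothetical rule: a cell turns on when exactly two neighbors are on,
# otherwise keeps its state.
toy_rule = lambda c, n, e, s, w: 1 if n + e + s + w == 2 else c
```

Universality in this setting is then a matter of designing one fixed `rule` whose configurations can simulate arbitrary computation.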


Author[s]: Joel Moses

The Function of FUNCTION in LISP, or Why the FUNARG Problem Should be Called the Environment Problem

June 1970



A problem common to many powerful programming languages arises when one has to determine what values to assign to free variables in functions. Different implementational approaches which attempt to solve the problem are considered. The discussion concentrates on LISP implementations and points out why most current LISP systems are not as general as the original LISP 1.5 system. Readers not familiar with LISP should be able to read this paper without difficulty since we have tried to couch the argument in ALGOL-like terms as much as possible.
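
The environment problem is easy to see in any language with nested functions. A minimal Python illustration (not LISP, and not the paper's own examples): under lexical scoping, a free variable is resolved in the environment where the function was *created*, which is the behavior LISP 1.5's FUNCTION device was meant to secure.

```python
def make_adder(n):
    # n is a free variable of add; lexical scoping binds it to the
    # environment of this call to make_adder, not the caller's environment.
    def add(x):
        return x + n
    return add

add5 = make_adder(5)
n = 100            # a different n in the calling environment
result = add5(3)   # uses the captured n = 5, not the caller's n = 100
```

A dynamically scoped implementation, by contrast, would look `n` up in the caller's environment at call time, which is exactly the discrepancy the FUNARG problem names.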

horizontal line

MIT logo Computer Science and Artificial Intelligence Laboratory (CSAIL)
The Stata Center, Building 32 - 32 Vassar Street - Cambridge, MA 02139 - USA
tel:+1-617-253-0073 - publications@csail.mit.edu