CSAIL Digital Archive - Artificial Intelligence
Laboratory Series
AIM-200 Author[s]: Marvin Minsky and Seymour Papert 1968-1969 Progress Report 1970 ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-200.ps ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-200.pdf This report mainly summarizes the Project MAC A.I. Group work between July 1968 and June 1969 but covers some work up to February 1970. The work on computer vision is described in detail. This summary should be read in conjunction with last year’s A.I. Group Report which is included at the end of this Memo.
AIM-201 Author[s]: Michael S. Paterson and Carl E. Hewitt Comparative Schematology November 1970 ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-201.ps ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-201.pdf While we may have the intuitive idea of one programming language having greater power than another, or of some subset of a language being an adequate 'core' for that language, we find when we try to formalize this notion that there is a serious theoretical difficulty. This lies in the fact that even quite rudimentary languages are nevertheless 'universal' in the following sense. If the language allows us to program with simple arithmetic or list-processing functions then any effective control structure can be simulated, traditionally by encoding a Turing machine computation in some way. In particular, a simple language with some basic arithmetic can express programs for any partial recursive function. Such an encoding is usually quite unnatural and impossibly inefficient. Thus, in order to carry on a practical study of the comparative power of different languages we are led to banish explicit functions and deal instead with abstract, uninterpreted programs or schemas. What follows is a brief report on some preliminary exploration in this area.
AIM-202 Author[s]: Michael Beeler Peter Samson's Music Processor, BIG July 1970 ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-202.ps ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-202.pdf The contents of this memo are: commands which create a name, commands which create music, playing commands, plotting commands, general utility commands, debugging commands (in relation to relics of the past, features you might hope to live to see), error comments and a final appendix--MUSCOM.
AIM-203A Author[s]: Gerald Jay Sussman, Terry Winograd and Eugene Charniak Micro-Planner Reference Manual (Update) December 1971 ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-203a.ps ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-203a.pdf This is a manual for the use of the Micro Planner interpreter, which implements a subset of Carl Hewitt's language, PLANNER, and is now available for use by the Artificial Intelligence Group.
AIM-203 Author[s]: Gerald Sussman and Terry Winograd Micro-Planner Reference Manual July 1970 ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-203.ps ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-203.pdf
Micro-Planner is an implementation of a subset of Carl Hewitt's language, PLANNER, by Gerald Jay Sussman, Terry Winograd, and Eugene Charniak on the AI group computer in LISP. Micro-Planner is now a publicly accessible systems program in the AI group's ITS system. The current version of Micro-Planner, embedded in an allocated LISP, may be obtained by incanting ':PLNR' or 'PLNR
AIM-204 Author[s]: Martin Rattner
Extending Guzman's SEE Program
July 1970
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-204.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-204.pdf
Adolfo Guzman's SEE program groups the regions of a two-dimensional scene into bodies, using local evidence in the scene to link regions together. This paper discusses an extended version of the SEE procedure that makes extensive use of evidence in the scene which indicates that two regions should be split into separate bodies.
The new procedure is better in several ways: 1) it correctly analyzes many scenes for which SEE makes mistakes; 2) it can interact with a higher-level object-recognizing program; 3) it can provide alternative solutions on demand.
AIM-205 Author[s]: David S. Johnson
Look-Ahead Strategies in One Person Games with Randomly Generated Game Trees
July 1970
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-205.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-205.pdf
A random method for generating binary trees is presented, and two forms of a class of one-person games called "Tree Solitaire," which have such trees as their game trees, are defined. After a discussion of what "look-ahead strategy" means in terms of such games, a theorem on the most efficient use of unlimited look-ahead is proved, and a collection of strategies involving 0, 1, or 2 look-aheads per move is introduced.
A method involving diagrams is presented for calculating the probability of winning under the various strategies over a restricted class of games. The superiority of one of the 1-look-ahead strategies over the other is proved for games of the first form on this restricted class. For games of the second form of this class, the chances of winning under all the introduced strategies are calculated, and these results are compared among themselves, with the result for the first form of the game, and with the results of Monte Carlo estimation of the chance of winning in a particular case.
An approximate method for evaluating strategies from any given position is introduced, used to explain some of the previous results, and used to suggest modifications of strategies already defined, which are then evaluated by Monte Carlo methods.
Finally, variants on Tree Solitaire are suggested, their general implications are discussed, and one of the most suggestive variants is studied using the methods already developed. The results show a significant reversal from those of the original game, which is explained by a difference between the games in one particular.
AIM-206 Author[s]: Thomas O. Binford
The Vision Laboratory: Part One
July 1970
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-206.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-206.pdf
Some of the facilities for vision programming are discussed in the format of a user's manual.
AIM-207 Author[s]: Carl E. Hewitt
More Comparative Schematology
August 1970
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-207.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-207.pdf
Schemas are programs in which some of the function symbols are uninterpreted. In this paper we compare classes of schemas in which various kinds of constraints are imposed on some of the function symbols. Among the classes of schemas compared are program, recursive, hierarchical, and parallel schemas.
AIM-208 Author[s]: Carl Hewitt
Teaching Procedures in Humans and Robots
September 1970
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-208.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-208.pdf
Analysis of the structure of procedures is central to the foundations of problem solving. In this paper we explore three principal means for teaching procedures: telling, canned loops, and procedural abstraction. The most straightforward way to teach a procedure is by telling how to accomplish it in a high-level goal-oriented language. In the method of canned loops the control structure that is needed for the procedure is supposed and the procedure is deduced. In the method of procedural abstraction the procedure is abstracted from protocols of the procedure on examples.
AIM-210 Author[s]: Jeffrey P. Golden
A User's Guide to the A.I. Group LISCOM LISP Compiler: Interim Report
December 1970
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-210.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-210.pdf
The LISCOM version of the AI group PDP/6 LISP compiler is a descendant of the original Greenblatt-Nelson compiler, and is a friendly sibling to the COMPLR version maintained by Jon L. White. The compiler operates in two passes to translate LISP code into LAP code. The first pass performs a general study of the S-expression function definition which is to be compiled, producing as output a modified S-expression and various tables attached to free variables. The second pass does the actual compilation (generation of assembly code), making use of the transformations performed and the information gathered by the first pass.
The LISCOM version of the compiler is being used as a vehicle for the implementation of "fast arithmetic" in LISP. This work is being done under the auspices of the MATHLAB project of the AI Laboratory. The early stages of the compiler implementation were handled by W. Diffie, and the work has been continued by the present author.
AIM-211 Author[s]: Michael Stewart Paterson
Equivalence Problems in a Model of Computation
August 1967 (Issued November 1970)
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-211.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-211.pdf
A central problem in the mathematical theory of computers and computation is to find a suitable framework for expressing the execution of a computer program by a computer. Within the framework we want to be able to provide answers to such questions as: (1) Does a certain program perform a certain task? (2) Are two programs equivalent, i.e., do they perform the same task? (3) Under what conditions, if at all, will a program fail to halt? (4) How can a given program be simplified, in some sense, or made more efficient? These kinds of questions are customarily answered by experienced intuition for simple programs, supplemented by trial and, often, error for more complicated ones. We should like to replace such methods by a formalizable procedure, capable of being carried out by a computer program.
AIM-213 Author[s]: Gordon Mumma and Stephen Smoliar
The Computer as a Performing Instrument
February 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-213.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-213.pdf
This memo was originally presented as a Project MAC seminar on February 20, 1970. From the outset, the computer has established two potential roles in the musical arts--the one as a sound synthesizer and the other as a composer (or composer's assistant). The most important developments in synthesis have been due to Max Mathews at the Bell Telephone Laboratories [7]. His Music V system endows a computer with most of the capabilities of the standard hardware of electronic music. Its primary advantage is that the user may specify arbitrarily complex sound sequences and achieve them with a minimum of editing effort. Its primary disadvantage is that it is not on-line, so that the user loses that critical sense of immediacy which he, as a composer, may deem valuable.
AIM-214 Author[s]: Peter Samson
Linking Loader for MIDAS
March 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-214.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-214.pdf
This memo was originally printed as MAC Memo 268, January 31, 1966.
The MIDAS Linking Loader is a PDP-6 program to load relocatable-format output from the MIDAS assembler, with facilities to handle symbolic cross-references between independently assembled programs. Although it is arranged primarily to load from DECtape, the loader is able to load paper-tape relocatable programs.
AIM-215 Author[s]: Mark Dowson
How to Get onto the System
April 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-215.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-215.pdf
This memo is intended to get very new users started on the MAC AI system. It presents some simple rituals for making and editing files, getting printouts, making microtapes, and so on. Most of the rituals given are not the only ways of doing something, or even necessarily the simplest, but they do work. Some sources of more detailed documentation are referenced at the end of this memo; read them when you want to know more. If you don't understand something or need any kind of help, ask. No one minds; they all know how you feel.
AIM-215A Author[s]: Mark Dowson
Instant TJ6. How to Get the System to Type Your Papers
September 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-215a.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-215a.pdf
TJ6 is a program that takes disk files of text and arranges them so that they can be printed out neatly on 8 1/2 by 11 paper, lines justified, pages numbered, and so on. So that TJ6 will know what to do, you must insert instructions to it in your file. AI Memo No. 164A fully describes TJ6 and lists all the instructions available. This note describes a useful subset of the instructions to get you started.
AIM-216 Author[s]: Mitchell Wand
Theories, Pre-Theories and Finite State Transformations on Trees
May 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-216.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-216.pdf
The closure of an algebra is defined as a generalization of the semigroup of a finite automaton. Pretheories are defined as a subclass of the closed algebras, and the relationship between pretheories and the algebraic theories of Lawvere [1963] is explored. Finally, pretheories are applied to the characterization problem of finite state transformations on trees, solving an open problem of Thatcher [1969].
AIM-217 Author[s]: W.W. Bledsoe, Robert S. Boyer and William H. Henneman
Computer Proofs of Limit Theorems
June 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-217.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-217.pdf
In this paper we describe some relatively simple changes that have been made to an existing automatic theorem-proving program to enable it to prove efficiently a number of the limit theorems of elementary calculus. These changes include subroutines of a general nature which apply to all areas of analysis, and a special "limit-heuristic" designed for the limit theorems of calculus.
AIM-218 Author[s]: Michael Beeler
Information Theory and the Game of JOTTO
August 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-218.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-218.pdf
The word game JOTTO has attracted the interest of several computer programmers over the years, not to mention countless devoted players. The rules are: 1) Each of 2 players selects a 5-letter English word, or a proper noun, as his "secret word." 2) Play consists of alternate turns of naming a "test word," whose constraints are the same as on the secret words, and the opponent answering how close the test word is to his secret word. 3) Closeness is measured in jots; each jot is a one-to-one letter match, and is independent of which word is the test word. GLASS versus SMILE or SISSY is 2 jots. 4) The first player to guess his opponent's secret word wins.
Constraints on a JOTTO program are as follows. First, it must have a dictionary of all possible words at the outset of each game. (The modification of adding newly experienced words to its dictionary is trivial in practice and not worth the programming effort, especially since one wants to avoid adding word-like typing errors, etc.) The (unacceptable) alternative is to have a letter-deducing algorithm and then a "word-proposer" to order the 5 factorial = 120 combinations (perhaps based on digram frequencies and vowel constraints) once all 5 letters are found. Second, the most use the program can make of the jots from a given test word is to eliminate from its list of "possible secret words of the opponent" all those which do not have that number of jots against that test word. Hence, each test word should be chosen to maximize the expected information derived.
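The jot-counting, candidate-elimination, and information-maximizing strategy described in this abstract can be sketched in a few lines. The function names, the tiny word lists, and the entropy-based scoring of "expected information" are illustrative assumptions, not code from the memo.

```python
import math
from collections import Counter

def jots(word_a, word_b):
    """One-to-one letter matches; symmetric, e.g. GLASS vs. SMILE is 2 jots."""
    a, b = Counter(word_a), Counter(word_b)
    return sum(min(a[c], b[c]) for c in a)

def eliminate(candidates, test_word, answer):
    """Keep only candidate secret words consistent with the reported jot count."""
    return [w for w in candidates if jots(w, test_word) == answer]

def expected_information(test_word, candidates):
    """Entropy (in bits) of the jot answer: the expected information gained."""
    n = len(candidates)
    counts = Counter(jots(test_word, w) for w in candidates)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def best_test_word(dictionary, candidates):
    """Choose the test word that maximizes expected information."""
    return max(dictionary, key=lambda w: expected_information(w, candidates))
```

For example, `jots("glass", "smile")` is 2 (the L and one S), and after testing GLASS and hearing "2 jots" the program would drop every remaining candidate whose jot count against GLASS differs from 2.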
AITR-219 Author[s]: Daniel G. Bobrow
Natural Language Input for a Computer Problem Solving System
September 1964
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-219.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-219.pdf
The STUDENT problem solving system,
programmed in LISP, accepts as input a
comfortable but restricted subset of English
which can express a wide variety of algebra
story problems. STUDENT finds the solution
to a large class of these problems. STUDENT
can utilize a store of global information not
specific to any one problem, and may make
assumptions about the interpretation of
ambiguities in the wording of the problem
being solved. If it uses such information or
makes any assumptions, STUDENT
communicates this fact to the user. The thesis
includes a summary of other English
language question-answering systems. All
these systems, and STUDENT, are evaluated
according to four standard criteria. The
linguistic analysis in STUDENT is a first
approximation to the analytic portion of a
semantic theory of discourse outlined in the
thesis. STUDENT finds the set of kernel
sentences which are the base of the input
discourse, and transforms this sequence of
kernel sentences into a set of simultaneous
equations which form the semantic base of
the STUDENT system. STUDENT then tries to
solve this set of equations for the values of
requested unknowns. If it is successful it
gives the answers in English. If not,
STUDENT asks the user for more information,
and indicates the nature of the desired
information. The STUDENT system is a first
step toward natural language communication
with computers. Further work on the semantic
theory proposed should result in much more
sophisticated systems.
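As a toy illustration of the pipeline this abstract describes (kernel sentence, then equation, then solved unknown), one might write something like the following. The sentence pattern, the function name, and the one-line solver are hypothetical and vastly simpler than Bobrow's actual system.

```python
import re

def parse_problem(sentence):
    """Toy STUDENT-style pipeline: pattern-match one kernel sentence of the
    form 'The sum of VAR and A is B' into the equation VAR + A = B, then
    solve it for VAR.  Illustrative only; STUDENT handled far more forms."""
    m = re.match(r"The sum of (\w+) and (\d+) is (\d+)", sentence)
    if m is None:
        raise ValueError("sentence not in the toy pattern")
    var, a, b = m.group(1), int(m.group(2)), int(m.group(3))
    # var + a = b  ->  var = b - a
    return {var: b - a}
```

Given "The sum of x and 3 is 10", the sketch extracts the equation x + 3 = 10 and returns x = 7; STUDENT's contribution was doing this for a wide variety of algebra story problems, with global knowledge and disambiguation.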
AITR-220 Author[s]: Bertram Raphael
SIR: A Computer Program for Semantic Information Retrieval
June 1964
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-220.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-220.pdf
SIR is a computer system, programmed in the
LISP language, which accepts information
and answers questions expressed in a
restricted form of English. This system
demonstrates what can reasonably be called
an ability to “understand” semantic
information. SIR’s semantic and deductive
ability is based on the construction of an
internal model, which uses word associations
and property lists, for the relational
information normally conveyed in
conversational statements. A format-matching
procedure extracts semantic content from
English sentences. If an input sentence is
declarative, the system adds appropriate
information to the model. If an input sentence
is a question, the system searches the model
until it either finds the answer or determines
why it cannot find the answer. In all cases SIR
reports its conclusions. The system has
some capacity to recognize exceptions to
general rules, resolve certain semantic
ambiguities, and modify its model structure in
order to save computer memory space.
Judging from its conversational ability, SIR is
a first step toward intelligent man-machine
communication. The author proposes a next
step by describing how to construct a more
general system which is less complex and yet
more powerful than SIR. This proposed
system contains a generalized version of the
SIR model, a formal logical system called
SIR1, and a computer program for testing the
truth of SIR1 statements with respect to the
generalized model by using partial proof
procedures in the predicate calculus. The
thesis also describes the formal properties of
SIR1 and how they relate to the logical
structure of SIR.
AITR-221 Author[s]: Warren Teitelman
PILOT: A Step Toward Man-Computer Symbiosis
September 1966
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-221.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-221.pdf
PILOT is a programming system constructed
in LISP. It is designed to facilitate the
development of programs by easing the
familiar sequence: write some code, run the
program, make some changes, write some
more code, run the program again, etc. As a
program becomes more complex, making
these changes becomes harder and harder
because the implications of changes are
harder to anticipate. In the PILOT system, the
computer plays an active role in this
evolutionary process by providing the means
whereby changes can be effected
immediately, and in ways that seem natural to
the user. The user of PILOT feels that he is
giving advice, or making suggestions, to the
computer about the operation of his
programs, and that the system then performs
the work necessary. The PILOT system is
thus an interface between the user and his
program, monitoring both the requests of the user and the operation of his program. The
user may easily modify the PILOT system
itself by giving it advice about its own
operation. This allows him to develop his own
language and to shift gradually onto PILOT the
burden of performing routine but increasingly
complicated tasks. In this way, he can
concentrate on the conceptual difficulties in
the original problem, rather than on the
niggling tasks of editing, rewriting, or adding
to his programs. Two detailed examples are
presented. PILOT is a first step toward
computer systems that will help man to
formulate problems in the same way they now
help him to solve them. Experience with it
supports the claim that such “symbiotic
systems” allow the programmer to attack and
solve more difficult problems.
AITR-222 Author[s]: Lewis Mark Norton
ADEPT: A Heuristic Program for Proving Theorems of Group Theory
September 1966
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-222.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-222.pdf
A computer program, named ADEPT (A
Distinctly Empirical Prover of Theorems), has
been written which proves theorems taken
from the abstract theory of groups. Its
operation is basically heuristic, incorporating
many of the techniques of the human
mathematician in a “natural” way. This
program has proved almost 100 theorems, as
well as serving as a vehicle for testing and
evaluating special-purpose heuristics. A
detailed description of the program is
supplemented by accounts of its performance
on a number of theorems, thus providing
many insights into the particular problems
inherent in the design of a procedure capable
of proving a variety of theorems from this
domain. Suggestions have been formulated
for further efforts along these lines, and
comparisons with related work previously
reported in the literature have been made.
AITR-223 Author[s]: William A. Martin
Symbolic Mathematical Laboratory
January 1967
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-223.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-223.pdf
A large computer program has been
developed to aid applied mathematicians in
the solution of problems in non-numerical
analysis which involve tedious manipulations
of mathematical expressions. The
mathematician uses typed commands and a
light pen to direct the computer in the
application of mathematical transformations;
the intermediate results are displayed in
standard text-book format so that the system
user can decide the next step in the problem
solution. Three problems selected from the
literature have been solved to illustrate the
use of the system. A detailed analysis of the
problems of input, transformation, and display
of mathematical expressions is also
presented.
AITR-224 Author[s]: Adolfo Guzman-Arenas
Some Aspects of Pattern Recognition by Computer
February 1967
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-224.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-224.pdf
A computer may gather a lot of information
from its environment in an optical or graphical
manner. A scene, as seen for instance from a
TV camera or a picture, can be transformed
into a symbolic description of points and lines
or surfaces. This thesis describes several
programs, written in the language CONVERT,
for the analysis of such descriptions in order
to recognize, differentiate and identify desired
objects or classes of objects in the scene.
Examples are given in each case. Although
the recognition might be in terms of
projections of 2-dim and 3-dim objects, we do
not deal with stereoscopic information. One of
our programs (Polybrick) identifies
parallelepipeds in a scene which may contain
partially hidden bodies and non-
parallelepipedic objects. The program TD
works mainly with 2-dimensional figures,
although under certain conditions
successfully identifies 3-dim objects.
Overlapping objects are identified when they
are transparent. A third program, DT, works
with 3-dim and 2-dim objects, and does not
identify objects which are not completely
seen. Important restrictions and suppositions
are: (a) the input is assumed perfect
(noiseless), and in a symbolic format; (b) no
perspective deformation is considered. A
portion of this thesis is devoted to the study of
models (symbolic representations) of the
objects we want to identify; different schemes,
some of them already in use, are discussed.
Focusing our attention on the more general
problem of identification of general objects
when they substantially overlap, we propose
some schemes for their recognition, and also
analyze some problems that are met.
AITR-225 Author[s]: Allen Forte
Syntax-Based Analytic Reading of Musical Scores
April 1967
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-225.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-225.pdf
As part of a larger research project in musical
structure, a program has been written which
“reads” scores encoded in an input language
isomorphic to music notation. The program is
believed to be the first of its kind. From a
small number of parsing rules the program
derives complex configurations, each of which
is associated with a set of reference points in
a numerical representation of a time-
continuum. The logical structure of the
program is such that all and only the defined
classes of events are represented in the
output. Because the basis of the program is
syntactic (in the sense that parsing operations
are performed on formal structures in the
input string), many extensions and
refinements can be made without excessive
difficulty. The program can be applied to any
music which can be represented in the input
language. At present, however, it constitutes
the first stage in the development of a set of
analytic tools for the study of so-called atonal
music, the revolutionary and little understood
music which has exerted a decisive influence
upon contemporary practice of the art. The
program and the approach to automatic data-
structuring may be of interest to linguists and
scholars in other fields concerned with basic
studies of complex structures produced by
human beings.
AITR-226 Author[s]: Joel Moses
Symbolic Integration
September 1967
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-226.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-226.pdf
SIN and SOLDIER are heuristic programs in
LISP which solve symbolic integration
problems. SIN (Symbolic INtegrator) solves
indefinite integration problems at a difficulty
level approaching those in the larger integral
tables. SIN contains several more methods
than are used in the previous symbolic
integration program SAINT, and solves most
of the problems attempted by SAINT in less
than one second. SOLDIER (SOLution of
Ordinary Differential Equations Routine)
solves first order, first degree ordinary
differential equations at the level of a good
college sophomore and at an average of
about five seconds per problem attempted.
The differences in philosophy and operation
between SAINT and SIN are described, and
suggestions for extending the work presented
are made.
AITR-227 Author[s]: Eugene Charniak
CARPS: A Program which Solves Calculus Word Problems
July 1968
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-227.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-227.pdf
A program was written to solve calculus word
problems. The program, CARPS (CALculus
Rate Problem Solver), is restricted to rate
problems. The overall plan of the program is
similar to Bobrow’s STUDENT, the primary
difference being the introduction of
“structures” as the internal model in CARPS.
Structures are stored internally as trees. Each
structure is designed to hold the information
gathered about one object. A description of
CARPS is given by working through two
problems, one in great detail. Also included is
a critical analysis of STUDENT.
AITR-228 Author[s]: Adolfo Guzman-Arenas
Computer Recognition of Three-Dimensional Objects in a Visual Scene
December 1968
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-228.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-228.pdf
Methods are presented (1) to partition or
decompose a visual scene into the bodies
forming it; (2) to position these bodies in
three-dimensional space, by combining two
scenes that make a stereoscopic pair; (3) to
find the regions or zones of a visual scene
that belong to its background; (4) to carry out
the isolation of objects in (1) when the input
has inaccuracies. Running computer
programs implement the methods, and many
examples illustrate their behavior. The input is
a two-dimensional line-drawing of the scene,
assumed to contain three-dimensional
bodies possessing flat faces (polyhedra);
some of them may be partially occluded.
Suggestions are made for extending the work
to curved objects. Some comparisons are
made with human visual perception. The
main conclusion is that it is possible to
separate a picture or scene into the
constituent objects exclusively on the basis of
monocular geometric properties (on the basis
of pure form); in fact, successful methods are
shown.
AITR-229 Author[s]: Wendel Terry Beyer
Recognition of Topological Invariants by Iterative Arrays
October 1969
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-229.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-229.pdf
A study is made of the recognition and
transformation of figures by iterative arrays of
finite state automata. A figure is a finite
rectangular two-dimensional array of
symbols. The iterative arrays considered are
also finite, rectangular, and two-dimensional.
The automata comprising any given array are
called cells and are assumed to be
isomorphic and to operate synchronously with
the state of a cell at time t+1 being a function
of the states of it and its four nearest
neighbors at time t. At time t=0 each cell is
placed in one of a fixed number of initial
states. The pattern of initial states thus
introduced represents the figure to be
processed. The resulting sequence of array
states represents a computation based on
the input figure. If one waits for a specially
designated cell to indicate acceptance or
rejection of the figure, the array is said to be
working on a recognition problem. If one waits
for the array to come to a stable configuration
representing an output figure, the array is said
to be working on a transformation problem.
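The synchronous update rule this abstract describes (the state of a cell at time t+1 is a function of its own state and those of its four nearest neighbors at time t) can be sketched as follows. The list-of-lists representation, the fixed boundary state, and the example transition function are illustrative assumptions, not Beyer's construction.

```python
def step(grid, f, boundary=0):
    """One synchronous update of a finite rectangular iterative array.
    grid: list of lists of cell states; f: transition function taking
    (self, north, south, west, east); cells outside the array are held
    at a fixed boundary state."""
    rows, cols = len(grid), len(grid[0])

    def at(i, j):
        return grid[i][j] if 0 <= i < rows and 0 <= j < cols else boundary

    return [[f(grid[i][j], at(i - 1, j), at(i + 1, j), at(i, j - 1), at(i, j + 1))
             for j in range(cols)] for i in range(rows)]
```

A recognition problem would iterate `step` until a designated cell enters an accept or reject state; a transformation problem would iterate until the grid stops changing and read off the stable configuration as the output figure.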
AITR-230 Author[s]: Arnold K. Griffith
Computer Recognition of Prismatic Solids
August 1970
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-230.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-230.pdf
An investigation is made into the problem of
constructing a model of the appearance to an
optical input device of scenes consisting of
plane-faced geometric solids. The goal is to
study algorithms which find the real straight
edges in the scenes, taking into account
smooth variations in intensity over faces of the
solids, blurring of edges and noise. A general
mathematical analysis is made of optimal
methods for identifying the edge lines in
figures, given a raster of intensities covering
the entire field of view. There is given in
addition a suboptimal statistical decision
procedure, based on the model, for the
identification of a line within a narrow band on
the field of view given an array of intensities
from within the band. A computer program
has been written and extensively tested which
implements this procedure and extracts lines
from real scenes. Other programs were
written which judge the completeness of
extracted sets of lines, and propose and test
for additional lines which had escaped initial
detection. The performance of these
programs is discussed in relation to the
theory derived from the model, and with
regard to their use of global information in
detecting and proposing lines.
Author[s]: Patrick H. Winston
Learning Structural Descriptions from Examples
September 1970
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-231.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-231.pdf
The research here described centers on how
a machine can recognize concepts and learn
concepts to be recognized. Explanations are
found in computer programs that build and
manipulate abstract descriptions of scenes
such as those children construct from toy
blocks. One program uses sample scenes to
create models of simple configurations like
the three-brick arch. Another uses the
resulting models in making identifications.
Throughout, emphasis is given to the
importance of using good descriptions when
exploring how machines can come to
perceive and understand the visual
environment.
Author[s]: Berthold K. P. Horn
Shape from Shading: A Method for Obtaining the Shape of a Smooth Opaque Object from One View
November 1970
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-232.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-232.pdf
A method will be described for finding the
shape of a smooth opaque object from a
monocular image, given a knowledge of the
surface photometry, the position of the
light source, and certain auxiliary information
to resolve ambiguities. This method is
complementary to the use of stereoscopy
which relies on matching up sharp detail and
will fail on smooth objects. Until now the
image processing of single views has been
restricted to objects which can meaningfully
be considered two-dimensional or bounded
by plane surfaces. It is possible to derive a
first-order non-linear partial differential
equation in two unknowns relating the
intensity at the image points to the shape of
the objects. This equation can be solved by
means of an equivalent set of five ordinary
differential equations. A curve traced out by
solving this set of equations for one set of
starting values is called a characteristic strip.
Starting one of these strips from each point on
some initial curve will produce the whole
solution surface. The initial curves can
usually be constructed around so-called
singular points. A number of applications of
this method will be discussed, including one to
lunar topography and one to the scanning
electron microscope. In both of these cases
great simplifications occur in the equations. A
note on polyhedra follows and a quantitative
theory of facial make-up is touched upon. An
implementation of some of these ideas on the
PDP-6 computer with its attached image-
dissector camera at the Artificial Intelligence
Laboratory will be described, and also a
nose-recognition program.
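The core of the method, summarized here in the now-standard notation (our summary, not text from the memo): with surface height z(x, y), gradients p and q, image intensity E(x, y), and reflectivity map R(p, q), the image irradiance equation is the first-order nonlinear PDE referred to above, and the five ordinary differential equations trace out a characteristic strip with respect to a parameter s:

```latex
E(x,y) = R(p,q), \qquad
p = \frac{\partial z}{\partial x}, \quad
q = \frac{\partial z}{\partial y}
% characteristic strip equations (parameter s along the strip):
\dot{x} = R_p, \quad \dot{y} = R_q, \quad
\dot{z} = p\,R_p + q\,R_q, \quad
\dot{p} = E_x, \quad \dot{q} = E_y
```

Integrating one strip from each point of an initial curve, constructed around the singular points, sweeps out the solution surface.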
Author[s]: Edwin Roger Banks
Information Processing and Transmission in Cellular Automata
January 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-233.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-233.pdf
A cellular automaton is an iterative array of
very simple identical information processing
machines called cells. Each cell can
communicate with neighboring cells. At
discrete moments of time the cells can
change from one state to another as a
function of the states of the cell and its
neighbors. Thus on a global basis, the
collection of cells is characterized by some
type of behavior. The goal of this investigation
was to determine just how simple the
individual cells could be while the global
behavior achieved some specified criterion of
complexity – usually the ability to perform a
computation or to reproduce some pattern.
The chief result described in this thesis is that
an array of identical square cells (in two
dimensions), each cell of which
communicates directly with only its four
nearest edge neighbors and each of which
can exist in only two states, can perform any
computation. This computation proceeds in a
straightforward way. A configuration is a
specification of the states of all the cells in
some area of the iterative array. Another result
described in this thesis is the existence of a
self-reproducing configuration in an array of
four-state cells, a reduction of four states from
the previously known eight-state case. The
technique of information processing in
cellular arrays involves the synthesis of some
basic components. Then the desired
behaviors are obtained by the interconnection
of these components. A chapter on
components describes some sets of basic
components. Possible applications of the
results of this investigation, descriptions of
some interesting phenomena (for vanishingly
small cells), and suggestions for further study
are given later.
Author[s]: Lawrence W. Krakauer
Computer Analysis of Visual Properties of Curved Objects
May 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-234.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-234.pdf
A method is presented for the visual analysis
of objects by computer. It is particularly well
suited for opaque objects with smoothly
curved surfaces. The method extracts
information about the object’s surface
properties, including measures of its
specularity, texture, and regularity. It also aids
in determining the object’s shape. The
application of this method to a simple
recognition task – the recognition of fruit – is
discussed. The results on a more complex
smoothly curved object, a human face, are
also considered.
Author[s]: Terry Winograd
Procedures as a Representation for Data in a Computer Program for Understanding Natural Language
January 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-235.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-235.pdf
This paper describes a system for the
computer understanding of English. The
system answers questions, executes
commands, and accepts information in
normal English dialog. It uses semantic
information and context to understand
discourse and to disambiguate sentences. It
combines a complete syntactic analysis of
each sentence with a “heuristic understander”
which uses different kinds of information
about a sentence, other parts of the
discourse, and general information about the
world in deciding what the sentence means. It
is based on the belief that a computer cannot
deal reasonably with language unless it can
“understand” the subject it is discussing. The
program is given a detailed model of the
knowledge needed by a simple robot having
only a hand and an eye. We can give it
instructions to manipulate toy objects,
interrogate it about the scene, and give it
information it will use in deduction. In addition
to knowing the properties of toy objects, the
program has a simple model of its own
mentality. It can remember and discuss its
plans and actions as well as carry them out. It
enters into a dialog with a person, responding
to English sentences with actions and
English replies, and asking for clarification
when its heuristic programs cannot
understand a sentence through use of context
and physical knowledge.
Author[s]: Patrick E. O'Neil
An Inquiry into Algorithmic Complexity
September 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-237.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-237.pdf
This is the first section in a proposed
monograph on algorithmic complexity theory.
Future sections shall include: Information
Theory as a Proof Technique; Algorithms
Using Linear Form Inequalities; Some
Probabilistic Analyses of Algorithms, etc.
Comments, suggestions and corrections are
welcomed. Please let me know what you
think. This is not a limited distribution
document, although I may wish to publish it
later. Anyone who develops an idea based on
this work to a more advanced state is
welcome to publish first. I would be very eager
to see any such result as soon as possible.
Author[s]: Donald E. Eastlake
ITS Status Report
April 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-238.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-238.pdf
ITS is a time-shared operating system designed for the Artificial Intelligence Laboratory DEC PDP-10/PDP-6 installation and tailored to its special requirements. This status report describes the design philosophy behind the ITS system, the hardware and software facilities of the system implemented with this philosophy, and some information on work currently in progress or desirable in the near future.
Author[s]: M. Beeler, R.W. Gosper and R. Schroeppel
HAKMEM
February 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-239.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-239.pdf
Here is some little-known data which may be of interest to computer hackers. The items and examples are so sketchy that to decipher them may require more sincerity and curiosity than a non-hacker can muster. Doubtless, little of this is new, but nowadays it's hard to tell. So we must be content to give you an insight, or save you some cycles, and to welcome further contributions of items, new or used.
Author[s]: Donald Eastlake
LLSIM Reference Manual
December 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-240.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-240.pdf
A program that simulates a Digital Equipment
Corporation PDP-11 computer and many of its
peripherals on the AI Laboratory Time Sharing
System (ITS) is described from a user's
reference point of view. This simulator has a
built-in DDT-like command level which
provides the user not only with the normal
range of DDT facilities but also with several special
debugging features built into the simulator.
The DDT command language was
implemented by Richard M. Stallman while
the simulator was written by the author of this
memo.
Author[s]: Donald E. Eastlake
LLSIM Reference Manual
February 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-240a.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-240a.pdf
A program that simulates a Digital Equipment
Corporation PDP-11 computer and many of its
peripherals on the AI Laboratory Time Sharing
System (ITS) is described from a user's
reference point of view. This simulator has a
built-in DDT-like command level which
provides the user not only with the normal
range of DDT facilities but also with several special
debugging features built into the simulator.
The DDT command language was
implemented by Richard M. Stallman while
the simulator was written by the author of this
memo.
Author[s]: Terry Winograd
An AI Approach to English Morphemic Analysis
February 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-241.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-241.pdf
This paper illustrates an approach toward understanding natural language through the techniques of artificial intelligence. It explores the structure of English word-endings both morpho-graphemically and semantically. It illustrates the use of procedures and semantic representations in relating the broad range of knowledge a language user brings to bear on understanding an utterance.
Author[s]: Stephen W. Smoliar
A Parallel Processing Model of Musical Structures
September 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-242.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-242.pdf
Euterpe is a real-time computer system for
the modeling of musical structures. It provides
a formalism wherein familiar concepts of
musical analysis may be readily expressed.
This is verified by its application to the
analysis of a wide variety of conventional
forms of music: Gregorian chant, Mediaeval
polyphony, Bach counterpoint, and sonata
form. It may be of further assistance in the
real-time experiments in various techniques
of thematic development. Finally, the system
is endowed with sound-synthesis apparatus
with which the user may prepare tapes for
musical performances.
Author[s]: Stephen W. Smoliar
Using the EUTERPE Music System
October 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-243.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-243.pdf
This memo describes the practical
implementation of programs written in the
language EUTERPE. Details of this language
are given in the author's thesis (A Parallel
Processing Model of Musical Structures) and
will not be treated here. We shall only be
concerned with the preparation and
processing of a EUTERPE source program.
Sample programs are given in their entirety in
the thesis or may be read off the author's file
directory (SWS;). Notational conventions are
those of Dowson's guide to the AI lab
timesharing system (AI Memo No 215).
Author[s]: Marvin Minsky and Seymour Papert
Proposal to ARPA for Research on Artificial Intelligence at M.I.T., 1971-1972
October 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-245.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-245.pdf
The activities of the Artificial Intelligence
Laboratory can be viewed under three main
aspects: (1) Artificial Intelligence:
understanding the principles of making
intelligent machines along the lines
discussed in previous proposals, and
elaborated below. (2) Natural Intelligence: As
we understand intelligence better we see
fewer differences between the problems of
understanding human and machine
intelligence.
We have been increasingly able to translate
our ideas about programming machines into
ideas about educating children, and are
currently developing systematic methods in
elementary education. And conversely, we
attribute to our observations and experience in
the latter activities much of what we believe
are important new conceptions of how to
organize knowledge for programs that really
understand. (3) Mathematical theories: this
aspect is relevant not because we often need
to solve specific mathematical problems but
especially because we are firmly committed to
maintaining a mathematical style in the
laboratory. In many centers we have seen
decline and deterioration following apparently
successful "experiments" in artificial
intelligence, because the principles behind the
performance were not understood and hence
the limitations went unseen.
Author[s]: Seymour Papert
A Computer Laboratory for Elementary Schools
October 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-246.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-246.pdf
This is a research project on elementary education whose immediate objective is the development of new methods and materials for teaching in an environment of computers and computer-controlled devices. Longer term objectives are related to theories of cognitive processes and to conjectures about the possibility of producing much larger changes than are usually thought possible in the expected intellectual achievement of children. This proposal is formulated in terms of the self-sufficient immediate objectives.
Author[s]: Seymour Papert
Teaching Children Thinking
October 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-247.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-247.pdf
This paper is dedicated to the hope that someone with power to act will one day see that contemporary research on education is like the following experiment by a nineteenth century engineer who worked to demonstrate that engines were better than horses. This he did by hitching a 1/8 HP motor in parallel with his team of four strong stallions. After a year of statistical research he announced a significant difference. However, it was generally thought that there was a Hawthorne effect on the horses.
Author[s]: Seymour Papert and Cynthia Solomon
Twenty Things To Do With A Computer
June 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-248.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-248.pdf
When people talk about computers in education they do not all have the same image in mind. Some think of using the computer to program the kid; others think of using the kid to program the computer. But most of them have at least this in common: the transaction between the computer and the kid will be some kind of "conversation" or "questions and answers" in words or numbers.
Author[s]: Seymour Papert
Teaching Children to be Mathematicians vs. Teaching About Mathematics
July 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-249.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-249.pdf
Being a mathematician is no more definable as 'knowing' a set of mathematical facts than being a poet is definable as knowing a set of linguistic facts. Some modern math ed reformers will give this statement a too easy assent with the comment: 'Yes, they must understand, not merely know.' But this misses the capital point that being a mathematician, again like being a poet, or a composer or an engineer, means doing, rather than knowing or understanding. This essay is an attempt to explore some ways in which one might be able to put children in a better position to do mathematics rather than merely to learn about it.
Author[s]: Carl Hewitt
Planner Implementation Proposal to ARPA 1972-1973
December 1971
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-250.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-250.pdf
The task objective is the generalization and implementation of the full power of the problem solving formalism PLANNER in the next two years. We will show how problem solving knowledge can be effectively incorporated into the formalism. Several domains will be explored to demonstrate how PLANNER enhances problem solving.
Author[s]: Marvin Minsky
Mini-Robot Proposal to ARPA
January 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-251.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-251.pdf
During the next decade it will become practical to use more and more sophisticated techniques of automation--we shall call this "robotics"--both in established industries and in new areas. The rate at which these techniques become available will depend very much on the way research programs are organized to pursue them. The issues involved are rather large and touch not only on technical matters but also on aspects of national economic policy and attitudes toward world trade positions. The project herein proposed is concerned with the development of two particular aspects of robotics, namely: (1) development of a miniature hand-eye system; and (2) development of remote, ARPA-NETWORK-style operation of robotic systems, in which simple jobs are handled locally while more complex computations are done on a larger scale.
Author[s]: Marvin Minsky and Seymour Papert
Artificial Intelligence Progress Report
January 1, 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-252.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-252.pdf
Research at the Laboratory in vision,
language, and other problems of intelligence.
This report is an attempt to combine a
technical progress report with an
exposition of our point of view about certain
problems in the Theory of Intelligence.
Author[s]: Matthew J. Hillsman, R. Wade Williams and John S. Roe
The Computer-Controlled Oculometer: A Prototype Interactive Eye Movement Tracking System
September 1970
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-253.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-253.pdf
One kind of eye movement tracking device
which has great potential is the digital
computer-controlled Oculometer, an
instrument which non-invasively measures
point of regard of the subject, as well as pupil
diameter and blink occurrence. In conjunction
with a computer-generated display which can
change in real time as a function of the
subject's eye motions, the computer-
controlled Oculometer makes possible a
variety of interactive measurement and control
systems. Practical applications of such
schemes have had to await the development
of an instrument design which does not
inconvenience the subject, and which
conveniently interfaces with a digital computer
(see ref. 1).
This report describes an Oculometer
subsystem and an eye-tracking/control
program designed for use with the PDP-6
computer of the MIT Project MAC Artificial
Intelligence Group. The oculometer electro-
optic subsystem utilizes near-infrared light
reflected specularly off the front surface of the
subject's cornea and diffusely off the retina,
producing a bright pupil with an overriding
corneal highlight. An electro-optic scanning
aperture vidissector within the unit, driven by a
digital eye-tracking algorithm programmed
into the PDP-6 computer, detects and tracks
the centers of the corneal highlight and the
bright pupil to give eye movement
measurements. A computer-controlled,
moving mirror head motion tracker directly
coupled to the vidissector tracker permits the
subject reasonable freedom of movement.
Various applications of this system, which are
suggested by the work reported here, include:
(a) using the eye as a control device, (b)
recording eye fixation and exploring patterns,
(c) game playing, (d) training machines, and
(e) psychophysiological testing and recording.
Author[s]: Seymour Papert and Cynthia Solomon
NIM: A Game-Playing Program
January 1970
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-254.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-254.pdf
This note illustrates some ideas about how to
initiate beginning students into the art of
planning and writing a program complex
enough to be considered a project rather than
an exercise on using the language or simple
programming ideas. The project is to write a
program to play a simple game ("one-pile
NIM" or "21") as invincibly as possible. We
developed the project for a class of
seventh-grade children we taught in 1968-69 at the
Muzzey Junior High School in Lexington,
Massachusetts. This was the longest
programming project these children had
encountered, and our intention was to give
them a model of how to go about working
under these conditions.
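Under one common rule set for "21" (the memo does not spell out the rules here, so these are assumptions for illustration: players alternately take one to three objects from a pile of 21, and whoever takes the last object wins), the invincible strategy the children's programs aimed for is to always leave the opponent a multiple of four:

```python
# Strategy sketch for one-pile NIM ("21") under assumed rules:
# players alternately take 1-3 objects from a pile of 21, and the
# player who takes the last object wins.

def best_move(pile):
    """Return how many objects to take (1-3) from `pile`.

    Taking pile % 4 leaves the opponent a multiple of 4, from which
    every reply lets us restore that invariant; if pile % 4 == 0 we
    are in a losing position and simply take 1.
    """
    return pile % 4 or 1

def play(pile=21):
    """Play both sides with best_move; return the winner (1 or 2)."""
    player = 1
    while pile > 0:
        pile -= best_move(pile)
        if pile == 0:
            return player
        player = 3 - player

print(play(21))  # from 21 the first player can force a win
```

Since 21 is not a multiple of 4, the first player can win by taking one object and thereafter restoring the multiple-of-four invariant.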
Author[s]: Gerald Jay Sussman and Drew Vincent McDermott
Why Conniving is Better than Planning
April 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-255a.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-255a.pdf
This paper is a critique of a computer
programming language, Carl Hewitt's
PLANNER, a formalism designed especially
to cope with the problems that Artificial
Intelligence encounters. It is our contention
that the backtrack control structure that is the
backbone of PLANNER is a hindrance rather
than a help: in particular, automatic
backtracking encourages inefficient
algorithms, conceals what is happening from
the user, and misleads him with primitives
having powerful names whose power is only
superficial. An alternative, a programming
language called CONNIVER which avoids
these problems, is presented from the point
of view of this critique.
Author[s]: Gerald Jay Sussman
Why Conniving is Better than Planning
February 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-255.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-255.pdf
A higher level language derives its great
power from the fact that it tends to impose
structure on the problem-solving behavior of
the user. Besides providing a library of useful
subroutines with a uniform calling sequence,
the author of a higher level language imposes
his theory of problem solving on the user. By
choosing what primitive data structures,
control structures, and operators he presents
to the user, he makes the implementation of
some algorithms more difficult than others,
thus discouraging some techniques and
encouraging others. So, to be "good", a
higher level language must not only simplify
the job of programming, by providing features
which package programming structures
commonly found in the domain for which the
language was designed, it must also do its
best to discourage the use of structures which
lead to "bad" algorithms.
Author[s]: Michael J. Fischer
Efficiency of Equivalence Algorithms
April 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-256.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-256.pdf
This paper was first presented at the Symposium on Complexity of Computer Computations, IBM Thomas J. Watson Research Center, Yorktown Heights, New York, on March 22, 1972.
The equivalence problem is to determine the finest partition on a set that is consistent with a sequence of assertions of the form "x == y". A strategy for doing this on a computer processes the assertions serially, maintaining always in storage a representation of the partition defined by the assertions so far encountered. To process the command "x == y", the equivalence classes of x and y are determined. If they are the same, nothing further is done; otherwise the two classes are merged together.
Galler and Fischer (1964A) give an algorithm for solving this problem based on tree structures, and it also appears in Knuth (1968A). The items in each equivalent class are arranged in a tree, and each item except for the root contains a pointer to its father. The root contains a flag indicating that it is a root, and it may also contain other information relevant to the equivalence class as a whole.
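The tree strategy just described can be sketched as follows; this is the unweighted Galler-Fischer form, without the weighting and path-compression refinements whose efficiency the paper analyzes.

```python
# Sketch of the Galler-Fischer tree representation: each equivalence
# class is a tree, each non-root item points to its father, and the
# root stands for the class as a whole.

parent = {}  # item -> father; an item absent from the dict is a root

def find(x):
    """Return the root of x's equivalence class."""
    while parent.get(x, x) != x:
        x = parent[x]
    return x

def assert_equiv(x, y):
    """Process the assertion 'x == y'.

    If x and y already share a root, nothing further is done;
    otherwise the two classes are merged by hanging one root
    below the other.
    """
    rx, ry = find(x), find(y)
    if rx != ry:
        parent[rx] = ry

assert_equiv('a', 'b')
assert_equiv('c', 'd')
assert_equiv('b', 'c')
print(find('a') == find('d'))  # True: all four items are now equivalent
```

Processing each "x == y" assertion serially this way maintains, at all times, a representation of the finest consistent partition.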
Author[s]: Rich Schroeppel
A Two Counter Machine Cannot Calculate 2^N
May 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-257.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-257.pdf
This note proves that a two counter machine cannot calculate 2^N.
Author[s]: Carl Hewitt
Description and Theoretical Analysis (Using Schemata) of Planner: A Language for Proving Theorems and Manipulating Models in a Robot
April 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-258.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-258.pdf
Planner is a formalism for proving theorems
and manipulating models in a robot. The
formalism is built out of a number of problem-
solving primitives together with a hierarchical
multiprocess backtrack control structure.
Statements can be asserted and perhaps
later withdrawn as the state of the world
changes. Under BACKTRACK control
structure, the hierarchy of activations of
functions previously executed is maintained
so that it is possible to revert to any previous
state. Thus programs can easily manipulate
elaborate hypothetical tentative states. In
addition PLANNER uses multiprocessing so
that there can be multiple loci of changes in
state. Goals can be established and
dismissed when they are satisfied. The
deductive system of PLANNER is subordinate
to the hierarchical control structure in order to
maintain the desired degree of control. The
use of a general-purpose matching language
as the basis of the deductive system
increases the flexibility of the system. Instead
of explicitly naming procedures in calls,
procedures can be invoked implicitly by
patterns of what the procedure is supposed to
accomplish. The language is being applied to
solve problems faced by a robot, to write
special purpose routines from goal oriented
language, to express and prove properties of
procedures, to abstract procedures from
protocols of their actions, and as a semantic
base for English.
Author[s]: Drew V. McDermott and Gerald Jay Sussman
The Conniver Reference Manual
May 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-259.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-259.pdf
This manual is intended to be a guide to the philosophy and use of the programming language CONNIVER, which is "complete," and running at the AI Lab now. It assumes good knowledge of LISP, but no knowledge of Micro-Planner, in whose implementation many design decisions were made that are not to be expected to have consequences in CONNIVER. Those not familiar with LISP should consult Weissman's (1967) Primer, the LISP 1.5 Programmer's Manual (McCarthy et al., 1962), or Jon L. White's (1970) and others' (PDP-6, 1967) excellent memos here at our own lab.
Author[s]: Drew V. McDermott and Gerald Jay Sussman
The Conniver Reference Manual
May 1972 (Updated January 1974)
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-259a.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-259a.pdf
This manual is an introduction and reference to the latest version of the Conniver programming language, an AI language with general control and data-base structures.
Author[s]: Donald E. Eastlake
Lock
June 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-260.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-260.pdf
LOCK is a miscellaneous utility program operating under the ITS system. It allows the user to easily and conveniently perform a variety of infrequently required tasks. Most of these relate to console input-output or the operation of the ITS system.
Author[s]: Donald E. Eastlake
PEEK
February 1974
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-261a.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-261a.pdf
PEEK is a utility program designed to operate under the ITS time sharing system. It enables a user to monitor a variety of aspects of the time sharing system by providing, to the user, various periodically updated displays.
Author[s]: Donald E. Eastlake
PEEK
May 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-261.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-261.pdf
PEEK is a utility program designed to operate under the ITS time sharing system. It enables a user to monitor a variety of aspects of the time sharing system by providing periodically updated display output or periodic printer output to teletype or line printer.
Just what information is being presented to the user is controlled by PEEK's information mode. The available modes are listed in section 3 below. Section 5 describes how PEEK determines which device to output on. Section 2 describes, in general, how the user can input commands to PEEK.
Author[s]: Mitchell Wand
A Concrete Approach to Abstract Recursive Definitions
June 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-262.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-262.pdf
We introduce a non-categorical alternative to Wagner's Abstract Recursive Definitions [Wg-1,2] using a generalization of the notion of clone called a u-clone. Our more concrete approach yields two new theorems: (1) the free u-clone generated by a ranked set is isomorphic to the set of loop-representable flow diagrams with function symbols in the set; (2) for every element of a u-clone there is an expression analogous to a regular expression. Several well-known theorems of language and automata theory are drawn as special cases of this theorem.
Author[s]: Yoshiaki Shirai
A Heterarchical Program for Recognition of Polyhedra
June 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-263.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-263.pdf
Recognition of polyhedra by a heterarchical program is presented. The program is based on the strategy of recognizing objects step by step, at each time making use of the previous results. At each stage, the most obvious and simple assumption is made and the assumption is tested. To find a line segment, a range of search is proposed. Once a line segment is found, more of the line is determined by tracking along it. Whenever a new fact is found, the program tries to reinterpret the scene taking the obtained information into consideration. Results of the experiment using an image dissector are satisfactory for scenes containing a few blocks and wedges. Some limitations of the present program and proposals for future development are described.
Author[s]: Jeanne Bamberger
Developing a Musical Ear: A New Experiment
July 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-264.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-264.pdf
I would like to report on some ideas we have
been developing at M.I.T. for self-paced,
independent music study. The aim of our
approach is to nurture in students that
enigmatic quality called "musical"-- be it a
"musical ear" or an individual's capacity to
give a "musical performance". While all of us
cherish these qualities, rarely do we come to
grips with them directly in teaching. More often
we rely on our magical or mystical faith in the
inspiration of music, itself, and its great
artists, to do the teaching. And for some
(maybe ultimately all) this is the best course.
But what about the others to whom we teach
only the techniques of playing instruments or
some "facts" about music--its forms, its
history and its apparent elements? How often
do we have or take the time to examine the
assumptions underlying these "facts" we
teach, or to question the relation between
what we teach and what we do as musicians?
Author[s]: Garry S. Meyer
Infants in Children Stories - Toward a Model of Natural Language Comprehension
August 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-265.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-265.pdf
How can we construct a program that will
understand stories that children would
understand? By understand we mean the
ability to answer questions about the story.
We are interested here in understanding
natural language in a very broad area. In
particular how does one understand stories
about infants? We propose a system which
answers such questions by relating the story
to background real world knowledge. We
make use of the general model proposed by
Eugene Charniak in his Ph.D. thesis
(Charniak 72). The model sets up
expectations which can be used to help
answer questions about the story. There is a
set of routines called BASE-routines that
correspond to our "real world knowledge" and
routines that are "put-in" which are called
DEMONs that correspond to contextual
information. Context can help to assign a
particular meaning to an ambiguous word, or
pronoun.
Author[s]: Eugene Charniak
Toward A Model Of Children's Story Comprehension
December 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-266.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-266.pdf
How does a person answer questions about
children's stories? For example, consider
'Janet wanted Jack's paints. She looked at the
picture he was painting and said 'Those
paints make your picture look funny.' The
question to ask is 'Why did Janet say that?'.
We propose a model which answers such
questions by relating the story to background
real world knowledge. The model tries to
generate and answer important questions
about the story as it goes along. Within this
model we examine two problems: how to organize this
real world knowledge, and how it enters into
more traditional linguistic questions such as
deciding noun phrase reference.
Author[s]: Marvin Minsky
Manipulator Design Vignettes
October 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-267.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-267.pdf
This memo is about mechanical arms. The
literature on robotics seems to be deficient in
such discussions, perhaps because not
enough sharp theoretical problems have
been formulated to attract interest. I’m sure
many of these matters have been discussed
in other literatures – prosthetics, orthopedics,
mechanical engineering, etc., and references
to such discussions would be welcome. We
raise these issues in the context of designing
the “mini-robot” system in the A.I. Laboratory
in 1972-1973. But we would like to attract the
interest of the general heuristic programming
community to such questions.
Author[s]: Marvin Minsky
Manipulator Design Vignettes
October 1981
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-267a.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-267a.pdf
This memo is about mechanical arms. The literature on robotics seems to be deficient in such discussions, perhaps because not enough sharp theoretical problems have been formulated to attract interest. I'm sure many of these matters have been discussed in other literatures -- prosthetics, orthopedics, mechanical engineering, etc., and references to such discussions would be welcome. We raise these issues in the context of designing the "mini-robot" system in the A.I. Laboratory in 1972-1973. But we would like to attract the interest of the general heuristic programming community to such questions.
Author[s]: Arthur J. Nevins
A Human Oriented Logic for Automatic Theorem Proving
October 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-268.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-268.pdf
The automation of first order logic has received comparatively little attention from researchers intent upon synthesizing the theorem proving mechanism used by humans. The dominant point of view [15], [18] has been that theorem proving on the computer should be oriented to the capabilities of the computer rather than to the human mind, and therefore one should not be afraid to provide the computer with a logic that humans might find strange and uncomfortable. The preeminence of this point of view is not hard to explain, since until now the most successful theorem proving programs have been machine oriented. Nevertheless, there are at least two reasons for being dissatisfied with the machine oriented approach. First, a mathematician often is interested more in understanding the proof of a proposition than in being told that the proposition is true, for the insight gained from an understanding of the proof can lead to the proof of additional propositions and the development of new mathematical concepts. However, machine oriented proofs can appear very unnatural to a human mathematician, thereby providing him with little if any insight. Second, the machine oriented approach has failed to produce a computer program which even comes close to equaling a good human mathematician in theorem proving ability; this leads one to suspect that perhaps the logic being supplied to the machine is not as efficient as the logic used by humans. The approach taken in this paper has been to develop a theorem proving program as a vehicle for gaining a better understanding of how humans actually prove theorems. The computer program which has emerged from this study is based upon a logic which appears more "natural" to a human (i.e., more human oriented). While the program is not yet the equal of a top-flight human mathematician, it already has given indication (evidence of which is presented in section 9) that it can outperform the best machine oriented theorem provers.
Author[s]: Marvin Minsky
Proposal to ARPA for Continued Research on A.I.
October 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-269.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-269.pdf
The Artificial Intelligence Laboratory proposes
to continue its work on a group of closely
interconnected projects, all bearing on
questions about how to make computers able
to use more sophisticated kinds of knowledge
to solve difficult problems. This proposal
explains what we expect to come of this work,
and why it seems to us the most profitable
direction for research at this time. The core of
this proposal is about well-defined specific
tasks such as extending the computer's ability
to understand information presented as visual
scenes--or in natural, human language.
Although these specific goals are important
enough in themselves, we see their pursuit
also as tightly bound to the development of a
general theory of the computations needed to
produce intelligent processes. Obviously, a
certain amount of theory is needed to achieve
progress in this and we maintain that the
steps toward a deep theory in this domain
must include thorough analysis of very
specific phenomena. Our confidence in this
strategy is based both on past successes
and on our current theory of knowledge
structure. These bases are discussed both
below and in the appendices.
Author[s]: Gerald Jay Sussman
Teaching of Procedures-Progress Report
October 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-270.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-270.pdf
The idea of building a programmer is very seductive in that it holds the promise of massive bootstrapping and thus ties in with many ideas about learning and teaching. I will avoid going into those issues here. It is necessary, however, to explain what I am not working on. I am not interested in developing new and better languages for expressing algorithms. When FORTRAN was invented, it was touted as an automatic programmer, and indeed it was, as it relieved the user of complete specification of the details of implementation. Newer programming languages are just elaborations (usually better) of that basic idea. I am, however, interested in the problem of implementation of a partially specified algorithm rather than a complete algorithm and a partially specified implementation. This problem is truly in the domain of Artificial Intelligence because the system which "solves" this problem needs a great deal of knowledge about the problem domain for which the algorithm is being constructed in order to "reasonably" complete the specification. Indeed, a programmer is not told exactly the algorithm to be implemented; he is told the problem which his program is expected to solve.
Author[s]: David L. Waltz
Generating Semantic Descriptions From Drawings of Scenes With Shadows
November 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-271.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-271.pdf
The research reported here concerns the
principles used to automatically generate
three-dimensional representations from line
drawings of scenes. The computer programs
involved look at scenes which consist of
polyhedra and which may contain shadows
and various kinds of coincidentally aligned
scene features. Each generated description
includes information about edge shape
(convex, concave, occluding, shadow, etc.),
about the type of illumination for each region
(illuminated, projected shadow, or oriented
away from the light source), and about the
spatial orientation of regions. The methods
used are based on the labeling schemes of
Huffman and Clowes; this research provides
a considerable extension to their work and
also gives theoretical explanations to the
heuristic scene analysis work of Guzman,
Winston, and others.
Author[s]: Michael Speciner
How the GAS Program Works with a Note on Simulating Turtles with Touch Sensors
December 1972
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-272.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-272.pdf
The GAS program is a display simulation of a
2 dimensional ideal gas. Barriers, or walls,
are line segments, and molecules, alias
particles or balls, are circles. Collisions occur
between balls and other balls as well as
between balls and walls. All collisions are
elastic. Global gravitational, electric, and
magnetic fields can be imposed to act on the
particles. The following is a description of
some of the inner workings of the program.
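The memo itself contains no code; purely to illustrate the kind of computation such a gas simulator performs, here is a minimal sketch of an elastic collision between two equal-mass discs in 2-D. The function name and the equal-mass simplification are assumptions of this sketch, not details from the memo.

```python
import math

def collide(p1, v1, p2, v2):
    """Elastic collision of two equal-mass discs in 2-D (sketch).

    For equal masses, an elastic collision exchanges the velocity
    components along the line of centers; the tangential
    components are unchanged.
    """
    # Unit vector along the line of centers.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    nx, ny = dx / d, dy / d
    # Normal (line-of-centers) component of each velocity.
    a1 = v1[0] * nx + v1[1] * ny
    a2 = v2[0] * nx + v2[1] * ny
    # Swap the normal components.
    w1 = (v1[0] + (a2 - a1) * nx, v1[1] + (a2 - a1) * ny)
    w2 = (v2[0] + (a1 - a2) * nx, v2[1] + (a1 - a2) * ny)
    return w1, w2
```

A quick sanity check: in a head-on collision of equal masses the velocities simply swap.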
Author[s]: David Silver
The Little Robot System
January 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-273.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-273.pdf
The Little Robot System provides for the I.T.S. user a medium-size, four-degree-of-freedom, six-axis robot which is controlled by the PDP-6 computer through the programming language Lisp. The robot includes eight force feedback channels which, when interpreted by the PDP-6, are read by Lisp as the signed force applied to the end of the fingers. The first six forces are the X, Y, and Z forces and the torques around X, Y, and Z. The other two forces are the grippers and the vice grippers. The three X, Y, and Z forces and three torques are computed from six numbers read in from six L.V.D.T.s (Linear Variable Differential Transformers) arranged three in the vertical and three in the horizontal plane within a stress-strain, spring-loaded wrist. The grip is read in from a strain gauge mounted on the stationary reference finger. The relative position between the motor shaft and the vice shaft is determined by means of two potentiometers to measure the vice force. The two shafts are coupled by a spring.
Author[s]: Marvin Minsky and Seymour Papert
Proposal to ARPA for Continuation of Micro-Automation Development
January 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-274.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-274.pdf
This proposal discusses practical aspects of our project to produce a replicable research tool for development of real-world computer-controlled hand-eye systems. If this proposal is read out of context, it will not seem very sophisticated because it is concerned mainly with the practical aspects of putting together an engineering system. The theoretical and conceptual context is described more thoroughly in the memo, supplementary to our main ARPA contract proposal, that describes in detail robotics research at the MIT A.I. Laboratory.
Author[s]: Martin Brooks and Jerrold Ginsparg
Differential Perceptrons
January 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-275.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-275.pdf
As originally proposed, perceptrons were machines that scanned a discrete retina and combined the data gathered in a linear fashion to make decisions about the figure presented on the retina. This paper considers differential perceptrons, which view a continuous retina. Thus, instead of summing the results of predicates, we must now integrate. This involves setting up a predicate space which transforms the typical perceptron sum, Σ α(p)p(f), into the integral ∫ α(p)p(f) dp, where f is the figure on the retina; i.e., in the differential case, the figure is viewed as a function on the predicate space. We show that differential perceptrons are equivalent to perceptrons on the class of figures that fit exactly onto a sufficiently small square grid. By investigating predicates of various geometric transformations, we discover that translation and symmetry can be computed in finite order using finite coefficients in both continuous and discrete cases. We also note that in the perceptron scheme, combining data linearly implies the ability to combine data in a polynomial fashion.
Author[s]: Michael Beeler
The Making of the Film, SOLAR CORONA
February 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-276.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-276.pdf
The film SOLAR CORONA was made from data taken from August 14, 1969 through May 7, 1970, by OSO-VI, one of the Orbiting Solar Observatories. One of the experiments on board scanned across and up and down the image of the sun, as we read a printed page. Each line of the scan was broken up into several distinct measurement points, similar to our eyes fixating as we read a line of text.
Author[s]: Vaughan R. Pratt
A Linguistics Oriented Programming Language
February 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-277.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-277.pdf
A programming language for natural language processing programs is described. Examples of the output of programs written using it are given. The reasons for various design decisions are discussed. An actual session with the system is presented, in which a small fragment of an English-to-French translator is developed. Some of the limitations of the system are discussed, along with plans for further development.
Author[s]: Robert C. Moore
D-SCRIPT: A Computational Theory of Descriptions
February 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-278.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-278.pdf
This paper describes D-SCRIPT, a language for representing knowledge in artificial intelligence programs. D-SCRIPT contains a powerful formalism for descriptions, which permits the representation of statements that are problematical for other systems. Particular attention is paid to problems of opaque contexts, time contexts, and knowledge about knowledge. The design of a theorem prover for this language is also considered.
Author[s]: Ira Goldstein
Pretty-Printing, Converting List to Linear Structure
February 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-279.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-279.pdf
Pretty-printing is the conversion of list structure to a readable format. This paper outlines the computational problems encountered in such a task and documents the current algorithm in use.
Author[s]: Ira Goldstein
Elementary Geometry Theorem Proving
April 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-280.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-280.pdf
An elementary theorem prover for a small part of plane Euclidean geometry is presented. The purpose is to illustrate important problem solving concepts that naturally arise in building procedural models for mathematics.
Author[s]: Patrick H. Winston (Editor)
Progress in Vision and Robotics
May 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-281.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-281.pdf
The Vision Flashes are informal working
papers intended primarily to stimulate internal
interaction among participants in the A.I.
Laboratory’s Vision and Robotics group. Many
of them report highly tentative conclusions or
incomplete work. Others deal with highly
detailed accounts of local equipment and
programs that lack general interest. Still
others are of great importance, but lack the
polish and elaborate attention to proper
referencing that characterizes the more formal
literature. Nevertheless, the Vision Flashes
collectively represent the only documentation
of an important fraction of the work done in
machine vision and robotics. The purpose of
this report is to make the findings more
readily available, but since they are not
revised as presented here, readers should
keep in mind the original purpose of the
papers!
Author[s]: Andee Rubin
Grammar for the People: Flowcharts of SHRDLU's Grammar
March 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-282.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-282.pdf
The grammar which SHRDLU uses to parse sentences is outlined in a series of flowcharts which attempt to modularize and illuminate its structure. In addition, a short discussion of systemic grammar is included.
Author[s]: Scott E. Fahlman
A Planning System for Robot Construction Tasks
May 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-283.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-283.pdf
This paper describes BUILD, a computer
program which generates plans for building
specified structures out of simple objects
such as toy blocks. A powerful heuristic
control structure enables BUILD to use a
number of sophisticated construction
techniques in its plans. Among these are the
incorporation of pre-existing structure into the
final design, pre-assembly of movable sub-
structures on the table, and use of the extra
blocks as temporary supports and
counterweights in the course of construction.
BUILD does its planning in a modeled 3-
space in which blocks of various shapes and
sizes can be represented in any orientation
and location. The modeling system can
maintain several world models at once, and
contains modules for displaying states,
testing them for inter-object contact and
collision, and for checking the stability of
complex structures involving frictional forces.
Various alternative approaches are
discussed, and suggestions are included for
the extension of BUILD-like systems to other
domains. Also discussed are the merits of
BUILD's implementation language,
CONNIVER, for this type of problem solving.
Author[s]: Marvin Minsky and Seymour Papert
Proposal to ARPA for Continued Research on A.I. for 1973
June 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-284.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-284.pdf
The Artificial Intelligence Laboratory proposes
to continue its work on a group of closely
interconnected projects, all bearing on
questions about how to make computers able
to use more sophisticated kinds of knowledge
to solve difficult problems. This proposal
explains what we expect to come of this work,
and why it seems to us the most profitable
direction for research at this time. The core of
this proposal is about well-defined specific
tasks such as extending the computer's
ability to understand information presented as
visual scenes, or in natural, human language.
Although these specific goals are important
enough in themselves, we see their pursuit
also as tightly bound to the development of a
general theory of the computations needed to
produce intelligent processes. Obviously, a
certain amount of theory is needed to achieve
progress in this and we maintain that the
steps toward a comprehensive theory in this
domain must include thorough analysis of
very specific phenomena. Our confidence in
this strategy is based both on past successes
and on our current theory of knowledge
structure. Our proposed solutions are still
evolving, but they all seem to revolve around
new methods of programming and new ways
to represent knowledge about programming.
Author[s]: Berthold K.P. Horn
The Binford-Horn LINE-FINDER
December 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-285.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-285.pdf
This paper briefly describes the processing
performed in the course of producing a line
drawing from an image obtained through an
image dissector camera. The edge-marking
phase uses a non-linear parallel line-follower.
Complicated statistical measures are not
used. The line and vertex generating phases
use a number of heuristics to guide the
transition from edge-fragments to cleaned up
line-drawing. Higher-level understanding of
the blocks-world is not used. Sample line-
drawings produced by the program are
included.
Author[s]: Gerald J. Sussman
The FINDSPACE Problem
March 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-286.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-286.pdf
The FINDSPACE problem is that of establishing a volume in space where an object of specified dimensions will fit. The problem seems to have two subproblems: the hypothesis generation problem of finding a likely spot to try, and the verification problem of testing that spot for occupancy by other objects. This paper treats primarily the verification problem.
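The memo's own methods are not reproduced here; as a rough sketch of the verification subproblem under one simple representation, the test below checks a candidate volume against axis-aligned bounding boxes of already-placed objects. The box representation and all names are assumptions of this sketch, not the memo's.

```python
def boxes_overlap(a, b):
    """True if two axis-aligned boxes overlap.

    Each box is a pair ((xmin, ymin, zmin), (xmax, ymax, zmax));
    boxes that merely touch on a face do not count as overlapping.
    """
    (alo, ahi), (blo, bhi) = a, b
    return all(alo[i] < bhi[i] and blo[i] < ahi[i] for i in range(3))

def fits(candidate, occupied):
    """Verification step: the candidate volume is free iff it
    overlaps none of the already-placed boxes."""
    return not any(boxes_overlap(candidate, box) for box in occupied)
```

The hypothesis-generation subproblem (choosing which candidate volumes to try) is the harder part and is not addressed by this check.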
Author[s]: Tim Finin
Finding the Skeleton of a Brick
March 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-287.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-287.pdf
TC-SKELETON's duty is to help find the
dimensions of brick shaped objects by
searching for sets of three complete edges,
one for each dimension. The program was
originally written by Patrick Winston, and then
was refined and improved by Tim Finin.
Author[s]: Daniel W. Corwin
Visual Position Extraction using Stereo Eye Systems with a Relative Rotational Motion Capability
March 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-289.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-289.pdf
This paper discusses the problem of context-
free position estimation using a stereo vision
system with movable eyes. Exact and
approximate equations are developed linking
position to measurable quantities of the
image-space, and an algorithm for finding
these quantities is suggested in rough form.
An estimate of errors and resolution limits is
provided.
Author[s]: Michael Beeler
Paterson's Worm
June 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-290.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-290.pdf
A description of a mathematical idealization of the feeding pattern of a kind of worm is given.
Author[s]: Drew V. McDermott
Assimilation of New Information by a Natural Language Understanding System
February 1974
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-291.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-291.pdf
This work describes a program, called
TOPLE, which uses a procedural model of the
world to understand simple declarative
sentences. It accepts sentences in a modified
predicate calculus symbolism, and uses
plausible reasoning to visualize scenes,
resolve ambiguous pronoun and noun phrase
references, explain events, and make
conditional predications. Because it does
plausible deduction, with tentative
conclusions, it must contain a formalism for
describing its reasons for its conclusions and
what the alternatives are. When an
inconsistency is detected in its world model, it
uses its recorded information to resolve it,
one way or another. It uses simulation
techniques to make deductions about
creatures' motivation and behavior, assuming
they are goal-directed beings like itself.
Author[s]: Donald E. Eastlake
U.T.: Telnet Reference Manual
April 1974
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-292.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-292.pdf
UT is a user telnet program designed to run under the ITS time sharing system. It implements the relatively recent ARPA network negotiating protocol for telnet connections.
Author[s]: Ira P. Goldstein
Understanding Simple Picture Programs
April 1974
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-294.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-294.pdf
What are the characteristics of the process by which an intent is transformed into a plan and then a program? How is a program debugged? This paper analyzes these questions in the context of understanding simple turtle programs. To understand and debug a program, a description of its intent is required. For turtle programs, this is a model of the desired geometric picture; a picture language is provided for this purpose. Annotation is necessary for documenting the performance of a program in such a way that the system can examine the procedure's behavior as well as consider hypothetical lines of development due to tentative debugging edits. A descriptive framework representing both causality and teleology is developed. To understand the relation between program and model, the plan must be known. The plan is a description of the methodology for accomplishing the model. Concepts are explicated for translating the global intent of a declarative model into the local imperative code of a program. Given the plan, model and program, the system can interpret the picture and recognize inconsistencies. The description of the discrepancies between the picture actually produced by the program and the intended scene is the input to a debugging system. Repair of the program is based on a combination of general debugging techniques and specific fixing knowledge associated with the geometric model primitives. In both analyzing the plan and repairing the bugs, the system exhibits an interesting style of analysis. It is capable of debugging itself and reformulating its analysis of a plan or bug in response to self-criticism. In this fashion, it can qualitatively reformulate its theory of the program or error to account for surprises or anomalies.
Author[s]: Berthold K.P. Horn
On Lightness
October 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-295.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-295.pdf
The intensity at a point in an image is the product of the reflectance at the corresponding object point and the intensity of illumination at that point. We are able to perceive lightness, a quantity closely correlated with reflectance. How then do we eliminate the component due to illumination from the image on our retina? The two components of image intensity differ in their spatial distribution. A method is presented here which takes advantage of this to compute lightness from image intensity in a layered, parallel fashion.
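The abstract's key observation -- illumination varies smoothly while reflectance changes in sharp steps -- can be turned into a toy 1-D computation: take logarithms (turning the product into a sum), discard small spatial differences as illumination, and reintegrate. This is only a sketch in the spirit of the method; the memo's actual scheme is two-dimensional and parallel, and the threshold value here is an arbitrary choice.

```python
import math

def lightness_1d(intensity, eps=0.1):
    """1-D sketch of separating reflectance from slowly varying
    illumination (the threshold eps is an arbitrary choice).

    Small log-differences are attributed to the smoothly drifting
    illuminant and set to zero; reintegrating what remains
    recovers log-lightness up to an additive constant.
    """
    logs = [math.log(v) for v in intensity]
    diffs = [b - a for a, b in zip(logs, logs[1:])]
    kept = [d if abs(d) > eps else 0.0 for d in diffs]
    out = [0.0]
    for d in kept:
        out.append(out[-1] + d)
    return out  # log-lightness, relative to the first sample
```

On a signal built from step-like reflectance under a gentle illumination ramp, the ramp is rejected while the reflectance step survives, approximately at its true log magnitude.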
Author[s]: David Marr
An Essay on the Primate Retina
January 1974
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-296.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-296.pdf
This essay is considerably longer than the published version of the same theory and is designed for readers who have only elementary knowledge of the retina. It is organized into four parts. The first is a review that consists of four sections: retinal anatomy, physiology, psychophysics, and the retinex theory. The main exposition starts with Part II, which deals with the operation of the retina in conditions of moderate ambient illumination. The account is limited to an analysis of a single cone channel -- like the red or the green one -- the red channel being referred to frequently during the account. Part III considers various interesting properties of retinal signals, including those from the fully dark-adapted retina; and finally the thorny problem of bleaching adaptation is dealt with in Part IV. The general flow of the account will be from the receptors to the ganglion cells, and an analysis of each of the retinal cells and synapses is given in the appropriate place.
Author[s]: Gerald J. Sussman
A Computational Model of Skill Acquisition
August 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AITR-297.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-297.pdf
This thesis confronts the nature of the
process of learning an intellectual skill, the
ability to solve problems efficiently in a
particular domain of discourse. The
investigation is synthetic; a computational
performance model, HACKER, is displayed.
HACKER is a computer problem-solving system
whose performance improves with practice.
HACKER maintains performance knowledge
as a library of procedures indexed by
descriptions of the problem types for which
the procedures are appropriate. When applied
to a problem, HACKER tries to use a
procedure from this “Answer Library”. If no
procedure is found to be applicable, HACKER
writes one using more general knowledge of
the problem domain and of programming
techniques. This new program may be
generalized and added to the Answer Library.
Author[s]: Seymour Papert
Uses of Technology to Enhance Education
June 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-298.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-298.pdf
Section 1: Schematic outline of project and what we want. Hardly any intellectual content. Section 2: Statement of our goals in general terms. This statement is intended to have serious intellectual content but lacks meaty examples. Readers who find it too abstract for comfort might like to read at least part of #3 first. Section 3: A series of extended examples intended to give more concrete substance to the generalities in #2. Section 4: This is the real "proposal". It sets out specifically a list of concrete "goals" on which we want to work in the immediate future. Appendix: Papers by Jeanne Bamberger, Marvin Minsky, Seymour Papert and Cynthia Solomon.
Author[s]: P. Winston, B.K.P. Horn, G.J. Sussman, et al.
Proposal to ARPA for Research on Intelligent Automata and Micro-Automation
September 1973
ftp://publications.ai.mit.edu/ai-publications/0-499/AIM-299.ps
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-299.pdf
The results of a decade of work in Artificial Intelligence have brought us to the threshold of a new phase of knowledge-based programming -- in which we can design computer systems that (1) react reasonably to significantly complicated situations and (2) perhaps more important for the future -- interact intelligently with their operators when they encounter limitations, bugs or insufficient information. This proposal lays out programmes for bringing several such systems near to the point of useful application. These include: A physical "micro-automation" system for maintenance and repair of electronic circuits. A related "expert" problem-solving program for diagnosis and modification of electronic circuits. A set of advanced "Automatic Programming" techniques and systems for aid in developing and debugging large computer programs. Some advanced Natural Language application methods and systems for use with these and other interactive projects. A series of specific "expert" problem solvers, including Chess analysis. Steps toward a new generation of more intelligent Information Retrieval and Management Assistance systems.