Research Abstracts - 2006

Leveraging Language into Learning

Jacob Beal

Problem

Although AI has had great success solving problems with clean representations, our systems are still terrible at problems that we can't describe well with a simple representation. We simply have not yet developed practical algorithms that can learn or invent representations to deal with an unprecedented situation. Partly as a result of this lack, computers perform badly at open-ended tasks like understanding pictures, interpreting conversations, and participating in creative brainstorming, all of which people do very well.

Anecdotes from human cooperative learning, combined with evidence from cognitive science, lead me to conjecture a novel engineering principle: a system composed of components which collaborate by constructing a shared vocabulary can adapt to novel or confusing situations.

Motivation

How is it that people can deal with confusing, unfamiliar situations? One thing they do is talk with other people! People brainstorm, free-associating ideas and trying to make sense of the detritus they produce. Teachers explain material to their students and end up understanding it better themselves. And being wrong is sometimes more important than being right: there are stories of people going to tell Marvin Minsky about an idea, only to have him misunderstand it as a much better idea than the one they started with!

Interestingly, there is some evidence that the human brain might operate on the same principle. Cognitive science research finds that the human brain may be composed of modules [6,7] that learn throughout childhood to communicate and to reason cooperatively [4,10]. Even in adults, collaboration between regions of the brain may depend on a language faculty. For example, "jamming" language prevents an adult from using color information to reorient [5], and native language biases how vision affects reasoning about time [3].

Although decomposing a system into independent components and forcing them to build their own communications will generally add significant complexity and overhead, such an architecture provides three useful properties which may prove to greatly outweigh the increased costs:

  • Creative Misunderstanding: Translating between components with dissimilar representations will tend to mutate the data being translated. Since mistranslation is systematic, however, the mutation tends to produce different sense rather than nonsense (Kirby exploits this in his work on language evolution [8,9], in which a coincidence in the speaker's utterances becomes a pattern in the listener's interpretation).
  • Mistake Filtering: Components with dissimilar representations will tend to make different mistakes. A group of components comparing possible conclusions can then collectively filter out most incorrect conclusions, greatly reducing the search space (see the sketch after this list).
  • Reified Relations: Communication by constructing a shared vocabulary means that relations between features are instantiated as vocabulary words. The presence of a vocabulary word can, itself, be a feature, allowing recursive learning on the reified relations (Winston's Macbeth system [11] demonstrates the power of reified relations in metaphoric story understanding).
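
As a rough illustration of mistake filtering, the sketch below shows, in Python, how a handful of components with independent error patterns can jointly filter wrong conclusions. The components, error rates, and candidate conclusions are hypothetical stand-ins rather than part of any implemented system; the only point is that components whose mistakes differ rarely agree on the same wrong answer, so requiring majority agreement discards most errors.

    # Minimal sketch of mistake filtering. All names and data are hypothetical
    # stand-ins, not part of an implemented system.
    import random

    def make_component(error_rate, seed):
        """A component that usually reports the true conclusion but sometimes,
        because of its idiosyncratic representation, reports a wrong one."""
        rng = random.Random(seed)
        def conclude(true_label, candidates):
            if rng.random() < error_rate:
                return rng.choice([c for c in candidates if c != true_label])
            return true_label
        return conclude

    def filter_conclusions(components, true_label, candidates):
        """Keep only conclusions proposed by a majority of components.
        Because the components' mistakes differ, wrong answers rarely agree."""
        votes = {}
        for conclude in components:
            c = conclude(true_label, candidates)
            votes[c] = votes.get(c, 0) + 1
        majority = len(components) // 2 + 1
        return {c for c, n in votes.items() if n >= majority}

    candidates = ["cause", "effect", "proximity", "identity"]
    components = [make_component(error_rate=0.2, seed=i) for i in range(5)]
    print(filter_conclusions(components, "cause", candidates))
    # typically prints {'cause'}; dissimilar mistakes rarely reach a majority
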
Previous Work

In previous work, I demonstrated two agents creating a shared vocabulary and inflections for the purpose of communicating thematic role frames [1,2]. In that case, the two agents used very similar representations, so the vocabulary contained only identity relations.

There are other relations, however, such as cause and effect or proximity, which should be nearly as easy to acquire. By incorporating non-identity relations and exploiting the advantages of creative misunderstanding, mistake filtering, and reified relations, a system that builds a shared vocabulary becomes potentially very powerful indeed.
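
As a concrete, heavily simplified illustration of the identity-relation case, the following naming-game-style sketch in Python shows two agents converging on shared words for a few concepts through repeated episodes of shared context. The agent structure, word forms, and update rule are hypothetical simplifications for this write-up, not the algorithm of [1,2].

    # A naming-game-style sketch of two agents bootstrapping a shared vocabulary.
    # A hypothetical simplification, not the algorithm of [1,2].
    import random

    CONCEPTS = ["agent", "action", "object"]        # stand-ins for thematic roles
    SYLLABLES = ["ba", "di", "ku", "mo", "ne", "ra"]

    class Agent:
        def __init__(self, rng):
            self.rng = rng
            self.lexicon = {}   # concept -> currently preferred word
            self.scores = {}    # (concept, word) -> association strength

        def word_for(self, concept):
            """Use the preferred word, inventing one if the concept is new."""
            if concept not in self.lexicon:
                self.lexicon[concept] = (self.rng.choice(SYLLABLES)
                                         + self.rng.choice(SYLLABLES))
            return self.lexicon[concept]

        def reinforce(self, concept, word):
            """Strengthen the concept-word pair and prefer the strongest word."""
            self.scores[(concept, word)] = self.scores.get((concept, word), 0) + 1
            best = max((w for (c, w) in self.scores if c == concept),
                       key=lambda w: self.scores[(concept, w)])
            self.lexicon[concept] = best

    rng = random.Random(0)
    a, b = Agent(rng), Agent(rng)
    for _ in range(200):                            # repeated shared-context episodes
        concept = rng.choice(CONCEPTS)
        speaker, listener = (a, b) if rng.random() < 0.5 else (b, a)
        word = speaker.word_for(concept)
        speaker.reinforce(concept, word)
        listener.reinforce(concept, word)           # shared context reveals the referent

    print(all(a.lexicon[c] == b.lexicon[c] for c in CONCEPTS))  # True: lexicons agree

Extending such a sketch to non-identity relations would mean letting a word name a relation between features (cause and effect, proximity) rather than a single shared concept, which is where the reified-relations property described above becomes available.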

Approach

I hypothesize that learning a shared vocabulary to communicate between components of a system is equivalent to general learning. Further, I hypothesize that such a system, given a novel or confusing situation, will be able to quickly learn to solve problems in that situation.

Toward these ends, I have identified the following goals:

  • Develop examples illustrating how learning a shared vocabulary can be leveraged into other types of knowledge acquisition.
  • Identify scenarios in which creative misunderstanding, mistake filtering, and reified relations can be exploited to simplify learning representations.
  • Construct a testbed and tasks to test knowledge acquisition and learning representations.
  • Determine, based on experimental results, the conditions under which shared vocabulary learning provides capabilities equal or superior to unified learning techniques.
References:

[1] Jacob Beal. An algorithm for bootstrapping communications. In International Conference on Complex Systems (ICCS), June 2002.

[2] Jacob Beal. Generating communications systems through shared context. Technical Report AITR 2002-002, MIT Artificial Intelligence Laboratory, January 2002.

[3] Lera Boroditsky. Does language shape thought? English and Mandarin speakers' conceptions of time. Cognitive Psychology, 43:1--22, 2001.

[4] Linda Hermer and Elizabeth Spelke. Modularity and development: the case of spatial reorientation. Cognition, 61:195--232, 1996.

[5] Linda Hermer-Vasquez, Elizabeth Spelke, and Alla Katznelson. Sources of flexibility in human cognition: Dual-task studies of space and language. Cognitive Psychology, 39:3--36, 1999.

[6] Nancy Kanwisher. Advances in Psychological Science, Volume 2: Biological and Cognitive Aspects, chapter The Modular Structure of Human Visual Recognition: Evidence from Functional Imaging, pages 199--214. 1998.

[7] Nancy Kanwisher and Ewa Wojciulik. Visual attention: Insights from brain imaging. Nature Reviews Neuroscience, 1:91--100, 2000.

[8] Simon Kirby. Language evolution without natural selection: From vocabulary to syntax in a population of learners. Technical Report EOPL-98-1, Edinburgh Occasional Paper in Linguistics, University of Edinburgh Department of Linguistics, 1998.

[9] Simon Kirby. Linguistic Evolution through Language Acquisition: Formal and Computational Models, chapter Learning, Bottlenecks and the Evolution of Recursive Syntax. Cambridge University Press, 2000.

[10] Elizabeth Spelke, Peter Vishton, and Claes von Hofsten. The Cognitive Neurosciences, chapter Object perception, object-directed action, and physical knowledge in infancy. MIT Press, 1994.

[11] Patrick Winston. Learning by augmenting rules and accumulating censors. Technical Report 678, MIT Artificial Intelligence Laboratory, May 1982.
