Research Abstracts - 2007
Leveraging Language into Learning

Jacob Beal

Problem

Although AI has had great success in solving nicely represented problems, our systems are still terrible at problems that we cannot describe well with a simple representation. We simply have not yet developed practical algorithms that can learn or invent representations to deal with an unprecedented situation. Partly as a result of this lack, computers perform badly at unbounded tasks like understanding pictures, interpreting conversations, and participating in creative brainstorming, all of which people do very well. Anecdotes from human cooperative learning, combined with evidence from cognitive science, lead me to conjecture a novel engineering principle: a system composed of independent components that are forced to develop their own communications can learn to cope with situations that no single fixed representation anticipates.

Motivation

How is it that people can deal with confusing, unfamiliar situations? One thing they do is talk with other people! People brainstorm, free-associating ideas and trying to make sense of the detritus they produce. Teachers explain material to their students and end up understanding it better themselves. And being wrong is sometimes more important than being right: there are stories about people going to tell Marvin Minsky about an idea, which he then misunderstood as a much better idea than the one they had started with!

Interestingly, there is some evidence that the human brain might operate on the same principle. Cognitive science research finds that the human brain may be composed of modules [6,7] which learn throughout childhood to communicate and to reason cooperatively [4,10]. Even in adults, collaboration between regions of the brain may depend on a language faculty. For example, "jamming" language prevents an adult from using color information to reorient [5], and native language biases how vision affects reasoning about time [3].

Although decomposing a system into independent components and forcing them to build their own communications will generally add significant complexity and overhead, such an architecture provides three useful properties which may prove to greatly outweigh the increased costs:

- Creative misunderstanding: a component may misinterpret a message as a better idea than the one intended, as in the Minsky anecdote above.
- Mistake filtering: a mistake made by one component can be caught when it fails to make sense to another.
- Reified relations: the shared vocabulary turns the relations between components' representations into explicit objects that the system can inspect and reason about.
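To make this architecture concrete, here is a minimal sketch of two components bootstrapping a shared vocabulary through feedback alone. It follows the style of naming-game models of vocabulary convergence (in the spirit of the learner populations studied by Kirby [8,9]) rather than the specific algorithm of [1,2], and every identifier in it (Module, round_of_talk, the sample concepts) is an illustrative assumption:

```python
import random

class Module:
    """One independently developed component. Its concepts are private;
    only invented words cross the channel between components."""

    def __init__(self):
        self.lexicon = {}  # private concept -> list of candidate words

    def name(self, concept):
        """Utter a word for a concept, inventing one if necessary."""
        words = self.lexicon.setdefault(concept, [])
        if not words:
            words.append("w%05d" % random.randrange(100000))
        return random.choice(words)

    def understands(self, concept, word):
        """Hearer side: success if the word is already associated with
        the concept that the shared context picks out."""
        return word in self.lexicon.setdefault(concept, [])

    def align(self, concept, word, success):
        """On success both sides prune to the winning word; on failure
        the word is merely recorded as a new candidate."""
        if success:
            self.lexicon[concept] = [word]
        elif word not in self.lexicon[concept]:
            self.lexicon[concept].append(word)

def round_of_talk(speaker, hearer, shared_context):
    concept = random.choice(shared_context)  # both observe the same scene
    word = speaker.name(concept)
    success = hearer.understands(concept, word)
    speaker.align(concept, word, success)
    hearer.align(concept, word, success)
    return success

# Two components converge on a shared vocabulary from feedback alone.
a, b = Module(), Module()
context = ["agent", "patient", "action", "location"]  # illustrative concepts
for _ in range(500):
    round_of_talk(*random.sample([a, b], 2), context)
print(all(a.lexicon[c] == b.lexicon[c] for c in context))  # True once converged
```

The pruning step on success is what drives convergence here; a richer version would keep scored candidates rather than discarding them outright, and would transmit inflections and role frames rather than isolated words.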
Previous Work

In previous work, I demonstrated two agents creating a shared vocabulary and inflections for the purpose of communicating thematic role frames [1,2]. In that case the two agents used very similar representations, so the vocabulary contained only identity relations. There are other relations, however, such as cause and effect or proximity, which should be nearly as easy to acquire (a toy sketch of such non-identity entries follows the Approach section below). By incorporating non-identity relations and exploiting the advantages of creative misunderstanding, mistake filtering, and reified relations, a system that builds a shared vocabulary becomes potentially very powerful indeed.

Approach

I hypothesize that learning a shared vocabulary for communication between the components of a system is equivalent to general learning. Further, I hypothesize that such a system, given a novel or confusing situation, will be able to quickly learn to solve problems in that situation. Toward these ends, I have identified a set of concrete goals.
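As noted under Previous Work, a vocabulary entry need not record an identity. The following toy sketch shows one way a non-identity relation might be represented and guessed from shared experience; the VocabEntry fields, the timing heuristic, and the relation labels are all hypothetical illustrations, not the mechanism of [1,2]:

```python
from dataclasses import dataclass

@dataclass
class VocabEntry:
    word: str            # shared token on the channel
    relation: str        # "identity", "cause", "effect", ...
    speaker_symbol: str  # speaker's private symbol
    hearer_symbol: str   # hearer's private symbol

def classify_relation(speaker_times, hearer_times, tol=0.1):
    """Toy heuristic: guess which relation links two private symbols
    from the timing of their co-occurrences in shared experience.
    Roughly simultaneous -> identity; the speaker's symbol consistently
    preceding the hearer's -> cause; consistently following -> effect."""
    gaps = [h - s for s, h in zip(speaker_times, hearer_times)]
    mean_gap = sum(gaps) / len(gaps)
    if abs(mean_gap) < tol:
        return "identity"
    return "cause" if mean_gap > 0 else "effect"

# With matched representations (the demonstrations of [1,2]), every entry
# reduces to identity; with mismatched ones, the same vocabulary can reify
# a causal link between, say, one module's "spark" and another's "fire".
entry = VocabEntry("w107",
                   classify_relation([0.0, 5.1, 9.4], [0.8, 5.9, 10.1]),
                   "spark", "fire")
print(entry.relation)  # -> cause
```

Storing the relation in the entry itself is what reifies it: cause and proximity become first-class objects the system can later reason about.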
References

[1] Jacob Beal. An algorithm for bootstrapping communications. In International Conference on Complex Systems (ICCS), June 2002.
[2] Jacob Beal. Generating communications systems through shared context. Technical Report AITR 2002-002, MIT Artificial Intelligence Laboratory, January 2002.
[3] Lera Boroditsky. Does language shape thought? English and Mandarin speakers' conceptions of time. Cognitive Psychology, 43:1-22, 2001.
[4] Linda Hermer and Elizabeth Spelke. Modularity and development: the case of spatial reorientation. Cognition, 61:195-232, 1996.
[5] Linda Hermer-Vazquez, Elizabeth Spelke, and Alla Katsnelson. Sources of flexibility in human cognition: Dual-task studies of space and language. Cognitive Psychology, 39:3-36, 1999.
[6] Nancy Kanwisher. The modular structure of human visual recognition: Evidence from functional imaging. In Advances in Psychological Science, Volume 2: Biological and Cognitive Aspects, pages 199-214. 1998.
[7] Nancy Kanwisher and Ewa Wojciulik. Visual attention: Insights from brain imaging. Nature Reviews Neuroscience, 1:91-100, 2000.
[8] Simon Kirby. Language evolution without natural selection: From vocabulary to syntax in a population of learners. Technical Report EOPL-98-1, Edinburgh Occasional Papers in Linguistics, University of Edinburgh Department of Linguistics, 1998.
[9] Simon Kirby. Learning, bottlenecks and the evolution of recursive syntax. In Linguistic Evolution through Language Acquisition: Formal and Computational Models. Cambridge University Press, 2000.
[10] Elizabeth Spelke, Peter Vishton, and Claes von Hofsten. Object perception, object-directed action, and physical knowledge in infancy. In The Cognitive Neurosciences. MIT Press, 1994.
[11] Patrick Winston. Learning by augmenting rules and accumulating censors. Technical Report 678, MIT Artificial Intelligence Laboratory, May 1982.