Research Abstracts - 2006

Discourse and Dialog in the START Question Answering System

Boris Katz & Sue Felshin

The Problem

Question answering systems based on language enable expressive and concise communication: the user can pose natural questions and receive natural and relevant responses. However, language can be ambiguous and vague. We should not force the user to adapt to the computer and formulate precise and unambiguous queries. Instead, the computer should adapt itself to ambiguity and missing data as a human does: by engaging in conversation, by inferring information missing from the question, and by giving intelligent related answers when the exact answer is not available (“near-miss” answers [2]).

The START system (see [1] and related abstracts) provides users with convenient access to information through its ability to retain conversational state, recognize ellipsis, give appropriate near-miss answers, and report intelligently on ambiguity and failure to find information.


These conversational and interactive abilities allow START to make assumptions about information missing from a question or to query the user about it, to choose among multiple answers, and to provide intelligent near-miss answers, so the user can interact with the system with convenient, natural brevity.


START operates by parsing user questions into structural representations, matching these representations against its knowledge base, and retrieving information to return high-precision answers. START's use of linguistic processing gives it several opportunities to incorporate discourse and dialog techniques that improve its operation:

  • START tracks multiple exchanges between user and computer as a conversation, and chooses whether to analyze a non-sentential question as a fragmentary question or as an elliptical question related to the preceding question.
  • START chooses between responding to ambiguity with all available answers and responding with intelligently selected answers. When it presents selected answers, it indicates to the user what additional answers are available.
  • When an exact answer is not available, START seeks related information that can be considered a partial answer, and indicates that it has done so.
  • START is able to comment informatively on its own answers. Examples of such answer commentary appear in the examples below and in our abstract on syntactic decomposition.
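As a rough illustration of the parse-and-match pipeline described above, the sketch below matches a parsed question against a toy knowledge base of (subject, relation, object) triples. The parser, triples, and relation names are invented for the example and do not reflect START's actual representations:

```python
# Toy sketch of parse-and-match question answering over a knowledge
# base of (subject, relation, object) triples. All names and data are
# illustrative; START's actual structural representations are richer.

KB = [
    ("Vietnam", "has-population", "83,535,576"),
    ("Georgia", "has-birth-rate", "10.25 births/1,000 population"),
]

def parse(question):
    """Stand-in 'parser': maps one fixed question to a triple pattern,
    with None marking the slot the question asks about."""
    if question == "What is the population of Vietnam?":
        return ("Vietnam", "has-population", None)
    raise ValueError("question not understood")

def match(pattern, kb):
    """Return the KB triples that agree with every filled pattern slot."""
    return [t for t in kb
            if all(p is None or p == v for p, v in zip(pattern, t))]

print(match(parse("What is the population of Vietnam?"), KB))
# -> [('Vietnam', 'has-population', '83,535,576')]
```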

Fragments and Ellipsis

Using the structural representation of the preceding question, START identifies which material in that question should be replaced by the new, elliptical question phrase, choosing among multiple potential antecedents by examining their lexical features to find the closest semantic match.

⇒ What Asian country has the eighth largest population?
Vietnam has the eighth largest population among countries in Asia.
Population: 83,535,576 (July 2005 est.)
⇒ Sixth lowest birth rate?
I assume that you wanted to know which Asian country has the sixth lowest birth rate.
Georgia has the sixth lowest birth rate among countries in Asia.
Birth rate: 10.25 births/1,000 population (2005 est.)
Source: The World Factbook 2005
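The ellipsis resolution behind the exchange above can be sketched as follows; the lexical feature sets here are invented for the example and merely stand in for the richer features START compares:

```python
# Illustrative sketch of the ellipsis step: the fragment replaces the
# phrase in the preceding question whose lexical features it most
# resembles. Feature sets are invented for the example.

def resolve_ellipsis(prev_phrases, fragment):
    """prev_phrases: list of (phrase, feature_set) from the prior
    question. Returns the antecedent whose features best overlap
    the fragment's features."""
    frag_feats = fragment[1]
    return max(prev_phrases, key=lambda p: len(p[1] & frag_feats))[0]

prev = [
    ("What Asian country", {"wh", "noun-phrase"}),
    ("the eighth largest population",
     {"ordinal", "superlative", "noun-phrase"}),
]
fragment = ("the sixth lowest birth rate",
            {"ordinal", "superlative", "noun-phrase"})

antecedent = resolve_ellipsis(prev, fragment)
# The fragment replaces the matched antecedent in the prior question:
full = "What Asian country has the eighth largest population?".replace(
    antecedent, fragment[0])
print(full)  # -> What Asian country has the sixth lowest birth rate?
```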

Selecting Among Multiple Results

The more understanding a system has of the structure and intent of a question, the better it can select among multiple results. Because START performs a linguistic analysis of questions, it can distinguish types of ambiguity and multiplicity: whether multiple replies are different answers to the same interpretation of the question, or answers to different interpretations of it. For example, when a word or phrase in the question matches more than one entity in a class, START either responds about all entities or queries the user for clarification. Some entities are marked as important (whether manually assigned or heuristically calculated) and are preferred over others in the same class. Thus START presents the information most likely wanted, yet remains fully informative.
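One way to sketch this selection step, with invented importance scores standing in for START's manually assigned or heuristically calculated ones:

```python
# Sketch of selection among multiple entities matching one name:
# prefer the entity marked most important, but keep the others so the
# reply can note what additional answers are available. The scores
# below are invented for illustration.

ENTITIES = [
    {"name": "Drew Barrymore", "class": "actor", "importance": 0.9},
    {"name": "John Barrymore", "class": "actor", "importance": 0.7},
    {"name": "Lionel Barrymore", "class": "actor", "importance": 0.6},
]

def select(matches):
    """Answer about the most important match; return the rest so the
    reply can remain fully informative about alternatives."""
    ranked = sorted(matches, key=lambda e: e["importance"], reverse=True)
    return ranked[0], ranked[1:]

best, rest = select([e for e in ENTITIES if "Barrymore" in e["name"]])
print(best["name"])               # the entity most likely intended
print([e["name"] for e in rest])  # offered as additional answers
```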

[screenshot of reply in browser]
Figure 1: START's answer to the question “When was Barrymore born?”.

Near Misses, Partial Answers, and Recognizable Failures

For structured and semi-structured databases indexed by START [3] (see related abstract), START can be confident that if no answer is found in the source, it is because the source does not contain the answer.

START uses knowledge of real-world properties of entities to provide near-miss and partial answers. This requires ontological knowledge of how properties and entities relate within and across types, and therefore can be implemented in the general case only by building a complete ontology. In practice, however, the bulk of actual user questions address a relatively small number of types and properties, so a modest amount of ontology building can improve responses to a comparatively large proportion of questions.
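A minimal sketch of such a near-miss fallback, assuming a toy gazetteer: if the requested property is missing for an entity, fall back to the nearest entity that carries it. The coordinates and weather table below are invented for the example:

```python
import math

# Sketch of a near-miss lookup: when the requested property is missing
# for an entity, report it for the nearest entity that has it, and say
# so. Coordinates and weather data are illustrative, not real.

PLACES = {
    "Mt Etna": (37.75, 14.99),
    "Caltanissetta": (37.49, 14.06),
    "Palermo": (38.12, 13.36),
}
WEATHER = {"Caltanissetta": "sunny", "Palermo": "cloudy"}

def distance(a, b):
    """Rough planar distance in degrees; enough to rank nearby places."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def weather_for(place):
    if place in WEATHER:
        return WEATHER[place]
    # Near miss: fall back to the closest place with weather data.
    nearest = min(WEATHER, key=lambda p: distance(PLACES[place], PLACES[p]))
    return (f"I don't have this information about {place}. "
            f"Instead, the weather for {nearest} is: {WEATHER[nearest]}.")

print(weather_for("Mt Etna"))
```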

⇒ Show weather for Mt. Etna.
I don't have this information about Mt Etna. Instead, I can show you the weather for Caltanissetta, Italy, which is 4.0 miles from Mt Etna.
Source: START KB and The Weather Channel

⇒ How far is Mt. Everest from Massachusetts?
The distance between Kathmandu, the capital of Nepal (where Mount Everest is located) and Boston, the capital of Massachusetts, is 7,376 miles (11,897 kilometers).
Source: START KB

⇒ How far is Toronto from Canada?
As far as I know, the city of Toronto, Ontario is located in Canada.

Research Support

This work is supported in part by the Advanced Research and Development Activity as part of the AQUAINT Phase II research program.


References

[1] Boris Katz. Annotating the World Wide Web Using Natural Language. In Proceedings of the 5th RIAO Conference on Computer Assisted Information Searching on the Internet (RIAO '97), Montreal, Canada, 1997.

[2] Boris Katz and Sue Felshin. Discourse and Dialog in the START Question Answering System. In Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue (SIGdial '04), Demos during the Workshop, Cambridge, Massachusetts, 2004.

[3] Boris Katz, Sue Felshin, Deniz Yuret, Ali Ibrahim, Jimmy Lin, Gregory Marton, Alton Jerome McFarland, and Baris Temelkuran. Omnibase: Uniform Access to Heterogeneous Data for Question Answering. In Proceedings of the 7th International Workshop on Applications of Natural Language to Information Systems (NLDB '02), Stockholm, Sweden, June 2002.



Computer Science and Artificial Intelligence Laboratory (CSAIL)
The Stata Center, Building 32 - 32 Vassar Street - Cambridge, MA 02139 - USA
tel:+1-617-253-0073 - publications@csail.mit.edu