Research Abstracts - 2007
Automatic Generation of Clozes on Prepositions

John Lee & Stephanie Seneff

Introduction

Fill-in-the-blank questions with multiple choices --- also known as clozes --- are widely used for assessing language proficiency. Typically, one word is removed from a sentence, and the subject is asked to choose from a number of candidate words to fill in the blank. In general, the steps for producing a cloze are:

1. Select a seed sentence (also called the source clause) from a source corpus.
2. Determine the key, that is, the word to be removed from the seed sentence.
3. Generate distractors, or incorrect choices, for the key.

These steps yield a cloze such as the following, taken from [1]:

The child's misery would move even the most [blank] heart.
(a) torpid
(b) invidious
(c) stolid
(d) obdurate
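The three-step procedure above can be sketched in code. This is a minimal illustration, not the authors' system; the function names and the toy distractor list are invented, and the example reuses the cloze from [1]:

```python
import random

def generate_cloze(sentence, key, distractor_fn, n_distractors=3):
    """Step 2 and 3: blank out the key and mix it with distractors.

    `sentence` is the seed sentence (step 1), `key` the word to remove,
    and `distractor_fn` any strategy for proposing incorrect choices.
    """
    stem = sentence.replace(key, "[blank]", 1)
    choices = [key] + distractor_fn(key, sentence)[:n_distractors]
    random.shuffle(choices)
    return stem, choices

# Hypothetical distractor generator; a real one must propose words that
# are plausible in context yet incorrect.
def toy_distractors(key, sentence):
    return ["torpid", "invidious", "stolid"]

stem, choices = generate_cloze(
    "The child's misery would move even the most obdurate heart.",
    "obdurate", toy_distractors)
print(stem)
```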

Motivation

A more effective learning experience may be facilitated by seed sentences drawn from source corpora that are of interest to the student, say, in the travel or business domain. It is clear, however, that such personalized clozes are too labor-intensive to design manually. This has motivated research on automatic generation of clozes.

To the best of our knowledge, past research has addressed only clozes whose keys are of an open-class part-of-speech (POS), e.g., nouns, verbs, or adjectives. Words that occur relatively infrequently are selected as keys, with the intention of improving or evaluating the vocabulary level of the student.

While vocabulary build-up is important in language learning, the misuse of determiners and prepositions turns out to be among the most frequent types of errors for non-native speakers, according to a corpus of transcripts of spoken English [2]. In this work, we investigate various techniques for generating clozes on prepositions.

Prepositions present some new challenges in automatic cloze generation. Like other closed-class POS categories, prepositions form a much smaller set than nouns, verbs, or adjectives. As a result, most prepositions are already familiar to the student, and good choices for distractors are more difficult to determine. Word frequency, the criterion that has been successfully applied to open-class POS, is unlikely to perform well for prepositions.

Related Work

Past research has encompassed both key and distractor selection (steps 2 and 3). The key is often chosen according to word frequency [3], so as to match the student's vocabulary level. Machine learning methods are applied in [4] to determine the best key, using clozes in a standard language test as training material.

A good distractor must satisfy two requirements: it must be similar enough to the key to be a viable alternative, and yet it must also, obviously, be an incorrect choice. For the first requirement, various criteria have been proposed: similarity in frequency to the key (e.g., [1] and [3]); similarity in meaning to the key, with respect to a thesaurus [5] or an ontology in a narrow domain [6]; or matching patterns hand-crafted by experts [7].

As for the second requirement, in [5] the distractor must yield a sentence with zero hits on the web; in [8], it must form a rare collocation with other important words in the sentence.

Approach

A preposition is a link between two words, typically nouns or verbs. When a preposition prep links the words A and B, we represent it with the triplet [A, prep, B]. A set of such triplets can be harvested from a parallel native/non-native corpus, by selecting examples where the preposition is misused.
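The harvesting step can be sketched as follows. The mini parallel corpus and its pre-aligned form are invented for illustration; in practice the corrected preposition and the linked words A and B would come from aligning and parsing the native/non-native sentence pairs:

```python
# Each entry pairs a learner's preposition with its correction, along
# with the two words A and B that the preposition links.
# Format: (non-native prep, native prep, A, B) -- hypothetical data.
parallel = [
    ("in", "at", "freshman", "University"),
    ("to", "at", "arrive",   "airport"),
    ("at", "at", "stay",     "hotel"),    # no error: not harvested
]

def harvest_triplets(pairs):
    """Keep a triplet [A, prep, B] wherever the preposition was misused,
    recording both the correct preposition and the learner's error."""
    triplets = []
    for wrong, right, a, b in pairs:
        if wrong != right:
            triplets.append({"triplet": (a, right, b), "error": wrong})
    return triplets

harvested = harvest_triplets(parallel)
```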

These triplets can be matched against sentences from any new corpus of interest to the student (step 2). Our task, then, is to produce distractors for each matched triplet/sentence. Three methods are proposed:

(A) With a parallel native/non-native corpus: Use the original (hence, incorrect) preposition in the non-native sentence. This method directly models mistakes made by non-native speakers.

(B) With a native corpus:

  1. Context-dependent: Use a preposition that appears in similar contexts in the native corpus. First, we reject all prepositions that occur as [A, prep, B] in the corpus; for the remaining candidates, we count the number of times they occur in the triplets [A, prep, *] and [*, prep, B]. The candidate that occurs most frequently is selected as the distractor.
  2. Word Frequency only: Use the preposition whose frequency in the native corpus is closest to that of the key. This criterion, commonly used in generating open-class POS clozes, serves as a baseline.
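The two native-corpus methods can be sketched with toy counts. The preposition inventory, co-occurrence counts, and frequencies below are invented; a real system would estimate them from the native corpus:

```python
PREPS = ["at", "in", "on", "to", "by", "with"]

def context_distractor(a, b, key, attested, left_counts, right_counts):
    """Method B1: reject prepositions attested as [A, p, B]; among the
    rest, pick the one occurring most often as [A, p, *] or [*, p, B]."""
    candidates = [p for p in PREPS
                  if p != key and (a, p, b) not in attested]
    return max(candidates,
               key=lambda p: left_counts.get((a, p), 0)
                             + right_counts.get((p, b), 0))

def frequency_distractor(key, freq):
    """Method B2 (baseline): preposition whose overall corpus frequency
    is closest to the key's."""
    return min((p for p in PREPS if p != key),
               key=lambda p: abs(freq[p] - freq[key]))

# Hypothetical statistics for the triplet [freshman, at, University].
attested = {("freshman", "at", "University")}
left  = {("freshman", "in"): 5, ("freshman", "to"): 1}   # [A, p, *]
right = {("in", "University"): 7, ("to", "University"): 2}  # [*, p, B]
freq  = {"at": 100, "in": 300, "on": 250, "to": 310, "by": 95, "with": 60}

b1 = context_distractor("freshman", "University", "at", attested, left, right)
b2 = frequency_distractor("at", freq)
```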

Below is an example cloze generated from the seed sentence "I am a freshman at XYZ University", with distractors produced by all three methods:

I am a freshman [blank] XYZ University.
(a) at [key]
(b) in [method A]
(c) to [method B1]
(d) by [method B2]

Evaluation and Future Work

We have generated clozes based on the following corpora:

  • Source corpus: about 20,000 transcripts in the travel domain, obtained from the International Workshop on Spoken Language Translation (IWSLT);
  • Parallel native/non-native corpus: about 1,300 instances of preposition mistakes in the Japanese Learners of English (JLE) corpus [2];
  • Native corpus: 10 million sentences from the New York Times.

Evaluations with students of English as a second language are under way.

In the future, we plan to generate clozes tailored for other error classes. One class of interest to us, which also occurs frequently in the JLE corpus, is the confusion between verb forms, e.g., the infinitive, participle, -ing and base forms.

This work was supported by Lincoln Laboratory.

References:

[1] J. C. Brown, G. A. Frishkoff and M. Eskenazi. Automatic Question Generation for Vocabulary Assessment. In Proc. HLT-EMNLP, Vancouver, Canada, 2005.

[2] E. Izumi, K. Uchimoto, T. Saiga, T. Supnithi, and H. Isahara. Automatic Error Detection in the Japanese Learners' English Spoken Data. In Proc. ACL, 2003.

[3] C.-C. Shei. FollowYou!: An Automatic Language Lesson Generation System. Computer Assisted Language Learning, 14(2):129-144, 2001.

[4] A. Hoshino and H. Nakagawa. A Real-Time Multiple-Choice Question Generator for Language Testing: A Preliminary Study. In Proc. 2nd Workshop on Building Educational Applications using NLP, Ann Arbor, MI, 2005.

[5] E. Sumita, F. Sugaya and S. Yamamoto. Measuring Non-native Speakers' Proficiency of English by Using a Test with Automatically-Generated Fill-in-the-Blank Questions. In Proc. 2nd Workshop on Building Educational Applications using NLP, Ann Arbor, MI, 2005.

[6] N. Karamanis, L. A. Ha, and R. Mitkov. Generating Multiple-Choice Test Items from Medical Text: A Pilot Study. In Proc. 4th International Natural Language Generation Conference, Sydney, Australia, 2006.

[7] C.-Y. Chen, H.-C. Liou and J. S. Chang. FAST --- An Automatic Generation System for Grammar Tests. In Proc. COLING/ACL Interactive Presentation Sessions, Sydney, Australia, 2006.

[8] C.-L. Liu, C.-H. Wang, Z.-M. Gao and S.-M. Huang. Applications of Lexical Information for Algorithmically Composing Multiple-Choice Cloze Items. In Proc. 2nd Workshop on Building Educational Applications using NLP, Ann Arbor, MI, 2005.

 
