Research Abstracts - 2007
A Discriminative Model for Tree-to-Tree Translation

Brooke Cowan, Ivona Kučerová & Michael Collins

Motivation

We develop a framework for tree-to-tree statistical translation [1]. Our goal is to learn a model that maps parse trees in the source language to parse trees in the target language. The model is learned from a corpus of translation pairs in which each source- and target-language sentence has an associated parse tree. We see two major benefits of tree-to-tree translation. First, it is possible to model the syntax of the target language explicitly, thereby improving grammaticality. Second, we can build a detailed model of the correspondence between the source and target parse trees, thereby attempting to construct translations that preserve the meaning of source-language sentences.

Background: Aligned Extended Projections

Our approach involves the prediction of a syntactic object called an aligned extended projection, or AEP. AEPs are based on work in tree-adjoining grammars, particularly on extended projections [2][3] and synchronous tree-adjoining grammar [4]. An extended projection is a tree containing a single content word (e.g., a verb, noun, or adjective) and one or more function words. Figure 1 shows two example extended projections.

Figure 1: Two example extended projections. The first contains the content word note. The second contains the content word been and the function words that and have.

An aligned extended projection for translation consists of a parse tree in the source language, an extended projection in the target language, and an alignment between them. The alignment is a mapping from indices representing the modifiers in the source tree to positions in the extended projection. Figure 2 shows an example AEP.

Figure 2: An example aligned extended projection. The two modifiers in the source-language tree, labeled 1 and 2, are mapped to positions in the target-language extended projection.
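The three components of an AEP can be made concrete with a minimal data-structure sketch. The class name, the bracketed trees, and the slot notation below are illustrative assumptions, not the authors' representation:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass(frozen=True)
class AEP:
    """An aligned extended projection: a source-language parse tree, a
    target-language extended projection with numbered modifier slots, and
    an alignment mapping source modifier indices to those slots."""
    source_tree: str           # bracketed parse of the source clause
    extended_projection: str   # target EP; [i] marks an open modifier slot
    alignment: Dict[int, int]  # source modifier index -> slot in the EP

# Hypothetical example (not the trees from Figure 2): two source modifiers,
# labeled 1 and 2, each mapped to a slot in the target extended projection.
example = AEP(
    source_tree="(S (NP-1 der Mann) (VP hat (NP-2 das Buch) gelesen))",
    extended_projection="(S [1] (VP has (VP read [2])))",
    alignment={1: 1, 2: 2},
)
```

Representing the alignment as an explicit index-to-slot mapping mirrors the text's description: the modifiers themselves are translated separately and dropped into the slots the AEP reserves for them.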
A Translation Framework Using AEPs

The process of translation using AEPs is carried out in the following steps:

1. Parse the source-language sentence.
2. Extract a sequence of clauses from the parse tree.
3. Predict an AEP for each clause.
4. Translate the modifiers in each AEP.
5. Combine the translated clauses to form the target-language translation.
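The clause-by-clause translation process can be sketched as a pipeline of placeholder functions. All function bodies here are trivial stand-ins for the model components described in the text, not the authors' implementation:

```python
from typing import List

def parse(sentence: str) -> str:
    """Step 1: parse the source-language sentence (placeholder parser)."""
    return f"(S {sentence})"

def extract_clauses(tree: str) -> List[str]:
    """Step 2: extract clauses from the parse tree (placeholder)."""
    return [tree]

def predict_aep(clause: str) -> str:
    """Step 3: predict an aligned extended projection for the clause."""
    return f"AEP({clause})"

def fill_modifiers(aep: str) -> str:
    """Step 4: translate the modifiers and place them in the AEP's slots."""
    return aep

def combine(clauses: List[str]) -> str:
    """Step 5: join the translated clauses into a target-language sentence."""
    return " ".join(clauses)

def translate(sentence: str) -> str:
    tree = parse(sentence)
    translated = [fill_modifiers(predict_aep(c)) for c in extract_clauses(tree)]
    return combine(translated)
```

The point of the skeleton is the data flow: each clause passes independently through AEP prediction and modifier translation before the clause translations are recombined.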
(Figures depicting Steps 1 & 2, Step 3, Step 4, and Step 5 omitted.)

Training the AEP Model with the Perceptron Algorithm

A principal contribution of this work is the AEP prediction stage (Step 3). The AEP prediction model is a linear discriminative model trained using the perceptron algorithm. The particular variant we use is related to work on incremental parsing by Collins and Roark [5]. For AEP prediction, we represent each AEP y as a sequence of decisions y = ⟨d1, ..., dn⟩, where dj is the jth decision. Given a source-language input x, we predict the best AEP to be

    y* = argmax_y Φ(x, y) · ϑ

where Φ(x, y) ∈ ℝ^d is a feature vector and ϑ ∈ ℝ^d is a parameter vector.

Support

This work was funded by NSF grant #IIS-0415030, and by a grant from NTT, Agmt. Dtd. 6/21/1998.

References

[1] Brooke Cowan, Ivona Kučerová & Michael Collins. A Discriminative Model for Tree-to-Tree Translation. In Proceedings of Empirical Methods in Natural Language Processing. Sydney, Australia, July 2006.
[2] Jane Grimshaw. Extended Projection. Master's thesis, Brandeis University, 1991.
[3] Robert Frank. Phrase Structure Composition and Syntactic Dependencies. Cambridge, MA: MIT Press, 2002.
[4] Stuart Shieber and Yves Schabes. Synchronous Tree-Adjoining Grammars. In Proceedings of the 13th International Conference on Computational Linguistics. 1990.
[5] Michael Collins and Brian Roark. Incremental Parsing with the Perceptron Algorithm. In Proceedings of the Association for Computational Linguistics. 2004.
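The argmax prediction and the perceptron update can be sketched as a generic structured perceptron. The sparse-dict features, candidate generator, and toy data below are assumptions for illustration; they are not the paper's feature set or decision-sequence decoder:

```python
from typing import Callable, Dict, List, Tuple

Features = Dict[str, float]

def dot(features: Features, theta: Features) -> float:
    """Sparse dot product Φ(x, y) · ϑ."""
    return sum(v * theta.get(f, 0.0) for f, v in features.items())

def predict(x, candidates, phi, theta):
    """y* = argmax over candidate structures y of Φ(x, y) · ϑ."""
    return max(candidates(x), key=lambda y: dot(phi(x, y), theta))

def train_perceptron(data, candidates, phi, epochs: int = 5) -> Features:
    """Structured-perceptron updates: on a mistake,
    ϑ += Φ(x, y_gold) − Φ(x, y_hat)."""
    theta: Features = {}
    for _ in range(epochs):
        for x, y_gold in data:
            y_hat = predict(x, candidates, phi, theta)
            if y_hat != y_gold:
                for f, v in phi(x, y_gold).items():
                    theta[f] = theta.get(f, 0.0) + v
                for f, v in phi(x, y_hat).items():
                    theta[f] = theta.get(f, 0.0) - v
    return theta

# Toy demonstration with hypothetical indicator features and two candidates:
phi = lambda x, y: {f"{x}:{y}": 1.0}
candidates = lambda x: ["A", "B"]
theta = train_perceptron([("v1", "A"), ("v2", "B")], candidates, phi)
```

In the paper's setting, the candidates would be AEPs decoded as decision sequences ⟨d1, ..., dn⟩ rather than atomic labels, but the update rule is the same.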