This file provides the abstracts of the papers.
In this paper, we describe an interactive Chinese-to-English retrieval system that takes in Chinese queries and retrieves relevant documents in Chinese and English from document collections in both languages. We first describe the components for Chinese language processing and for Chinese-to-English query translation. Then we present the complete interactive system, through which a user is able to select and edit query terms manually in both Chinese and English, review the retrieval results in different presentation modes, and conduct retrieval using enhanced queries based on relevant documents or document clusters in either language. The system integrates CLARIT advanced technologies of natural language processing and information management, and is intended as a prototype and evaluation environment for identifying effective strategies that help users with different levels of language proficiency retrieve relevant documents from multilingual data collections.
In this paper, we identify factors that affect machine translation (MT) of a source query for cross-language information retrieval (CLIR) and empirically evaluate the effect of pseudo relevance feedback on cross-language retrieval performance. Our experiments demonstrate that, by using pseudo relevance feedback, we can significantly improve cross-language retrieval performance and achieve the level of monolingual retrieval.
In the TREC-8 cross-language information retrieval (CLIR) track, we adopted the approach of using machine translation to prepare a source-language query for use in a target-language retrieval task. We empirically evaluated (1) the effect of pseudo relevance feedback on retrieval performance with two feedback vector length control methods in CLIR, and (2) the effect of multilingual data merging either before or after retrieval. Our experiments show that, in general, pseudo relevance feedback significantly improves cross-language retrieval performance, and that post-retrieval merging of retrieval results can outperform pre-retrieval merging of multilingual data collections.
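The pseudo relevance feedback evaluated in the two abstracts above can be sketched as a simple Rocchio-style query expansion. This is a minimal illustration under assumed simplifications, not the TREC-8 system itself: the function name and parameters are hypothetical, term weights are raw frequencies, and the feedback-vector-length control methods mentioned in the abstract are reduced to a single `n_expand` cutoff.

```python
from collections import Counter

def pseudo_relevance_feedback(query_terms, ranked_docs, top_k=5, n_expand=10):
    """Expand a query with frequent terms from the top-k retrieved documents.

    The top-ranked documents are assumed relevant (the 'pseudo' part).
    Term weighting here is raw frequency, a simplification of the
    weighted, length-controlled feedback vectors described above.
    """
    feedback_terms = Counter()
    for doc in ranked_docs[:top_k]:
        feedback_terms.update(doc.split())
    for term in query_terms:              # keep only genuinely new terms
        feedback_terms.pop(term, None)
    expansion = [t for t, _ in feedback_terms.most_common(n_expand)]
    return list(query_terms) + expansion
```

The expanded query is then re-run against the target-language collection; in the cross-language setting, expansion happens after query translation so the feedback terms come from the target language.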
This paper presents a constraint-based model for cooperative response generation for information systems dialogues, with an emphasis on detecting and resolving situations in which the user's information needs have been over-constrained. Our model integrates and extends the AI techniques of constraint satisfaction, solution synthesis and constraint hierarchy to provide an incremental computational mechanism for constructing and maintaining partial parallel solutions. Such a mechanism supports immediate detection of over-constrained situations. In addition, we explore using the knowledge in the solution synthesis network to support different relaxation strategies.
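The over-constrained situations described above arise when no item in the data satisfies every user constraint. The following is a minimal sketch of detection and incremental relaxation, assuming a priority-ordered constraint list as a crude stand-in for the constraint-hierarchy and solution-synthesis machinery of the paper; all names and data are illustrative.

```python
def satisfying(items, constraints):
    """Return the items that meet every active constraint (a predicate)."""
    return [x for x in items if all(c(x) for c in constraints)]

def relax_until_satisfiable(items, constraints):
    """Detect an over-constrained request and relax it incrementally.

    `constraints` is ordered from most to least important; dropping the
    least important constraint first is a simple stand-in for the
    relaxation strategies over a constraint hierarchy.
    """
    active = list(constraints)
    while active:
        result = satisfying(items, active)
        if result:                 # a non-empty solution set: stop relaxing
            return result, active
        active.pop()               # relax the least important remaining constraint
    return items, []               # every constraint relaxed
```

A cooperative response generator would additionally report which constraints were relaxed, rather than silently dropping them.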
The ability of a system to recognize when to take the initiative and when to let the other party take it is essential to the design of an effective human-computer dialogue system. However, there are not yet general guidelines on where a system should place turn transition relevance points (TRPs). In this paper, we report some observations on the relationship between TRPs and the amount of information presented in human dialogues and in existing dialogue systems. We then investigate the feasibility of planning TRPs based on this measure of information in a user study and discuss its preliminary results.
Cumulative error limits the usefulness of context in applications utilizing contextual information. It is especially a problem in spontaneous speech systems where unexpected input, out-of-domain utterances and missing information are hard to fit into the standard structure of the contextual model. In this paper we discuss how our approaches to recognizing speech acts address the problem of cumulative error. We demonstrate the advantage of the proposed approaches over those that do not address the problem of cumulative error. The experiments are conducted in the context of Enthusiast, a large Spanish-to-English speech-to-speech translation system in the appointment scheduling domain.
In this paper we discuss how we apply discourse predictions along with non-context-based predictions to the problem of parse disambiguation in Enthusiast, a Spanish-to-English translation system. We discuss extensions to our plan-based discourse processor that make this possible. We evaluate those extensions and demonstrate the advantage of exploiting context-based predictions over a purely non-context-based approach.
Attempts at discourse processing of spontaneously spoken dialogue face several difficulties: multiple hypotheses that result from the parser's attempts to make sense of the output from the speech recognizer, ambiguity that results from segmentation of multi-sentence utterances, and cumulative error, that is, errors in the discourse context which cause further errors when subsequent sentences are processed. In this paper we describe our robust parsers, our procedures for segmenting long utterances, and two approaches to discourse processing that attempt to deal with ambiguity and cumulative error.
We report on techniques for using discourse context to reduce ambiguity and improve translation accuracy in a multi-lingual (Spanish, German, and English) spoken language translation system. The techniques involve statistical models as well as knowledge-based models including discourse plan inference. This work is carried out in the context of the Janus project at Carnegie Mellon University and the University of Karlsruhe.
A direct abstract machine implementation of the core attribute-value logic operations is shown to decrease the number of operations and reduce the amount of storage required when compared to interpreters or indirect compilers. In this paper, we describe the fundamental data structures and compilation techniques that we have employed to develop a unification and constraint-resolution engine capable of performance rivaling that of directly compiled Prolog terms while greatly exceeding Prolog in flexibility, expressiveness and modularity.
In this paper, we will discuss the core architecture of our machine. We begin with a survey of the data structures supporting the small set of attribute-value logic instructions. These instructions manipulate feature structures by means of features, equality, and typing, and manipulate the program state by search and sequencing operations. We further show how these core operations can be integrated with a broad range of standard parsing techniques. Feature structures improve upon Prolog terms by allowing data to be organized by feature rather than by position. This encourages modular program development through the use of sparse structural descriptions which can be logically conjoined into larger units and directly executed. Standard linguistic representations, even of relatively simple local syntactic and semantic structures, typically run to hundreds of substructures. The type discipline we impose organizes information in an object-oriented manner by the multiple inheritance of classes and their associated features and type value constraints. In practice, this allows the construction of large-scale grammars in a relatively short period of time.
At run-time, eager copying and structure-sharing is replaced with lazy, incremental, and localized branch and write operations. In order to allow for applications with parallel search, incremental backtracking can be localized to disjunctive choice points within the description of a single structure, thus supporting the kind of conditional mutual consistency checks used in modern grammatical theories such as HPSG, GB, and LFG. Further attention is paid to the byte-coding of instructions and their efficient indexing and subsequent retrieval, all of which is keyed on type information.
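The core attribute-value operation the abstract machine compiles, unification of feature structures, can be illustrated with a minimal interpreter-style sketch. Nested Python dicts stand in for typed feature structures; the typing discipline, structure sharing, disjunction, and backtracking described above are deliberately omitted.

```python
def unify(fs1, fs2):
    """Unify two attribute-value structures represented as nested dicts.

    Returns the most general structure containing the information of
    both inputs, or None on a clash of atomic values. Typing, structure
    sharing, and backtracking are omitted from this sketch.
    """
    if fs1 == fs2:
        return fs1
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        result = dict(fs1)
        for feat, val in fs2.items():
            if feat in result:
                sub = unify(result[feat], val)
                if sub is None:
                    return None          # a feature clash fails the whole unification
                result[feat] = sub
            else:
                result[feat] = val       # information is conjoined monotonically
        return result
    return None                          # incompatible atomic values
```

Because descriptions are organized by feature rather than position, two sparse descriptions, say an agreement constraint and a category constraint, can be written separately and logically conjoined by unification, as the abstract notes.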
In this paper we discuss how we apply predictions from our plan-based discourse processor to the problem of disambiguation. The work we report here has been done in the context of the Enthusiast Spanish-to-English translation system, which is part of the JANUS Speech-to-Speech translation system. We discuss and evaluate two different methods for combining context-based predictions with non-context-based predictions, namely a genetic programming approach and a neural network approach. We demonstrate the advantage of incorporating context-based predictions over the purely non-context-based approach. The results presented here show a significant improvement over our previous results reported in Levin et al. (1995).
This paper describes a computational approach for automatically acquiring discourse regularities, or discourse rules, from naturally occurring dialogs. We describe a decision tree learning approach to acquiring discourse regularities for the task of determining the speech acts of sentences in a corpus of multi-lingual scheduling dialogs. The automatically acquired rules, which represent linguistic, semantic, pragmatic, and contextual information about the sentences, are human-intelligible and can be integrated directly into natural language systems or serve as a basis for human discourse analysts.
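At the heart of decision tree learning is choosing, at each node, the feature that most reduces the entropy of the class labels, here, the speech-act labels. The following is a minimal information-gain sketch; the toy features and labels in the usage example are invented for illustration, not drawn from the scheduling corpus.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def best_feature(examples, labels, features):
    """Pick the feature whose split yields the largest information gain."""
    base = entropy(labels)
    def info_gain(feat):
        gain = base
        for value in {ex[feat] for ex in examples}:
            subset = [lab for ex, lab in zip(examples, labels)
                      if ex[feat] == value]
            gain -= len(subset) / len(labels) * entropy(subset)
        return gain
    return max(features, key=info_gain)
```

A full learner applies this selection recursively, splitting the examples on the chosen feature until the labels at a node are pure; the resulting tree can be read off directly as human-intelligible rules.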
Yan Qu / firstname.lastname@example.org / Carnegie Mellon University