EUREKA: Dialogue-based Information Retrieval
for Very Low Bandwidth Environments

Antoine Raux

11-743 Advanced Information Retrieval Seminar and Lab - Fall 2003

This is the web page of my project for the Advanced IR Seminar and Lab.

Problem Background

Traditional web search engines (e.g. Google) rely on the high bandwidth of desktop/laptop computer monitors to provide very large amounts of information (e.g. long lists of retrieved documents with summaries). This has two advantages. First, it ensures good efficiency, since humans can browse through relatively large amounts of visual information (say, the contents of one or two screens) to find the specific answer to their information need. Second, it gives users maximum flexibility in their choice of search strategies. For instance, one user, searching for a specific known document, might look only at the titles of the retrieved pages, whereas another user, looking for an overview of a new topic, might first skim the three-line summaries that are often included in the results. In the standard web search paradigm, the search engine itself is not aware of the user's preferred strategy; it simply provides a large amount of information, from which the user selects the aspects she considers relevant to her information need.
Unfortunately, this approach is not applicable to low bandwidth devices. For example, on the small display of a cell phone, it takes a long time to browse through a complete list of documents. Another example is speech-only interaction, whether in phone-based services or in devices for the blind. Synthesizing the titles of the full list of retrieved documents is not only time-consuming but also counter-productive, since the user is likely to forget what was said. In such cases, the amount of information that can effectively be transmitted to the user is necessarily limited. Therefore, instead of relying entirely on the user to select relevant information, the system must preselect a limited amount of information to convey.

Proposed Solution

The solution proposed in this project is to resort to (quasi) natural language dialogue to explore the search space in order to fulfill the user's information need. Given an initial query from the user, the system sends it to a search engine and gets the results. Based on the query, the results, and its knowledge of the user (if any), the system selects one of the following strategies:

This list is tentative and non-exhaustive, which means it is likely to change over the course of the project, as I get a better grasp of the problem and the desirable strategies. Initially, the system will have a fixed default behavior (e.g. always give the list of clusters, except if no more clusters are found, in which case give the list of document titles). In this case, most of the initiative is left to the user, who can explicitly request one of the above strategies if the default behavior is not satisfactory. Ultimately, the goal is to shift to a mixed-initiative system, which should be able to establish the user's specific information need (possibly by asking the user questions such as "are you looking for a specific document or an overview of the field?") and choose the search strategy accordingly. Given the time frame of this project, I do not intend to explore this last aspect in detail; it will be left for future research. Hence, the focus will be on designing and implementing a set of efficient strategies for low bandwidth dialogue-based information retrieval.
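The fixed default behavior described above (present clusters, and fall back to document titles when no further clusters are found) could be sketched as follows. This is a minimal illustration, not the actual system; the result structure and function names are assumptions made for the example.

```python
# Hypothetical sketch of the default strategy: summarize the results by
# cluster labels when clusters are available, otherwise fall back to
# listing document titles. The "results" dictionary layout is invented
# purely for illustration.

def select_default_strategy(results):
    """Decide what limited information to convey for one result set."""
    if results.get("clusters"):
        # Clusters found: convey only the cluster labels.
        labels = [c["label"] for c in results["clusters"]]
        return ("clusters", labels)
    # No more clusters: convey the document titles instead.
    titles = [d["title"] for d in results.get("documents", [])]
    return ("titles", titles)
```

For example, a result set containing clusters labeled "speech synthesis" and "dialogue systems" would be presented as those two labels, keeping the spoken output short regardless of how many documents each cluster contains.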

Implementation

I propose to design and implement Eureka, a dialogue system for information retrieval based on extensions of the CMU Communicator project. The dialogue manager will use RavenClaw, a dialogue management architecture designed at CMU by Dan Bohus. The information retrieval backend will be provided by the Vivisimo search engine. For this project, I will use text input to eliminate issues purely related to speech recognition. Spoken output, however, will be provided using a standard general-purpose voice of the Festival speech synthesis system. The modules I need to write in order to build the whole system are:
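To give an idea of the backend module's role, the sketch below parses clustered search results into simple objects that the dialogue manager could then present. Since the actual module scrapes Vivisimo's HTML output, whose structure is not reproduced here, the XML encoding, tag names, and class names in this example are all invented for illustration.

```python
# Hypothetical backend sketch: turn a (simplified, invented) XML encoding
# of clustered search results into Cluster/Document objects. The real
# backend would scrape Vivisimo's HTML instead.
from dataclasses import dataclass
from xml.etree import ElementTree


@dataclass
class Document:
    title: str
    url: str


@dataclass
class Cluster:
    label: str
    documents: list


def parse_results(xml_text):
    """Parse one clustered result set into a list of Cluster objects."""
    root = ElementTree.fromstring(xml_text)
    clusters = []
    for c in root.findall("cluster"):
        docs = [Document(d.findtext("title"), d.findtext("url"))
                for d in c.findall("document")]
        clusters.append(Cluster(c.get("label"), docs))
    return clusters
```

With such a representation, the dialogue strategies only need the cluster labels (and, on fallback, the document titles), which keeps the strategy code independent of the scraping details.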

Documents

Slides of the project's initial talk
(These slides are largely outdated; for current information, refer to the project description on this web page.)
Slides of the project's mid-term talk

Final report

Timetable

September - October: Design of the system; implementation of the backend (scraping from Vivisimo.com); integration into the RavenClaw framework
November: Implementation of the different search strategies
December: Evaluation (user studies); final report

Related Work

Belkin, N.J., Cool, C., Stein, A. & Thiel, U. (1995) Cases, scripts and information-seeking strategies: On the design of interactive information retrieval systems. Expert Systems with Applications, 9 (3): 379-395
This paper describes a dialogue approach to IR. The authors propose a multidimensional representation of a user's information-seeking strategies, richer than keyword-based search, where the user is represented solely by a list of keywords (the query). This representation opens the way to a classification of search strategies and to the analysis of the dialogue patterns used to apply each strategy. They argue that these dialogue patterns (implemented as scripts) can be derived from data, following a Case-Based Reasoning approach. However, their system ultimately uses a graphical interface instead of natural language dialogue. I believe that the complexity and rigidity of the interface might deter users (this is to be contrasted with standard web search engines, which have a simplistic interface and provide large amounts of information that the user can exploit freely). Nevertheless, this paper is an excellent example of a dialogue-based approach to IR, and their approach, extended for improved flexibility, could be used in environments where traditional high bandwidth approaches are not applicable.