Traditional web search engines (e.g. Google) rely on the high bandwidth of
desktop/laptop computer monitors to provide very large amounts of information (e.g. long lists of retrieved
documents with summaries). This has two advantages. First, it ensures good efficiency, since humans can browse
through relatively large amounts of visual information (say, the contents of one or two screens) to get the
specific answer to their information need. Second, it gives maximum flexibility to the users in terms of search
strategies. For instance,
one user, searching for a specific known document, might just look at the titles of the retrieved pages, whereas a
different user, looking for an overview of a new topic, could first skim through the three-line summaries that are
often included in the results. In the standard web search paradigm, the search engine itself is not aware of the
user's preferred strategy; it simply provides a large amount of information, from which the user selects the aspects
she considers relevant to her information need.
Unfortunately, this approach is not applicable to low bandwidth devices. For example, on the small display of a cell phone, it takes a long time to browse through a complete list of documents. Another example is speech-only interaction, whether through phone-based services or through devices for the blind. Synthesizing the titles of the full list of retrieved documents is not only time-consuming, it is also counter-productive, since the user is likely to forget what was said. In such cases, the amount of information that can effectively be transmitted to the user is necessarily limited. Therefore, instead of relying entirely on the user to select relevant information, the system must preselect a limited amount of information to convey.
The solution proposed in this project is to resort to (quasi) natural language dialogue to explore the search space in order to fulfill the user's information need. Given an initial query from the user, the system sends it to a search engine and gets the results. Based on the query, the results, and its knowledge of the user (if any), the system selects a strategy among the following:
I propose to design and implement Eureka, a dialogue system for information retrieval based on extensions of the CMU Communicator project. The dialogue manager will use RavenClaw, a dialogue management architecture designed at CMU by Dan Bohus. The information retrieval backend will be provided by the Vivisimo search engine. For this project, I will use text input in order to set aside issues purely related to speech recognition. Spoken output, however, will be provided using a standard general-purpose voice of the Festival speech synthesis system. The modules I need to write in order to build the whole system are:
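The overall turn structure described above (send the query to the search backend, pick a presentation strategy, render the answer as speech) can be sketched as follows. This is only an illustrative sketch: the function and strategy names are my own assumptions, not the actual RavenClaw or Vivisimo APIs.

```python
def choose_strategy(query, results, user_model=None):
    """Pick a presentation strategy from the query, the retrieved
    results, and an (optional) model of the user.
    The strategy names here are hypothetical placeholders."""
    if not results:
        return "reformulate"        # nothing found: ask the user to rephrase
    if len(results) <= 3:
        return "read_titles"        # few hits: read the titles out directly
    return "summarize_clusters"     # many hits: present topic clusters first

def dialogue_turn(query, search, speak, user_model=None):
    """One turn of the dialogue: retrieve, select a strategy, respond.
    `search` stands in for the retrieval backend and `speak` for the
    speech-synthesis output module."""
    results = search(query)
    strategy = choose_strategy(query, results, user_model)
    if strategy == "reformulate":
        speak("I found nothing; could you rephrase your query?")
    elif strategy == "read_titles":
        for r in results:
            speak(r["title"])
    else:
        speak("I found %d documents on several topics." % len(results))
    return strategy
```

In a real system the `search` callable would wrap the scraped Vivisimo results and `speak` would call out to Festival; stubbing them as plain functions keeps the strategy-selection logic testable on its own.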
Slides of the project's initial talk
(These slides are, to a large extent, outdated; for up-to-date information, refer to the description of the project on this web page.)
Slides of the project's mid-term talk
September - October: Design of the system; implementation of the backend (scraping from Vivisimo.com); integration in the RavenClaw framework
November: Implementation of different search strategies
December: Evaluation (user studies)
Belkin, N.J., Cool, C., Stein, A. & Thiel, U. (1995) Cases, scripts and information-seeking strategies: On the design of interactive information retrieval systems. Expert Systems with Applications, 9 (3): 379-395
This paper describes a dialogue approach to IR. The authors propose a multidimensional representation of a user's information-seeking strategies, richer than keyword-based search, where the user is represented solely by a list of keywords (the query). This representation opens the way to a classification of search strategies and to an analysis of the dialogue patterns used to apply each strategy. They argue that these dialogue patterns (implemented as scripts) can be derived from data, following a Case-Based Reasoning approach. However, ultimately, their system uses a graphical interface instead of natural language dialogue. I believe that the complexity and rigidity of the interface might deter users (contrast this with standard web search engines, which have a very simple interface and provide large amounts of information that the user can exploit freely). Nevertheless, this paper is an excellent example of dialogue-based approaches to IR, and their approach, extended for improved flexibility, could be used in environments where traditional high bandwidth approaches are not applicable.