
Journal of Artificial Intelligence Research 16 (2002), pp. 293-319. Submitted 11/01; published 5/02.
© 2002 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.

Automatically Training a Problematic Dialogue Predictor for a Spoken Dialogue System

Marilyn A. Walker walker@research.att.com
Irene Langkilde-Geary langkil@isi.edu
Helen Wright Hastie hastie@research.att.com
Jerry Wright wright@research.att.com
Allen Gorin algor@research.att.com
AT&T Shannon Laboratory
180 Park Ave., Bldg 103, Room E103
Florham Park, NJ 07932


Abstract:

Spoken dialogue systems promise efficient and natural access to a large variety of information sources and services from any phone. However, current spoken dialogue systems are deficient in their strategies for preventing, identifying, and repairing problems that arise in the conversation. This paper reports results on automatically training a Problematic Dialogue Predictor to predict problematic human-computer dialogues using a corpus of 4692 dialogues collected with the How May I Help You (SM) spoken dialogue system. The Problematic Dialogue Predictor can be applied immediately to the system's decision of whether to transfer the call to a human customer care agent, or used as a cue for the system's dialogue manager to modify its behavior to repair problems, and perhaps even to prevent them. We show that a Problematic Dialogue Predictor using automatically-obtainable features from the first two exchanges in the dialogue can predict problematic dialogues 13.2% more accurately than the baseline.
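
As a rough illustration of the kind of classifier the abstract describes, the sketch below trains a generic binary classifier on placeholder features drawn from the first two exchanges of each dialogue. The feature matrix, the labels, and the use of scikit-learn's logistic regression are all illustrative assumptions, not the authors' actual learner or feature set.

    # Hypothetical sketch, not the authors' implementation: predict whether a
    # dialogue will be problematic from features of its first two exchanges.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Placeholder data: one row per dialogue; columns stand in for
    # automatically-obtainable features of the first two exchanges
    # (e.g. recognizer confidence scores, utterance lengths).
    X = np.random.rand(4692, 10)
    y = np.random.randint(0, 2, size=4692)  # 1 = problematic, 0 = successful

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

On real data, the predictor's accuracy would be compared against a majority-class baseline, which is the comparison the 13.2% improvement above refers to.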



 