Workshop Program 

 The workshop proceedings are now available for download.

8 - 9  Check-in and on-site registration
9 - 10  Welcome and introductions
10 - 10:30  Morning coffee break
10:30 - 12  Morning roundtable sessions
 Evaluation [Sala A]
Spoken dialog systems must be evaluated to assess performance and to determine how well they meet actual user needs and expectations. How can we best evaluate and compare different approaches to spoken dialog systems? Is it possible to collect corpora of dialogs for evaluation and for use in competitions like those in related fields such as automatic speech recognition (ASR) and machine translation (MT)? A key aspect of spoken dialog systems is that they are situated in the real world. How can evaluation strategies take into account factors such as the urgency of the information desired and the location of access (e.g. home, car, public kiosk)? Can we design application-independent spoken dialog system evaluation strategies?
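One existing answer to the last question is a PARADISE-style evaluation (Walker et al.), which models overall performance as a weighted combination of task success and dialog costs. A minimal sketch; the dialog logs and regression weights below are invented for illustration (in practice the weights are fit against user satisfaction surveys):

```python
from statistics import mean

# Hypothetical per-dialog logs: task success plus two cost measures.
dialogs = [
    {"task_success": 1, "turns": 8,  "asr_errors": 1},
    {"task_success": 0, "turns": 15, "asr_errors": 4},
    {"task_success": 1, "turns": 6,  "asr_errors": 0},
]

# Hypothetical weights: reward task success, penalize length and errors.
W_SUCCESS, W_TURNS, W_ERRORS = 5.0, -0.2, -1.0

def performance(d):
    """PARADISE-style score: weighted success minus weighted costs."""
    return (W_SUCCESS * d["task_success"]
            + W_TURNS * d["turns"]
            + W_ERRORS * d["asr_errors"])

scores = [performance(d) for d in dialogs]
avg_score = mean(scores)
```

Because the metric is defined over generic success and cost measures rather than application-specific features, the same formula can compare systems across tasks.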
 Empirical / Statistical Methods [Sala C]
Recent work has shown the benefit of learning (rather than hand-designing) dialog strategies -- for example, from corpora, from a user model, or from empirical user feedback. In what situations are these techniques appropriate? Based on the work done so far, what are the benefits and costs of this approach? How can traditional techniques be combined with learning techniques? What optimization metric is appropriate? How can a resulting dialog strategy be documented in an easy-to-understand way? What barriers to commercial use remain?
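As a minimal illustration of what "learning a dialog strategy" can mean, here is a tabular Q-learning sketch on a toy slot-filling task. The environment, rewards, and simulated-user behavior are all hypothetical, standing in for a user model or empirical feedback:

```python
import random

random.seed(0)

# Toy slot-filling dialog: the state is the number of filled slots,
# and the system can either ask for another slot or close the dialog.
N_SLOTS = 2
ACTIONS = ["ask", "close"]

def step(state, action):
    """Simulated user: return (next_state, reward, done). Hypothetical."""
    if action == "ask":
        if state < N_SLOTS and random.random() < 0.8:  # user answers
            return state + 1, -1, False                # small turn cost
        return state, -1, False                        # misrecognition
    # "close": large reward only if every slot was filled first
    return state, (20 if state == N_SLOTS else -10), True

def train(episodes=5000, alpha=0.2, gamma=0.95, eps=0.1):
    """Standard epsilon-greedy tabular Q-learning."""
    q = {(s, a): 0.0 for s in range(N_SLOTS + 1) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: q[(s, x)]))
            s2, r, done = step(s, a)
            best_next = 0.0 if done else max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)])
          for s in range(N_SLOTS + 1)}
# The learned strategy asks until both slots are filled, then closes.
```

The resulting policy table is also one answer to the documentation question above: a small state-to-action table is easy to inspect and explain.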
12 - 12:30  Morning sessions' summary
12:30 - 2  Lunch
2 - 3:30  Afternoon roundtable sessions
 Applications [Sala A]
Spoken dialog systems come in many flavors, and it is sometimes hard to identify common traits and crucial differences among them. What are the different classes of applications of spoken dialog systems? What tasks do spoken dialog systems support best, and when should we prefer other means of interaction? What is the next spoken dialog system killer app?
 Automatic Speech Recognition and Language Understanding [Sala C]
In most current dialog systems, the speech recognizer works independently of the rest of the system. How can we integrate speech recognition more closely with the rest of the dialog system? In what ways can the dialog system use more of the information from the recognizer (such as the full n-best parser output)? How can information in the dialog system help the recognizer? Can we use dialog specifically to recover from speech recognition errors? Can systems automatically ignore recognition errors that do not affect the semantics of the dialog? Instead of searching for the word sequence with maximum likelihood, can the recognizer search for the concepts with maximum likelihood?
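One simple way for the dialog system to use more recognizer output is to rescore an n-best list with dialog-state expectations. A sketch under invented assumptions: the n-best hypotheses, their scores, and the context bonuses below are all hypothetical (here the system has just asked for a destination, and "austin" was already rejected once):

```python
# Hypothetical ASR n-best list: (hypothesis, recognizer log-probability).
nbest = [
    ("flights to boston on monday", -4.1),
    ("flights to austin on monday", -4.3),
    ("lights to boston one day",    -6.0),
]

# Hypothetical dialog-context bonuses for words the system expects
# (positive) or has already rejected (negative).
context_bonus = {"boston": 1.0, "austin": -0.5}

def rescore(nbest, bonuses):
    """Combine recognizer scores with dialog-context bonuses."""
    rescored = []
    for hyp, logp in nbest:
        bonus = sum(w for word, w in bonuses.items() if word in hyp.split())
        rescored.append((hyp, logp + bonus))
    return sorted(rescored, key=lambda x: x[1], reverse=True)

best_hyp, best_score = rescore(nbest, context_bonus)[0]
```

The same idea runs in the other direction as well: the expected words can be fed back into the recognizer as a dynamic language-model bias rather than applied after the fact.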
3:30 - 4  Afternoon sessions' summary
4 - 4:30  Wrap-up
4:30 onwards  Afternoon coffee social

Please send comments to