My work is focused on various aspects of computer music, a field which poses many challenges for computer science. I also direct the Just-In-Time Lecture Consortium, which offers a new technology for low-cost, computer-based training and education.

A central problem in computer music is expressive control, that is, the detailed control of timing, gesture, nuance, and tone quality that is essential to music. This problem has many facets, resulting in a variety of research directions. The Computer Music Project has developed new languages, development tools for real-time systems, synthesis techniques, and music understanding systems. This research is more than intrinsically interesting. It can shed light on related problems in real-time systems, multimedia, human-computer interaction, and artificial intelligence. Moreover, new possibilities of control and interaction in music are changing the very nature of music composition, performance, and aesthetics.

One research example is the development of new languages for expressing temporal behavior. One of these is Nyquist, a language that provides a single abstraction mechanism for the seemingly different notions of ``note,'' ``instrument,'' and ``musical score.'' Nyquist gives composers an elegant, uniform notation that spans the range from low-level digital signal processing to high-level music composition. At present, Nyquist does not generate sound in real-time, but as processors increase in speed, software for sound synthesis will replace special-purpose hardware synthesizers. Nyquist already provides one of the fastest sound synthesis implementations, and a future version will support real-time synthesis.
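The unifying idea can be sketched outside of Nyquist itself. In the toy Python below (the function names `osc`, `seq`, and `sim` are illustrative, not Nyquist's actual API), a sound is just a function of time, so a ``note,'' a phrase built from notes, and a whole ``score'' all share one abstraction and compose freely:

```python
# Illustrative sketch, NOT Nyquist: sounds modeled as functions of time,
# so notes, phrases, and scores are all the same kind of object.
import math

def osc(freq, dur):
    """A 'note': a sine tone of the given frequency (Hz) and duration (s)."""
    def sound(t):
        return math.sin(2 * math.pi * freq * t) if 0 <= t < dur else 0.0
    sound.dur = dur
    return sound

def seq(a, b):
    """Sequential combination: b starts when a ends (a 'score' operation)."""
    def sound(t):
        return a(t) if t < a.dur else b(t - a.dur)
    sound.dur = a.dur + b.dur
    return sound

def sim(a, b):
    """Simultaneous combination: mix a and b (chords, ensembles)."""
    def sound(t):
        return a(t) + b(t)
    sound.dur = max(a.dur, b.dur)
    return sound

# A two-note melody followed by a two-note chord, built from one abstraction.
phrase = seq(seq(osc(440, 0.5), osc(660, 0.5)),
             sim(osc(440, 1.0), osc(550, 1.0)))
```

Because `phrase` is itself a sound function, it can in turn be passed to `seq` or `sim`, which is the sense in which one mechanism spans notes, instruments, and scores.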

Expressive control of musical tones is another topic of research. A violin is expressive because there are many parameters under continuous control by the player, including bow pressure, finger and bow positions, and bow velocity. These give rise to variations in the resulting sound. My colleagues and I have developed a new synthesis technique, spectral interpolation, which allows us to synthesize tones with interesting variations in spectra. Spectral interpolation has been used to accurately synthesize a variety of instruments. In the future, we will use this technique to give composers and performers greater intuitive control over synthesized sound.
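A minimal sketch of the idea behind spectral interpolation (the published technique is more sophisticated; the function name and linear crossfade here are illustrative assumptions): additive synthesis in which the harmonic amplitudes interpolate from one spectrum to another over the course of the tone, so the timbre evolves continuously rather than staying fixed.

```python
# Sketch of spectral interpolation via additive synthesis: harmonic
# amplitudes crossfade linearly from spec_a to spec_b over the tone.
import numpy as np

def spectral_interp_tone(f0, dur, spec_a, spec_b, sr=44100):
    """Synthesize a tone whose harmonic spectrum moves from spec_a to spec_b.

    f0     -- fundamental frequency in Hz
    dur    -- duration in seconds
    spec_a -- amplitudes of harmonics 1..N at the start of the tone
    spec_b -- amplitudes of harmonics 1..N at the end of the tone
    """
    t = np.arange(int(dur * sr)) / sr
    mix = t / dur                        # ramps 0 -> 1 over the tone
    out = np.zeros_like(t)
    for k, (a0, a1) in enumerate(zip(spec_a, spec_b), start=1):
        amp = (1 - mix) * a0 + mix * a1  # interpolated harmonic amplitude
        out += amp * np.sin(2 * np.pi * k * f0 * t)
    return out

# A bright attack (strong upper harmonics) decaying toward a purer tone.
tone = spectral_interp_tone(220.0, 1.0, [1.0, 0.8, 0.6], [1.0, 0.1, 0.0])
```

In practice the target spectra would be measured from recordings of a real instrument, and the interpolation path itself becomes an expressive parameter.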

Real-time performance systems present interesting problems. I have developed several systems for real-time music understanding that ``listen'' to a live performance and derive some abstract information regarding rhythm, tempo, harmony, etc. The degree of understanding is usually demonstrated by having the system participate in the performance. One example is Computer Accompaniment, in which the computer follows a performance in a score and plays an accompaniment in synchronization with the performer. My computer accompaniment technology appeared in a commercial product in 1994. Another example is a system that listens to an improvisor playing 12-bar blues and plays the part of a rhythm section. Systems that perform both of these tasks have been built here, but this is only the beginning of what is possible. Current work includes more sophisticated systems for listening to improvisations, ensembles, and vocalists.
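One core subproblem, locating the performer in the score despite wrong, missing, and extra notes, can be sketched with a dynamic-programming matcher. This toy version (real accompaniment systems are considerably more elaborate, and `score_position` is a name invented for this sketch) uses a longest-common-subsequence style recurrence over pitch sequences:

```python
# Toy dynamic-programming score matcher: find the score prefix that
# best explains the performed notes, tolerating performance errors.

def score_position(score, performance):
    """Return the index i such that score[:i] best matches the performance.

    dp[i][j] = number of notes matched between score[:i] and
    performance[:j], computed with the LCS recurrence.
    """
    n, m = len(score), len(performance)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if score[i - 1] == performance[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Report the shortest score prefix achieving the maximum match count.
    best = max(dp[i][m] for i in range(n + 1))
    for i in range(n + 1):
        if dp[i][m] == best:
            return i

score = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI pitches in the score
played = [60, 62, 63, 65]                  # one wrong note (63 for 64)
pos = score_position(score, played)        # pos == 4: score[:4] matched
```

An accompaniment system would run such a matcher incrementally as notes arrive and use the estimated position (and its rate of change) to schedule the accompaniment.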

Education is another interest. Our country spends more for education than for health care, and as with health care, education costs are growing faster than the inflation rate. Computer-based tutoring systems have much to offer, but development costs and lack of expertise have made it difficult to produce good tutoring systems. Work on automating instructional design principles to lower the cost of intelligent tutoring systems led to an even simpler approach, ``Just-In-Time Lectures,'' in which presentations are captured using digital video, synchronized slides, a table-of-contents, and links to the Web. Just-In-Time Lectures have been used to deliver entire courses and have been adopted by large corporations for training.
