PROBE on The Performer, an Interactive Music System for Live Performance
Organized by Roger Dannenberg
More than half of American households have a practicing musician. Musicians enjoy an unprecedented level of music technology, including electronic instruments, music editors, and home studios. Notably lacking is technology that can perform live music. Many musicians play along with recordings, prepare “backing tracks” with MIDI, or use accompaniment generators such as Band-In-A-Box(tm), but all of these generate music that is insensitive to live performance variation and interaction. We are developing a new generation of music technology that can augment live music performance by amateurs and professional musicians alike.
Interactive computer music is not new, and Carnegie Mellon has been a leader in interactive music systems research. Previous work at Carnegie Mellon includes the development of Computer Accompaniment systems, commercialized as SmartMusic(tm), which follow musicians in a score and synchronize an accompaniment. Computer Accompaniment has goals similar to those of the current project, but it makes different assumptions and applies to different kinds of music. In particular, the Performer is intended for "beat-based" music, where musicians synchronize through a shared sense of beat and tempo, and where the music may not be strictly notated as required by most Computer Accompaniment systems.
The Performer will offer a new model for music performance in which live acoustic instruments, prerecorded music, and computer-generated music are all combined to make performances. Computation is involved in preparing music materials, sensing performance gestures, giving feedback to musicians, listening to live music audio, processing and synthesizing music, and adding audio effects to sounds to make them more lifelike when played over loudspeakers.
In order to participate in music making, the Performer must incorporate a computational model of the entire music making process. For example, music for this project can be modeled as a sequence of sections not necessarily played in the same order every time. Sections may be divided into measures, and measures into beats. The project must view the process of music performance from a computational standpoint, developing specific methods to solve problems that include identifying the tempo and beat, cuing entrances, generating high-quality digital audio in synchrony with the beat, and delivering this signal to multiple loudspeakers.
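The hierarchical model described above can be sketched in code. This is a minimal illustration, not the project's actual implementation; the class and field names (`Piece`, `Section`, `Measure`, `order`) are hypothetical, chosen to show how a performed order of sections can differ from the written order:

```python
from dataclasses import dataclass

@dataclass
class Measure:
    beats: int = 4  # number of beats in this measure

@dataclass
class Section:
    name: str
    measures: list  # list of Measure

@dataclass
class Piece:
    sections: dict  # section name -> Section
    order: list     # performed order of section names; may repeat or reorder

    def total_beats(self):
        """Count beats over the performed order, not the written order."""
        return sum(m.beats for name in self.order
                   for m in self.sections[name].measures)

# A song form like A-A-B-A: sections need not be played as written.
piece = Piece(
    sections={
        "A": Section("A", [Measure(4)] * 8),
        "B": Section("B", [Measure(4)] * 8),
    },
    order=["A", "A", "B", "A"],
)
print(piece.total_beats())  # 4 sections x 8 measures x 4 beats = 128
```

Representing the performance as an order over named sections lets the system handle repeats, vamps, and cued jumps without duplicating the underlying music data.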
The Performer development will be divided into multiple components, each of which may be worked on by specialists. The beat acquisition component will use sensors such as foot pedals in conjunction with signal processing to detect the beat. Music cuing will explore ways for live musicians to signal the computer to coordinate entrances using video, accelerometers, foot switches, music recognition, and other techniques. The HCI component will study methods for human-computer communication, for example to allow musicians to monitor the computer's internal sense of music location, or to allow people to make adjustments in the computer performance. The music synthesis component will develop algorithms to produce high-quality music that is synchronized to the beat. This work will most likely use pre-recorded sound and manipulate it in real-time to accommodate tempo changes. Finally, the music diffusion component will explore the use of multiple loudspeakers to simulate the three-dimensional qualities of live music.
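As a concrete illustration of the beat acquisition idea, the sketch below estimates tempo and predicts the next beat from foot-pedal tap timestamps. This is an assumed, simplified approach (median inter-tap interval), not the project's actual signal-processing method; the function names are hypothetical:

```python
import statistics

def estimate_tempo(tap_times):
    """Estimate tempo in beats per minute from pedal tap times (seconds).

    Uses the median inter-tap interval, which is robust to a single
    irregular or missed tap.
    """
    if len(tap_times) < 2:
        raise ValueError("need at least two taps to estimate tempo")
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return 60.0 / statistics.median(intervals)

def predict_next_beat(tap_times):
    """Predict the next beat time: last tap plus the median beat period."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return tap_times[-1] + statistics.median(intervals)

taps = [0.00, 0.51, 1.00, 1.49, 2.00]  # taps roughly 0.5 s apart
print(round(estimate_tempo(taps)))  # → 120
```

A real system would run continuously, weight recent taps more heavily to track tempo changes, and fuse pedal input with audio-derived beat estimates; the prediction step is what allows synthesized audio to be scheduled in synchrony with the live beat.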
Impact on the Center and Microsoft Research
At least two concerts will be held to demonstrate initial results from the Performer. These will include at least one "classical" piece for chamber orchestra augmented by electronic sounds, and one piece for rock or jazz, for example augmenting a jazz ensemble with a synthetic string orchestra, or adding a Latin percussion section to a rock band. The project is also expected to draw input from various groups on campus, including statistics, machine learning, audio engineering, music, architecture, human-computer interaction, and computer science. At least one Microsoft researcher with a music background is being consulted on user interface issues, and we hope to involve others, particularly in the use of novel sensors for real-time interaction.