Interactive Performance

This set of papers addresses the software architecture issues of building interactive performance systems. The issues include controlling when things happen, writing event-driven applications, and obtaining low-latency response. There are also software engineering issues of portability, reliability, and ease of development. I tend to address all or at least many of these issues in each paper, since it is the combination of requirements that leads to the solutions I have found.

This work might be divided into six parts (so far): the CMU Midi Toolkit, support for animation, the W system, the Aura system, distributed performance systems, and live coding.

See also Computer Accompaniment


CMU Midi Toolkit

Dannenberg, ``The CMU MIDI Toolkit,'' in Proceedings of the 1986 International Computer Music Conference, (October 1986), pp. 53-56.

This paper describes an early version of the CMU MIDI Toolkit. It is mainly a description of the system and how it works. I was not really aware at the time of what a great way this was to program, so thoughts about software engineering and attempts to understand what is good about CMT come later.

ABSTRACT: The CMU MIDI Toolkit is a collection of programs for experimental computer music education, composition, performance, and research. The programs are intended to allow low-cost commercial synthesizers to be used in experimental and educational applications. The CMU MIDI Toolkit features a text-based score language and translator, a real-time programming environment, and support for arbitrary tuning and rhythm.

Postscript Version.


Dannenberg, ``Recent Developments in the CMU Midi Toolkit,'' Año 2000 Symposium Proceedings, Julio Estrada, ed., University of Mexico, 1991.

I took the opportunity of this seminar in Mexico to update the 1986 ICMC paper with some of the new components of the CMU MIDI Toolkit, especially the integration of scores and programming. Much of this is covered in sections of the JNMR article, which should be easier to locate.

Introduction. The CMU Midi Toolkit is a software system that provides an easy-to-use interface to Midi, a standard interface for music synthesizers. The intent of the system is to support experimental computer music composition, research, and education, rather than to compete with commercial music systems. The CMU Midi Toolkit is especially useful for building interactive real-time systems and for creating non-traditional scores, which might include non-standard tunings, multiple tempi, or time-based rather than beat-based notation.

Postscript Version.


Support for Animation

Dannenberg, ``Real Time Control For Interactive Computer Music and Animation,'' in Proceedings of The Arts and Technology II: A Symposium, Connecticut College, (February 1989), pp. 85-94.

This is really a description of how the CMU MIDI Toolkit solves problems of writing interactive software. It also outlines an architecture for adding graphics, which raises the problem of long-running subroutines (e.g., filling a large array of pixels) that would interfere with real-time processing without some extension to CMT. This paper also documents my experience developing "Assuming You Wish...," a work for trumpet, computer music, and computer graphics. The JNMR paper below contains some of this paper.

ABSTRACT: Real-time systems are commonly regarded as the most complex form of computer program due to parallelism, the use of special purpose input/output devices, and the fact that time-dependent errors are hard to reproduce. Several practical techniques can be used to limit the complexity of implementing real-time interactive music and animation programs. The techniques are: (1) a program structure in which input events are translated into procedure calls, (2) the use of non-preemptive programs where possible, (3) event-based programming which allows interleaved program execution, automatic storage management, and a single run-time stack, (4) multiple processes communicating by messages where task preemption is necessary, and (5) interface construction tools to facilitate experimentation and program refinement.

These techniques are supported by software developed by the author for real-time interactive music programs and more recently for animation. Although none of these techniques are new, they have never been used in this combination, nor have they been investigated in this context. Implementation details and examples that illustrate the advantage of these techniques are presented. Emphasis is placed on software organization. No specific techniques for sound and image generation are presented.
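The first three techniques fit in a few lines. Here is a minimal sketch in Python (the toolkit itself is in C; everything below except the name of the cause scheduling call, which follows CMT, is my own illustration): input events arrive as ordinary procedure calls, a non-preemptive loop dispatches timed events from a queue, and a long-running job such as pixel filling is split into small self-rescheduling steps so it never blocks the loop.

    import heapq, time

    event_queue = []   # entries: (due_time, seq, procedure, args)
    seq = 0

    def cause(delay, proc, *args):
        # Schedule proc(*args) to run 'delay' seconds from now. The name
        # follows CMT's scheduling call; the implementation is mine.
        global seq
        heapq.heappush(event_queue, (time.time() + delay, seq, proc, args))
        seq += 1

    def note_on(pitch, velocity):
        # Technique (1): an input event arrives as an ordinary procedure call.
        print("note on", pitch, velocity)

    def fill_pixels(row, n_rows):
        # Technique (3): a long computation (e.g., filling pixels) is split
        # into small self-rescheduling steps, so the single non-preemptive
        # loop below never blocks for long.
        # ... render one row here ...
        if row + 1 < n_rows:
            cause(0, fill_pixels, row + 1, n_rows)

    def run():
        # Non-preemptive dispatch: one run-time stack, no locks required.
        while event_queue:
            due, _, proc, args = heapq.heappop(event_queue)
            time.sleep(max(0.0, due - time.time()))
            proc(*args)

    cause(0, note_on, 60, 100)        # as if a MIDI note just arrived
    cause(0.01, fill_pixels, 0, 480)
    run()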

Postscript Version.


Dannenberg, ``Software Support for Interactive Multimedia Performance,'' in Proceedings of The Arts and Technology 3, Connecticut College, (April 1991), pp. 148-156.

This paper extends the paper presented at Arts and Technology II, and is based on my experience developing "Ritual of the Science Makers," for flute, violin, cello, computer music and computer animation. The new technology here is an integration of scores with interactive programming, so that scores or timed scripts can be used to update variables and call procedures. Also, the multitasking system (Amiga) and MIDI device driver that I used enabled live performance input to be recorded while the interaction was running. These sequences could be played back for testing and also transcribed to notated music for performers. The JNMR paper below contains some of this paper.
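As a rough illustration of the score-as-script idea (a Python sketch of my own, not the toolkit's actual score format): a timed score is just a list of (time, procedure, arguments) entries executed by the scheduler alongside live input handling, so a score event can set a variable or call a procedure like any other event.

    import sched, time

    tempo = 1.0  # a variable the score can change mid-performance

    def set_var(name, value):
        # A score event may simply set a variable...
        globals()[name] = value
        print(name, "=", value)

    def trigger_section(n):
        # ...or call any procedure in the running program.
        print("entering section", n)

    # The "score" is just (seconds, procedure, args) triples.
    score = [
        (0.0, trigger_section, (1,)),
        (4.0, set_var, ("tempo", 1.2)),
        (8.0, trigger_section, (2,)),
    ]

    s = sched.scheduler(time.time, time.sleep)
    for t, proc, args in score:
        s.enter(t, 0, proc, args)
    s.run()  # live input handlers would be dispatched from the same loop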

ABSTRACT: A set of techniques has been developed and refined to support the demanding software requirements of combined interactive computer music and computer animation. The techniques include a new programming environment that supports an integration of procedural and declarative score-like descriptions of interactive real-time behavior.

Postscript Version.


Dannenberg, ``Software Techniques for Interactive Performance Systems,'' in International Workshop on Man-Machine Interaction in Live Performance, Scuola di Studi Superiori Universitari e di Perfezionamento, Pisa, Italy, 1991, pp. 19-28.

Focuses on the software engineering issues of the CMU Midi Toolkit. Most of this is repeated in the JNMR article that followed.

ABSTRACT: The CMU MIDI Toolkit supports the development of complex interactive computer music performance systems. Four areas supported by the system are: input handling, memory management, synchronization, and timing (scheduling and sequencing). These features are described and then illustrated by their use in two performance systems.

Postscript Version.


Dannenberg, ``Software Design for Interactive Multimedia Performance,'' Interface - Journal of New Music Research, 22(3) (August 1993), pp. 213-228.

This paper combines elements of several previous conference and workshop presentations. It's a good summary of many issues encountered in interactive music performance systems and my solutions to them at the time. We created W shortly after this paper was written in an attempt to generalize and extend the approach described here.

ABSTRACT: A set of techniques has been developed and refined to support the demanding software requirements of combined interactive computer music and computer animation. The techniques include a new programming environment that supports an integration of procedural and declarative score-like descriptions of interactive real-time behavior. Also discussed are issues of asynchronous input, concurrency, memory management, scheduling, and testing. Two examples are described.

[Postscript Version] [Adobe Acrobat (PDF) Version]


The W System

Dannenberg and Rubine, ``Toward Modular, Portable, Real-Time Software,'' in Proceedings of the 1995 International Computer Music Conference, International Computer Music Association, (September 1995), pp. 65-72.

W generalizes and extends the architecture presented in the JNMR paper listed above.

ABSTRACT: W is a systematic approach toward the construction of event-driven, interactive, real-time software. W is driven by practical concerns, including the desire to reuse existing code wherever possible, the limited real-time software support in popular operating systems, and system cost. W provides a simple, efficient, software interconnection system, giving objects a uniform external interface based on setting attributes to values via asynchronous messages. An example shows how W is used to implement real-time computer music programs combining graphics and MIDI.
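A minimal sketch of the interconnection style the abstract describes, assuming nothing about W's actual C++ interfaces (the code and names below are mine): every object accepts asynchronous "set attribute to value" messages through one uniform entry point, so senders need no compile-time knowledge of receiver types.

    from collections import deque

    message_queue = deque()  # entries: (receiver, attribute, value)

    def send(obj, attribute, value):
        # Asynchronous: the sender never waits for a reply.
        message_queue.append((obj, attribute, value))

    class WObject:
        def deliver(self, attribute, value):
            # Uniform external interface: a message names an attribute,
            # and the object maps it to a setter, e.g. "pitch" -> set_pitch.
            getattr(self, "set_" + attribute)(value)

    class Synth(WObject):
        def set_pitch(self, p):
            print("pitch now", p)

    def dispatch():
        while message_queue:
            obj, attr, value = message_queue.popleft()
            obj.deliver(attr, value)

    synth = Synth()
    send(synth, "pitch", 60)  # works for any object with a set_pitch
    dispatch()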

[Postscript Version] [Adobe Acrobat (PDF) Version]


The Aura System

Dannenberg and Brandt, ``A Flexible Real-Time Software Synthesis System,'' in Proceedings of the 1996 International Computer Music Conference, International Computer Music Association, (August 1996), pp. 270-273.

Aura is a real-time software sound synthesis system built on the foundations of W.

ABSTRACT: Aura is a new sound synthesis system designed for portability and flexibility. Aura is designed to be used with W, a real-time object system. W provides asynchronous, priority-based scheduling, supporting a mix of control, signal, and user interface processing. Important features of Aura are its design for efficient synthesis, dynamic instantiation, and synthesis reconfiguration.

[Postscript Version] [Adobe Acrobat (PDF) Version]


Brandt and Dannenberg, ``Low-Latency Music Software Using Off-The-Shelf Operating Systems,'' in Proceedings of the International Computer Music Conference, San Francisco: International Computer Music Association, (1998), pp. 137-141.

We did some measurements of Windows 95, Win98, WinNT, and IRIX, and speculate on what a disaster WDM will be for real-time on Windows 2000. Originally, we were planning to publish results from using HyperKernel, which takes over PC hardware below the NT HAL layer and enables excellent real-time response. Unfortunately, we did not make much progress with HyperKernel beyond some interrupt latency measurements. My opinion now is that without a good debugger, address space protection, and other features of typical application development environments and operating systems, you just waste too much time in development. Commercial application developers really have no choice now but to go the device-driver route, but researchers should avoid low-level solutions to real-time problems. The real-time situation for audio research looks pretty bleak right now (1999).

But progress is being made! Since we wrote this paper, Linux has improved dramatically. We did not even consider it in our paper because of past experience with monolithic Unix kernels. It turns out we were right at the time, but perhaps for the wrong reason -- Linux had by then (I think) moved to a preemptable kernel designed to support multiprocessing. This enabled responsiveness because high-priority processes can now interrupt a kernel operation in progress and devote computation to more important work. However, it was not until perhaps mid-1999 that this opportunity was turned into reality, and even now (Jan 2000) this capability has not made its way into the standard releases.

[Adobe Acrobat (PDF) Version] [HTML Version]


Brandt and Dannenberg, ``Time in Distributed Real-Time Systems,'' in Proceedings of the 1999 International Computer Music Conference, San Francisco: International Computer Music Association, (1999), pp. 523-526.

For some time, we struggled with the question of how to represent time in a distributed system. A difficult problem is that in a distributed system, sample clocks are not synchronized. Our solution has two parts. We describe the ``forward-synchronous'' model in which asynchronous messages carry timestamps and time is derived from a sample clock. This model ensures sample-accurate computation if messages are delivered in advance of sample-synchronous processing and if there is a single global sample clock. In the event of multiple sample clocks, we discuss how better-than-SMPTE synchronization can be obtained without SMPTE or any other special hardware.
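A minimal sketch of the forward-synchronous idea as I read it (the terminology is from the paper; the code and names such as send_at are mine): messages carry timestamps in units of the global sample clock, and each is applied just before the audio block containing its timestamp, so results are deterministic whenever messages arrive early enough.

    import heapq

    BLOCK = 64            # samples per audio block
    pending = []          # entries: (timestamp_in_samples, seq, action)
    seq = 0

    def send_at(sample_time, action):
        # An asynchronous message stamped in units of the global sample clock.
        global seq
        heapq.heappush(pending, (sample_time, seq, action))
        seq += 1

    def process_block(block_start):
        # Apply every message whose timestamp falls in this block, in
        # timestamp order, before computing the block's samples. (For true
        # sample accuracy, the block would be split at each timestamp.)
        while pending and pending[0][0] < block_start + BLOCK:
            t, _, action = heapq.heappop(pending)
            action(t)
        # ... compute BLOCK samples here, driven by the sample clock ...

    send_at(100, lambda t: print("update applied at sample", t))
    for start in range(0, 256, BLOCK):
        process_block(start)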

An aside, not covered in the paper: A logical proposal is to represent time with a 64-bit sample count as in Open Sound Control among others. One problem with this is sub-sample times, but this can be solved by moving to double-precision floating point, which is sample-accurate for a very long time, and is sub-sample accurate to high precision for at least reasonable lengths of time. As mentioned above, the real difficulty is dealing with unsynchronized sample clocks. Our scheme derives from asking ourselves: "What would we do right now if we wanted to build a distributed audio system from off-the-shelf components, and how well would it work?" In the future, with more processing power, it will be possible to perform resampling so that a distributed system can use a single sample clock internally and rate-adjust just before output. This requires the implementation of a global clock, but that is exactly what we describe in our paper. In other words, our results work for cheap-and-dirty systems now, but extend naturally to fully-digital variable sample rate systems in the future.
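A quick check of the precision claim (my arithmetic, not from the paper), assuming time is kept as a sample count in a double at 48 kHz:

    fs = 48000
    # Doubles hold integers exactly up to 2**53, so a sample count stays
    # exact (i.e., sample-accurate) for:
    print(2.0**53 / fs / (3600 * 24 * 365.25))   # ~5947 years

    # Sub-sample accuracy: near a count of n samples, adjacent doubles are
    # about n * 2**-52 samples apart. After one year of audio:
    n = fs * 3600 * 24 * 365.25
    print(n * 2.0**-52)   # ~3.4e-4 samples, still far below one sample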

[Adobe Acrobat (PDF) Version]


Dannenberg and van de Lageweg, ``A System Supporting Flexible Distributed Real-Time Music Processing,'' in Proceedings of the 2001 International Computer Music Conference, San Francisco: International Computer Music Association, (2001), pp. 267-270.

ABSTRACT: Local-area networks offer a means to interconnect personal computers to achieve more processing, input, and output for music and multimedia performances. The distributed, real-time object system, Aura, offers a carefully designed architecture for distributed real-time processing. In contrast to streaming audio or MIDI-over-LAN systems, Aura offers a general real-time message system capable of transporting audio, MIDI, or any other data between objects, regardless of whether objects are located in the same process or on different machines. Measurements of audio synthesis and transmission to another computer demonstrate about 20 ms of latency. Practical experience with protocols, system timing, scheduling, and synchronization is discussed.

[Adobe Acrobat (PDF) Version]


Dannenberg, ``Aura as a Platform for Distributed Sensing and Control,'' in Symposium on Sensing and Input for Media-Centric Systems (SIMS 02), Santa Barbara: University of California Santa Barbara Center for Research in Electronic Art Technology, (2002), pp. 49-57.

ABSTRACT: Aura is an evolving software architecture and "real-time middleware" implementation that has been in use since 1994. As an integrated solution to many problems encountered in the design of distributed, real-time, interactive, multimedia programs, experience with Aura offers lessons for designers. By identifying common problems and evaluating how different systems solve them, we hope to learn how to design better systems in the future. Aspects of the Aura design considered here include message passing, how objects are interconnected, the avoidance of shared memory, the grouping of tasks and objects according to latency requirements, networking and communication issues, debugging, and the scripting language.
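One design aspect mentioned above, grouping by latency requirement, can be sketched in a few lines (the assumptions are mine; Aura's actual zones and priorities differ in detail): objects are assigned to zones such as audio, control, and UI, and the most latency-critical nonempty zone is always served first, so slow UI work never delays audio.

    from collections import deque

    # Zones ordered from tightest to loosest latency requirement.
    zones = {
        "audio":   {"priority": 0, "queue": deque()},  # ~ms deadlines
        "control": {"priority": 1, "queue": deque()},  # e.g. MIDI, ~10 ms
        "ui":      {"priority": 2, "queue": deque()},  # graphics, ~100 ms
    }

    def post(zone, task):
        zones[zone]["queue"].append(task)

    def run_once():
        # Serve the most latency-critical nonempty zone first. In a real
        # system each zone would be a separate thread at its own priority.
        for z in sorted(zones.values(), key=lambda z: z["priority"]):
            if z["queue"]:
                z["queue"].popleft()()
                return True
        return False

    post("ui", lambda: print("redraw"))
    post("audio", lambda: print("compute audio block"))
    while run_once():
        pass  # the audio block runs before the redraw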

[Adobe Acrobat (PDF) Version]


Roger B. Dannenberg. ``A Language for Interactive Audio Applications.'' In Proceedings of the 2002 International Computer Music Conference. San Francisco: International Computer Music Association.

ABSTRACT. Interactive systems are difficult to program, but high-level languages can make the task much simpler. Interactive audio and music systems are a particularly interesting case because signal processing seems to favor a functional language approach while the handling of interactive parameter updates, sound events, and other real-time computation favors a more imperative or object-oriented approach. A new language, Serpent, and a new semantics for interactive audio have been implemented and tested. The result is an elegant way to express interactive audio algorithms and an efficient implementation.
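A toy illustration of the tension the abstract describes (this is not Serpent syntax; all names are mine): the signal graph reads naturally as a nested functional expression, while live control reads naturally as imperative updates to that graph.

    class Osc:
        def __init__(self, hz):
            self.hz = hz

    class Lowpass:
        def __init__(self, src, cutoff):
            self.src, self.cutoff = src, cutoff

    # Functional style: the signal flow is a nested expression.
    patch = Lowpass(Osc(440.0), cutoff=1200.0)

    # Imperative style: an event handler mutates the running graph.
    def on_mod_wheel(value):        # MIDI controller value, 0..127
        patch.cutoff = 200.0 + value * 50.0

    on_mod_wheel(64)
    print(patch.cutoff)             # 3400.0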

[Acrobat (PDF) Version]


Distributed Performance Systems

Roger B. Dannenberg, Sofia Cavaco, Eugene Ang, Igor Avramovic, Barkin Aygun, Jinwook Baek, Eric Barndollar, Daniel Duterte, Jeffrey Grafton, Robert Hunter, Chris Jackson, Umpei Kurokawa, Daren Makuck, Timothy Mierzejewski, Michael Rivera, Dennis Torres, and Apphia Yu. ``The Carnegie Mellon Laptop Orchestra.'' In Proceedings of the 2007 International Computer Music Conference, Volume II. San Francisco: The International Computer Music Association, (August 2007), pp. II-340 - 343.

ABSTRACT. The Carnegie Mellon Laptop Orchestra (CMLO) is a collection of computers that communicate through a wireless network and collaborate to generate music. The CMLO is the culmination of a course on Computer Music Systems and Information Processing, where students learn and apply techniques for audio and MIDI programming, real-time synchronization and scheduling, music representation, and music information retrieval.

[Acrobat (PDF) Version]

Live Coding

Roger B. Dannenberg, ``Live Coding Using a Visual Pattern Composition Language,'' in Proceedings of the 12th Biennial Symposium on Arts and Technology, March 4-6, Ammerman Center for Art & Technology, Connecticut College, 2010.

This is the first and main paper on Patterns, a system I wrote for live performance.

ABSTRACT. Live coding is a performance practice in which music is created by writing software during the performance. Performers face the difficult task of programming quickly and minimizing the amount of silence to achieve musical continuity. The Patterns visual programming language is an experimental system for live coding. Its graphical nature reduces the chance of programming errors that interfere with a performance. Patterns offers graphical editing to change parameters and modify programs on-the-fly so that compositions can be listened to while they are being developed. Patterns is based on the combination of pattern generators introduced in Common Music.
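To give a flavor of the underlying idea (a Python sketch of my own; Patterns itself is graphical, and its generators follow Common Music): each pattern yields an endless stream of values, and patterns can take other patterns as inputs, so complex streams are composed from simple ones.

    import random

    def cycle(items):
        # Endlessly repeat items; a nested pattern contributes one
        # element per visit.
        while True:
            for item in items:
                yield next(item) if hasattr(item, "__next__") else item

    def choose(items):
        # Random selection, one element at a time.
        while True:
            item = random.choice(items)
            yield next(item) if hasattr(item, "__next__") else item

    # Patterns compose: a cycle whose third element is a random pattern.
    melody = cycle([60, 64, choose([67, 72])])
    print([next(melody) for _ in range(6)])   # e.g. [60, 64, 72, 60, 64, 67]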

[Acrobat (PDF) Version]


Dannenberg, ``Patterns: A Graphical Language for Live Coding Music Performance,'' in Proceedings of the Second International Conference on Computational Creativity, Mexico City, Mexico, April 2011, p. 160.

This is a 1-page paper that gives background for a demo session at ICCC 2011.

ABSTRACT. Patterns is a live-coding performance piece using an experimental visual language. The key idea is that objects generate streams of data and notes according to parameters that can be adjusted on-the-fly. Many objects take other objects or even lists of objects as inputs allowing complex patterns to be composed from simpler ones. The interconnections of objects are indicated by nested circles in an animated graphical display. The composition is created by manipulating graphical structures in real-time to create a program that in turn generates the music. The audience sees the program while listening to the music it generates.

[Acrobat (PDF) Version]