Interactive Performance

This set of papers addresses the software architecture issues of building interactive performance systems. The issues include controlling when things happen, writing event-driven applications, and obtaining low-latency response. There are also software engineering issues of portability, reliability, and ease of development. I tend to address all or at least many of these issues in each paper, since it is the combination of requirements that leads to the solutions I have found.

This work might be divided into several parts (so far): the CMU MIDI Toolkit, support for animation, the W system, the Aura system, distributed performance systems, and live coding.

See also Computer Accompaniment

CMU Midi Toolkit

Dannenberg, “The CMU MIDI Toolkit,” in Proceedings of the 1986 International Computer Music Conference, (October 1986), pp. 53-56.

This paper describes an early version of the CMU MIDI Toolkit. It is mainly a description of the system and how it works. I was not really aware at the time what a great way this was to program, so thoughts about software engineering and trying to understand what is good about CMT come later.

ABSTRACT: The CMU MIDI Toolkit is a collection of programs for experimental computer music education, composition, performance, and research. The programs are intended to allow low-cost commercial synthesizers to be used in experimental and educational applications. The CMU MIDI Toolkit features a text-based score language and translator, a real-time programming environment, and it supports arbitrary tuning and rhythm.

Postscript Version.

Dannenberg, “Recent Developments in the CMU Midi Toolkit,” Ano 2000 Symposium Proceedings, Julio Estrada, ed., University of Mexico, 1991.

I took the opportunity of this seminar in Mexico to update the 1986 ICMC paper with some of the new components of the CMU MIDI Toolkit, especially the integration of scores and programming. Much of this is covered in sections of the JNMR article, which should be easier to locate.

Introduction. The CMU Midi Toolkit is a software system that provides an easy-to-use interface to Midi, a standard interface for music synthesizers. The intent of the system is to support experimental computer music composition, research, and education, rather than to compete with commercial music systems. The CMU Midi Toolkit is especially useful for building interactive real-time systems and for creating non-traditional scores, which might include non-standard tunings, multiple tempi, or time-based rather than beat-based notation.

Postscript Version.

Support for Animation

Dannenberg, “Real Time Control For Interactive Computer Music and Animation,” in Proceedings of The Arts and Technology II: A Symposium, Connecticut College, (February 1989), pp. 85-94.

This is really a description of how the CMU MIDI Toolkit solves problems of writing interactive software. It also outlines an architecture for adding graphics, which has the problem of long-running subroutines (e.g. filling a large array of pixels). This would interfere with real-time processing without some extension to CMT. This paper also documents my experience developing "Assuming You Wish...," a work for trumpet, computer music, and computer graphics. The JNMR paper below contains some of this paper.

ABSTRACT: Real-time systems are commonly regarded as the most complex form of computer program due to parallelism, the use of special purpose input/output devices, and the fact that time-dependent errors are hard to reproduce. Several practical techniques can be used to limit the complexity of implementing real-time interactive music and animation programs. The techniques are: (1) a program structure in which input events are translated into procedure calls, (2) the use of non-preemptive programs where possible, (3) event-based programming which allows interleaved program execution, automatic storage management, and a single run-time stack, (4) multiple processes communicating by messages where task preemption is necessary, and (5) interface construction tools to facilitate experimentation and program refinement.

These techniques are supported by software developed by the author for real-time interactive music programs and more recently for animation. Although none of these techniques are new, they have never been used in this combination, nor have they been investigated in this context. Implementation details and examples that illustrate the advantage of these techniques are presented. Emphasis is placed on software organization. No specific techniques for sound and image generation are presented.
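Techniques (1)-(3) combine naturally: a non-preemptive scheduler turns timed events into procedure calls, and a long-running computation (the pixel-array problem mentioned above) is split into short increments that reschedule themselves. The sketch below is a toy in Python, not CMT's C implementation; only the name `cause` is borrowed from CMT, and the rest of the names are invented for illustration.

```python
import heapq

class Scheduler:
    """Non-preemptive event scheduler: events become procedure calls."""
    def __init__(self):
        self.queue = []   # entries are (time, seq, procedure, args)
        self.now = 0.0
        self.seq = 0      # tie-breaker keeps FIFO order at equal times

    def cause(self, delay, proc, *args):
        """Schedule proc(*args) to run `delay` time units from now."""
        heapq.heappush(self.queue, (self.now + delay, self.seq, proc, args))
        self.seq += 1

    def run(self):
        while self.queue:
            self.now, _, proc, args = heapq.heappop(self.queue)
            proc(*args)   # runs to completion -- no preemption

# A long computation split into short increments, so it never blocks
# the scheduler for long and other events can interleave with it:
result = []
def fill_chunk(sched, start, total, chunk):
    result.extend(range(start, min(start + chunk, total)))
    if start + chunk < total:
        sched.cause(0.001, fill_chunk, sched, start + chunk, total, chunk)

s = Scheduler()
s.cause(0, fill_chunk, s, 0, 10, 3)
s.cause(0.0015, lambda: result.append("note"))  # interleaved event
s.run()
```

Note how the "note" event lands between two increments of the fill: with a single run-time stack and no preemption there are no races to debug, which is much of the point of technique (2).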

Postscript Version.

Dannenberg, “Software Support for Interactive Multimedia Performance,” in Proceedings The Arts and Technology 3, Connecticut College (April 1991), pp. 148-156.

This paper extends the paper presented at Arts and Technology II, and is based on my experience developing "Ritual of the Science Makers," for flute, violin, cello, computer music and computer animation. The new technology here is an integration of scores with interactive programming, so that scores or timed scripts can be used to update variables and call procedures. Also, the multitasking system (Amiga) and MIDI device driver that I used enabled live performance input to be recorded while the interaction was running. These sequences could be played back for testing and also transcribed to notated music for performers. The JNMR paper below contains some of this paper.
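The "scores that update variables and call procedures" idea can be reduced to a toy: a score is a time-ordered list of entries, each of which either assigns a named variable or invokes a procedure, so later score events see state changed by earlier ones. All names here are invented; this is a sketch of the concept, not the system described in the paper.

```python
# A score is a sorted list of (time, action) pairs; each action either
# sets a named variable or calls a procedure.
state = {"tempo": 120}
log = []

def play(note):
    log.append(("note", note, state["tempo"]))

score = [
    (0.0, ("set", "tempo", 120)),
    (1.0, ("call", play, "C4")),
    (2.0, ("set", "tempo", 90)),   # a score event updates a variable
    (3.0, ("call", play, "G4")),   # later events see the new value
]

for t, action in sorted(score, key=lambda e: e[0]):
    if action[0] == "set":
        _, name, value = action
        state[name] = value
    else:
        _, proc, *args = action
        proc(*args)
```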

ABSTRACT: A set of techniques have been developed and refined to support the demanding software requirements of combined interactive computer music and computer animation. The techniques include a new programming environment that supports an integration of procedural and declarative score-like descriptions of interactive real-time behavior.

Postscript Version.

Dannenberg, “Software Techniques for Interactive Performance Systems,” in International Workshop on Man-Machine Interaction in Live Performance, Scuola di Studi Superiori Universitari e di Perfezionamento, Pisa, Italy, 1991, pp. 19-28.

Focuses on the software engineering issues of the CMU Midi Toolkit. Most of this is repeated in the JNMR article that followed.

ABSTRACT: The CMU MIDI Toolkit supports the development of complex interactive computer music performance systems. Four areas supported by the system are: input handling, memory management, synchronization, and timing (scheduling and sequencing). These features are described and then illustrated by their use in two performance systems.

Postscript Version.

Dannenberg, “Software Design for Interactive Multimedia Performance,” Interface - Journal of New Music Research, 22(3) (August 1993), pp. 213-228.

This paper combines elements of several previous conference and workshop presentations. It's a good summary of many issues encountered in interactive music performance systems and my solutions to them at this time. We created W shortly after this paper was written in an attempt to generalize and extend the approach described in this paper.

ABSTRACT: A set of techniques have been developed and refined to support the demanding software requirements of combined interactive computer music and computer animation. The techniques include a new programming environment that supports an integration of procedural and declarative score-like descriptions of interactive real-time behavior. Also discussed are issues of asynchronous input, concurrency, memory management, scheduling, and testing. Two examples are described.

[Postscript Version] [Adobe Acrobat (PDF) Version]

The W System

Dannenberg and Rubine, “Toward Modular, Portable, Real-Time Software,” in Proceedings of the 1995 International Computer Music Conference, International Computer Music Association, (September 1995), pp. 65-72.

W generalizes and extends the architecture presented in the JNMR paper listed above.

ABSTRACT: W is a systematic approach toward the construction of event-driven, interactive, real-time software. W is driven by practical concerns, including the desire to reuse existing code wherever possible, the limited real-time software support in popular operating systems, and system cost. W provides a simple, efficient, software interconnection system, giving objects a uniform external interface based on setting attributes to values via asynchronous messages. An example shows how W is used to implement real-time computer music programs combining graphics and MIDI.
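The "uniform external interface based on setting attributes to values via asynchronous messages" can be sketched in a few lines. This is not W's actual API (the names below are invented), but it shows the interconnection style: the only operation on an object is an asynchronous set-attribute message, and connections fan attribute changes out to other objects.

```python
from collections import deque

class WObject:
    """Every object has one uniform operation: set an attribute."""
    def __init__(self, system):
        self.system = system
        self.attrs = {}
        self.listeners = {}   # attribute -> [(target, target_attr)]

    def connect(self, attr, target, target_attr):
        self.listeners.setdefault(attr, []).append((target, target_attr))

    def set(self, attr, value):
        # Asynchronous: just enqueue a message; nothing runs yet.
        self.system.send(self, attr, value)

    def handle(self, attr, value):
        self.attrs[attr] = value
        for target, tattr in self.listeners.get(attr, []):
            target.set(tattr, value)   # fan out, still asynchronous

class System:
    """Drains the message queue, one message at a time."""
    def __init__(self):
        self.queue = deque()
    def send(self, obj, attr, value):
        self.queue.append((obj, attr, value))
    def run(self):
        while self.queue:
            obj, attr, value = self.queue.popleft()
            obj.handle(attr, value)

system = System()
slider = WObject(system)
synth = WObject(system)
slider.connect("value", synth, "frequency")
slider.set("value", 440)   # queued ...
system.run()               # ... then delivered, then forwarded
```

Because every object has the same external interface, a slider, a MIDI input, or a graphics object can be interconnected without knowing anything about each other, which is what makes the components reusable.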

[Postscript Version] [Adobe Acrobat (PDF) Version]

The Aura System

Dannenberg and Brandt, “A Flexible Real-Time Software Synthesis System,” in Proceedings of the 1996 International Computer Music Conference, International Computer Music Association, (August 1996), pp. 270-273.

Aura is a real-time software sound synthesis system built on the foundations of W.

ABSTRACT: Aura is a new sound synthesis system designed for portability and flexibility. Aura is designed to be used with W, a real-time object system. W provides asynchronous, priority-based scheduling, supporting a mix of control, signal, and user interface processing. Important features of Aura are its design for efficient synthesis, dynamic instantiation, and synthesis reconfiguration.

[Postscript Version] [Adobe Acrobat (PDF) Version]

Brandt and Dannenberg, “Low-Latency Music Software Using Off-The-Shelf Operating Systems,” in Proceedings of the International Computer Music Conference, San Francisco: International Computer Music Association, (1998), pp. 137-141.

We did some measurements of Windows 95, Win98, WinNT, and IRIX, and speculate on what a disaster WDM will be for real-time on Windows 2000. Originally, we were planning to publish results from using HyperKernel, which takes over PC hardware below the NT HAL layer and enables excellent real-time response. Unfortunately, we did not make much progress with HyperKernel beyond some interrupt latency measurements. My opinion now is that without a good debugger, address space protection, and other features of typical application development environments and operating systems, you just waste too much time in development. Commercial application developers really have no choice now but to go the device-driver route, but researchers should avoid low-level solutions to real-time problems. The real-time situation for audio research looks pretty bleak right now (1999).

But progress is being made! Since we wrote this paper, Linux has improved dramatically. We did not even consider it in our paper because of past experience with monolithic Unix kernels. It turns out we were right at the time, but perhaps for the wrong reason: Linux had by then (I think) moved to a preemptible kernel designed to support multiprocessing. This enables responsiveness because a high-priority process can interrupt a kernel operation in progress and devote computation to more important work. However, it was not until perhaps mid-1999 that this opportunity was turned into reality, and even now (Jan 2000) this capability has not made its way into the standard releases. Addendum: now it is 2016, and we have been enjoying real-time extensions in standard versions of the Linux kernel for years.

ABSTRACT. Operating systems are often the limiting factor in creating low-latency interactive computer music systems. Real-time music applications require operating system support for memory management, process scheduling, media I/O, and general development, including debugging. We present performance measurements for some current operating systems, including NT4, Windows95, and Irix 6.4. While Irix was found to give rather good real-time performance, NT4 and Windows95 suffer from both process scheduling delays and high audio output latency. The addition of WDM Streaming to NT and Windows offers some promise of lower latency, but WDM Streaming may actually make performance worse by circumventing priority-based scheduling.
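The scheduling-delay side of such measurements amounts to asking: if a process sleeps for a short interval, how late does it actually wake up? The sketch below measures that jitter from user space; it is only illustrative, since the numbers depend heavily on the OS, system load, and scheduling class (the paper's measurements were made with real-time priorities and dedicated instrumentation).

```python
import time

def measure_wakeup_jitter(interval=0.001, iterations=200):
    """Worst observed lateness (seconds) of short timed sleeps."""
    worst = 0.0
    for _ in range(iterations):
        t0 = time.perf_counter()
        time.sleep(interval)
        late = time.perf_counter() - t0 - interval
        worst = max(worst, late)
    return worst

jitter = measure_wakeup_jitter()
```

On a lightly loaded modern system the worst case is typically a few milliseconds; the paper's point is that in 1998, on Windows 95/NT, it could be tens or hundreds of milliseconds, far too much for interactive music.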

[Adobe Acrobat (PDF) Version] [HTML Version]

Brandt and Dannenberg, “Time in Distributed Real-Time Systems,” in Proceedings of the 1999 International Computer Music Conference, San Francisco: International Computer Music Association, (1999), pp. 523-526.

For some time, we struggled with the question of how to represent time in a distributed system. A difficult problem is that in a distributed system, sample clocks are not synchronized. Our solution has two parts. We describe the “forward-synchronous” model in which asynchronous messages carry timestamps and time is derived from a sample clock. This model ensures sample-accurate computation if messages are delivered in advance of sample-synchronous processing and if there is a single global sample clock. In the event of multiple sample clocks, we discuss how better-than-SMPTE synchronization can be obtained without SMPTE or any other special hardware.
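A minimal sketch of the forward-synchronous idea, with invented names and a toy sample rate: control messages carry timestamps and are sent ahead of time, and the audio loop applies each one exactly at the sample block containing its timestamp, so the result is sample-accurate regardless of when the messages happen to arrive.

```python
import heapq

rate, block = 1000, 10          # toy sample rate and block size
pending = []                    # heap of (timestamp_in_samples, param, value)

def send(ts, param, value):
    """Deliver a timestamped message; may arrive any time BEFORE ts."""
    heapq.heappush(pending, (ts, param, value))

params = {"gain": 1.0}
applied = []                    # record of (sample_time, param, value)

def run_block(start):
    # Apply every message whose timestamp falls in this block ...
    while pending and pending[0][0] < start + block:
        ts, param, value = heapq.heappop(pending)
        params[param] = value
        applied.append((ts, param, value))
    # ... then compute `block` samples here using `params`.

send(25, "gain", 0.5)           # sent well in advance, out of order
send(5, "gain", 0.8)
for start in range(0, 40, block):
    run_block(start)
```

The messages take effect in timestamp order (sample 5, then sample 25) even though they were sent in the opposite order, which is exactly the determinism the model buys you.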

An aside, not covered in the paper: A logical proposal is to represent time with a 64-bit sample count as in Open Sound Control among others. One problem with this is sub-sample times, but this can be solved by moving to double-precision floating point, which is sample-accurate for a very long time, and is sub-sample accurate to high precision for at least reasonable lengths of time. As mentioned above, the real difficulty is dealing with unsynchronized sample clocks. Our scheme derives from asking ourselves: "What would we do right now if we wanted to build a distributed audio system from off-the-shelf components, and how well would it work?" In the future, with more processing power, it will be possible to perform resampling so that a distributed system can use a single sample clock internally and rate-adjust just before output. This requires the implementation of a global clock, but that is exactly what we describe in our paper. In other words, our results work for cheap-and-dirty systems now, but extend naturally to fully-digital variable sample rate systems in the future.
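The arithmetic behind the double-precision claim is easy to check: a double has a 53-bit significand, so it represents sample counts exactly for millennia at audio rates, and the resolution lost after realistic running times is a minuscule fraction of a sample.

```python
import math

rate = 48000.0

# Integers are exact in a double up to 2**53; at 48 kHz that lasts
# thousands of years:
years_exact = 2**53 / rate / (3600 * 24 * 365)

# After one hour of audio, the gap between adjacent representable
# doubles -- the remaining sub-sample resolution -- is tiny:
t = 3600 * rate          # sample count after one hour
ulp = math.ulp(t)        # about 3e-8 samples
```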

ABSTRACT. A real-time music system is responsible for deciding what happens when each task runs and each message takes effect. This question becomes acute when there are several classes of tasks running and intercommunicating: user interface, control processing, and audio, for example. We briefly examine and classify past approaches and their applicability to distributed systems, then propose and discuss an alternative. The shared access to a sample clock that it requires is not trivial to achieve in a distributed system, so we describe and assess a way to do so.

[Adobe Acrobat (PDF) Version]

Dannenberg and van de Lageweg, “A System Supporting Flexible Distributed Real-Time Music Processing,” in Proceedings of the 2001 International Computer Music Conference, San Francisco: International Computer Music Association, (2001), pp. 267-270.

ABSTRACT. Local-area networks offer a means to interconnect personal computers to achieve more processing, input, and output for music and multimedia performances. The distributed, real-time object system, Aura, offers a carefully designed architecture for distributed real-time processing. In contrast to streaming audio or MIDI-over-LAN systems, Aura offers a general real-time message system capable of transporting audio, MIDI, or any other data between objects, regardless of whether objects are located in the same process or on different machines. Measurements of audio synthesis and transmission to another computer demonstrate about 20ms of latency. Practical experience with protocols, system timing, scheduling, and synchronization are discussed.

[Adobe Acrobat (PDF) Version]

Dannenberg, “Aura as a Platform for Distributed Sensing and Control,” in Symposium on Sensing and Input for Media-Centric Systems (SIMS 02), Santa Barbara: University of California Santa Barbara Center for Research in Electronic Art Technology, (2002), pp. 49-57.

ABSTRACT. Aura is an evolving software architecture and “real-time middleware” implementation that has been in use since 1994. As an integrated solution to many problems encountered in the design of distributed, real-time, interactive, multimedia programs, experience with Aura offers lessons for designers. By identifying common problems and evaluating how different systems solve them, we hope to learn how to design better systems in the future. Aspects of the Aura design considered here include message passing, how objects are interconnected, the avoidance of shared memory, the grouping of tasks and objects according to latency requirements, networking and communication issues, debugging, and the scripting language.

[Adobe Acrobat (PDF) Version]

Roger B. Dannenberg. “A Language for Interactive Audio Applications.” In Proceedings of the 2002 International Computer Music Conference. San Francisco: International Computer Music Association.

ABSTRACT. Interactive systems are difficult to program, but high-level languages can make the task much simpler. Interactive audio and music systems are a particularly interesting case because signal processing seems to favor a functional language approach while the handling of interactive parameter updates, sound events, and other real-time computation favors a more imperative or object-oriented approach. A new language, Serpent, and a new semantics for interactive audio have been implemented and tested. The result is an elegant way to express interactive audio algorithms and an efficient implementation.

[Acrobat (PDF) Version]

Roger B. Dannenberg. “Combining Visual and Textual Representations for Flexible Interactive Signal Processing,” in The ICMC 2004 Proceedings, San Francisco: The International Computer Music Association, (2004).

ABSTRACT. Interactive computer music systems pose new challenges for audio software design. In particular, there is a need for flexible run-time reconfiguration for interactive signal processing. A new version of Aura offers a graphical editor for building fixed graphs of unit generators. These graphs, called instruments, can then be instantiated, patched, and reconfigured freely at run time. This approach combines visual programming with traditional text-based programming, resulting in a structured programming model that is easy to use, is fast in execution, and offers low audio latency through fast instantiation time. The graphical editor has a novel type resolution system and automatically generates graphical interfaces for instrument testing.

[Acrobat (PDF) Version]

Roger B. Dannenberg. “Abstract Behaviors for Structured Music Programming,” in Proceedings of the 2007 International Computer Music Conference. San Francisco: The International Computer Music Association, (August 2007).

This paper solves an interesting programming problem and is not specific to Aura or even computer music. In music, we repeatedly encounter temporal structures of sequential and parallel behaviors. Sequence is handled easily by sequential programming languages, but parallelism is not well supported. You can spawn threads, but the notation is often cumbersome, and accurately timed behavior cannot be provided easily across true threads. Coroutines are a solution (cf. Formula and ChucK), but few languages provide coroutines. So, faced with the problem of composing abstract behaviors into parallel and sequential structures, and assuming a sequential programming language, what objects and abstractions will solve the problem? Read the paper to see my solution.
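To make the problem concrete, here is the seq/par abstraction itself in a toy form: a behavior maps a start time to timed events, `seq` chains behaviors end to start, and `par` overlays them at the same start time. Note this sketch materializes event lists up front, which is exactly what the paper's implementation avoids doing (its behaviors run incrementally in real time); the names are invented, and only the compositional structure is the point.

```python
def note(pitch, dur):
    """A primitive behavior: start time -> timed on/off events."""
    def behavior(t0):
        return [(t0, ("on", pitch)), (t0 + dur, ("off", pitch))]
    return behavior

def seq(*behaviors):
    """Each behavior starts when the previous one ends."""
    def behavior(t0):
        events, t = [], t0
        for b in behaviors:
            evs = b(t)
            events.extend(evs)
            t = max(time for time, _ in evs)
        return events
    return behavior

def par(*behaviors):
    """All behaviors start together; merge their events by time."""
    def behavior(t0):
        events = []
        for b in behaviors:
            events.extend(b(t0))
        return sorted(events, key=lambda e: e[0])
    return behavior

melody = seq(note("C", 1), note("E", 1))
piece = par(melody, note("G", 2))   # G sustains under the melody
events = piece(0)
```

Because `seq` and `par` both return behaviors, they nest to arbitrary depth, giving the hierarchical organization the abstract describes.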

ABSTRACT. Music Behaviors are introduced as a way to conceptually organize computation for music generation. In this abstraction, music is organized hierarchically by combining substructures either in sequence or parallel. While such structures are not new to either computer music or computer science, an efficient and simple real-time implementation that does not require threads or translation to data structures is offered, making this abstraction more appropriate in a variety of languages and systems where efficiency is a concern or where existing support is lacking.

[Acrobat (PDF) Version]

Roger Dannenberg and Tomas Laurenzo. “Critical point, a composition for cello and computer.” In CHI Extended Abstracts 2010, pp. 2985-2988.

This is a short paper, essentially program notes, for a performance at CHI 2010.

ABSTRACT. Critical Point is written for solo cello and interactive computer music system with two to four channel sound system and computer animation. The cellist plays from a score, and the computer records and transforms the cello sounds in various ways. Graphics and video are also projected. The computer-generated graphics are affected by audio from the live cellist. Critical Point is written in memory of the artist Rob Fisher.

[Acrobat (PDF) Version]

Roger Dannenberg and Robert Kotcher, “AuraFX: A Simple and Flexible Approach to Interactive Audio Effect-Based Composition and Performance,” in Proceedings of the 2010 International Computer Music Conference, San Francisco: The International Computer Music Association, (August 2010), pp. 147-152.

ABSTRACT. An interactive sound processor is an important tool for just about any modern composer. Performers and composers use interactive computer systems to process sound from live instruments. In many cases, audio processing could be handled using off-the-shelf signal processors. However, most composers favor a system that is more open-ended and extensible. Programmable systems are open-ended, but they leave many details to the composer, including graphical control interfaces, mixing and cross-fade automation, saving and restoring parameter settings, and sequencing through configurations of effects. Our work attempts to establish an architecture that provides these facilities without programming. It factors the problem into a framework, providing common elements for all compositions, and custom modules, extending the framework with unique effects and signal processing capabilities. Although we believe the architecture could be supported by many audio programming systems, we have created a particular instantiation (AuraFX) of the architecture using the Aura system.

[Acrobat (PDF) Version]

Yi, Lazzarini, Dannenberg and Fitch, “Extending Aura with Csound Opcodes,” in Proceedings of the 11th Sound & Music Computing joint with the 40th International Computer Music Conference, Athens, Greece, September 2014, pp. 1542-1549.

ABSTRACT: Languages for music audio processing typically offer a large assortment of unit generators. There is great duplication among different language implementations, as each language must implement many of the same (or nearly the same) unit generators. Csound has a large library of unit generators and could be a useful source of reusable unit generators for other languages or for direct use in applications. In this study, we consider how Csound unit generators can be exposed to direct access by other audio processing languages. Using Aura as an example, we modified Csound to allow efficient, dynamic allocation of individual unit generators without using the Csound compiler or writing Csound instruments. We then extended Aura using automatic code generation so that Csound unit generators can be accessed in the normal way from within Aura. In this scheme, Csound details are completely hidden from Aura users. We suggest that these techniques might eliminate most of the effort of building unit generator libraries and could help with the implementation of embedded audio systems where unit generators are needed but a full embedded Csound engine is not required.

[Acrobat (PDF) Version]

Distributed Performance Systems

Roger B. Dannenberg, Sofia Cavaco, Eugene Ang, Igor Avramovic, Barkin Aygun, Jinwook Baek, Eric Barndollar, Daniel Duterte, Jeffrey Grafton, Robert Hunter, Chris Jackson, Umpei Kurokawa, Daren Makuck, Timothy Mierzejewski, Michael Rivera, Dennis Torres, and Apphia Yu. “The Carnegie Mellon Laptop Orchestra.” In Proceedings of the 2007 International Computer Music Conference, Volume II. San Francisco: The International Computer Music Association, (August 2007), pp. II-340 - 343.

ABSTRACT. The Carnegie Mellon Laptop Orchestra (CMLO) is a collection of computers that communicate through a wireless network and collaborate to generate music. The CMLO is the culmination of a course on Computer Music Systems and Information Processing, where students learn and apply techniques for audio and MIDI programming, real-time synchronization and scheduling, music representation, and music information retrieval.

[Acrobat (PDF) Version]

Dannenberg and Neuendorffer, “Scaling Up Live Internet Performance with the Global Net Orchestra,” in Proceedings of the 11th Sound & Music Computing joint with the 40th International Computer Music Conference, Athens, Greece, September 2014, pp. 730-736.

ABSTRACT: Networked or “telematic” music performances take many forms, ranging from small laptop ensembles using local area networks to long-distance musical collaborations using audio and video links. Two important concerns for any networked performance are: (1) what is the role of communication in the music performance? In particular, what are the esthetic and pragmatic justifications for performing music at a distance, and (2) how are the effects of communication latency ameliorated or incorporated into the performance? A recent project, the Global Net Orchestra, is described. In addition to addressing these two concerns, the technical aspects of the project, which achieved a coordinated performance involving 68 computer musicians, each with their own connection to the network, are described.

[Acrobat (PDF) Version]

Live Coding

Roger B. Dannenberg, “Live Coding Using a Visual Pattern Composition Language,” in Proceedings of the 12th Biennial Symposium on Arts and Technology, March 4-6, Ammerman Center for Art & Technology, Connecticut College, 2010.

This is the first and main paper on a system called Patterns that I wrote for live performance.

ABSTRACT. Live coding is a performance practice in which music is created by writing software during the performance. Performers face the difficult task of programming quickly and minimizing the amount of silence to achieve musical continuity. The Patterns visual programming language is an experimental system for live coding. Its graphical nature reduces the chance of programming errors that interfere with a performance. Patterns offers graphical editing to change parameters and modify programs on-the-fly so that compositions can be listened to while they are being developed. Patterns is based on the combination of pattern generators introduced in Common Music.

[Acrobat (PDF) Version]

Dannenberg, “Patterns: A Graphical Language for Live Coding Music Performance,” in Proceedings of the Second International Conference on Computational Creativity, Mexico City, Mexico, April 2011, p. 160.

This is a 1-page paper that gives background for a demo session at ICCC 2011.

ABSTRACT. Patterns is a live-coding performance piece using an experimental visual language. The key idea is that objects generate streams of data and notes according to parameters that can be adjusted on-the-fly. Many objects take other objects or even lists of objects as inputs allowing complex patterns to be composed from simpler ones. The interconnections of objects are indicated by nested circles in an animated graphical display. The composition is created by manipulating graphical structures in real-time to create a program that in turn generates the music. The audience sees the program while listening to the music it generates.
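The "objects that generate streams, taking other objects as inputs" idea descends from Common Music's pattern generators. A heavily simplified sketch, with invented class names: each generator yields a stream of items, and an item may itself be a generator, so complex patterns compose from simple ones.

```python
import random

class Cycle:
    """Yield items in rotation; nested patterns are drawn from, not copied."""
    def __init__(self, items):
        self.items, self.i = items, 0
    def next(self):
        item = self.items[self.i % len(self.items)]
        self.i += 1
        return item.next() if isinstance(item, (Cycle, Choose)) else item

class Choose:
    """Yield randomly selected items (seeded here for reproducibility)."""
    def __init__(self, items, rng=None):
        self.items = items
        self.rng = rng or random.Random(0)
    def next(self):
        item = self.rng.choice(self.items)
        return item.next() if isinstance(item, (Cycle, Choose)) else item

# A cycle whose third element is itself a pattern: each pass through
# the outer cycle advances the inner one by a single step.
p = Cycle(["C", "E", Cycle(["G", "B"])])
out = [p.next() for _ in range(6)]
```

In the live-coding setting, the performance consists of editing the parameters and nesting of such objects while their streams are being turned into notes.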

[Acrobat (PDF) Version]