Interactive Performance

This set of papers addresses the software architecture issues of building interactive performance systems. The issues include controlling when things happen, writing event-driven applications, and obtaining low-latency response. There are also software engineering issues of portability, reliability, and ease of development. I tend to address all or at least many of these issues in each paper, since it is the combination of requirements that leads to the solutions I have found.

This work might be divided into 7 parts (so far):

CMU Midi Toolkit
Support for Animation
The W System
The Aura System
Distributed Performance Systems
Live Coding
Soundcool

See also Computer Accompaniment



CMU Midi Toolkit

Dannenberg, “The CMU MIDI Toolkit,” in Proceedings of the 1986 International Computer Music Conference, Den Haag, Netherlands, October 1986. San Francisco: International Computer Music Association, 1986. pp. 53-56.

This paper describes an early version of the CMU MIDI Toolkit. It is mainly a description of the system and how it works. I was not really aware at the time of what a great way this was to program, so thoughts about software engineering and trying to understand what is good about CMT come later.

ABSTRACT: The CMU MIDI Toolkit is a collection of programs for experimental computer music education, composition, performance, and research. The programs are intended to allow low-cost commercial synthesizers to be used in experimental and educational applications. The CMU MIDI Toolkit features a text-based score language and translator, a real-time programming environment, and it supports arbitrary tuning and rhythm.

[Postscript Version] [Adobe Acrobat (PDF) Version]

Dannenberg, “Recent Developments in the CMU Midi Toolkit,” in Proceedings of the International Seminar Ano 2000: Theoretical, Technological and Compositional Alternatives, Mexico City, Mexico. Julio Estrada, ed., University of Mexico, 1991. pp. 52-62.

I took the opportunity of this seminar in Mexico to update the 1986 ICMC paper with some of the new components of the CMU MIDI Toolkit, especially the integration of scores and programming. Much of this is covered in sections of the JNMR article, which should be easier to locate.

Introduction. The CMU Midi Toolkit is a software system that provides an easy-to-use interface to Midi, a standard interface for music synthesizers. The intent of the system is to support experimental computer music composition, research, and education, rather than to compete with commercial music systems. The CMU Midi Toolkit is especially useful for building interactive real-time systems and for creating non-traditional scores, which might include non-standard tunings, multiple tempi, or time-based rather than beat-based notation.

[Postscript Version] [Adobe Acrobat (PDF) Version]

Support for Animation

Dannenberg, “Real Time Control For Interactive Computer Music and Animation,” in Proceedings of The Arts and Technology II: A Symposium, New London, CT, February 1989. pp. 85-94.

This is really a description of how the CMU MIDI Toolkit solves problems of writing interactive software. It also outlines an architecture for adding graphics, which introduces the problem of long-running subroutines (e.g., filling a large array of pixels) that would interfere with real-time processing without some extension to CMT. This paper also documents my experience developing "Assuming You Wish...," a work for trumpet, computer music, and computer graphics. The JNMR paper below contains some of this paper.

ABSTRACT: Real-time systems are commonly regarded as the most complex form of computer program due to parallelism, the use of special purpose input/output devices, and the fact that time-dependent errors are hard to reproduce. Several practical techniques can be used to limit the complexity of implementing real-time interactive music and animation programs. The techniques are: (1) a program structure in which input events are translated into procedure calls, (2) the use of non-preemptive programs where possible, (3) event-based programming which allows interleaved program execution, automatic storage management, and a single run-time stack, (4) multiple processes communicating by messages where task preemption is necessary, and (5) interface construction tools to facilitate experimentation and program refinement.

These techniques are supported by software developed by the author for real-time interactive music programs and more recently for animation. Although none of these techniques are new, they have never been used in this combination, nor have they been investigated in this context. Implementation details and examples that illustrate the advantage of these techniques are presented. Emphasis is placed on software organization. No specific techniques for sound and image generation are presented.
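
The techniques listed above can be made concrete with a minimal sketch, given here in Python with hypothetical names (this is not CMT's actual API): input events are translated into calls on registered handler procedures, timed actions are dispatched from the same non-preemptive loop, and every handler runs to completion on a single stack.

    import heapq
    import time

    class EventLoop:
        def __init__(self):
            self.handlers = {}   # event type -> handler procedure
            self.timed = []      # heap of (when, seq, proc, args)
            self.seq = 0

        def on(self, event_type, proc):
            """Register a handler; matching input events become calls to proc."""
            self.handlers[event_type] = proc

        def cause(self, delay, proc, *args):
            """Schedule proc(*args) to run 'delay' seconds from now."""
            heapq.heappush(self.timed,
                           (time.time() + delay, self.seq, proc, args))
            self.seq += 1

        def dispatch(self, event_type, data):
            """Translate one input event into a procedure call; it runs to completion."""
            proc = self.handlers.get(event_type)
            if proc:
                proc(data)

        def poll(self):
            """Run timed actions whose time has arrived; nothing is ever preempted."""
            now = time.time()
            while self.timed and self.timed[0][0] <= now:
                _, _, proc, args = heapq.heappop(self.timed)
                proc(*args)

A main loop would simply alternate between reading input devices, calling dispatch, and calling poll; because nothing preempts a handler, there is no need for locks or per-task stacks.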

[Postscript Version] [Adobe Acrobat (PDF) Version]

Dannenberg, “Software Support for Interactive Multimedia Performance,” in Proceedings of The 3rd Symposium on Arts and Technology, New London, CT, April 4-7, 1991. pp. 148-156.

This paper extends the paper presented at Arts and Technology II, and is based on my experience developing "Ritual of the Science Makers," for flute, violin, cello, computer music and computer animation. The new technology here is an integration of scores with interactive programming, so that scores or timed scripts can be used to update variables and call procedures. Also, the multitasking system (Amiga) and MIDI device driver that I used enabled live performance input to be recorded while the interaction was running. These sequences could be played back for testing and also transcribed to notated music for performers. The JNMR paper below contains some of this paper.

ABSTRACT: A set of techniques have been developed and refined to support the demanding software requirements of combined interactive computer music and computer animation. The techniques include a new programming environment that supports an integration of procedural and declarative score-like descriptions of interactive real-time behavior.
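
As a rough illustration of combining declarative scores with procedural code (a sketch with assumed formats, not the actual score language), a time-stamped script can either set variables or call procedures, so one time-ordered dispatcher drives both notation-like data and program logic:

    state = {"tempo": 120, "transpose": 0}

    def start_section_two():
        print("section two begins, transpose =", state["transpose"])

    # a declarative, time-stamped score that can also call into the program
    score = [
        (0.0, ("set", "tempo", 90)),
        (4.0, ("set", "transpose", 12)),
        (4.0, ("call", start_section_two)),
    ]

    def perform(score):
        for t, action in sorted(score, key=lambda entry: entry[0]):
            # a real system would schedule these in real time; here they
            # are simply applied in time order
            if action[0] == "set":
                _, name, value = action
                state[name] = value
            else:
                action[1]()

    perform(score)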

[Postscript Version] [Adobe Acrobat (PDF) Version]


Dannenberg, “Software Techniques for Interactive Performance Systems,” in International Workshop on Man-Machine Interaction in Live Performance, Scuola di Studi Superiori Universitari e di Perfezionamento, Pisa, Italy, 1991, pp. 19-28.

Focuses on the software engineering issues of the CMU Midi Toolkit. Most of this is repeated in the JNMR article that followed.

ABSTRACT: The CMU MIDI Toolkit supports the development of complex interactive computer music performance systems. Four areas supported by the system are: input handling, memory management, synchronization, and timing (scheduling and sequencing). These features are described and then illustrated by their use in two performance systems.

[Postscript Version]


Dannenberg, “Software Design for Interactive Multimedia Performance,” Interface - Journal of New Music Research, 22(3) (August 1993), pp. 213-228.

This paper combines elements of several previous conference and workshop presentations. It's a good summary of many issues encountered in interactive music performance systems and my solutions to them at this time. We created W shortly after this paper was written in an attempt to generalize and extend the approach described here.

ABSTRACT: A set of techniques have been developed and refined to support the demanding software requirements of combined interactive computer music and computer animation. The techniques include a new programming environment that supports an integration of procedural and declarative score-like descriptions of interactive real-time behavior. Also discussed are issues of asynchronous input, concurrency, memory management, scheduling, and testing. Two examples are described.

[Postscript Version] [Adobe Acrobat (PDF) Version]


The W System

Dannenberg and Rubine, “Toward Modular, Portable, Real-Time Software,” in Proceedings of the 1995 International Computer Music Conference, International Computer Music Association, (September 1995), pp. 65-72.

W generalizes and extends the architecture presented in the JNMR paper listed above.

ABSTRACT: W is a systematic approach toward the construction of event-driven, interactive, real-time software. W is driven by practical concerns, including the desire to reuse existing code wherever possible, the limited real-time software support in popular operating systems, and system cost. W provides a simple, efficient, software interconnection system, giving objects a uniform external interface based on setting attributes to values via asynchronous messages. An example shows how W is used to implement real-time computer music programs combining graphics and MIDI.
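
The interconnection idea can be sketched in a few lines of Python (hypothetical names, not W's actual implementation): every object presents one uniform operation, setting an attribute to a value, and all communication passes through an asynchronous message queue, so the sender never blocks on the receiver.

    from collections import deque

    class WObject:
        """Uniform external interface: everything is 'set attribute to value'."""
        def set_attr(self, name, value):
            setattr(self, name, value)        # subclasses override to react

    class Oscillator(WObject):
        def set_attr(self, name, value):
            super().set_attr(name, value)
            if name == "frequency":
                print("retune to", value, "Hz")

    class MessageQueue:
        """Asynchronous delivery: senders enqueue; messages are applied later."""
        def __init__(self):
            self.pending = deque()
        def send(self, target, attr, value):
            self.pending.append((target, attr, value))
        def poll(self):
            while self.pending:
                target, attr, value = self.pending.popleft()
                target.set_attr(attr, value)

    q = MessageQueue()
    osc = Oscillator()
    q.send(osc, "frequency", 440.0)   # e.g. from a MIDI or GUI handler
    q.poll()                          # delivered at the receiver's convenience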

[Postscript Version] [Adobe Acrobat (PDF) Version]


The Aura System

Dannenberg and Brandt, “A Flexible Real-Time Software Synthesis System,” in Proceedings of the 1996 International Computer Music Conference, International Computer Music Association, (August 1996), pp. 270-273.

Aura is a real-time software sound synthesis system built on the foundations of W.

ABSTRACT: Aura is a new sound synthesis system designed for portability and flexibility. Aura is designed to be used with W, a real-time object system. W provides asynchronous, priority-based scheduling, supporting a mix of control, signal, and user interface processing. Important features of Aura are its design for efficient synthesis, dynamic instantiation, and synthesis reconfiguration.

[Postscript Version] [Adobe Acrobat (PDF) Version]


Brandt and Dannenberg, “Low-Latency Music Software Using Off-The-Shelf Operating Systems,” in Proceedings of the International Computer Music Conference, San Francisco: International Computer Music Association, (1998), pp. 137-141.

We did some measurements of Windows 95, Win98, WinNT, and IRIX, and speculated on what a disaster WDM would be for real-time on Windows 2000. Originally, we were planning to publish results from using HyperKernel, which takes over PC hardware below the NT HAL layer and enables excellent real-time response. Unfortunately, we did not make much progress with HyperKernel beyond some interrupt latency measurements. My opinion now is that without a good debugger, address space protection, and other features of typical application development environments and operating systems, you just waste too much time in development. Commercial application developers really have no choice now but to go the device-driver route, but researchers should avoid low-level solutions to real-time problems. The real-time situation for audio research looks pretty bleak right now (1999).

But progress is being made! Since we wrote this paper, Linux has improved dramatically. We did not even consider it in our paper because of past experience with monolithic Unix kernels. It turns out we were right at the time, but perhaps for the wrong reason -- Linux has since (I think) moved to a preemptable kernel designed to support multiprocessing. This enables responsiveness because high-priority processes can interrupt a kernel operation in progress and devote computation to more important work. However, it was not until perhaps mid-1999 that this opportunity was turned into reality, and even now (Jan 2000) this capability has not made its way into the standard releases. Addendum: it is now 2016, and we have been enjoying real-time extensions in standard versions of the Linux kernel for years.

ABSTRACT. Operating systems are often the limiting factor in creating low-latency interactive computer music systems. Real-time music applications require operating system support for memory management, process scheduling, media I/O, and general development, including debugging. We present performance measurements for some current operating systems, including NT4, Windows95, and Irix 6.4. While Irix was found to give rather good real-time performance, NT4 and Windows95 suffer from both process scheduling delays and high audio output latency. The addition of WDM Streaming to NT and Windows offers some promise of lower latency, but WDM Streaming may actually make performance worse by circumventing priority-based scheduling.

[Adobe Acrobat (PDF) Version] [HTML Version]


Brandt and Dannenberg, “Time in Distributed Real-Time Systems,” in Proceedings of the 1999 International Computer Music Conference, San Francisco: International Computer Music Association, (1999), pp. 523-526.

For some time, we struggled with the question of how to represent time in a distributed system. A difficult problem is that in a distributed system, sample clocks are not synchronized. Our solution has two parts. We describe the “forward-synchronous” model in which asynchronous messages carry timestamps and time is derived from a sample clock. This model ensures sample-accurate computation if messages are delivered in advance of sample-synchronous processing and if there is a single global sample clock. In the event of multiple sample clocks, we discuss how better-than-SMPTE synchronization can be obtained without SMPTE or any other special hardware.

An aside, not covered in the paper: A logical proposal is to represent time with a 64-bit sample count as in Open Sound Control among others. One problem with this is sub-sample times, but this can be solved by moving to double-precision floating point, which is sample-accurate for a very long time, and is sub-sample accurate to high precision for at least reasonable lengths of time. As mentioned above, the real difficulty is dealing with unsynchronized sample clocks. Our scheme derives from asking ourselves: "What would we do right now if we wanted to build a distributed audio system from off-the-shelf components, and how well would it work?" In the future, with more processing power, it will be possible to perform resampling so that a distributed system can use a single sample clock internally and rate-adjust just before output. This requires the implementation of a global clock, but that is exactly what we describe in our paper. In other words, our results work for cheap-and-dirty systems now, but extend naturally to fully-digital variable sample rate systems in the future.
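
A small back-of-the-envelope check of the precision claim, assuming a 48 kHz sample rate and IEEE doubles with a 53-bit significand (a sketch, not code from the paper):

    SECONDS_PER_YEAR = 3600 * 24 * 365
    SR = 48000.0                      # assumed sample rate
    ULP = 2.0 ** -52                  # relative spacing of doubles

    # A double holds exact integer sample counts up to 2**53 samples:
    print("exact sample counts for about %.0f years" %
          (2.0 ** 53 / SR / SECONDS_PER_YEAR))

    # Time in seconds: the spacing near time T is about T * 2**-52, which
    # stays below one sample period (1/SR) while T < (1/SR) / 2**-52:
    print("seconds-as-double stays within one sample period for about %.0f years" %
          ((1.0 / SR) / ULP / SECONDS_PER_YEAR))

Both numbers come out to thousands of years, which is why double-precision time is adequate for all practical purposes; the hard problem remains the unsynchronized sample clocks.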

ABSTRACT. A real-time music system is responsible for deciding what happens when each task runs and each message takes effect. This question becomes acute when there are several classes of tasks running and intercommunicating: user interface, control processing, and audio, for example. We briefly examine and classify past approaches and their applicability to distributed systems, then propose and discuss an alternative. The shared access to a sample clock that it requires is not trivial to achieve in a distributed system, so we describe and assess a way to do so.

[Adobe Acrobat (PDF) Version]


Dannenberg and van de Lageweg, “A System Supporting Flexible Distributed Real-Time Music Processing,” in Proceedings of the 2001 International Computer Music Conference, San Francisco: International Computer Music Association, (2001), pp. 267-270.

ABSTRACT. Local-area networks offer a means to interconnect personal computers to achieve more processing, input, and output for music and multimedia performances. The distributed, real-time object system, Aura, offers a carefully designed architecture for distributed real-time processing. In contrast to streaming audio or MIDI-over-LAN systems, Aura offers a general real-time message system capable of transporting audio, MIDI, or any other data between objects, regardless of whether objects are located in the same process or on different machines. Measurements of audio synthesis and transmission to another computer demonstrate about 20ms of latency. Practical experience with protocols, system timing, scheduling, and synchronization are discussed.

[Adobe Acrobat (PDF) Version]


Dannenberg, “Aura as a Platform for Distributed Sensing and Control,” in Symposium on Sensing and Input for Media-Centric Systems (SIMS 02), Santa Barbara: University of California Santa Barbara Center for Research in Electronic Art Technology, (2002), pp. 49-57.

ABSTRACT. Aura is an evolving software architecture and “real-time middleware” implementation that has been in use since 1994. As an integrated solution to many problems encountered in the design of distributed, real-time, interactive, multimedia programs, experience with Aura offers lessons for designers. By identifying common problems and evaluating how different systems solve them, we hope to learn how to design better systems in the future. Aspects of the Aura design considered here include message passing, how objects are interconnected, the avoidance of shared memory, the grouping of tasks and objects according to latency requirements, networking and communication issues, debugging, and the scripting language.

[Adobe Acrobat (PDF) Version]


Roger B. Dannenberg. “A Language for Interactive Audio Applications.” In Proceedings of the 2002 International Computer Music Conference. San Francisco: International Computer Music Association.

ABSTRACT. Interactive systems are difficult to program, but high-level languages can make the task much simpler. Interactive audio and music systems are a particularly interesting case because signal processing seems to favor a functional language approach while the handling of interactive parameter updates, sound events, and other real-time computation favors a more imperative or object-oriented approach. A new language, Serpent, and a new semantics for interactive audio have been implemented and tested. The result is an elegant way to express interactive audio algorithms and an efficient implementation.

[Acrobat (PDF) Version]


Roger B. Dannenberg. “Combining Visual and Textual Representations for Flexible Interactive Signal Processing,” in The ICMC 2004 Proceedings, San Francisco: The International Computer Music Association, (2004).

ABSTRACT. Interactive computer music systems pose new challenges for audio software design. In particular, there is a need for flexible run-time reconfiguration for interactive signal processing. A new version of Aura offers a graphical editor for building fixed graphs of unit generators. These graphs, called instruments, can then be instantiated, patched, and reconfigured freely at run time. This approach combines visual programming with traditional text-based programming, resulting in a structured programming model that is easy to use, is fast in execution, and offers low audio latency through fast instantiation time. The graphical editor has a novel type resolution system and automatically generates graphical interfaces for instrument testing.

[Acrobat (PDF) Version]


Roger B. Dannenberg. “A Virtual Patchbay for Robust Distributed Interactive Music Systems,” in Proceedings of the 2005 International Computer Music Conference, San Francisco: International Computer Music Association, (2005), pp. 571-574.

Aura began with the concept of end-user programming by patching ready-made objects or modules such as sliders and buttons, audio/video players, signal processors, etc. The independence of modules and their connections turned out to be a bit of a burden (especially without a highly developed visual interface such as in Max), and programs moved to a remote method invocation model, where most of the time when you send a message, you know where it is going and want to specify the destination, making the code easier to understand. But we lost the possibility of patching and reconnecting, which is sometimes exactly what you want (consider patching devices with MIDI or patching a chain of audio signal processors). The project reported here extended Aura with a distributed patching facility. One of the interesting features was that if you patched from machine A to multiple objects in machine B, the patchbay could arrange for one network transmission from A to B, where it would fan out to multiple objects in B. Similarly, a connection within machine A would never have to go over the network to a central patchbay, possibly in machine B, and then back to A. All of this fell out of the design quite naturally, and the system was also quite easy to implement on top of Aura's distributed object framework. It did not get used extensively because setting up multiple machines, or taking networks to a concert, especially in those days, was burdensome, but the ideas are quite elegant. The dynamic and automatic reconnection after one component reboots foreshadows similar behaviors in O2, which I am only now really beginning to appreciate.
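
The fan-out behavior described above can be sketched as a small routing structure (hypothetical names, not Aura's code): connections are grouped by destination host, so all objects on one remote machine share a single network transmission, and purely local connections never touch the network.

    from collections import defaultdict

    class Patchbay:
        """Route messages from named sources to (host, object) destinations."""
        def __init__(self, local_host, network_send):
            self.local_host = local_host
            self.network_send = network_send          # func(host, payload)
            self.routes = defaultdict(lambda: defaultdict(list))

        def connect(self, source, host, dest):
            self.routes[source][host].append(dest)

        def send(self, source, message):
            for host, dests in self.routes[source].items():
                if host == self.local_host:
                    for obj in dests:                 # local delivery, no network
                        obj.receive(message)
                else:
                    # one transmission per remote host; the patchbay on that
                    # host fans the message out to its local destinations
                    self.network_send(host, (source, message))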

ABSTRACT. Multiple computers and/or processors offer interactive music systems more processing power, more inputs and outputs, and more tolerance to failure. Systems based on multiple computers require careful design paying particular attention to communication and configuration. The virtual patchbay is a new structure in Aura that simplifies the configuration and interconnection of objects in a distributed computer music application. The virtual patchbay also optimizes configurations to reduce the number of duplicate messages sent over the network and helps the system tolerate crashes and rebooting while other components continue to function.

[Acrobat (PDF) Version]


Roger B. Dannenberg. “Abstract Behaviors for Structured Music Programming,” in Proceedings of the 2007 International Computer Music Conference. San Francisco: The International Computer Music Association, (August 2007).

This paper solves an interesting programming problem and is not specific to Aura or even computer music. In music, we repeatedly encounter temporal structures of sequence and parallel behaviors. Sequence can be handled easily by sequential programming languages, but parallelism is not well supported by languages. You can spawn threads, but the notation is often cumbersome, and accurately timed behavior cannot be provided easily across true threads. Coroutines are a solution (cf. Formula and ChucK), but few languages provide coroutines. So, faced with the problem of composing abstract behaviors into parallel and sequential structures, and assuming a sequential programming language, what objects and abstractions will solve the problem? Read the paper to see my solution.

ABSTRACT. Music Behaviors are introduced as a way to conceptually organize computation for music generation. In this abstraction, music is organized hierarchically by combining substructures either in sequence or parallel. While such structures are not new to either computer music or computer science, an efficient and simple real-time implementation that does not require threads or translation to data structures is offered, making this abstraction more appropriate in a variety of languages and systems where efficiency is a concern or where existing support is lacking.
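
For flavor, here is one generic way to get sequential and parallel composition without threads, sketched in Python (an illustration of the problem space, not necessarily the implementation described in the paper): every behavior is started with a completion callback, and Seq and Par are themselves behaviors built from their children.

    import heapq

    class Scheduler:
        """Run scheduled calls in (virtual) time order."""
        def __init__(self):
            self.queue, self.seq = [], 0
        def cause(self, when, proc, *args):
            heapq.heappush(self.queue, (when, self.seq, proc, args))
            self.seq += 1
        def run(self):
            while self.queue:
                when, _, proc, args = heapq.heappop(self.queue)
                proc(*args)

    class Note:
        """A primitive behavior: 'play' a pitch, then report when it ends."""
        def __init__(self, pitch, dur):
            self.pitch, self.dur = pitch, dur
        def start(self, sched, t, done):
            print("t=%4.1f  play %d" % (t, self.pitch))
            sched.cause(t + self.dur, done, t + self.dur)

    class Seq:
        """Start each child when the previous one finishes."""
        def __init__(self, *children):
            self.children = children
        def start(self, sched, t, done):
            def step(t, i=0):
                if i < len(self.children):
                    self.children[i].start(sched, t,
                                            lambda t2, i=i: step(t2, i + 1))
                else:
                    done(t)
            step(t)

    class Par:
        """Start all children together; finish when the last one finishes."""
        def __init__(self, *children):
            self.children = children
        def start(self, sched, t, done):
            state = {"left": len(self.children), "end": t}
            def child_done(t2):
                state["left"] -= 1
                state["end"] = max(state["end"], t2)
                if state["left"] == 0:
                    done(state["end"])
            for c in self.children:
                c.start(sched, t, child_done)

    s = Scheduler()
    piece = Seq(Note(60, 1.0), Par(Note(64, 2.0), Note(67, 1.0)), Note(72, 1.0))
    piece.start(s, 0.0, lambda t: print("finished at", t))
    s.run()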

[Acrobat (PDF) Version]


Roger Dannenberg and Tomas Laurenzo. “Critical point, a composition for cello and computer.” In CHI Extended Abstracts 2010, pp. 2985-2988.

This is a short paper, essentially program notes, for a performance at CHI 2010.

ABSTRACT. Critical Point is written for solo cello and interactive computer music system with two to four channel sound system and computer animation. The cellist plays from a score, and the computer records and transforms the cello sounds in various ways. Graphics and video are also projected. The computer-generated graphics are affected by audio from the live cellist. Critical Point is written in memory of the artist Rob Fisher.

[Acrobat (PDF) Version]


Roger Dannenberg and Robert Kotcher, “AuraFX: A Simple and Flexible Approach to Interactive Audio Effect-Based Composition and Performance,” in Proceedings of the 2010 International Computer Music Conference, San Francisco: The International Computer Music Association, (August 2010), pp. 147-152.

ABSTRACT. An interactive sound processor is an important tool for just about any modern composer. Performers and composers use interactive computer systems to process sound from live instruments. In many cases, audio processing could be handled using off-the-shelf signal processors. However, most composers favor a system that is more open-ended and extensible. Programmable systems are open-ended, but they leave many details to the composer, including graphical control interfaces, mixing and cross-fade automation, saving and restoring parameter settings, and sequencing through configurations of effects. Our work attempts to establish an architecture that provides these facilities without programming. It factors the problem into a framework, providing common elements for all compositions, and custom modules, extending the framework with unique effects and signal processing capabilities. Although we believe the architecture could be supported by many audio programming systems, we have created a particular instantiation (AuraFX) of the architecture using the Aura system.

[Acrobat (PDF) Version]


Yi, Lazzarini, Dannenberg and Fitch, “Extending Aura with Csound Opcodes,” in Proceedings of the 11th Sound & Music Computing joint with the 40th International Computer Music Conference, Athens, Greece, September 2014, pp. 1542-1549.

ABSTRACT: Languages for music audio processing typically offer a large assortment of unit generators. There is great duplication among different language implementations, as each language must implement many of the same (or nearly the same) unit generators. Csound has a large library of unit generators and could be a useful source of reusable unit generators for other languages or for direct use in applications. In this study, we consider how Csound unit generators can be exposed to direct access by other audio processing languages. Using Aura as an example, we modified Csound to allow efficient, dynamic allocation of individual unit generators without using the Csound compiler or writing Csound instruments. We then extended Aura using automatic code generation so that Csound unit generators can be accessed in the normal way from within Aura. In this scheme, Csound details are completely hidden from Aura users. We suggest that these techniques might eliminate most of the effort of building unit generator libraries and could help with the implementation of embedded audio systems where unit generators are needed but a full embedded Csound engine is not required.

[Acrobat (PDF) Version]


Distributed Performance Systems

Roger B. Dannenberg, Sofia Cavaco, Eugene Ang, Igor Avramovic, Barkin Aygun, Jinwook Baek, Eric Barndollar, Daniel Duterte, Jeffrey Grafton, Robert Hunter, Chris Jackson, Umpei Kurokawa, Daren Makuck, Timothy Mierzejewski, Michael Rivera, Dennis Torres, and Apphia Yu. “The Carnegie Mellon Laptop Orchestra.” In Proceedings of the 2007 International Computer Music Conference, Volume II. San Francisco: The International Computer Music Association, (August 2007), pp. II-340 - 343.

ABSTRACT. The Carnegie Mellon Laptop Orchestra (CMLO) is a collection of computers that communicate through a wireless network and collaborate to generate music. The CMLO is the culmination of a course on Computer Music Systems and Information Processing, where students learn and apply techniques for audio and MIDI programming, real-time synchronization and scheduling, music representation, and music information retrieval.

[Acrobat (PDF) Version]


Dannenberg and Neuendorffer, “Scaling Up Live Internet Performance with the Global Net Orchestra,” in Proceedings of the 11th Sound & Music Computing joint with the 40th International Computer Music Conference, Athens, Greece, September 2014, pp. 730-736.

ABSTRACT: Networked or “telematic” music performances take many forms, ranging from small laptop ensembles using local area networks to long-distance musical collaborations using audio and video links. Two important concerns for any networked performance are: (1) what is the role of communication in the music performance? In particular, what are the esthetic and pragmatic justifications for performing music at a distance, and (2) how are the effects of communication latency ameliorated or incorporated into the performance? A recent project, the Global Net Orchestra, is described. In addition to addressing these two concerns, the technical aspects of the project, which achieved a coordinated performance involving 68 computer musicians, each with their own connection to the network, are described.

[Acrobat (PDF) Version]


Roger B. Dannenberg, Huan Zhang, Amit Meena, Ankit Joshi, Josh Patel, Jorge Sastre, “Collaborative Music Creation and Performance with Soundcool Online,” in Web Audio Conference (WAC-2021), Online, July 2021.

ABSTRACT: Soundcool Online is a Web Audio re-implementation of the original Max/MSP implementation of Soundcool, a system for collaborative music creation. Soundcool has many educational applications, and because Linux has been adopted in many school systems, we turned to Web Audio to enable Soundcool to run on Linux as well as many other platforms. An additional advantage of Soundcool Online is the elimination of a large download, allowing beginners to try the system more easily. Another advantage is the support for sharing provided by a centralized server, where projects can be stored and accessed by others. A cloud-based server also facilitates collaboration at a distance where multiple users can control the same project. In this scenario, local sound synthesis provides high-quality sound without the large bandwidth requirements of shared audio streams. Experience with Web Audio and latency measurements are reported.

[Acrobat (PDF) Version]


Live Coding

Roger B. Dannenberg, “Live Coding Using a Visual Pattern Composition Language,” in Proceedings of the 12th Biennial Symposium on Arts and Technology, New London, CT, March 4-6, 2010. New London: Ammerman Center for Art & Technology, Connecticut College, 2010.

This is the first and main paper on a system called Patterns that I wrote for live performance.

ABSTRACT. Live coding is a performance practice in which music is created by writing software during the performance. Performers face the difficult task of programming quickly and minimizing the amount of silence to achieve musical continuity. The Patterns visual programming language is an experimental system for live coding. Its graphical nature reduces the chance of programming errors that interfere with a performance. Patterns offers graphical editing to change parameters and modify programs on-the-fly so that compositions can be listened to while they are being developed. Patterns is based on the combination of pattern generators introduced in Common Music.
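
The pattern-generator idea can be illustrated with a couple of composable generators in Python (hypothetical names in the style of Common Music, not the Patterns system itself): each generator yields one item per call, and because generators can contain other generators, complex streams compose from simple ones.

    import random

    class Cycle:
        """Yield items in order, repeating; nested patterns yield their next item."""
        def __init__(self, items):
            self.items, self.i = items, 0
        def next(self):
            item = self.items[self.i % len(self.items)]
            self.i += 1
            return item.next() if hasattr(item, "next") else item

    class Choice:
        """Yield a randomly chosen item on each call."""
        def __init__(self, items):
            self.items = items
        def next(self):
            item = random.choice(self.items)
            return item.next() if hasattr(item, "next") else item

    # an arpeggio whose fourth note alternates at random between two pitches
    melody = Cycle([60, 64, 67, Choice([70, 72])])
    print([melody.next() for _ in range(8)])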

[Acrobat (PDF) Version]


Dannenberg, “Patterns: A Graphical Language for Live Coding Music Performance,” in Proceedings of the Second International Conference on Computational Creativity, Mexico City, Mexico, April 2011, p. 160.

This is a 1-page paper that gives background for a demo session at ICCC 2011.

ABSTRACT. Patterns is a live-coding performance piece using an experimental visual language. The key idea is that objects generate streams of data and notes according to parameters that can be adjusted on-the-fly. Many objects take other objects or even lists of objects as inputs allowing complex patterns to be composed from simpler ones. The interconnections of objects are indicated by nested circles in an animated graphical display. The composition is created by manipulating graphical structures in real-time to create a program that in turn generates the music. The audience sees the program while listening to the music it generates.

[Acrobat (PDF) Version]


Soundcool

(See also “Collaborative Music Creation and Performance with Soundcool Online”)

Sastre, J., Murillo, A., Carrascosa, E., García, R., Dannenberg, R. B., Lloret, N., Morant, R., Scarani, S., and Muñoz, A., “Soundcool: New Technologies for Music Education,” in Proceedings of the International Conference of Education, Research, and Innovation, Seville, Spain, November 18-20, 2015, Ed. L. Gómez Chova, A. López Martínez, I. Candel Torres. Valencia: IATED Academy, 2015. pp. 5974-5982.

ABSTRACT: This paper proposes a new model for music education based on the use of the application Soundcool, a modular system for music education with smartphones, tablets and Kinect developed by Universitat Politècnica de València (UPV) through UPV (2013, Spain) and Generalitat Valenciana (2015-2016, Spain) projects. Soundcool has been programmed in Max, a modular graphical programming environment for music and interactive multimedia creation, and uses Open Sound Control, designed to share information in real time over a network with several media devices. Our application is a creative development environment in its own right, but for running Max patches it requires only the free application Max Runtime/Max player. The pedagogical architecture of Soundcool is based on three music education scenarios that allow interaction between the various agents involved in the classroom. Soundcool is going to be used as a music education tool in several European countries through an Erasmus+ European project.

[Adobe Acrobat (PDF) Version]


Scarani, Munoz, Serquera, Sastre and Dannenberg, “Software for Interactive and Collaborative Creation in the Classroom and Beyond: An Overview of the Soundcool Software,” Computer Music Journal, Vol. 43, No. 4 (Winter, 2019), pp. 12-24.

ABSTRACT. Soundcool is a free framework for collaborative creation of interactive and experimental computer music. Soundcool is designed to fill a gap between rigid ready-to-use applications and flexible programming languages. Soundcool offers easy-to-use sound generating and processing elements, much like ready-made applications, but it enables flexible configuration and control, more like programming languages. The system runs on personal computers with an option for control via smartphones, tablets, and other devices using the Open Sound Control (OSC) protocol. Originally developed to support a new music curriculum, Soundcool is being used at different educational institutions in Spain, Portugal, Italy and Romania through EU-funded Erasmus+ projects. In this paper we present our system and showcase three different scenarios as examples of how Soundcool meets its objectives as an easy-to-use, versatile, and creative tool.

[Acrobat (PDF) Version]


Sastre and Dannenberg, “Soundcool: collaborative sound and visual creation,” Sonic Ideas, Vol. 12, No. 22 (June 2020), pp. 75-86. (Also published in Spanish as “Soundcool: creacion sonora y visual colaborativa,” Ideas Sonicas, Vol. 12, No. 22 (June 2020), pp. 63-74.) ISSN 2317-9694.

ABSTRACT. Soundcool is a free system for musical, sound and visual collaborative creation through mobile phones, tablets and other interfaces developed by the Performing Arts and Technology Group (PerformingARTech) of the Universitat Politècnica de València (UPV) with the collaboration of Carnegie Mellon University. The PerformingARTech group is a multidisciplinary team of researchers with artistic and technical expertise led by Dr. Sastre, see team at http://soundcool.org.

[Acrobat (PDF) English Version]
[Acrobat (PDF) Spanish Version]


Sastre, Lloret, Scarani, Dannenberg, and Jara, “Collaborative Creation with Soundcool for Socially Distanced Education,” in Korean Electro-Acoustic Music Society's 2020 Annual Conference Proceedings, 2020, pp. 47-51.

ABSTRACT: Soundcool is a flexible, modular computer music software system created for music education. Moreover, Soundcool is an educational approach that embraces collaboration and discovery in which the teacher serves as a mentor for project-based learning. To enable collaboration, Soundcool was designed from the beginning to allow individual modules to be controlled over WiFi using smartphone and tablet apps. This collaborative feature has enabled network-based performance over long distances. In particular, the recent demand for social distancing motivated further explorations to use Soundcool for distance education and to enable young musicians to perform together in a creative way. We describe the educational approach of Soundcool, experience with network performances with children, and future plans for a web-based social-network-inspired collaborative music creation system.

[Adobe Acrobat (PDF) Version]


Scarani, Lloret, Sastre, and Dannenberg, “Soundcool: Creatividad Colaborativa a Distancia,” Tsantsa. Revista de Investigaciones Artísticas, No. 12 (Dec 2021), ISSN 1390-8448.

RESUMEN. Creatividad audiovisual en tiempo real, colaborativa y a distancia. Con estas pocas palabras se resumen las principales características que el proyecto Soundcool ha alcanzado integrar en su sistema informático, transformando el “aquí y en este momento” en “en cualquier sitio en este momento.”

ABSTRACT. Real time audiovisual creation, collaborative and at a distance. These few words summarize the main characteristics that the Soundcool project has managed to integrate into its software system, transforming the “here and now” into “anywhere and now.”

[Acrobat (PDF) Version]


Dannenberg, Sastre, Scarani, Lloret, and Carrascosa, “Mobile Devices and Sensors for an Educational Multimedia Opera Project,” Sensors, Vol. 23, No. 4378 (2023).

ABSTRACT. Interactive computer-based music systems form a rich area for the exploration of collaborative systems where sensors play an active role and are important to the design process. The Soundcool system is a collaborative and educational system for sound and music creation as well as multimedia scenographic projects, allowing students to produce and modify sounds and images with sensors, smartphones and tablets in real time. As a real-time collaborative performance system, each performance is a unique creation. In a comprehensive educational project, Soundcool is used to extend the sounds of traditional orchestral instruments and opera singers with electronics. A multidisciplinary international team participates, resulting in different performances of the collaborative multimedia opera The Mother of Fishes in countries such as Spain, Romania, Mexico and the USA.

[Acrobat (PDF) Version]


See also “Time-Flow Concepts and Architectures For Music and Media Synchronization.”