Interactive Computer Music Systems
A session of the Acoustical Society of America (ASA) conference,
Pittsburgh, PA, Tuesday morning, 8:25-11:30 a.m., June 4, 2002.
Chaired by Roger B. Dannenberg, School of Computer Science,
Carnegie Mellon University, Pittsburgh.
This session explores computer music performance systems and virtual
instruments, especially the use of sensors and computing to control real-time
sound generation processes. The presenters are an internationally distinguished
collection of scientists, composers, and musicians, and most are comfortable
in two or more of these roles.
8:25 AM
Chair's comments: Roger Dannenberg
8:30 AM
The structural implications of interactive creativity
Joel Chadabe, Electronic Music Foundation, Ltd.
The functioning of any particular electronic musical instrument can be
placed somewhere along a line that extends from deterministic to indeterministic.
Bearing in mind this author's conviction that the goal of technology should
be to better human existence, we ask: In what ways does an electronic musical
instrument function for the benefit of its performer? Although deterministic
instruments may offer more powerful controls than traditional instruments,
they typically put a performer in the traditional situation of making a
gesture and expecting a predictable effect. Indeterministic instruments,
on the other hand, put a performer in an interactive role of improvising
relative to an unpredictable output. By 'interactive', I mean 'mutually
influential'. The performer influences the instrument and the instrument
influences the performer. The unique advantage of such interactive instruments
is that they foster 'interactive creativity'.
The design of a traditional instrument is fundamentally different from
the design of an interactive instrument. A traditional instrument
is structured as a single cause and effect, articulated as a synchronous
linear path through a hierarchy of controls from a performer operating
an input device to the multiple variables of a sound generator. An interactive
instrument, on the other hand, is structured as a network of many causes
and effects at various levels of importance, with a performer's input as
only one of the causes of the instrument's output in sound.
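As a loose illustration of this structural distinction (a toy sketch, not drawn from the talk; all names and mappings are hypothetical), one can contrast a deterministic gesture-to-sound mapping with a network in which the performer's gesture is only one cause among several:

```python
import random

class InteractiveInstrument:
    """Toy network: the performer's input is one cause among several."""
    def __init__(self):
        self.state = 0.5  # internal state that evolves on its own

    def step(self, gesture):
        # The instrument's own dynamics, here a random walk (one cause).
        self.state += random.uniform(-0.1, 0.1)
        self.state = min(max(self.state, 0.0), 1.0)
        # The gesture is another cause, not the sole determinant of output.
        return 200 + 800 * (0.5 * gesture + 0.5 * self.state)  # pitch in Hz

# Deterministic instrument, for contrast: gesture -> predictable effect.
def deterministic_instrument(gesture):
    return 200 + 800 * gesture

inst = InteractiveInstrument()
for g in [0.2, 0.2, 0.2]:          # identical gestures...
    print(round(inst.step(g), 1))  # ...yield different pitches to respond to
```

The performer must improvise relative to the changing output, which is the "mutually influential" loop described above.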
The author will present several historical examples of interactive electronic
musical instruments and offer some speculations on the future.
9:00 AM
Music scene description: Toward audio-based real-time music understanding
Masataka Goto, PRESTO, JST / National Institute of Advanced Industrial Science and Technology (AIST), Japan
Music understanding is an important component of audio-based interactive
music systems. A real-time music scene description system for the
computational modeling of music understanding is proposed. This research
is based on the assumption that a listener understands music without deriving
musical scores or even fully segregating signals. In keeping with this
assumption, our music scene description system produces intuitive
descriptions of music, such as the beat structure and the melody and bass
lines. Two real-time subsystems have been developed, a beat tracking subsystem
and a melody-and-bass detection subsystem, both of which can deal with real-world
monaural audio signals sampled from popular-music CDs. The beat tracking
subsystem recognizes a hierarchical beat structure comprising the quarter-note,
half-note, and measure levels by using three kinds of musical knowledge:
of onset times, of chord changes, and of drum patterns. The melody-and-bass
detection subsystem estimates the F0 (fundamental frequency) of melody
and bass lines by using a predominant-F0 estimation method called PreFEst.
Rather than relying on the often unreliable fundamental frequency component
itself, PreFEst obtains the most predominant F0 supported by harmonics within
an intentionally limited frequency range. Several applications of music
understanding are described, including a beat-driven, real-time computer
graphics and lighting controller.
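As a rough sketch of the predominant-F0 idea (a simplification, not Goto's actual PreFEst, which fits a probabilistic weighted-mixture of tone models): score each candidate F0 in an intentionally limited range by the spectral energy at its harmonics, then take the most salient candidate. The function name and parameters below are illustrative.

```python
import numpy as np

def predominant_f0(signal, sr, fmin=80.0, fmax=260.0, n_harmonics=8):
    """Toy predominant-F0 estimator: pick the candidate F0 whose
    harmonics carry the most spectral energy. The real PreFEst
    instead fits a probabilistic weighted-mixture model."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)

    candidates = np.arange(fmin, fmax, 1.0)  # limited frequency range
    def salience(f0):
        # Sum energy near each harmonic; note that no energy is
        # required at the fundamental itself.
        return sum(spectrum[np.argmin(np.abs(freqs - k * f0))]
                   for k in range(1, n_harmonics + 1))

    return max(candidates, key=salience)

# Example: a harmonic tone at 140 Hz with a missing fundamental.
sr = 16000
t = np.arange(4096) / sr
sig = sum(np.sin(2 * np.pi * 140 * k * t) / k for k in range(2, 6))
print(predominant_f0(sig, sr))  # near 140 despite the absent fundamental
```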
9:30 AM
Making the computer “listen” to music
Christopher Raphael, Dept. of Mathematics and Statistics,
University of Massachusetts, Amherst
A computer system is discussed that provides real-time accompaniment to
a live musician playing a non-improvisatory piece of music. Particular
attention is devoted to the “listening” process, in which the computer
must follow the soloist's progress through the musical score by interpreting
the sampled acoustic signal. The process is complicated by the significant
variation and occasional errors from the live player during performance.
A hidden Markov model is introduced, providing a principled, trainable, and
fast solution to the listening problem. The system is capable of
assessing its own level of uncertainty about score position, as well as
accommodating the sometimes strong signal component from the accompaniment
instrument. A live demonstration will be provided.
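A minimal sketch of the HMM "listening" idea (illustrative only, not Raphael's actual system): score positions are hidden states, each audio frame yields per-position likelihoods, and forward filtering produces a distribution over positions whose entropy quantifies the system's own uncertainty.

```python
import numpy as np

def forward_step(belief, transition, likelihood):
    """One HMM forward-filtering step over score positions.
    belief:     P(position | observations so far), shape (n,)
    transition: P(next position | position), shape (n, n)
    likelihood: P(current frame | position), shape (n,)"""
    predicted = transition.T @ belief   # advance through the score
    posterior = predicted * likelihood  # weigh by acoustic evidence
    return posterior / posterior.sum()  # normalize

n = 5  # toy score with 5 note events
# Left-to-right model: stay on the current note or move to the next.
T = 0.6 * np.eye(n) + 0.4 * np.eye(n, k=1)
T[-1, -1] = 1.0

belief = np.zeros(n); belief[0] = 1.0
for frame_like in [np.array([0.9, 0.1, 0.0, 0.0, 0.0]),
                   np.array([0.2, 0.7, 0.1, 0.0, 0.0]),
                   np.array([0.0, 0.3, 0.6, 0.1, 0.0])]:
    belief = forward_step(belief, T, frame_like)
    entropy = -np.sum(belief * np.log(belief + 1e-12))  # self-assessed uncertainty
    print(np.argmax(belief), round(entropy, 2))
```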
10:00 AM
Machine musicianship
Robert Rowe, Department of Music and Performing Arts Professions,
School of Education, New York University
The training of musicians begins by teaching basic musical concepts, a
collection of knowledge commonly known as musicianship. Computer programs
designed to implement musical skills (e.g., to make sense of what they
hear, perform music expressively, or compose convincing pieces) can similarly
benefit from access to a fundamental level of musicianship. Recent research
in music cognition, artificial intelligence, and music theory has produced
a repertoire of techniques that can make the behavior of computer programs
more musical. Many of these were presented in a recently published book/CD-ROM
entitled “Machine Musicianship”. For use in interactive music systems,
we are interested in those techniques that are fast enough to run in real time
and that need only refer to the material as it appears in sequence.
This talk will review several applications that are able to identify the
tonal center of musical material during performance. Beyond this specific
task, the design of real-time algorithmic listening through the concurrent
operation of several connected analyzers is examined. The presentation
includes discussion of a library of C++ objects that can be combined to
perform interactive listening and a demonstration of their capability.
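The library itself is C++, but as a rough Python illustration of one standard approach to tonal-center identification (Krumhansl-Schmuckler profile matching, which may differ from the exact algorithms demonstrated), a running pitch-class histogram can be correlated against rotated key profiles:

```python
import numpy as np

# Krumhansl-Kessler major/minor key profiles (probe-tone ratings).
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NAMES = ["C", "C#", "D", "D#", "E", "F",
         "F#", "G", "G#", "A", "A#", "B"]

def tonal_center(pitch_classes):
    """Correlate a pitch-class histogram with all 24 rotated profiles;
    the best-correlated rotation names the likely tonal center."""
    hist = np.bincount(np.asarray(pitch_classes) % 12, minlength=12)
    best = max(((np.corrcoef(hist, np.roll(profile, tonic))[0, 1],
                 NAMES[tonic] + mode)
                for profile, mode in [(MAJOR, " major"), (MINOR, " minor")]
                for tonic in range(12)))
    return best[1]

# C-major scale fragment as MIDI pitch classes; the histogram can be
# updated note by note as material appears in sequence.
print(tonal_center([0, 2, 4, 5, 7, 9, 11, 0, 4, 7]))  # -> C major
```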
10:30 AM
The IMUTUS interactive music tuition system
George D. Tambouratzis, Stelios Bakamidis, Ioannis Dologlou, George Carayannis,
Markos Dendrinos, Institute for Language and Speech Processing, Greece
This presentation focuses on the IMUTUS project, which concerns the creation
of an innovative method for training users on traditional musical instruments
with no MIDI (Musical Instrument Digital Interface) output. The entities
collaborating in IMUTUS are ILSP (co-ordinator), EXODUS, SYSTEMA, DSI,
SMF, GRAME and KTH.
The effectiveness of IMUTUS is enhanced via an advanced user interface
incorporating multimedia techniques. The Internet plays a pivotal role during
training, with the student receiving guidance over the net from a specially
created teacher group. Interactivity is emphasised via automatic-scoring tools,
which provide fast yet accurate feedback to the user, while virtual-reality
methods assist the student in perfecting his or her technique. IMUTUS incorporates
specialised recognition technology for the transformation of acoustic signals
and music scores to MIDI format and their incorporation in the training process.
This process is enhanced by periodically enriching the score database,
while customisation to each user's requirements is supported.
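As a small illustration of the acoustic-signal-to-MIDI step (a generic equal-temperament conversion, not the project's actual recognition technology), a detected fundamental frequency maps to a MIDI note number as follows:

```python
import math

def f0_to_midi(f0_hz, a4_hz=440.0):
    """Map a detected fundamental frequency to the nearest
    equal-tempered MIDI note number (A4 = 440 Hz = note 69)."""
    return round(69 + 12 * math.log2(f0_hz / a4_hz))

# A recorder playing roughly 392 Hz should register as G4 (MIDI 67).
print(f0_to_midi(392.0))  # -> 67
```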
This work is partially supported by the European Community under the Information
Society Technologies (IST) RTD programme. The authors are solely responsible
for the content of this communication. It does not represent the opinion
of the European Community, and the European Community is not responsible
for any use that might be made of data appearing therein.
11:00 AM
Interactive systems research at CNMAT
David Wessel, CNMAT, UC Berkeley
A live-performance musical instrument can be assembled around current laptop
computer technology. One adds a controller such as a keyboard or other
gestural input device, a sound diffusion system, some form of connectivity
to processor(s) providing audio I/O and gestural-controller input, and
reactive real-time native signal-processing software.
A system is described consisting of a hand-gesture controller; software for gesture
analysis and mapping, machine listening, composition, and sound synthesis;
and a loudspeaker with a controllable radiation pattern. Interactivity
begins in the setup, wherein the speaker-room combination is tuned with
a least-mean-squares (LMS) procedure. This system was designed for improvisation.
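As a minimal sketch of the LMS idea (illustrative only; the specific CNMAT tuning procedure is not detailed here), an adaptive FIR filter is driven toward a measured speaker-room response by the least-mean-squares weight update:

```python
import numpy as np

def lms_tune(x, d, n_taps=8, mu=0.01):
    """Least-mean-squares adaptation of an FIR filter.
    x: excitation signal sent to the loudspeaker
    d: desired (target) response measured at the listening position
    Returns the adapted filter coefficients."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # newest sample first
        e = d[n] - w @ u                   # error vs. desired response
        w += mu * e * u                    # LMS weight update
    return w

# Toy example: identify a short simulated speaker-room response.
rng = np.random.default_rng(1)
room = np.array([1.0, 0.5, -0.3, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, room)[:len(x)]
print(np.round(lms_tune(x, d), 2)[:4])  # approaches `room`
```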
It is argued that software suitable for carrying out an improvised musical
dialog with another performer poses special challenges. The processes
underlying the generation of musical material must be very adaptable, capable
of rapid changes in musical direction. Machine listening techniques
are used to help the performer adapt to new contexts. Machine learning
can play an important role in the development of such systems. In
the end, as with any musical instrument, human skill is essential.
Practice is required not only for the development of musically appropriate
human motor programs but for the adaptation of the computer-based instrument
as well.