Blattner and Dannenberg, eds., Multimedia Interface Design, ACM Press, 1992. (Also published in Chinese, 1994.)
This book resulted from a workshop at CHI'90. It's a collection of papers organized by general topic. At the time, corporations (esp. Apple) were pushing what we thought was a very narrow view of multimedia and we thought it would be a good idea to present a collection that spanned a wide range of technology and applications.
Rob Fisher, Paul Vanouse, Roger Dannenberg, and Jeff Christensen, “Audience Interactivity: A Case Study in Three Perspectives Including Remarks About a Future Production,” in Proceedings of the Sixth Biennial Symposium for Arts and Technology, Connecticut College, (February 1997).
Audience interactivity was a primary element of a major planetarium production about cell biology entitled “Journey into the Living Cell.” The artist/authors were directly involved with the design of the production from concept to realization. Rob Fisher was the Project, Artistic and Technical Director. Paul Vanouse was Assistant Director responsible for the design and production of the interactive visual portions of the show. Roger Dannenberg directed the interactive audio portions and was responsible for the interactive audio system with the assistance of Jeff Christensen. The following paper provides background about the production and our varied perspectives on the use of the innovative interactive system. In addition, a future production currently pending approval of an NSF grant will be described. This new show about the brain builds upon the experiences gained in the cell project and sheds light on features of audience interactivity that point to some startling conclusions about group behavior.[Acrobat Version] [Postscript Version] [HTML Version]
Dannenberg, Witkin, and Fisher, Method and apparatus for interactive audience participation by audio command. US Patent #798382, 1997.
An interactive audience participation system which utilizes audio command signals, such as loudness or sound intensity, transmitted by different audience groups. These respective audio command signals are detected to determine the aggregate of the signals for each group and then the detected aggregates are converted into data. An audience sensed interactive communication medium device, such as a large stadium video screen, is correspondingly manipulated by the detected data so that the audience may play a competitive or collaborative game.
Roger Dannenberg and Rob Fisher, “An Audience-Interactive Multimedia Production on the Brain” in Proceedings of the Symposium for Arts and Technology, Connecticut College, (March 2001).
A multimedia planetarium show, “Gray Matters: The Brain Movie,” was created to teach fundamental scientific concepts about the human brain. During the show, the planetarium dome represents a giant brain enclosing the audience. Audience members play the role of neurons in various simulations and representations of brain function. This leads to new ways of thinking about audience interactivity in theaters, with many applications to art and entertainment. Some of the problems of large art/science collaborations are also discussed.[Acrobat Version] [HTML Version]
Dannenberg, “Interactive Visual Music: A Personal Perspective,” Computer Music Journal, 29(4) (Winter 2005), pp. 25-35.
Interactive performance is one of the most innovative ways computers can be used in music, and it leads to new ways of thinking about music composition and performance. Interactive performance also poses many technical challenges, resulting in new languages and special hardware including sensors, synthesis methods, and software techniques. As if there are not already enough problems to tackle, many composers and artists have explored the combination of computer animation and music within interactive performances. In this article, I describe my own work in this area, dating from around 1987, including discussions of artistic and technical challenges as they have evolved. I also describe the Aura system, which I now use to create interactive music, animation, and video.[Acrobat (PDF) Version]
Dannenberg and Bernstein, “‘Origin, Direction, Location’: An Installation,” in Proceedings of the 10th Biennial Symposium on Arts and Technology, New London, Connecticut: Connecticut College, (2006).
An interactive installation uses microphones to capture sounds from participants. Sounds activate images, causing them to display and transform. Sounds are also processed, mixed, and fed back into the gallery space. Inspired by Buddhist teachings, the piece emerges from and is created by the participants.[Acrobat Version]
See also Interactive Performance, Real-Time Scheduling/Dispatching, and The Tactus Project and Related Synchronization.
Dannenberg and Jameson, “Real-Time Issues in Computer Music,” in Proceedings of the Real-Time Systems Symposium, IEEE Computer Society Press, (December 1993), pp. 258-261.
Dannenberg and P. Hibbard, “A Butler Process for Resource Sharing on Spice Machines,” Transactions on Office Information Systems, Vol. 3, No. 3 (July 1985), pp. 234-252.
A network of personal computers may contain a large amount of distributed computing resources. For a number of reasons it is desirable to share these resources, but sharing is complicated by issues of security and autonomy. A process known as the Butler addresses these problems and provides support for resource sharing. The Butler relies upon a capability-based accounting system called the Banker to monitor the use of local resources.[Acrobat Version]
Dannenberg, “Protection for Communication and Sharing in A Personal Computer Network,” in Proceedings of the Fifth International Conference on Distributed Computer Systems, (May 1985), pp. 88-98.
Dannenberg, “AMPL: Design, Implementation, and Evaluation of A Multiprocessing Language,” CMU Tech Report CMU-CS-82-116, 1982.
Dannenberg, “Resource Sharing In A Network Of Personal Computers,” CMU, 1982 (Ph.D. Thesis)
Brandt and Dannenberg, “Low-Latency Music Software Using Off-The-Shelf Operating Systems,” in Proceedings of the International Computer Music Conference, San Francisco: International Computer Music Association, (1998), pp. 137-141.
ABSTRACT: The Garnet research project, which is creating a set of tools to aid the design and implementation of highly interactive, graphical, direct-manipulation user interfaces, is discussed. Garnet also helps designers rapidly develop prototypes for different interfaces and explore various user-interface metaphors during early product design. It emphasizes easy specification of object behavior, often by demonstration and without programming. Garnet contains a number of different components grouped into two layers. The Garnet Toolkit (the lower layer) supplies the object-oriented graphics system and constraints, a set of techniques for specifying the objects' interactive behavior in response to the input devices, and a collection of interaction techniques. On top of the Garnet Toolkit layer are a number of tools to make creating user interfaces easier. The components of both layers are described.[Acrobat Version]
Dannenberg and Amon, “A Gesture Based User Interface Prototyping System,” in Proceedings of the ACM SIGGRAPH Symposium on User Interface Software and Technology (November 1989), pp. 127-132.
Myers, Vander Zanden, and Dannenberg, “Creating Graphical Interactive Application Objects by Demonstration,” in Proceedings of the ACM SIGGRAPH Symposium on User Interface Software and Technology (November 1989).
ABSTRACT: Instructional Design aspires to define a sound curriculum by using instructional analysis and concept organization. Along with other criteria, the purpose of instructional design is to ensure integrity among instructional objectives, tasks that students must perform, and the evaluation of their performance. Currently, the methods used in instructional design models have a limited scientific basis. Even with many efforts towards a science of instruction, this goal remains elusive. Computers may provide a positive shift towards systematic and verifiable instructional analysis with the advent of intelligent tutoring systems and the byproducts of their development. One such system, the Piano Tutor, has led to a formal model for curriculum design and analysis and is described in detail.[Acrobat Version]
Frances K. Dannenberg, Roger B. Dannenberg, and Philip Miller, “Teaching Programming to Musicians,” in 1984 Proceedings of the Fourth Annual Symposium on Small Computers in the Arts (October 1984), pp. 114-122.
ABSTRACT: A new approach has been developed for teaching programming to musicians. The approach uses personal computers with music synthesis capabilities, and students write programs in order to realize musical compositions. Our curriculum emphasizes abstraction in programming by the early introduction of high-level concepts and the late introduction of programming language details. We also emphasize abstraction by relating programming concepts to musical concepts which are already familiar to our students. We have successfully used this curriculum to teach Pascal to children and we are presently using it in a university-level course for composers.[Acrobat Version]
See also: The Piano Tutor
Dannenberg, “On Machine Architecture for Structured Programs,” Communications of the Association for Computing Machinery, 22(5) (May 1979), p. 311.
Dannenberg and Bates, “A Model for Interactive Art,” in Proceedings of the Fifth Biennial Symposium for Arts and Technology, Connecticut College, (March 1995), pp. 103-111.
ABSTRACT: The new technologies of computer systems and artificial intelligence enable new directions in art. One new direction is the creation of highly interactive works based on computation. We describe several interactive artworks and show that there are strong similarities that transcend categories such as drama, music, and dance. Examining interactive art as a general approach, we identify one important dimension of variability having to do with the degree to which the art is focused on the process of interaction as opposed to generating a final product. The model that results from our analysis suggests future directions and forms of interactive art. We speculate what some of these new forms might be like.[Acrobat Version] [Postscript Version]
Thom and Dannenberg, “Predicting Chords in Jazz,” in Proceedings of the 1995 International Computer Music Conference, International Computer Music Association, (September 1995), pp. 237-8.
Woodruff, Pardo, and Dannenberg, “Remixing Stereo Music with Score-Informed Source Separation,” in ISMIR 2006 7th International Conference on Music Information Retrieval Proceedings, Victoria, BC, Canada: University of Victoria, October 2006, pp. 314-319.
Dannenberg, “An Intelligent Multi-Track Audio Editor,” in Proceedings of the 2007 International Computer Music Conference, Volume II. San Francisco: The International Computer Music Association, (August 2007), pp. II-89 - 94.
ABSTRACT. Audio editing software allows multi-track recordings to be manipulated by moving notes, correcting pitch, and making other fine adjustments, but this is a tedious process. An “intelligent audio editor” uses a machine-readable score as a specification for the desired performance and automatically makes adjustments to note pitch, timing, and dynamic level.
Adobe Acrobat (PDF) Version
See also Spectral Interpolation
This paper was selected for inclusion in a post-proceedings publication (see below), which contains an expanded version of the paper. Subsequent research (unpublished) revealed some important new results. First, our model of commercial convolution-based reverb was not complete. While we did find some use of the simple model in our paper (left channel convolved with a left impulse response and right channel convolved with a right impulse response), the best-sounding commercial reverb we found uses 4 impulse responses: left input to left output, left input to right output, right input to left output, and right input to right output. This begins to approximate our “placed” convolution reverb, which does two convolutions (left and right) on each of N inputs (sound sources). In our tests, N = 3, which is not far from the commercial systems (N = 2). Our recent tests with only 10 subjects did not show a significant preference, and to our ears, the results are better only with certain materials. We want to explore what conditions favor “placed” convolution and do a larger study in the future.
ABSTRACT. Current advances in techniques have made it possible to simulate reverberation effects in real world performance spaces by convolving dry instrument signals with physically measured impulse response data. Such reverberation effects have recently become commonplace; however, current techniques apply a single effect to an entire ensemble, and then separate individual instruments in the stereo field via panning. By measuring impulse response data from each instrument's desired location, it is possible to place instruments in the stereo field using their unique initial reflection and reverberation patterns. A pilot study compares the perceived quality of dry signals convolved to stereo center, convolved to stereo center and panned to desired placement, and convolved with measured impulse responses to simulate actual placement. The results of a single blind study show a conclusive preference for location-based reverberation effects.
Adobe Acrobat (PDF) Version
Please see my notes above about subsequent research.
ABSTRACT. Reverberation can be simulated by convolving dry instrument signals with physically measured impulse response data. Such reverberation effects have recently become commonplace; however, current techniques apply a single effect to an entire ensemble, and then separate individual instruments in the stereo field via panning. By measuring impulse response data from each desired instrument location, it is possible to place instruments in the stereo field using their unique early reflection and reverberation patterns without panning. A pilot study compares the perceived quality of dry signals convolved to stereo center, convolved to stereo center and panned to desired placement, and convolved with measured impulse responses to simulate placement. The results of a single blind study show a preference for location-based (as opposed to panning-based) reverberation effects.
Adobe Acrobat (PDF) Version
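As an illustration of the “placed” convolution scheme described in the note above (two convolutions per source, one per output channel), here is a minimal numpy sketch; the sources and impulse responses below are random placeholders, not measured data:

```python
import numpy as np

def placed_convolution_mix(sources, irs_left, irs_right):
    """Mix N dry mono sources to stereo.  Each source is convolved with its
    own measured left and right impulse responses (two convolutions per
    source), rather than being convolved once and panned afterward."""
    n = max(len(s) + max(len(l), len(r)) - 1
            for s, l, r in zip(sources, irs_left, irs_right))
    left, right = np.zeros(n), np.zeros(n)
    for src, ir_l, ir_r in zip(sources, irs_left, irs_right):
        y_l, y_r = np.convolve(src, ir_l), np.convolve(src, ir_r)
        left[:len(y_l)] += y_l
        right[:len(y_r)] += y_r
    return left, right

# Placeholder data: three sources (N = 3, as in the paper's tests) with
# random decaying "impulse responses" standing in for measured ones.
rng = np.random.default_rng(0)
decay = np.exp(-np.arange(256) / 64.0)
sources = [rng.standard_normal(1000) for _ in range(3)]
irs_l = [rng.standard_normal(256) * decay for _ in range(3)]
irs_r = [rng.standard_normal(256) * decay for _ in range(3)]
left, right = placed_convolution_mix(sources, irs_l, irs_r)
```

A production system would use FFT-based convolution for long impulse responses, but the signal flow is the same.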
Xia, Tay, Dannenberg, and Veloso, “Autonomous Robot Dancing Driven by Beats and Emotions of Music” in Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2012), Conitzer, Winikoff, Padgham, and van der Hoek (eds.), 4-8 June 2012, Valencia, Spain.
ABSTRACT: Many robot dances are preprogrammed by choreographers for a particular piece of music so that the motions can be smoothly executed and synchronized to the music. We are interested in automating the task of robot dance choreography to allow robots to dance without detailed human planning. Robot dance movements are synchronized to the beats and reflect the emotion of any music. Our work is made up of two parts: (1) The first algorithm plans a sequence of dance movements that is driven by the beats and the emotions detected through the preprocessing of selected dance music. (2) We also contribute a real-time synchronizing algorithm to minimize the error between the execution of the motions and the plan. Our work builds on previous research to extract beats and emotions from music audio. We created a library of parameterized motion primitives, whereby each motion primitive is composed of a set of keyframes and durations, and generate the sequence of dance movements from this library. We demonstrate the feasibility of our algorithms on the NAO humanoid robot to show that the robot is capable of using the mappings defined to autonomously dance to any music. Although we present our work using a humanoid robot, our algorithm is applicable to other robots.[Adobe Acrobat (PDF) Version]
Dannenberg and Bookstein, “Practical Aspects of a Midi Conducting Program,” in Proceedings of the 1991 International Computer Music Conference, International Computer Music Association, (October 1991), pp. 537-540.
ABSTRACT: A MIDI-based conducting program was implemented to allow a conductor to control the tempo of a MIDI performance that accompanies a live performer. The tempo is controlled by tapping beats on a keyboard. A number of features were added in the process of preparing for a large-scale performance, a concerto for live piano and MIDI orchestra and chorus. This experience led to a number of practical suggestions.[Adobe Acrobat (PDF) Version]
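The core tap-to-tempo mechanism can be sketched in a few lines. This is a hypothetical minimal version that exponentially smooths the inter-tap intervals, not the paper's implementation:

```python
def tempo_from_taps(tap_times, smoothing=0.5):
    """Estimate tempo (BPM) from a list of beat tap times (seconds),
    exponentially smoothing the inter-tap intervals so that one uneven
    tap does not jerk the tempo."""
    if len(tap_times) < 2:
        return None
    period = tap_times[1] - tap_times[0]
    for prev, cur in zip(tap_times[1:], tap_times[2:]):
        period = smoothing * period + (1 - smoothing) * (cur - prev)
    return 60.0 / period

# Taps roughly every 0.5 s should come out near 120 BPM.
bpm = tempo_from_taps([0.0, 0.5, 1.01, 1.49, 2.0])
```

In a conducting program, each tap would also trigger the next beat of the MIDI performance; the smoothed period then sets playback tempo between taps.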
Dannenberg, ed., Computer Music Video Review, International Computer Music Association (video), 1991.
Dannenberg, “Danger in Floating-Point-to-Integer Conversion,” (letter to editor), Computer Music Journal, 26(2), (Summer 2002), p. 4.
This is a letter describing the problems of rounding floating point samples to integers, which in C is a non-linear operation in terms of signal processing.
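The non-linearity is easy to demonstrate. C's `(int)` cast truncates toward zero, and Python's `int()` behaves the same way, so this sketch shows the doubled-width step at zero and one common fix:

```python
# Truncation toward zero (C's (int) cast, Python's int()) maps the whole
# open interval (-1.0, 1.0) to 0: the step at zero is twice as wide as
# every other step, which is a non-linearity in signal-processing terms.
samples = [-1.6, -0.7, -0.3, 0.3, 0.7, 1.6]

truncated = [int(x) for x in samples]      # -> [-1, 0, 0, 0, 0, 1]
rounded = [round(x) for x in samples]      # note: round() ties go to even

# A common fix: round half away from zero by offsetting before truncating.
def float_to_int(x):
    return int(x + 0.5) if x >= 0 else int(x - 0.5)

fixed = [float_to_int(x) for x in samples]  # -> [-2, -1, 0, 0, 1, 2]
```

For audio samples, the truncation version effectively adds a signal-dependent distortion around zero, which is exactly the danger the letter describes.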
Lee, Dannenberg, and Chun, “Cancellation of Unwanted Audio to Support Interactive Computer Music,” in The ICMC 2004 Proceedings, San Francisco: The International Computer Music Association, (2004), pp. 692-698.
ABSTRACT: A real-time unwanted-audio cancellation system is developed. The system enhances recorded sound by canceling unwanted loudspeaker sounds picked up during the recording. After cancellation, the resulting sound gives an improved estimation of the live performer's sound. The cancellation works by estimating the unwanted audio signal and subtracting it from the recorded signal. The canceller is composed of a delay block and two adaptive digital filters. Our work extends conventional echo-cancellation methods to address problems we encountered in music applications. We describe a realtime implementation in Aura and present experimental results in which the proposed canceller enhances the performance of a real-time pitch detector. The cancellation ratio is measured and limitations of the system are discussed.
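The general approach (estimate the unwanted loudspeaker signal with an adaptive filter and subtract it) can be sketched with a textbook NLMS adaptive filter. This single-filter numpy version illustrates the principle only; it is not the paper's Aura implementation, which uses a delay block and two adaptive filters:

```python
import numpy as np

def nlms_cancel(reference, recorded, taps=32, mu=0.5, eps=1e-8):
    """Adaptively estimate how `reference` (the loudspeaker feed) appears
    in `recorded` and subtract that estimate; the residual approximates
    the live performer's sound."""
    w = np.zeros(taps)              # adaptive filter weights
    x = np.zeros(taps)              # recent reference samples, newest first
    out = np.zeros(len(recorded))
    for n in range(len(recorded)):
        x = np.roll(x, 1)
        x[0] = reference[n]
        y = w @ x                            # estimate of the unwanted audio
        e = recorded[n] - y                  # residual after cancellation
        w += mu * e * x / (x @ x + eps)      # normalized LMS weight update
        out[n] = e
    return out

# Synthetic check: recorded = loudspeaker signal through an echo path,
# plus a quiet "performer" signal that should survive cancellation.
rng = np.random.default_rng(1)
ref = rng.standard_normal(5000)
echo_path = np.array([0.6, 0.3, -0.2, 0.1])
performer = 0.1 * rng.standard_normal(5000)
recorded = np.convolve(ref, echo_path)[:5000] + performer
residual = nlms_cancel(ref, recorded)
```

After convergence, the residual power approaches the performer's power, which is what improves downstream pitch detection.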
Dannenberg, “Book Review: David Cope, Computer Models of Musical Creativity,” Artificial Intelligence 170 (November 2006), pp. 1218-1221.
ABSTRACT: Computer Models of Musical Creativity describes its author's many-faceted approach to music composition by computer, emphasizing the nature of creativity, how it can be modeled, and how these models are applied. I find this book especially interesting because David Cope is one of the few composer/researchers to attempt to model traditional musical styles of composers such as Bach, Mozart, and Beethoven. While the musical quality of his computer-generated works has been the subject of much debate, the fact that there is any debate at all is quite an accomplishment. Just as computers and AI have raised many difficult questions about the nature of intelligence, Cope's work and this book raise questions about “creativity” - where does it come from?, can computers be creative?, and is creativity different from intelligence? While most work in computer generated music ultimately relies on human creativity or possibly serendipitous discovery filtered through human perception and selection, Cope's work is much more concerned with the construction of creative processes.[Adobe Acrobat (PDF) Version]
Dannenberg, Ben Brown, Garth Zeglin, and Ron Lupish, “McBlare: A Robotic Bagpipe Player,” in Proceedings of the International Conference on New Interfaces for Musical Expression, Vancouver: University of British Columbia, (2005), pp. 80-84.
ABSTRACT: McBlare is a robotic bagpipe player developed by the Robotics Institute at Carnegie Mellon University. McBlare plays a standard set of bagpipes, using a custom air compressor to supply air and electromechanical “fingers” to operate the chanter; its playing speed exceeds the measured speed of expert human performers. On the other hand, human performers surpass McBlare in their ability to compensate for limitations and imperfections in reeds, and we discuss future enhancements to address these problems. McBlare has been used to perform traditional bagpipe music as well as experimental computer-generated music.
Dannenberg and Benade, “An Automated Approach to Tuning,” in Proceedings of the 1983 International Computer Music Conference, (October 1983).
ABSTRACT: Conventional keyboard or computer tuning systems suffer from either a lack of “natural” harmonic intervals, or the inability to support modulation. In contrast to conventional fixed-pitch systems, a variable-pitch tuning system allows small changes in pitch to obtain desirable intervals within a framework where modulation is possible. Such a system is practical only when the correct pitch variations can be determined without elaborate notation by the composer. To solve this problem, an algorithm is proposed that computes pitch from a conventional score. A modification to the basic algorithm allows a controlled amount of “equal-temperedness,” and similar algorithms can be applied to microtonal scales.
Dannenberg, “A Structure for Efficient Update, Incremental Redisplay and Undo in Display-Oriented Editors,” Software: Practice and Experience, 20(2) (February 1990), pp. 109-132.
ABSTRACT: The design of a graphical editor requires a solution to a number of problems, including how to (1) support incremental redisplay, (2) control the granularity of display updates, (3) provide efficient access and modification to the underlying data structure, (4) handle multiple views of the same data and (5) support Undo operations. It is most important that these problems be solved without sacrificing program modularity. A new data structure, called an ItemList, provides a solution to these problems. ItemLists maintain both multiple views and multiple versions of data to simplify Undo operations and to support incremental display updates. The implementation of ItemLists is described and the use of ItemLists to create graphical editors is presented.[Adobe Acrobat (PDF) Version]
Dannenberg, “Music Representation Issues, Techniques, and Systems,” Computer Music Journal, 17(3) (Fall 1993), pp. 20-30.
This invited paper is a survey for a special issue of Computer Music Journal on music representation. It covers a lot of ground, including Levels of Representation, Hierarchy and Structure, Extensibility, Pitch, Tempo, Beat, Duration, Time, Timbre, Continuous and Discrete Data, Declarative and Procedural Representations, Resources, Instances and Streams, Protocols, and Coding. I try to describe the current practice and describe the many problems that exist. Many of these problems are still open today.[Postscript Version] [Adobe Acrobat (PDF) Version]
Dannenberg, “A Structure for Representing, Displaying and Editing Music,” in Proceedings of the 1986 International Computer Music Conference, (October 1986), pp. 153-60.
Dannenberg, “Music Representation: A Position Paper,” in Proceedings of the 1989 International Computer Music Conference, Computer Music Association, (October 1989).
Dannenberg, Rubine, and Neuendorffer, “The Resource-Instance Model of Music Representation,” in Proceedings of the 1991 International Computer Music Conference, International Computer Music Association, (October 1991), pp. 428-432.
Traditional software synthesis systems, such as Music V, utilize an instance model of computation in which each note instantiates a new copy of an instrument. An alternative is the resource model, exemplified by MIDI “mono mode,” in which multiple updates can modify a sound continuously, and where multiple notes share a single instrument. We have developed a unified, general model for describing combinations of instances and resources. Our model is a hierarchy in which resource-instances at one level generate output which is combined to form updates to the next level. The model can express complex system configurations in a natural way.[Postscript Version] [Adobe Acrobat (PDF) Version]
Dannenberg, “Abstract Time Warping of Compound Events and Signals,” in Proceedings of the 1994 International Computer Music Conference, International Computer Music Association, (September 1994), pp. 251-254.
Mazzoni and Dannenberg, “A Fast Data Structure for Disk-Based Audio Editing,” in Proceedings of the 2001 International Computer Music Conference, International Computer Music Association, (September 2001), pp. 107-110.
This is the first publication on Audacity. A somewhat expanded article was prepared for Computer Music Journal (see below).[Adobe Acrobat (PDF) Version]
ABSTRACT:Computer music research calls for a good tool to display and edit music and audio information. Finding no suitable tools available that are flexible enough to support various research tasks, we created an open source tool called Audacity that we can customize to support annotation, analysis, and processing. The editor displays large audio files as well as discrete data including MIDI. Our implementation introduces a new data structure for audio that combines the speed of non-destructive editing with the direct manipulation convenience of in-place editors. This paper describes the data structure, its performance, features, and its use in an audio editor.
Mazzoni and Dannenberg, “A Fast Data Structure for Disk-Based Audio Editing,” Computer Music Journal, 26(2), (Summer 2002), pp. 62-76.
This is the first (maybe the only) journal article on Audacity, an open source editor that is perhaps the most popular audio editor to date, having been downloaded hundreds of millions of times.
ABSTRACT:This article examines how to combine the strengths of both in-place and non-destructive approaches to audio editing, yielding an editor that is almost as fast and reversible as a non-destructive editor, while almost as simple and space-efficient as an in-place editor. Although we create an interface that looks like that of an in-place editor, we also support multiple tracks with editable amplitude envelopes. This allows us to manipulate and combine many audio files efficiently.
[Adobe Acrobat (PDF) Version]
Dannenberg, “The Interpretation of MIDI Velocity,” in Proceedings of the 2006 International Computer Music Conference, San Francisco, CA: The International Computer Music Association, (2006), pp. 193-196.
Real synthesizers are measured to find out how manufacturers interpret MIDI velocity.[Adobe Acrobat (PDF) Version]
ABSTRACT:The MIDI standard does not specify how MIDI key velocity is to be interpreted. Of course, individual synthetic instruments respond differently, but one would expect that on average, instruments will respond about the same. This study aims to determine empirically how hardware and software MIDI synthesizers translate velocity to peak RMS amplitude. Analysis shows synthesizers roughly follow an x squared rather than exponential mapping. Given a desired dynamic range (from velocity 1 to 127), a square-law mapping from velocity to RMS is uniquely determined, making dynamic range a convenient way to summarize behavior. Surprisingly, computed values of dynamic range for commercial synthesizers vary by more than 60dB.
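The square-law mapping described in the abstract can be written out directly. Assuming RMS is normalized to 1 at velocity 127 and sits `dynamic_range_db` decibels lower at velocity 1 (an illustrative reading of the stated constraint, not the paper's exact formula), the two constants of rms(v) = (m*v + b)^2 are uniquely determined:

```python
import math

def velocity_to_rms(velocity, dynamic_range_db=60.0):
    """Square-law map from MIDI velocity (1..127) to peak RMS amplitude:
    rms(v) = (m*v + b)**2, with rms(127) = 1 and rms(1) placed
    dynamic_range_db decibels below it, which fixes m and b uniquely."""
    r1 = 10.0 ** (-dynamic_range_db / 40.0)   # sqrt of the RMS ratio
    m = (1.0 - r1) / 126.0                    # from (m*127 + b) = 1
    b = r1 - m                                # from (m*1 + b) = r1
    return (m * velocity + b) ** 2

# The realized dynamic range comes back out as specified:
drop_db = 20 * math.log10(velocity_to_rms(127) / velocity_to_rms(1))
```

Summarizing a synthesizer's behavior by the single `dynamic_range_db` number is exactly what makes the 60 dB spread across commercial synthesizers reported in the abstract so striking.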
See also The Piano Tutor and Interactive Performance.
See Interactive Performance.
Dannenberg and Wasserman, “Estimating the Error Distribution of a Single Tap Sequence without Ground Truth” in Proceedings of the 10th International Conference on Music Information Retrieval (ISMIR 2009), (October 2009), pp. 297-302.
Abstract. Detecting beats, estimating tempo, aligning scores to audio, and detecting onsets are all interesting problems in the field of music information retrieval. In much of this research, it is convenient to think of beats as occurring at precise time points. However, anyone who has attempted to label beats by hand soon realizes that precise annotation of music audio is not possible. A common method of beat annotation is simply to tap along with audio and record the tap times. This raises the question: How accurate are the taps? It may seem that an answer to this question would require knowledge of “true” beat times. However, tap times can be characterized as a random distribution around true beat times. Multiple independent taps can be used to estimate not only the location of the true beat time, but also the statistical distribution of measured tap times around the true beat time. Thus, without knowledge of true beat times, and without even requiring the existence of precise beat times, we can estimate the uncertainty of tap times. This characterization of tapping can be useful for estimating tempo variation and evaluating alternative annotation methods.
[Adobe Acrobat (PDF) Version]
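The key statistical idea can be sketched: for two independent tap sequences with the same per-tap error variance sigma^2, Var(t1 - t2) = 2*sigma^2, so sigma is recoverable from pairwise differences alone, with no ground truth. A hypothetical simulation (all parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
true_beats = np.arange(100) * 0.5     # "true" beat times (unknown in practice)
sigma = 0.02                          # 20 ms per-tap error

# Two independent tap sequences around the same (unobserved) beats.
taps_a = true_beats + rng.normal(0, sigma, size=true_beats.size)
taps_b = true_beats + rng.normal(0, sigma, size=true_beats.size)

# Var(a - b) = 2 * sigma^2, so sigma is estimated without ground truth.
sigma_est = np.std(taps_a - taps_b, ddof=1) / np.sqrt(2)
```

The estimate never references `true_beats`, which is the point: the error distribution is recovered purely from the disagreement between independent annotations.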
Dannenberg, Siewiorek, and Zahler, “Exploring Meaning and Intention in Music Conducting,” in Proceedings of the 2010 International Computer Music Conference, San Francisco: The International Computer Music Association, (August 2010), pp. 327-330.
Conducting is a high-level form of expressive musical communication. The possibility of human-computer interaction through a conducting-based interface to a computer performance system has attracted many computer music researchers. This study explores conducting through interviews with conductors and musicians and also through accelerometers attached to conductors during rehearsals with a (human) orchestra and chamber music group. We found that “real” conducting gestures are much more subtle than “textbook” conducting gestures made in the lab, but we observed a very high correlation between the smoothed RMS amplitudes of conductors' wrist acceleration and the ensembles' audio.[Adobe Acrobat (PDF) Version]
Psychoacoustics, perception, and cognition
See Computer Accompaniment,
and Interactive Performance.
Dannenberg and Mercer, “Real-Time Software Synthesis on Superscalar Architectures,” in Proceedings of the 1992 International Computer Music Conference, International Computer Music Association, (October 1992), pp. 174-177.
Dannenberg and Jameson, “Real-Time Issues in Computer Music,” in Proceedings of the Real-Time Systems Symposium, IEEE Computer Society Press, (December 1993), pp. 258-261.
Thompson and Dannenberg, “Optimizing Software Synthesis Performance,” in Proceedings of the 1995 International Computer Music Conference, International Computer Music Association, (September 1995), pp. 235-6.
Dannenberg, “Interpolation Error in Waveform Table Lookup,” in Proceedings of the 1998 International Computer Music Conference, (1998), pp 240-243.
Previous papers analyzed the interpolation error for sinusoids. This paper looks at interpolation error for arbitrary (harmonic) waveforms, and gives some time/space tradeoffs for higher-order interpolation in software.[Adobe Acrobat (PDF) Version] [HTML Version]
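The setting can be sketched as follows: a harmonic (non-sinusoidal) waveform is stored in a table and read back by phase with linear interpolation, and the result is compared against the exactly evaluated waveform. Table size and harmonic content below are arbitrary choices, not the paper's test conditions:

```python
import numpy as np

TABLE_SIZE = 1024

def waveform(phase):
    """A harmonic (non-sinusoidal) waveform: fundamental plus two
    overtones.  Phase is in [0, 1)."""
    return (np.sin(2 * np.pi * phase)
            + 0.5 * np.sin(2 * np.pi * 2 * phase)
            + 0.25 * np.sin(2 * np.pi * 3 * phase))

table = waveform(np.arange(TABLE_SIZE) / TABLE_SIZE)

def lookup_linear(phase):
    """Table lookup with linear interpolation between adjacent entries."""
    pos = (phase % 1.0) * TABLE_SIZE
    i = int(pos)
    frac = pos - i
    return (1 - frac) * table[i] + frac * table[(i + 1) % TABLE_SIZE]

# Worst-case interpolation error over many phases off the table grid.
phases = np.linspace(0, 1, 10000, endpoint=False)
err = max(abs(lookup_linear(p) - waveform(p)) for p in phases)
```

For linear interpolation the error shrinks roughly with the square of the table step, so doubling the table size buys about a factor of four in accuracy; higher-order interpolation trades computation for table space, which is the tradeoff the paper quantifies.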
Dannenberg, Bernstein, Zeglin, and Neuendorffer, “Sound Synthesis from Video, Wearable Lights, and ‘The Watercourse Way’,” in Proceedings of The Eighth Biennial Symposium on Arts and Technology, New London: Connecticut College, (February 2003), pp. 38-44.
“The Watercourse Way” is a mostly-music interactive multimedia performance for violin, cello, percussion, and dancer. The work uses a computer to process sounds from the performers, to synthesize sound, and to generate computer animation. A novel synthesis technique is introduced in which the sound spectrum is controlled in real time by images of light reflected from a shallow pool of water. In addition, performers wear computer-controlled lights that respond to video and sound input, using a wireless radio link to the computer. This work explores connections between the senses using technology to both sense and generate images and sounds.
[Acrobat (PDF) Version]
Dannenberg and Neuendorffer, “Sound Synthesis from Real-Time Video Images,” in Proceedings of the 2003 International Computer Music Conference, San Francisco: International Computer Music Association, (2003), pp. 385-388.
Digital video offers an interesting source of control information for musical applications. A novel synthesis technique is introduced where digital video controls sound spectra in real time. Light intensity modulates the amplitudes of 32 harmonics in each of several synthesized “voices.” Problems addressed include how to map from video to sound, dealing with global variations in light level, dealing with low frame rates of video relative to high sample rates of audio, and overall system implementation. In one application, images of light reflected from a shallow pool of water are used to control sound, offering a rich tactile interface to sound synthesis.
[Acrobat (PDF) Version]
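The described mapping (light intensity controlling the amplitudes of 32 harmonics) can be sketched with additive synthesis. The video path, frame-rate smoothing, and the “frame” below are placeholders, not the paper's system:

```python
import numpy as np

SR = 44100          # audio sample rate
N_HARMONICS = 32

def frame_to_harmonics(frame):
    """Reduce a grayscale video frame to 32 harmonic amplitudes: split
    the frame into 32 vertical bands and use each band's mean
    brightness, normalized so overall level stays bounded."""
    bands = np.array_split(frame, N_HARMONICS, axis=1)
    amps = np.array([b.mean() for b in bands])
    return amps / (amps.sum() + 1e-9)

def synthesize(amps, f0=220.0, duration=0.1):
    """Additive synthesis: one sinusoidal partial per harmonic amplitude."""
    t = np.arange(int(SR * duration)) / SR
    out = np.zeros_like(t)
    for k, a in enumerate(amps, start=1):
        out += a * np.sin(2 * np.pi * f0 * k * t)
    return out

rng = np.random.default_rng(7)
frame = rng.random((240, 320))        # placeholder "video frame"
audio = synthesize(frame_to_harmonics(frame))
```

A real system must also interpolate the amplitudes between video frames (roughly 30 Hz) to avoid audible stair-stepping at the audio rate, one of the problems the paper addresses.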
Dannenberg, “Concatenative Synthesis Using Score-Aligned Transcriptions,” in Proceedings of the 2006 International Computer Music Conference, San Francisco, CA: The International Computer Music Association, (2006), pp. 352-355.
See Spectral Interpolation
Dannenberg, McAvinney, and Thomas, “Carnegie-Mellon University Studio Report,” in Proceedings of the 1984 International Computer Music Conference, Computer Music Association, (June 1985), pp. 281-286.
Dannenberg, “Systemes pour Informatique Musicale a l'universite de Carnegie Mellon” [Computer Music Systems at Carnegie Mellon University], in Actes du Symposium “Systemes Personnels et Informatique Musicale” [Proceedings of the Symposium “Personal Systems and Computer Music”], IRCAM, Paris, France, 1987.