Bibliography by Subject for Roger Dannenberg

(Click here for resume-style bibliography)

Multimedia

The Tactus Project and Related Synchronization Papers

Blattner and Dannenberg, eds., Multimedia Interface Design, ACM Press, 1992. (Also published in Chinese, 1994.)

This book resulted from a workshop at CHI'90. It's a collection of papers organized by general topic. At the time, corporations (esp. Apple) were pushing what we thought was a very narrow view of multimedia and we thought it would be a good idea to present a collection that spanned a wide range of technology and applications.

Rob Fisher, Paul Vanouse, Roger Dannenberg, and Jeff Christensen, "Audience Interactivity: A Case Study in Three Perspectives Including Remarks About a Future Production," in Proceedings of the Sixth Biennial Symposium for Arts and Technology, Connecticut College, (February 1997).

Audience interactivity was a primary element of a major planetarium production about cell biology entitled "Journey into the Living Cell." The artist/authors were directly involved with the design of the production from concept to realization. Rob Fisher was the Project, Artistic and Technical Director. Paul Vanouse was Assistant Director responsible for the design and production of the interactive visual portions of the show. Roger Dannenberg directed the interactive audio portions and was responsible for the interactive audio system with the assistance of Jeff Christensen. The following paper provides background about the production and our varied perspectives on the use of the innovative interactive system. In addition, a future production currently pending approval of an NSF grant will be described. This new show about the brain builds upon the experiences gained in the cell project and sheds light on features of audience interactivity that point to some startling conclusions about group behavior.
[Acrobat Version] [Postscript Version] [HTML Version]

Roger Dannenberg and Rob Fisher, "An Audience-Interactive Multimedia Production on the Brain" in Proceedings of the Symposium for Arts and Technology, Connecticut College, (March 2001).

A multimedia planetarium show, “Gray Matters: The Brain Movie,” was created to teach fundamental scientific concepts about the human brain. During the show, the planetarium dome represents a giant brain enclosing the audience. Audience members play the role of neurons in various simulations and representations of brain function. This leads to new ways of thinking about audience interactivity in theaters, with many applications to art and entertainment. Some of the problems of large art/science collaborations are also discussed.
[Acrobat Version] [HTML Version]

Dannenberg, Witkin, and Fisher, Method and apparatus for interactive audience participation by audio command. US Patent #798382, 1997.

An interactive audience participation system which utilizes audio command signals, such as loudness or sound intensity, transmitted by different audience groups. These respective audio command signals are detected to determine the aggregate of the signals for each group and then the detected aggregates are converted into data. An audience sensed interactive communication medium device, such as a large stadium video screen, is correspondingly manipulated by the detected data so that the audience may play a competitive or collaborative game.

Dannenberg and Bernstein, "'Origin, Direction, Location': An Installation," in Proceedings of the 10th Biennial Symposium on Arts and Technology, New London, Connecticut: Connecticut College, (2006).

An interactive installation uses microphones to capture sounds from participants. Sounds activate images, causing them to display and transform. Sounds are also processed, mixed, and fed back into the gallery space. Inspired by Buddhist teachings, the piece emerges from and is created by the participants.
[Acrobat Version]

Real-Time Control

See Functional Languages for Real-Time Control, Interactive Performance, Real-Time Scheduling/Dispatching, and The Tactus Project and Related Synchronization Papers.


Dannenberg and Jameson, "Real-Time Issues in Computer Music," in Proceedings of the Real-Time Systems Symposium, IEEE Computer Society Press, (December 1993), pp. 258-261.


Program Verification

See Program Verification

Programming Languages and Operating Systems

Dannenberg and P. Hibbard, "A Butler Process for Resource Sharing on Spice Machines," Transactions on Office Information Systems, Vol. 3, No. 3 (July 1985), pp. 234-252.

A network of personal computers may contain a large amount of distributed computing resources. For a number of reasons it is desirable to share these resources, but sharing is complicated by issues of security and autonomy. A process known as the Butler addresses these problems and provides support for resource sharing. The Butler relies upon a capability-based accounting system called the Banker to monitor the use of local resources.
[Acrobat Version]

Dannenberg, "Protection for Communication and Sharing in A Personal Computer Network," in Proceedings of the Fifth International Conference on Distributed Computer Systems, (May 1985), pp. 88-98.

Dannenberg, "AMPL: Design, Implementation, and Evaluation of A Multiprocessing Language," CMU Tech Report CMU-CS-82-116, 1982.

Dannenberg, "Resource Sharing In A Network Of Personal Computers," CMU, 1982 (Ph.D. Thesis)

Brandt and Dannenberg, "Low-Latency Music Software Using Off-The-Shelf Operating Systems," in Proceedings of the International Computer Music Conference, San Francisco: International Computer Music Association, (1998), pp. 137-141.


Human Computer Interaction

Myers, Giuse, Dannenberg, Vander Zanden, Kosbie, Pervin, Mickish, and Marchal, "Garnet: Comprehensive Support for Graphical, Highly Interactive User Interfaces," Computer 23(11), 1990, pp. 71-85.

Dannenberg and Amon, "A Gesture Based User Interface Prototyping System," in Proceedings of the ACM SIGGRAPH Symposium on User Interface Software and Technology (November 1989), pp. 127-132.

Myers, Vander Zanden, and Dannenberg, "Creating Graphical Interactive Application Objects by Demonstration," in Proceedings of the ACM SIGGRAPH Symposium on User Interface Software and Technology (November 1989), pp. 95-104.


Education

Capell and Dannenberg, "Instructional Design and Intelligent Tutoring: Theory and the Precision of Design," Journal of Artificial Intelligence in Education 4(1), 1993, pp. 95-121.
ABSTRACT: Instructional Design aspires to define a sound curriculum by using instructional analysis and concept organization. Along with other criteria, the purpose of instructional design is to ensure integrity among instructional objectives, tasks that students must perform, and the evaluation of their performance. Currently, the methods used in instructional design models have a limited scientific basis. Even with many efforts towards a science of instruction, this goal remains elusive. Computers may provide a positive shift towards systematic and verifiable instructional analysis with the advent of intelligent tutoring systems and the byproducts of their development. One such system, the Piano Tutor, has led to a formal model for curriculum design and analysis and is described in detail.
[Acrobat Version]

Frances K. Dannenberg, Roger B. Dannenberg, and Philip Miller, "Teaching Programming to Musicians," in 1984 Proceedings of the Fourth Annual Symposium on Small Computers in the Arts (October 1984), pp. 114-122.

ABSTRACT: A new approach has been developed for teaching programming to musicians. The approach uses personal computers with music synthesis capabilities, and students write programs in order to realize musical compositions. Our curriculum emphasizes abstraction in programming by the early introduction of high-level concepts and the late introduction of programming language details. We also emphasize abstraction by relating programming concepts to musical concepts which are already familiar to our students. We have successfully used this curriculum to teach Pascal to children and we are presently using it in a university-level course for composers.
[Acrobat Version]

See also: The Piano Tutor


Computer Systems

Dannenberg, "An Architecture With Many Operand Registers to Efficiently Execute Block Structured Languages," in Proceedings of the 6th Annual Symposium on Computer Architecture, pp. 50-57, 1979.

Dannenberg, "On Machine Architecture for Structured Programs," Communications of the Association for Computing Machinery, 22(5) (May 1979), p. 311 (technical correspondence).


Computer Music

The single-level ICMA taxonomy is listed here, with my publications inserted at the appropriate points.

Acoustics of musical instruments and voice

Aesthetics, philosophy, and criticism

Dannenberg and Bates, "A Model for Interactive Art," in Proceedings of the Fifth Biennial Symposium for Arts and Technology, Connecticut College, (March 1995), pp. 103-111.

ABSTRACT: The new technologies of computer systems and artificial intelligence enable new directions in art. One new direction is the creation of highly interactive works based on computation. We describe several interactive artworks and show that there are strong similarities that transcend categories such as drama, music, and dance. Examining interactive art as a general approach, we identify one important dimension of variability having to do with the degree to which the art is focused on the process of interaction as opposed to generating a final product. The model that results from our analysis suggests future directions and forms of interactive art. We speculate what some of these new forms might be like.
[Acrobat Version] [Postscript Version]

Aesthetics: see also

Artificial intelligence in music

See Music Understanding, Computer Accompaniment, Beat Tracking, Performance Style Classification, and The Piano Tutor.

Thom and Dannenberg, "Predicting Chords in Jazz," in Proceedings of the 1995 International Computer Music Conference, International Computer Music Association, (September 1995), pp. 237-8.


Tzanetakis, Hu, and Dannenberg. "Toward an Intelligent Editor for Jazz Music," in Ebroul Izquierdo, ed., Digital Media Processing for Multimedia Interactive Services (Proceedings of the 4th European Workshop on Image Analysis for Multimedia Interactive Services), Singapore: World Scientific Press (2002), pp. 332-333.

Audio analysis and resynthesis

Woodruff, Pardo, and Dannenberg, "Remixing Stereo Music with Score-Informed Source Separation," in ISMIR 2006 7th International Conference on Music Information Retrieval Proceedings, Victoria, BC, Canada: University of Victoria, October 2006, pp. 314-319.


Dannenberg, "An Intelligent Multi-Track Audio Editor," in Proceedings of the 2007 International Computer Music Conference, Volume II. San Francisco: The International Computer Music Association, (August 2007), pp. II-89 - 94.

ABSTRACT: Audio editing software allows multi-track recordings to be manipulated by moving notes, correcting pitch, and making other fine adjustments, but this is a tedious process. An "intelligent audio editor" uses a machine-readable score as a specification for the desired performance and automatically makes adjustments to note pitch, timing, and dynamic level.

Adobe Acrobat (PDF) Version


See also Spectral Interpolation

Audio hardware design

Composition systems and techniques

Diffusion, sonorization


William D. Haines, Jesse R. Vernon, Roger B. Dannenberg, and Peter F. Driessen, "Placement of Sound Sources in the Stereo Field Using Measured Room Impulse Responses," in Proceedings of the 2007 International Computer Music Conference, Volume I. San Francisco: The International Computer Music Association, (August 2007), pp. I-496 - 499.
This paper was selected for inclusion in a post-proceedings publication (see below), which contains an expanded version of the paper. Subsequent research (unpublished) revealed some important new results. First, our model of commercial convolution-based reverb was not complete. While we did find some use of the simple model in our paper (left channel convolved with a left impulse response and right channel convolved with a right impulse response), the best-sounding commercial reverb we found uses 4 impulse responses: left input to left output, left input to right output, right input to left output, and right input to right output. This begins to approximate our "placed" convolution reverb, which does two convolutions (left and right) on each of N inputs (sound sources). In our tests, N = 3, which is not far from the commercial systems (N = 2). Our recent tests with only 10 subjects did not show a significant preference, and to our ears, the results are only better with certain materials. We want to explore what conditions favor "placed" convolution and do a larger study in the future.

ABSTRACT. Current advances in techniques have made it possible to simulate reverberation effects in real world performance spaces by convolving dry instrument signals with physically measured impulse response data. Such reverberation effects have recently become commonplace; however, current techniques apply a single effect to an entire ensemble, and then separate individual instruments in the stereo field via panning. By measuring impulse response data from each instrument's desired location, it is possible to place instruments in the stereo field using their unique initial reflection and reverberation patterns. A pilot study compares the perceived quality of dry signals convolved to stereo center, convolved to stereo center and panned to desired placement, and convolved with measured impulse responses to simulate actual placement. The results of a single blind study show a conclusive preference for location-based reverberation effects.

Adobe Acrobat (PDF) Version
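The "placed" convolution described in the note above, two convolutions (a measured left and right impulse response) per dry source, can be sketched as follows. This is a minimal illustration with toy data; the function names are ours, not code from the paper.

```python
def convolve(x, h):
    """Direct-form convolution of two sample lists."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def place_sources(sources, impulse_responses):
    """Mix N dry sources into a stereo field: each source is convolved with
    its own measured (left, right) impulse-response pair -- 2*N convolutions
    in place of one global stereo reverb followed by panning."""
    length = max(len(s) + max(len(l), len(r)) - 1
                 for s, (l, r) in zip(sources, impulse_responses))
    left = [0.0] * length
    right = [0.0] * length
    for src, (ir_l, ir_r) in zip(sources, impulse_responses):
        for i, v in enumerate(convolve(src, ir_l)):
            left[i] += v
        for i, v in enumerate(convolve(src, ir_r)):
            right[i] += v
    return left, right
```

With real room measurements the impulse responses would be thousands of samples long and the convolutions done with FFTs, but the signal flow is the same.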


William D. Haines, Jesse R. Vernon, Roger B. Dannenberg, and Peter F. Driessen, "Placement of Sound Sources in the Stereo Field Using Measured Room Impulse Responses." In R. Kronland-Martinet, S. Ystad, and K. Jensen (Eds.): Computer Music Modeling and Retrieval. Sense of Sounds, Lecture Notes in Computer Science LNCS 4969, Berlin and Heidelberg: Springer-Verlag, pp. 276-287, 2008.
Please see my notes above about subsequent research.

ABSTRACT. Reverberation can be simulated by convolving dry instrument signals with physically measured impulse response data. Such reverberation effects have recently become commonplace; however, current techniques apply a single effect to an entire ensemble, and then separate individual instruments in the stereo field via panning. By measuring impulse response data from each desired instrument location, it is possible to place instruments in the stereo field using their unique early reflection and reverberation patterns without panning. A pilot study compares the perceived quality of dry signals convolved to stereo center, convolved to stereo center and panned to desired placement, and convolved with measured impulse responses to simulate placement. The results of a single blind study show a preference for location-based (as opposed to panning-based) reverberation effects.

Adobe Acrobat (PDF) Version


History of electroacoustic music

Interactive performance systems

See Interactive Performance

Machine recognition of audio signals

Machine recognition of music data

See Music Understanding, Beat Tracking, Performance Style Classification, Artificial Intelligence in Music, and Computer Accompaniment.

MIDI applications

Xavier Chabot, Roger Dannenberg, Georges Bloch, "A Workstation in Live Performance: Composed Improvisation," in Proceedings of the 1986 International Computer Music Conference, (October 1986), pp. 57-60.

Dannenberg and Bookstein, "Practical Aspects of a Midi Conducting Program," in Proceedings of the 1991 International Computer Music Conference, International Computer Music Association, (October 1991), pp. 537-540.

ABSTRACT: A MIDI-based conducting program was implemented to allow a conductor to control the tempo of a MIDI performance that accompanies a live performer. The tempo is controlled by tapping beats on a keyboard. A number of features were added in the process of preparing for a large-scale performance, a concerto for live piano and MIDI orchestra and chorus. This experience led to a number of practical suggestions.
[Adobe Acrobat (PDF) Version]

Miscellaneous

Dannenberg, "Foundations of Computer Music edited by Curtis Roads and John Strawn (Book Review)," Journal of the Acoustical Society of America, 78(6), (December 1985), pp. 2154-5.
[Adobe Acrobat (PDF) Version]

Dannenberg, ed., Computer Music Video Review, International Computer Music Association (video), 1991.


Lee, Dannenberg, and Chun. "Cancellation of Unwanted Audio to Support Interactive Computer Music," in The ICMC 2004 Proceedings, San Francisco: The International Computer Music Association, (2004), pp. 692-698.

ABSTRACT: A real-time unwanted-audio cancellation system is developed. The system enhances recorded sound by canceling unwanted loudspeaker sounds picked up during the recording. After cancellation, the resulting sound gives an improved estimation of the live performer's sound. The cancellation works by estimating the unwanted audio signal and subtracting it from the recorded signal. The canceller is composed of a delay block and two adaptive digital filters. Our work extends conventional echo-cancellation methods to address problems we encountered in music applications. We describe a realtime implementation in Aura and present experimental results in which the proposed canceller enhances the performance of a real-time pitch detector. The cancellation ratio is measured and limitations of the system are discussed.

[Adobe Acrobat (PDF) Version]

Dannenberg, "Book Review: David Cope, Computer Models of Musical Creativity," Artificial Intelligence 170 (November 2006), pp. 1218-1221.


Dannenberg, Ben Brown, Garth Zeglin, and Ron Lupish, "McBlare: A Robotic Bagpipe Player," in Proceedings of the International Conference on New Interfaces for Musical Expression, Vancouver: University of British Columbia, (2005), pp. 80-84.

ABSTRACT: McBlare is a robotic bagpipe player developed by the Robotics Institute at Carnegie Mellon University. McBlare plays a standard set of bagpipes, using a custom air compressor to supply air and electromechanical "fingers" to control the chanter; the speed of these fingers exceeds the measured speed of expert human performers. On the other hand, human performers surpass McBlare in their ability to compensate for limitations and imperfections in reeds, and we discuss future enhancements to address these problems. McBlare has been used to perform traditional bagpipe music as well as experimental computer generated music.

[Adobe Acrobat (PDF) Version]

Music analysis

Dannenberg and Benade, "An Automated Approach to Tuning," in Proceedings of the 1983 International Computer Music Conference, (October 1983).

ABSTRACT: Conventional keyboard or computer tuning systems suffer from either a lack of "natural" harmonic intervals, or the inability to support modulation. In contrast to conventional fixed-pitch systems, a variable-pitch tuning system allows small changes in pitch to obtain desirable intervals within a framework where modulation is possible. Such a system is practical only when the correct pitch variations can be determined without elaborate notation by the composer. To solve this problem, an algorithm is proposed that computes pitch from a conventional score. A modification to the basic algorithm allows a controlled amount of "equal-temperedness," and similar algorithms can be applied to microtonal scales.

[Adobe Acrobat (PDF) Version]

Music and graphics

Music data structures and representations

Dannenberg, "A Structure for Efficient Update, Incremental Redisplay and Undo in Display-Oriented Editors," Software: Practice and Experience, 20(2) (February 1990), pp. 109-132.

Dannenberg, "Music Representation Issues, Techniques, and Systems," Computer Music Journal, 17(3) (Fall 1993), pp. 20-30.

This invited paper is a survey for a special issue of Computer Music Journal on music representation. It covers a lot of ground, including Levels of Representation, Hierarchy and Structure, Extensibility, Pitch, Tempo, Beat, Duration, Time, Timbre, Continuous and Discrete Data, Declarative and Procedural Representations, Resources, Instances and Streams, Protocols, and Coding. I try to describe the current practice and describe the many problems that exist. Many of these problems are still open today.
[Postscript Version] [Adobe Acrobat (PDF) Version]

Dannenberg, "A Structure for Representing, Displaying and Editing Music," in Proceedings of the 1986 International Computer Music Conference, (October 1986), pp. 153-60.

Dannenberg, "Music Representation: A Position Paper," in 1989 International Computer Music Conference, Computer Music Association, (October 1989), pp. 73-75.


Dannenberg, Rubine, and Neuendorffer, "The Resource-Instance Model of Music Representation," in Proceedings of the 1991 International Computer Music Conference, International Computer Music Association, (October 1991), pp. 428-432.

Traditional software synthesis systems, such as Music V, utilize an instance model of computation in which each note instantiates a new copy of an instrument. An alternative is the resource model, exemplified by MIDI "mono mode", in which multiple updates can modify a sound continuously, and where multiple notes share a single instrument. We have developed a unified, general model for describing combinations of instances and resources. Our model is a hierarchy in which resource-instances at one level generate output which is combined to form updates to the next level. The model can express complex system configurations in a natural way.
[Postscript Version] [Adobe Acrobat (PDF) Version]

Dannenberg, "Abstract Time Warping of Compound Events and Signals," in Proceedings of the 1994 International Computer Music Conference, International Computer Music Association, (September 1994), pp. 251-254.


Mazzoni and Dannenberg, "A Fast Data Structure for Disk-Based Audio Editing," in Proceedings of the 2001 International Computer Music Conference, International Computer Music Association, (September 2001), pp. 107-110.

This is the first publication on Audacity. A somewhat expanded article was prepared for Computer Music Journal (see below).

ABSTRACT: Computer music research calls for a good tool to display and edit music and audio information. Finding no suitable tools available that are flexible enough to support various research tasks, we created an open source tool called Audacity that we can customize to support annotation, analysis, and processing. The editor displays large audio files as well as discrete data including MIDI. Our implementation introduces a new data structure for audio that combines the speed of non-destructive editing with the direct manipulation convenience of in-place editors. This paper describes the data structure, its performance, features, and its use in an audio editor.

[Adobe Acrobat (PDF) Version]

Mazzoni and Dannenberg, ``A Fast Data Structure for Disk-Based Audio Editing,'' Computer Music Journal, 26(2), (Summer 2002), pp. 62-76.

[Adobe Acrobat (PDF) Version]


Dannenberg, "The Interpretation of MIDI Velocity," in Proceedings of the 2006 International Computer Music Conference, San Francisco, CA: The International Computer Music Association, (2006), pp. 193-196.

Real synthesizers are measured to find out how manufacturers interpret MIDI velocity.

ABSTRACT: The MIDI standard does not specify how MIDI key velocity is to be interpreted. Of course, individual synthetic instruments respond differently, but one would expect that on average, instruments will respond about the same. This study aims to determine empirically how hardware and software MIDI synthesizers translate velocity to peak RMS amplitude. Analysis shows synthesizers roughly follow an x-squared rather than exponential mapping. Given a desired dynamic range (from velocity 1 to 127), a square-law mapping from velocity to RMS is uniquely determined, making dynamic range a convenient way to summarize behavior. Surprisingly, computed values of dynamic range for commercial synthesizers vary by more than 60 dB.

[Adobe Acrobat (PDF) Version]
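The square law the abstract describes is pinned down by two constraints: full-scale RMS at velocity 127 and a chosen dB span down to velocity 1. A hedged sketch of such a mapping (the function name and normalization are ours, not the paper's):

```python
import math

def velocity_to_rms(velocity, dynamic_range_db=60.0):
    """Square-law velocity map: RMS(v) = (m*v + b)**2, with m and b fixed
    by RMS(127) = 1.0 (full scale) and a given dB span from velocity 1 to
    velocity 127."""
    s = 10 ** (-dynamic_range_db / 40)   # sqrt of the amplitude ratio
    m = (1 - s) / 126
    b = s - m
    return (m * velocity + b) ** 2
```

Because the two endpoints determine the curve completely, quoting the dynamic range (here 60 dB) summarizes the whole velocity response.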

Music education

Cammuri, Dannenberg, and De Poli, "Instruction of Computer Music for Computer Engineering Students and Professionals," in Proceedings of the 1994 International Computer Music Conference, International Computer Music Association, (September 1994), p. 487.

See also The Piano Tutor

Music grammars

Music languages

See Functional Languages for Real-Time Control and Interactive Performance.

Music printing

Music workstations

Dannenberg, McAvinney, Thomas, Bloch, Rubine, and Serra, "A Project in Computer Music: The Musician's Workbench," in Advances in Computing and the Humanities, edited by Ephraim Nissan, (to appear).

Optical music recognition

Performance interfaces

See Interactive Performance.


Dannenberg and Wasserman, "Estimating the Error Distribution of a Single Tap Sequence without Ground Truth," in Proceedings of the 10th International Conference on Music Information Retrieval (ISMIR 2009), (October 2009), pp. 297-302.

Abstract. Detecting beats, estimating tempo, aligning scores to audio, and detecting onsets are all interesting problems in the field of music information retrieval. In much of this research, it is convenient to think of beats as occurring at precise time points. However, anyone who has attempted to label beats by hand soon realizes that precise annotation of music audio is not possible. A common method of beat annotation is simply to tap along with audio and record the tap times. This raises the question: How accurate are the taps? It may seem that an answer to this question would require knowledge of "true" beat times. However, tap times can be characterized as a random distribution around true beat times. Multiple independent taps can be used to estimate not only the location of the true beat time, but also the statistical distribution of measured tap times around the true beat time. Thus, without knowledge of true beat times, and without even requiring the existence of precise beat times, we can estimate the uncertainty of tap times. This characterization of tapping can be useful for estimating tempo variation and evaluating alternative annotation methods.

[Adobe Acrobat (PDF) Version.]


Dannenberg, Siewiorek, and Zahler, "Exploring Meaning and Intention in Music Conducting," in Proceedings of the 2010 International Computer Music Conference, San Francisco: The International Computer Music Association, (August 2010), pp. 327-330.

Abstract. Conducting is a high-level form of expressive musical communication. The possibility of human-computer interaction through a conducting-based interface to a computer performance system has attracted many computer music researchers. This study explores conducting through interviews with conductors and musicians and also through accelerometers attached to conductors during rehearsals with a (human) orchestra and chamber music group. We found that “real” conducting gestures are much more subtle than “textbook” conducting gestures made in the lab, but we observed a very high correlation between the smoothed RMS amplitudes of conductors’ wrist acceleration and the ensembles’ audio.

[Adobe Acrobat (PDF) Version.]


Psychoacoustics, perception, and cognition

Real-time hardware

Real-time software

See Computer Accompaniment, Real-Time Scheduling/Dispatching, and Interactive Performance.

Dannenberg and Mercer, "Real-Time Software Synthesis on Superscalar Architectures," in Proceedings of the 1992 International Computer Music Conference, International Computer Music Association, (October 1992), pp. 174-177.

Dannenberg and Jameson, "Real-Time Issues in Computer Music," in Proceedings of the Real-Time Systems Symposium, IEEE Computer Society Press, (December 1993), pp. 258-261.

Thompson and Dannenberg, "Optimizing Software Synthesis Performance," in Proceedings of the 1995 International Computer Music Conference, International Computer Music Association, (September 1995), pp. 235-6.

Room acoustics

Sound synthesis languages

See Fugue and Nyquist in Functional Languages for Real-Time Control

Sound synthesis methods

Dannenberg and Benade, "An Automated Approach to Tuning," in Proceedings of the 1983 International Computer Music Conference, (October 1983).

Dannenberg, "Interpolation Error in Waveform Table Lookup," in Proceedings of the 1998 International Computer Music Conference, (1998), pp. 240-243.

Previous papers analyzed the interpolation error for sinusoids. This paper looks at interpolation error for arbitrary (harmonic) waveforms, and gives some time/space tradeoffs for higher-order interpolation in software.
[Adobe Acrobat (PDF) Version] [HTML Version]
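The technique whose error the paper analyzes, wavetable lookup with linear interpolation, can be sketched as a minimal Python oscillator. The paper treats arbitrary harmonic waveforms; this sketch tabulates a single sine cycle:

```python
import math

def table_lookup_osc(freq, sr, n_samples, table_size=256):
    """Wavetable oscillator with linear interpolation: one cycle of a sine
    is tabulated, and each output sample interpolates between the two
    nearest table entries at the current (fractional) phase."""
    table = [math.sin(2 * math.pi * i / table_size) for i in range(table_size)]
    phase = 0.0
    inc = freq * table_size / sr            # table entries per output sample
    out = []
    for _ in range(n_samples):
        i = int(phase)
        frac = phase - i
        out.append(table[i] + frac * (table[(i + 1) % table_size] - table[i]))
        phase = (phase + inc) % table_size
    return out
```

For a 256-entry sine table the worst-case linear-interpolation error is on the order of 1e-4, which is the kind of error bound (and table-size/interpolation-order tradeoff) the paper quantifies for general waveforms.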

Dannenberg, Bernstein, Zeglin, and Neuendorffer, "Sound Synthesis from Video, Wearable Lights, and 'The Watercourse Way'," in Proceedings of the Eighth Biennial Symposium on Arts and Technology, New London: Connecticut College, (February 2003), pp. 38-44.

"The Watercourse Way" is a mostly-music interactive multimedia performance for violin, cello, percussion, and dancer. The work uses a computer to process sounds from the performers, to synthesize sound, and to generate computer animation. A novel synthesis technique is introduced in which the sound spectrum is controlled in real time by images of light reflected from a shallow pool of water. In addition, performers wear computer-controlled lights that respond to video and sound input, using a wireless radio link to the computer. This work explores connections between the senses using technology to both sense and generate images and sounds.
[Acrobat (PDF) Version]

Dannenberg and Neuendorffer. "Sound Synthesis from Real-Time Video Images," in Proceedings of the 2003 International Computer Music Conference. San Francisco: International Computer Music Association, (2003), pp. 385-388.

Digital video offers an interesting source of control information for musical applications. A novel synthesis technique is introduced where digital video controls sound spectra in real time. Light intensity modulates the amplitudes of 32 harmonics in each of several synthesized "voices." Problems addressed include how to map from video to sound, dealing with global variations in light level, dealing with low frame rates of video relative to high sample rates of audio, and overall system implementation. In one application, images of light reflected from a shallow pool of water are used to control sound, offering a rich tactile interface to sound synthesis.
[Acrobat (PDF) Version]
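The video-to-harmonics mapping can be sketched for a single frame: each of the 32 intensity values scales one harmonic of the fundamental. The function name and the normalization by total intensity are our assumptions, not details from the paper:

```python
import math

def frame_to_audio(harmonic_amps, f0, sr, n_samples):
    """Additive synthesis driven by one video frame: harmonic_amps holds
    32 light-intensity values, each scaling one harmonic of f0.  Output
    is normalized by the total intensity so overall light level does not
    swamp the spectrum shape."""
    total = sum(harmonic_amps) or 1.0
    return [sum(a * math.sin(2 * math.pi * (h + 1) * f0 * t / sr)
                for h, a in enumerate(harmonic_amps)) / total
            for t in range(n_samples)]
```

A real-time system would also crossfade between frames to bridge the gap between video frame rates (tens of Hz) and audio sample rates (tens of kHz), one of the problems the abstract names.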

Dannenberg, "Concatenative Synthesis Using Score-Aligned Transcriptions," in Proceedings of the 2006 International Computer Music Conference, San Francisco, CA: The International Computer Music Association, (2006), pp. 352-355.


See Spectral Interpolation

Studio reports

Dannenberg, "Computer Music at Carnegie Mellon University," in Music Processing, Goffredo Haus, ed., A-R Editions, 1993, pp. 303-333.

Dannenberg, McAvinney, and Thomas, "Carnegie-Mellon University Studio Report," in Proceedings of the 1984 International Computer Music Conference, Computer Music Association, (June 1985), pp. 281-286.

Dannenberg, "Systèmes pour Informatique Musicale à l'Université de Carnegie Mellon" ("Computer Music Systems at Carnegie Mellon University"), in Actes du Symposium "Systèmes Personnels et Informatique Musicale," IRCAM, Paris, France, 1987.