Carnegie Mellon Computer Music Group

Research Seminars & Other Events

We meet approximately once every two to three weeks during the Fall and Spring semesters to discuss the latest in computer music and sound synthesis. EMAIL LIST: If you would like to be added to our email list to be informed about future presentations, please send email to Tom Cortina (username: tcortina, domain: cs.cmu.edu).

For other semesters, click here:
FALL 2014 | FALL 2013 | SPRING 2013| FALL 2012 | SPRING 2009 | FALL 2008 | SPRING 2008 | FALL 2007 | SPRING 2007 | FALL 2006 | SPRING 2006 | FALL 2005 | SUMMER 2005 | SPRING 2005

SPRING 2007

SPECIAL EVENTS

Friday, May 4
Computer Music 2007
A unique concert of computer music by students in 15-322.
8PM - Alumni Concert Hall, College of Fine Arts, CMU - FREE

Sunday, April 15
Critical Point
A new work by Roger Dannenberg for cello and interactive computer music.
8PM - Bellefield Hall in Oakland (between Forbes and Fifth, across from the Heinz Chapel and Cathedral of Learning) - FREE
Part of the U3 Festival
Cellist: Hampton Mallory of the Pittsburgh Symphony
Dedicated to the memory of Rob Fisher

The performance will be based on Roger's development of a multi-threaded message-passing system using his own real-time scripting language combined with a visual programming language for highly optimized data-flow-style signal processing. Four-channel sound will expand the cello to new dimensions.
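
As a very rough illustration of the message-passing idea only (not Roger's actual system, whose languages and internals are not described here), the Python sketch below has a control thread hand messages to a signal-processing loop through a queue; every name in it is hypothetical.

    # Hypothetical sketch of a message-passing design for interactive pieces:
    # a control ("scripting") thread sends messages to a signal-processing loop.
    # This is illustrative only, not the system used in the performance.
    import queue, threading

    messages = queue.Queue()

    def control_thread():
        # The "scripting" side: generate a short sequence of control messages.
        for pitch in (60, 62, 64, 65):
            messages.put({"cmd": "note", "pitch": pitch})
        messages.put({"cmd": "stop"})

    def dsp_loop():
        # The "data-flow" side: consume messages and apply them to the audio graph.
        while True:
            msg = messages.get()
            if msg["cmd"] == "stop":
                break
            print("apply", msg)   # a real system would update synthesis parameters here

    threading.Thread(target=control_thread).start()
    dsp_loop()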

RESEARCH SEMINARS

Thursday, April 26 - 1:30-2:30PM Newell-Simon 3001
Speaker: Sofia Cavaco
Topic: Data-driven Modeling of Intrinsic Structures in Impact Sounds

A struck object produces sound that depends on the way the object vibrates. This sound is determined by physical properties of the object, such as its size, geometry, and material, and also by the characteristics of the event, such as the force and location of impact. It is possible to derive physical models of impact sounds given the relationship between the physical and dynamic properties of the object and the acoustics of the resulting sound. Models of sounds have proven useful in many fields, such as sound recognition, identification of events or properties (e.g., material or length) of the objects involved, sound synthesis, virtual reality, and computer graphics. However, physical models are limited because of the a priori knowledge they require and because they do not successfully model all the complexities and variability of real sounds. We propose data-driven methods for learning the intrinsic features that govern the acoustic structure of impact sounds. The methods characterize the structures that are common to sounds of the same type as well as their variability (for instance, if impacts on the same rod have a ringing property, the methods should be able to learn a characterization of this intrinsic structure and also capture the subtle differences that make this ringing property sound slightly different from one impact to another). The methods require no a priori knowledge and aim at low-dimensional characterizations of the sounds. In addition, they are not restricted to learning an explicit set of properties of the sounds (e.g., basic features such as decay rate and average spectra); instead, they learn the properties that best characterize the statistics of the data.
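
The abstract does not name a specific algorithm, so the following Python sketch stands in for the general idea only: represent many impact recordings by their spectra and let a generic dimensionality-reduction step (here ordinary PCA, purely as a placeholder) find a low-dimensional characterization rather than hand-picking features.

    # Illustrative stand-in for "data-driven, low-dimensional characterization
    # of impact sounds"; the talk's actual methods are not specified here, and
    # PCA is used only as a generic placeholder for learning structure from data.
    import numpy as np

    rng = np.random.default_rng(0)
    sr, dur = 8000, 0.25
    t = np.arange(int(sr * dur)) / sr

    def impact(f0, decay):
        """Toy impact: a few decaying partials plus a little noise."""
        partials = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in (1, 2, 3))
        return np.exp(-decay * t) * partials + 0.01 * rng.standard_normal(t.size)

    # A small "data set" of impacts with varying pitch and damping.
    sounds = [impact(f0, decay) for f0 in (200, 300, 400) for decay in (10, 20, 40)]

    # Represent each sound by its log-magnitude spectrum.
    X = np.array([np.log1p(np.abs(np.fft.rfft(s))) for s in sounds])

    # PCA via SVD: the leading components capture structure shared by the sounds,
    # and each sound's coordinates on them give a low-dimensional code.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    codes = Xc @ Vt[:3].T          # 3-dimensional characterization of each impact
    print(codes.round(2))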

Thursday, April 5 - 1:30-2:20PM Newell-Simon Hall 3001
Speaker: Wei You
Topic: Ongoing Research

Wei will discuss some new investigations from his research on note onset detection.
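
For background only (the announcement does not say which method Wei uses), a common baseline for onset detection is spectral flux; the Python sketch below shows that baseline with arbitrary parameter values and should not be read as a description of his work.

    # Background sketch of a standard spectral-flux onset detector; not Wei's
    # method. Frame size, hop, and threshold factor are arbitrary choices.
    import numpy as np

    def onset_times(signal, sr, frame=1024, hop=512, k=1.5):
        """Return rough onset times (in seconds) where spectral flux jumps."""
        window = np.hanning(frame)
        frames = [signal[i:i + frame] * window
                  for i in range(0, len(signal) - frame, hop)]
        mags = np.abs(np.fft.rfft(frames, axis=1))
        # Spectral flux: summed positive change in magnitude between frames.
        flux = np.maximum(mags[1:] - mags[:-1], 0).sum(axis=1)
        threshold = flux.mean() + k * flux.std()
        return [(i + 1) * hop / sr for i, f in enumerate(flux) if f > threshold]

    # Toy example: a single click half a second into one second of silence.
    sr = 8000
    x = np.zeros(sr)
    x[sr // 2] = 1.0
    print(onset_times(x, sr))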

Thursday, March 22 - 1:30-2:30PM Newell-Simon Hall 3001
Speaker: Roger Dannenberg
Topic: Concurrency Without Processes

A common problem in computer music programming is to generate concurrent sequences of actions. Consider generating note-on and note-off events for both a melody and bass line. Concurrent threads are one solution, but threads suffer from several problems. Another approach is to schedule events that schedule other events, generating sequences of actions. This is simple and effective, but makes it hard to compose sequences, i.e. "play this sequence of sequences 3 times." I have been developing a collection of objects that encapsulate sets of events over a finite interval of time. I will motivate the development and provide some illustrations of its use.
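
As a purely hypothetical Python sketch of the two ideas in the abstract (not the actual design to be presented): an action that re-schedules itself illustrates "events that schedule other events", and a small Sequence object covering a finite time interval shows how sequences could be composed, e.g. "play this sequence 3 times".

    # Hypothetical illustration only; names and structure are invented here.
    import heapq, itertools

    class Scheduler:
        """Minimal event scheduler: run actions in time order."""
        def __init__(self):
            self.queue = []
            self.counter = itertools.count()   # tie-breaker for equal times
        def at(self, time, action):
            heapq.heappush(self.queue, (time, next(self.counter), action))
        def run(self):
            while self.queue:
                time, _, action = heapq.heappop(self.queue)
                action(time)

    def tick(sched, time, count):
        # The "events that schedule other events" pattern: each action does
        # its work, then schedules the next occurrence.
        print(time, "tick")
        if count > 1:
            sched.at(time + 1.0, lambda t: tick(sched, t, count - 1))

    class Sequence:
        """A finite set of (offset, message) events over a fixed duration."""
        def __init__(self, events, duration=None):
            self.events = list(events)
            self.duration = duration if duration is not None else (
                max(t for t, _ in self.events) if self.events else 0.0)
        def play(self, sched, start):
            for offset, message in self.events:
                sched.at(start + offset, lambda t, m=message: print(t, m))
        def times(self, n):
            # "Play this sequence n times" is just another Sequence.
            return Sequence([(i * self.duration + t, m)
                             for i in range(n) for t, m in self.events],
                            duration=n * self.duration)

    sched = Scheduler()
    sched.at(0.0, lambda t: tick(sched, t, 3))            # self-scheduling events
    melody = Sequence([(0.0, "C4 on"), (0.4, "C4 off"),
                       (0.5, "E4 on"), (0.9, "E4 off")], duration=1.0)
    melody.times(3).play(sched, start=0.0)                # composed sequence
    sched.run()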

Thursday, March 1 - 4:30-5:30PM Margaret Morrison Carnegie Hall 407
(NOTE SPECIAL TIME AND ROOM)
Speaker: Roger Dannenberg
Topic: A Programming Language and Environment for Sound Synthesis and Music Composition

Nyquist is a programming language for music. It began as an attempt to make research software accessible to composers. The idea was that Nyquist could provide infrastructure for audio research software development and testing. Once the research was finished, it would be embedded in a high-level language that could be used immediately by composers. An unexpected new direction is that Nyquist is now embedded in an interactive development environment (IDE). Originally, this just helped to balance parentheses in the Lisp-like syntax, but the IDE is growing steadily to offer graphical interfaces, control panels, and visualization tools. At some point in the future, users might use Nyquist without writing any code at all (this may be good or bad, depending on your philosophy and values). I will try to give a feel for how Nyquist is used for composition and research.

Thursday, February 15 - 1:30-2:30PM Newell-Simon 1505
(NOTE ROOM CHANGE FOR THIS SEMINAR)

Speaker: Wei You
Topic: Singing Proficiency in the General Population

Wei will talk briefly about the analogies between speech recognition and music transcription. He will also expand on research by Simone Dalla Bella, Jean-Francois Giguere, and Isabelle Peretz published in the Journal of the Acoustical Society of America, titled "Singing Proficiency in the General Population". A link to their paper is available HERE.

Thursday, February 1 - 1:30-2:30PM Newell-Simon 3001
Speaker: Roger Dannenberg
Topic: Work in Progress: Using Spectral Scrambling in a Piece for Cello and Computer

Roger Dannenberg will talk about work in progress on a piece for cello and computer. One of the sound processing techniques under development is a spectral scrambler that permutes spectral bins. While in the spectral domain, the data can be altered in interesting ways. In particular, spectral data can be multiplied by the amplitude spectrum obtained from live vocal input to impose real-time vowel formants onto the sound.
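
A rough numpy sketch of the two operations described above (not the code used in the piece, and with overlap-add and normalization omitted): permute the bins of one FFT frame, and shape another frame by the amplitude spectrum of a vocal frame.

    # Illustrative single-frame sketch of spectral scrambling and vocal
    # formant imposition; not the processing used in the actual piece.
    import numpy as np

    rng = np.random.default_rng(0)
    frame_size = 1024
    window = np.hanning(frame_size)

    def scramble(frame):
        """Permute the spectral bins of one windowed frame, then resynthesize."""
        spectrum = np.fft.rfft(frame * window)
        spectrum = spectrum[rng.permutation(spectrum.size)]
        return np.fft.irfft(spectrum, n=frame_size)

    def impose_formants(source_frame, vocal_frame):
        """Multiply the source spectrum by the vocal amplitude spectrum,
        imposing the voice's spectral envelope on the source."""
        source = np.fft.rfft(source_frame * window)
        vocal_mag = np.abs(np.fft.rfft(vocal_frame * window))
        return np.fft.irfft(source * vocal_mag, n=frame_size)

    # Toy input: a "cello-like" sawtooth frame and a noisy "vocal" frame.
    t = np.arange(frame_size) / 44100.0
    cello = 2 * (220 * t % 1.0) - 1.0
    vocal = rng.standard_normal(frame_size)
    out = impose_formants(scramble(cello), vocal)
    print(out.shape)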

Web page and seminar program managed by Tom Cortina, CSD