Carnegie Mellon Computer Music Group

Research Seminars & Other Events

We meet approximately once every two to three weeks during the Fall and Spring semesters to discuss the latest in computer music and sound synthesis. EMAIL LIST: If you would like to be added to our email list and be informed about future presentations, please send email to Tom Cortina (username: tcortina, domain: …).

For other semesters, click here:
FALL 2014 | FALL 2013 | SPRING 2013| FALL 2012 | SPRING 2009 | FALL 2008 | SPRING 2008 | FALL 2007 | SPRING 2007 | FALL 2006 | SPRING 2006 | FALL 2005 | SUMMER 2005 | SPRING 2005


Tuesday, February 15
Wean Hall 3213

Speaker: Roger Dannenberg
Topic: McBlare, The Robotic Bagpiper

Roger Dannenberg will demonstrate McBlare, a robotic bagpiper. McBlare was built last fall by Ben Brown and Garth Zeglin of the Robotics Institute, and Roger has been working on it ever since, with help from Ron Lupish, trying to make it play properly (insert bagpipe joke here). In the event McBlare is not cooperating, Roger will demonstrate some recent work with Aura.

Tuesday, March 1
Smith Hall 100

Speaker: Sofia Cavaco
Topic: Learning intrinsic structures of impact sounds

Abstract: When one object impacts another, we can readily perceive many intrinsic properties of the resulting sound that allow us to distinguish differences in material, surface properties, and even object shape, in spite of wide variations in the raw acoustic waveform. Algorithms have been developed to extract basic features of impact sounds, such as the decay rates or the average spectra, but these approaches fail to capture the acoustic richness and variability that is characteristic of natural impact sounds. I will talk about a data-driven method for learning intrinsic structures of impact sounds. The method applies principal and independent component analysis techniques to learn low-dimensional representations that model the distribution of both the time-varying harmonic and amplitude structure of the sounds. The method is highly flexible and makes no a priori assumptions about the physics, acoustics, or dynamics of the objects.
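A minimal sketch of the PCA step of such a data-driven approach, on synthetic data. This is an illustration of the general technique, not Sofia Cavaco's implementation: the talk's method also uses independent component analysis and models time-varying harmonic structure, neither of which appears here. All names and parameters below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthesize 200 toy "impact sounds": decaying sinusoids whose decay rate
# varies, standing in for recordings of different materials.
t = np.linspace(0, 1, 1000)
sounds = np.array([np.exp(-d * t) * np.sin(2 * np.pi * 440 * t)
                   for d in rng.uniform(2, 20, size=200)])

# Use short-time amplitude envelopes as features: one 50-dim row per sound.
frames = np.abs(sounds).reshape(200, 50, 20).mean(axis=2)

# PCA via SVD: center the data; rows of Vt are the principal components.
centered = frames - frames.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 3
codes = centered @ Vt[:k].T                  # low-dimensional representation
recon = codes @ Vt[:k] + frames.mean(axis=0) # reconstruction from k numbers

err = np.linalg.norm(recon - frames) / np.linalg.norm(frames)
print(f"{k}-D PCA relative reconstruction error: {err:.3f}")
```

The point of the sketch: a whole family of envelopes collapses to a few coordinates per sound, which is the kind of low-dimensional representation the abstract describes.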



Tuesday, March 15
PNC Recital Hall, Duquesne University

Performer: Roger Dannenberg
Title: Feedback

The "U3 Festival" opens with a free concert of chamber music and electronic music at PNC Recital Hall, Duquesne University, including "Feedback", a new work for trumpet and computer, composed and performed by Roger Dannenberg. "Feedback" uses software to create "real" feedback in the concert hall, with computer-controlled attenuation to keep the volume at a comfortable level. A total of 8 independent feedback circuits run using 4-channel sound, and each can be tuned with interactively controlled delays and filters.
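The core control idea can be sketched in a few lines. This is a hypothetical illustration with made-up values, not the software used in the piece: a feedback loop whose raw gain exceeds 1 would howl on its own, so the computer attenuates it just enough to hold the level at a comfortable target.

```python
# Illustrative values only, not taken from the actual piece.
raw_gain = 1.05        # acoustic loop gain: the loop howls if left alone
target = 0.3           # comfortable listening level
y = 0.01               # tiny seed, e.g. room noise picked up by the mic
peak = 0.0
for _ in range(2000):  # one value per trip around the feedback loop
    # Attenuate only as much as needed to keep |y| at or below target.
    gain = min(raw_gain, target / abs(y))
    y *= gain
    peak = max(peak, abs(y))

print(f"peak {peak:.3f}, steady level {abs(y):.3f}")
# → peak 0.300, steady level 0.300
```

The level grows like real feedback until it reaches the target, then sustains there indefinitely instead of running away.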

Directions: For Duquesne Univ. maps and directions, click HERE (look for the School of Music). There's a big parking structure with an entrance on Forbes Ave. Via PAT bus, the 61s, 71s, and the 501 stop at Duquesne University.


Tuesday, March 29
Wean Hall 3213 (NOTE ROOM CHANGE)

Speaker: Roger Dannenberg
Topic: Oscillation Controlled Sound Synthesis & A Description of "Feedback"

Roger Dannenberg has been working with Cornelius Pöpel on a new synthesis technique in which a live signal replaces an oscillator in a more traditional synthesis method. They call this "Oscillation Controlled Sound Synthesis"; variants of subtractive synthesis and FM have been implemented. In addition, Roger recently completed and performed "Feedback" for trumpet and computer. Real feedback is generated and controlled by the computer, so that the resonance of the hall becomes part of a huge oscillator -- essentially one giant electroacoustic instrument containing the performer and the audience. Roger will talk about the new synthesis technique and the "Feedback" composition.
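To make the "live signal replaces an oscillator" idea concrete, here is a hedged sketch of the FM variant. It is not the actual implementation: a synthetic signal stands in for the performer's microphone input, and all frequencies and modulation depths are invented for the example.

```python
import numpy as np

rate = 44100
t = np.arange(rate) / rate
carrier_hz = 440.0
index = 5.0                                   # modulation depth

# Stand-in for the live performer's signal (a mic input in practice).
live = np.sin(2 * np.pi * 220 * t) * np.exp(-2 * t)

# Classic FM: a sine oscillator modulates the carrier's phase.
fm_classic = np.sin(2 * np.pi * carrier_hz * t
                    + index * np.sin(2 * np.pi * 110 * t))

# Oscillation-controlled variant: the live signal takes the oscillator's
# place, so the performer's timbre and dynamics drive the modulation.
fm_live = np.sin(2 * np.pi * carrier_hz * t + index * live)

print(fm_classic.shape, fm_live.shape)
```

The only structural change between the two lines is which signal sits inside the phase term; that one substitution is the essence of the technique as described.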

Tuesday, April 19

Speaker: Ning Hu
Topic: Automatic Accurate Music Segmentation

Ning will present her recent results and ideas on automatic accurate segmentation.

If time permits, Roger Dannenberg will discuss the following new research:

Topic: Toward Holistic Music Analysis: Beat Tracking Using Music Structure

Most music processing focuses on one particular feature or structural element such as pitch, beat location, tempo, or genre. This hierarchical approach, in which music is separated into elements that are analyzed independently, is convenient for the scientific researcher, but is at odds with intuition about music perception. Music is interconnected at many levels, and the interplay of melody, harmony, and rhythm is important in perception. As a first step toward more holistic music analysis, music structure is used to constrain a beat tracking program. With structural information, the simple beat tracker, working with audio input, shows a 70% improvement. This seminar will provide an informal overview of this new research.
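A toy illustration of how a structural constraint can help a beat tracker. This is synthetic and hypothetical, not Roger's system: a naive tracker picks the beat period whose grid lands on the most onset energy, and knowing from music structure that two sections repeat lets us demand one shared period, which can rescue a section with misleading accents.

```python
import numpy as np

rng = np.random.default_rng(1)

def best_period(signal, periods):
    # Naive tracker: score each candidate beat period by the mean onset
    # strength sampled on its beat grid, and keep the best.
    return max(periods, key=lambda p: signal[::p].mean())

n, periods = 400, range(3, 12)
section_a = 0.2 * rng.random(n)        # clear section: strong true beats
section_a[::8] += 1.0                  # true beat every 8 frames
section_b = 0.1 * rng.random(n)        # ambiguous section: weak true beats
section_b[::8] += 0.3
section_b[::5] += 1.0                  # loud off-beat accents mislead it

alone = best_period(section_b, periods)            # fooled by the accents
# Structure says A and B are repeats, so score both sections jointly.
joint = max(periods,
            key=lambda p: section_a[::p].mean() + section_b[::p].mean())

print(f"section B alone: period {alone}; with structure: period {joint}")
```

Analyzed on its own, the ambiguous section locks onto the accents; analyzed jointly with the section it repeats, the true period wins, which is the spirit of using structure to constrain beat tracking.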

Tuesday, May 3
Room: Newell-Simon Hall (NSH) 3305

Interface and Interaction Design
Student Presentations

Students, working in small teams, have designed music players that employ novel interaction methods to strengthen the emotional connection users have with music. Coffee and bagels served.