Carnegie Mellon Computer Music Group

Research Seminars & Other Events

We meet approximately once every two to three weeks during the Fall and Spring semesters to discuss the latest in computer music and sound synthesis. EMAIL LIST: If you would like to be added to our email list to be informed about future presentations, please send email to Tom Cortina (username: tcortina, domain: cs.cmu.edu).


SPRING 2013

SEMINARS & EVENTS

Friday, Jan 18, 3:30PM-4:30PM, Location: GHC 7501

Topic: Thesis Projects & Topics

 

Friday, Jan 25, 4:00PM-5:00PM, Location: TBA

Speaker: Richard Stern

Topic: The impact of psychoacoustics on music storage and reproduction: one half of the talk covers perceptual audio coding (as in MP3), and the other half covers 3-D audio spatialization.

 

Friday, Feb 1, 4:00PM-5:00PM, Location: GHC 7501

Topic: The Philips audio fingerprinting system

Speaker: Gus Xia

Imagine the following situation. You are in your car, listening to the radio and suddenly you hear a song that catches your attention. It is the best new song you have heard for a long time, but you missed the announcement and don't recognize the artist. Still, you would like to know more about this music. What should you do? You could call the radio station, but that is too cumbersome. Wouldn't it be nice if you could push a few buttons on your mobile phone and a few seconds later the phone would respond with the name of the artist and the title of the music you are listening to? Perhaps even sending an email to your default email address with some supplemental information. In this paper we present an audio fingerprinting system, which makes the above scenario possible. By using the fingerprint of an unknown audio clip as a query on a fingerprint database, which contains the fingerprints of a large library of songs, the audio clip can be identified. At the core of the presented system are a highly robust fingerprint extraction method and a very efficient fingerprint search strategy, which enables searching a large fingerprint database with only limited computing resources.
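As background, the sub-fingerprint idea described in the abstract (the sign of band-energy differences across frequency and time) can be sketched in a few lines of NumPy. The sketch below is only an illustration, not the paper's exact implementation: the frame length, hop size, window, and the 33 log-spaced bands between 300 Hz and 2 kHz are simplifying assumptions.

    import numpy as np

    def subband_energies(frames, sample_rate, n_bands=33, f_lo=300.0, f_hi=2000.0):
        """Per-frame energy in logarithmically spaced bands (band layout assumed)."""
        windowed = frames * np.hanning(frames.shape[1])
        spectrum = np.abs(np.fft.rfft(windowed, axis=1)) ** 2
        freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sample_rate)
        edges = np.geomspace(f_lo, f_hi, n_bands + 1)
        return np.stack([spectrum[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
                         for lo, hi in zip(edges[:-1], edges[1:])], axis=1)

    def fingerprint(audio, sample_rate, frame_len=2048, hop=512):
        """One 32-bit sub-fingerprint per frame: the sign of the band-energy
        difference, taken along both the frequency and the time axis."""
        n_frames = 1 + (len(audio) - frame_len) // hop
        frames = np.stack([audio[i * hop:i * hop + frame_len] for i in range(n_frames)])
        energies = subband_energies(frames, sample_rate)          # (n_frames, 33)
        band_diff = np.diff(energies, axis=1)                     # across bands -> 32 columns
        bits = (np.diff(band_diff, axis=0) > 0).astype(np.uint8)  # across time
        return np.packbits(bits, axis=1)                          # 4 bytes (32 bits) per frame

Identifying an unknown clip then amounts to looking up its 32-bit sub-fingerprints in a database index and comparing bit errors against candidate songs, as the abstract describes.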

 

Friday, Feb 8, 4:00PM-5:00PM, Location: GHC 7501

Topic: Listen to music from previous International Computer Music Conferences.

 

Friday, Feb 15, 4:00PM-5:00PM, Location: GHC 7501

Topic: New Graduate Programme in Electronic and Electroacoustic Music, Interactivity and Video Creation, and Final Concert.

Speaker: Jorge Sastre

 

Friday, Feb 22, 4:00PM-5:00PM, Location: GHC 7501

Topic: Some interesting videos about music technology.

 

Friday, Mar 1, 4:00PM-5:00PM, Location: GHC 7501

Topic: On the Origins of Electronic Music.

Speaker: Jorge Sastre

 

Friday, Mar 22, 4:00PM-5:00PM, Location: GHC 7501

Topic: Introduction to Basic Guitar Effects in PureData

Speaker: Haochuan Liu

Effects units are electronic devices that alter how a musical instrument or other audio source sounds. Some effects subtly "color" a sound, while others transform it dramatically. They are housed in amplifiers, table top units, "stompboxes" and "rackmounts", or they are built into the instruments themselves. While there is currently no consensus on how to categorize effects, the following are seven common classifications: distortion, dynamics, filter, modulation, pitch/frequency, time-based and feedback/sustain.
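For a concrete sense of two of these categories, here is a minimal NumPy sketch (not Pd code, and not taken from the presentation): a tanh soft clip as a distortion effect and a feedback delay line as a time-based effect. All parameter values are illustrative assumptions.

    import numpy as np

    def distortion(x, drive=5.0):
        """Distortion category: tanh soft clipping, scaled back into [-1, 1]."""
        return np.tanh(drive * x) / np.tanh(drive)

    def feedback_delay(x, sample_rate, delay_s=0.35, feedback=0.4, mix=0.5):
        """Time-based category: a feedback delay line mixed with the dry signal."""
        d = int(delay_s * sample_rate)
        y = x.astype(float)                 # work on a copy so the dry signal is untouched
        for n in range(d, len(y)):
            y[n] += feedback * y[n - d]     # recirculate the delayed signal
        return (1 - mix) * x + mix * y

Pd expresses the same ideas graphically; the delay line above corresponds roughly to a [delwrite~]/[delread~] pair with a feedback connection.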

PureData is a real-time graphical programming environment for audio, video, and graphical processing, and it is free software which was written to be multi-platform and therefore is quite portable; versions exist for Win32, IRIX, GNU/Linux, BSD, and MacOS X running on anything from a PocketPC to an old Mac to a brand new PC. It is easy to extend Pd by writing object classes ("externals") or patches ("abstractions"). The work of many developers is already available as part of the standard Pd packages and the Pd developer community is increasingly growing. Recent developments include a system of abstractions for building performance environments; a library of objects for physical modeling; and a library of objects for generating and processing video in realtime.

In this presentation, I will:

  1. Make a short introduction to PureData.

  2. Discuss the theory of some basic guitar effects.

  3. Show examples of basic guitar effects in PureData with my guitar.

You can find the slides of this presentation here.

 

Friday, Mar 29, 4:00PM-5:00PM, Location: GHC 7501

Topic: Music Technology in the Browser: Where we came from, where we are, and where we're going

Speaker: Robert Kotcher

We've come a long way since Tim Berners-Lee and Daniel Connolly drafted the first version of HTML in June 1993. In twenty-some years, a set of disjoint packet-switching networks has been transformed into a global body of dynamic documents and databases. Within just a few years, we may see amazing new technologies such as WebCL (a JavaScript binding to OpenCL for heterogeneous parallel computing), CSS custom shaders (GPU graphics computation), and the Web Audio API become a reality. Or will we? The issue is that browser vendors have their own agendas, priorities, and security considerations. The only reliable cross-browser platform for audio today is Flash, but we'll take a look at other drafts related to music technology that may become cross-browser standards in the future. Finally, we'll look at some cool music-related websites (but first, make sure you have Google Chrome installed!).

 

Friday, Apr 5, 4:00PM-5:00PM, Location: GHC 7501

Topic: A good solution for remote audio transmission

Speaker: Dalong Chen

Using the modern Internet to efficiently transfer bi-directional audio poses several challenges that differ from those of common applications. In this presentation, Dalong Chen will discuss some key elements in designing such systems by analyzing an interesting example, JackTrip.
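To make the design constraints concrete: systems in this space typically send small, uncompressed PCM frames over UDP and accept occasional packet loss rather than pay the retransmission latency of TCP. The sketch below is a toy illustration of that idea, not JackTrip's actual packet format; the header layout is an assumption (4464 is JackTrip's default port).

    import socket
    import struct

    HEADER = struct.Struct("!IH")  # sequence number + payload length (assumed layout)

    def send_frames(pcm_frames, host="127.0.0.1", port=4464):
        """Send raw 16-bit PCM frames as individual UDP datagrams, one per frame."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for seq, frame in enumerate(pcm_frames):   # frame: bytes of PCM samples
            sock.sendto(HEADER.pack(seq, len(frame)) + frame, (host, port))
        sock.close()

A receiver would use the sequence numbers to detect loss and reordering, filling gaps with silence or the previous frame instead of waiting for retransmission.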

Slides of the presentation.

Two papers about this topic:

JackTrip: Under the Hood of an Engine for Network Audio
JackTrip/SoundWIRE Meets Server Farm

 

Friday, Apr 12, 4:00PM-5:00PM, Location: GHC 7501

Topic: Z Gallery - Sound Art

Speaker: Ziyun Peng

Ziyun Peng will be showing selected sound art works, ranging from generative music and sound installations to audio-visual works. She will break down the technical and aesthetic details of each work - discussion is very much encouraged!

 

Friday, Apr 26, 4:00PM-5:00PM, Location: GHC 7501

Topic: Hearing is believing - tonal samples in algorithmic composition

Speaker: Zeyu Jin

Abstract: In this short talk, I will sample some interesting approaches in algorithmic composition that aim at making tonal music out of irregular numbers. Instead of looking at programs and equations, we will limit ourselves to listening and to the ideas that might interest us. The samples will include "classics" such as the 5000 Bach chorales by David Cope, as well as more recent work such as Hyperscore from the MIT Media Lab. Some samples are reproduced to ensure better sound quality. A major reference is "A Brief History of Algorithmic Composition" by John Maurer.
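As a trivial illustration of "tonal music out of irregular numbers" (a toy sketch, not taken from any of the works above): map an arbitrary integer sequence onto a scale, so that whatever structure the numbers have becomes audible as melodic structure.

    import random

    C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI pitches of one C major octave

    def melody_from_numbers(numbers, scale=C_MAJOR):
        """Map any integer sequence onto a scale; Python's % keeps indices in range."""
        return [scale[n % len(scale)] for n in numbers]

    # A random walk sounds more melodic than independent random draws,
    # because successive pitches stay close together.
    walk, step = [], 0
    for _ in range(16):
        step += random.choice([-2, -1, 1, 2])
        walk.append(step)
    print(melody_from_numbers(walk))

The systems discussed in the talk are far more sophisticated; the point here is only the basic move of constraining arbitrary numeric material to a tonal framework.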

Topic: Tangible (not computer-vision-based) Musical Instruments / Controllers

Speaker: Can Ozbay

Abstract: Existing digital musical interfaces are either replicas of analog instruments and controllers (electric piano / MIDI controller, electric guitar / MIDI guitar, etc.), or they are camera-based and track gestures. I think current hardware development is driven by tradition rather than good design, and the software options are not effective ways to make music. With the Kinect refusing to work outdoors, and with regular cameras and computer vision software lacking precision, this is not the right way to build musical interfaces. I'll be showing multiple alternatives and a couple of videos supporting this idea.

 

 

FUTURE SEMINAR/EVENT DATES

If you would like to present a topic at our seminar, please send email to Haochuan Liu (haochual@andrew.cmu.edu).

Web page and seminar program managed by Tom Cortina, CSD