Carnegie Mellon Computer Music Group

Research Seminars & Other Events

We meet approximately once every two to three weeks during the Fall and Spring semesters to discuss the latest in computer music and sound synthesis. EMAIL LIST: If you would like to be added to our email list to be informed about future presentations, please send email to Tom Cortina (username: tcortina, domain: cs.cmu.edu).

For other semesters, click here:
FALL 2014 | FALL 2013 | SPRING 2013 | FALL 2012 | SPRING 2009 | FALL 2008 | SPRING 2008 | FALL 2007 | SPRING 2007 | FALL 2006 | SPRING 2006 | FALL 2005 | SUMMER 2005 | SPRING 2005

FALL 2012

SEMINARS & EVENTS

Friday, Sep 7, 3:30PM--4:30PM, GHC 6121

Roger Dannenberg, "Music Understanding and the Future of Music Performance" (practice talk for his Interspeech 2012 keynote).

Friday, Sep 14, 3:30PM--4:30PM, GHC 6121

No seminar today.

Friday, Sep 21, 3:30PM--4:30PM, GHC 6121

Zeyu Jin's thesis proposal

Introduction: Computer-assisted music production and performance is an active field of study among musicians and scientists, aimed at enriching musical expression as well as boosting productivity and efficiency by automating parts of the process. In recent years there have been many studies of human-computer music performance, computer-based editing, algorithmic composition, and so forth. In this proposal, I explore new ideas and creations concerning communication between humans and computers in the language of music.

Friday, Sep 28, 3:30PM--4:30PM, GHC 6121

"An Overview of Some Fundamental Music Representation Issues" by Roger Dannenberg

Abstract: I'll give an overview of music representation issues at our seminar on Friday. This is a huge area, so I'll focus on a subset of general issues that have been considered in computer music research: event lists, flexibility through attribute/value pairs, hierarchy, multiple hierarchies, continuous vs. discrete data, expressions vs. data, and resources vs. instances.
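Some of these ideas are easy to make concrete in code. Below is a minimal, hypothetical sketch in TypeScript (the type and field names are illustrative, not from any particular system) of a score as a flat event list whose open-ended properties live in an attribute/value map, with hierarchy layered on top by grouping:

    // Hypothetical event-list representation: each event has a start time and
    // a duration, plus an open-ended attribute/value map so new properties
    // (articulation, lyrics, MIDI channel, ...) can be added without changing
    // the schema.
    type Attributes = { [name: string]: number | string | boolean };

    interface ScoreEvent {
      time: number;      // onset, e.g. in beats or seconds
      duration: number;  // length in the same units
      attributes: Attributes;
    }

    // A score as a flat event list, kept sorted by onset time.
    const score: ScoreEvent[] = [
      { time: 0.0, duration: 1.0, attributes: { pitch: 60, velocity: 90 } },
      { time: 1.0, duration: 0.5, attributes: { pitch: 64, articulation: "staccato" } },
    ];

    // Hierarchy (and even multiple hierarchies, e.g. measures vs. phrases) can
    // be layered on top by grouping events without changing the events themselves.
    interface Group {
      name: string;
      members: (ScoreEvent | Group)[];
    }

Note that continuous data such as a pitch-bend curve does not fit this discrete event model directly, which is one of the tensions the talk covers.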

Friday, Oct 5, 3:30PM--4:30PM, GHC 6121

"An Introduction to Markovishnu Orchestra: Computer Generated Jazz Improvisation" by Kevin Elfenbein

The goal of this project is to algorithmically generate improvisational jazz solos in real time. We are still deciding on the exact implementation, but plan to combine stylistic analysis of jazz improvisations by various artists with algorithmic composition techniques to generate realistic and pleasing solos. The system will take a chord progression as input and produce a solo as output. In the presentation I will describe how we will analyze MIDI files for features that distinguish one artist from another, as well as two types of algorithms we can use to generate the melodies (a rough sketch of one appears after the reference papers below).

Reference papers:

A Machine Learning Approach to Musical Style Recognition

Genetic Algorithm for Generation of Jazz Melodies
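As a rough illustration of the Markov-chain side of the project's name, here is a minimal, hypothetical TypeScript sketch (all names are illustrative; the actual system will analyze far richer features such as rhythm, chord context, and artist style): it counts first-order pitch transitions in example melodies and random-walks through them to generate a new line.

    // Minimal first-order Markov melody generator (illustrative only).
    // Train: count pitch-to-pitch transitions in example melodies.
    // Generate: random-walk through the transition table.
    type Transitions = Map<number, number[]>;

    function train(melodies: number[][]): Transitions {
      const t: Transitions = new Map();
      for (const melody of melodies) {
        for (let i = 0; i + 1 < melody.length; i++) {
          const from = melody[i];
          if (!t.has(from)) t.set(from, []);
          t.get(from)!.push(melody[i + 1]); // duplicates encode probability
        }
      }
      return t;
    }

    function generate(t: Transitions, start: number, length: number): number[] {
      const out = [start];
      while (out.length < length) {
        const next = t.get(out[out.length - 1]);
        if (!next || next.length === 0) break; // dead end: no known successor
        out.push(next[Math.floor(Math.random() * next.length)]);
      }
      return out;
    }

    // Example: train on two short MIDI-pitch melodies, generate an 8-note line.
    const table = train([[60, 62, 64, 62, 60], [60, 64, 67, 64, 60]]);
    console.log(generate(table, 60, 8));

The genetic-algorithm alternative in the second reference paper instead evolves whole melodies under a fitness function.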

Friday, Oct 12, 3:30PM--4:30PM, GHC 6121

Speaker: Can Ozbay

Topic 1: Distant/Remote Recording.

I'm going to talk about how my band and I recorded two albums and two music videos without ever being together (in one case, without my ever meeting the other musicians), yet appeared side by side in the videos, and how we made the albums, with technical details. There will be some Logic Pro, along with workflow examples for remote album recording: what the musicians have to do and what they should expect from the computer.

Topic 2: Tools to Help Musicians Overcome Temporary or Permanent Disabilities.

I'm going to talk about how I recorded and mixed an album while I was unable to sit due to an illness, and what other musicians have done to overcome temporary or permanent disabilities. Deviating from that subject, I will also talk about different musical interfaces: not just tools that help disabled musicians, but tools that enable every musician. So this part will be more about tools like the Kinect, EEG headsets, IR eye trackers, etc.

For more information, please visit www.canozbay.com.

Friday, Oct 19, 11:00AM--5:30PM at the STUDIO for Creative Inquiry

Listening Spaces Symposium and Workshop

First will be a symposium with four invited speakers discussing different aspects of how late-20th- and 21st-century technology affects the way we utilize, acquire, share, recommend, and communicate through music. This will be an interactive symposium rather than just people talking at you for 1.5 hours. Of particular interest to M&T folk will be Jonathan Sterne's discussion of "format theory". His new book MP3: The Meaning of a Format (Sign, Storage, Transmission) discusses the undeniable impact this lossy compression format has made on the music world and frames this impact in a broad cultural/technological context.

Second will be a workshop where speakers and guests gather in groups to discuss our thoughts on the relationship between music, technology, and culture. Groups will tackle specific questions and then present their findings to the symposium for discussion. The idea is to create a document which captures the current thoughts of a variety of different voices -- artists, students, teachers, musicians, listeners, technologists, cultural critics, etc.

More information can be found here (including specific workshop topics):

http://www.hss.cmu.edu/pressreleases/pressreleases/listeningspacesevent.html

http://omgpgh.com/blog/2012/10/19/listening-spaces-21st-century-perspectives-music-technology-culture/

http://www.cmu.edu/cas/media%20initiave/listening%20spaces/index.html

Friday, Oct 26, 3:30PM--4:30PM, GHC 6121

JiuQiang Tang's thesis proposal:

Extracting Sounds From Gestures: Gesture-to-Sound Mapping for Real-time Music Performance

Introduction: In the field of computer music, real-time musical interaction has been a novel and attractive focus of research. It touches all aspects of the interactive process, including the capture and multimodal analysis of the gestures and sounds created by artists, the management of interaction, and techniques for real-time synthesis and sound processing. My thesis focuses on how to map the gestures of musicians and dancers to sounds in real time, naturally and smoothly. Classical mapping schemes classify gesture-sound relationships according to their structure (one-to-one, one-to-many, many-to-one) as well as their degree of determinism (explicit vs. implicit gestures). This is an appropriate way to catalog gesture-to-sound relationships; however, the division is sometimes too simplistic to handle complicated real-world gesture-to-sound problems. In my thesis, I attempt to find an applicable machine-learning approach to classify and map continuous sensor data to both continuous and discrete music parameters.
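To make the mapping problem concrete, here is a minimal, hypothetical TypeScript sketch (the names and the choice of nearest-neighbor regression are illustrative assumptions, not the thesis method) of one learned alternative to a hand-coded one-to-one mapping: predicting a continuous sound parameter from a continuous sensor vector by averaging the k closest training gestures.

    // Illustrative learned gesture-to-sound mapping: nearest-neighbor
    // regression from a sensor vector to a synthesis parameter, trained from
    // (gesture, parameter) demonstrations instead of a hand-coded function.
    interface Example { gesture: number[]; parameter: number }

    function distance(a: number[], b: number[]): number {
      return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
    }

    // Predict by averaging the parameters of the k nearest training gestures.
    function mapGesture(examples: Example[], gesture: number[], k = 3): number {
      const nearest = [...examples]
        .sort((x, y) => distance(x.gesture, gesture) - distance(y.gesture, gesture))
        .slice(0, k);
      return nearest.reduce((sum, e) => sum + e.parameter, 0) / nearest.length;
    }

    // Example: accelerometer (x, y, z) -> filter cutoff in Hz,
    // learned from three demonstrations.
    const training: Example[] = [
      { gesture: [0.1, 0.0, 0.9], parameter: 200 },
      { gesture: [0.8, 0.1, 0.2], parameter: 2000 },
      { gesture: [0.5, 0.9, 0.1], parameter: 5000 },
    ];
    console.log(mapGesture(training, [0.6, 0.5, 0.2])); // interpolated cutoff

Discrete parameters (e.g. triggering a sample) would use a classifier in place of the regression.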

Friday, Nov 2, 3:30PM--4:30PM, GHC 6121

Dalong Cheng's thesis proposal:

A Proposal for a Music Player for the Human Computer Music Performance Project

Abstract: The goal of the Human Computer Music Performance (HCMP) project is to create an autonomous "artificial performer" capable of human-level musical performance. An important component of the HCMP project is a music player that can flexibly adjust and respond to changes in the music. In my master's project, I will design, implement, and extend the HCMP MIDI player for the project.
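One core problem such a player must solve is mapping score position (beats) to real time when the tempo can change mid-performance. A minimal, hypothetical sketch of that mapping in TypeScript (not the actual HCMP code; names are illustrative):

    // Illustrative tempo map for an adaptive player: it anchors a known beat
    // at a known time and converts between beats and seconds at the current
    // tempo; on a tempo change it re-anchors so playback stays continuous.
    class TempoMap {
      constructor(
        private refTime = 0.0, // seconds at the reference point
        private refBeat = 0.0, // beat number at the reference point
        private bps = 2.0      // beats per second (2.0 = 120 BPM)
      ) {}

      timeOfBeat(beat: number): number {
        return this.refTime + (beat - this.refBeat) / this.bps;
      }

      beatAtTime(time: number): number {
        return this.refBeat + (time - this.refTime) * this.bps;
      }

      // When the live tempo changes at `now`, re-anchor the map so the current
      // position is unchanged and future events shift smoothly.
      setTempo(now: number, newBps: number): void {
        this.refBeat = this.beatAtTime(now);
        this.refTime = now;
        this.bps = newBps;
      }
    }

    // Example: at 120 BPM, beat 8 is due at 4.0 s; after slowing to 90 BPM at
    // t = 2.0 s, the same beat is rescheduled to about 4.67 s.
    const map = new TempoMap();
    console.log(map.timeOfBeat(8)); // 4.0
    map.setTempo(2.0, 1.5);
    console.log(map.timeOfBeat(8)); // ~4.67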

More information:

Slides for Friday presentation

Proposal_Dalong_Cheng

All of the related code, slides, and papers are hosted on his GitHub here; feel free to check them out.

Friday, Nov 9, 3:30PM--4:30PM, GHC 6121

Topic: Introduction to Serpent, "The Computer Music Scripting Language"
Speaker: Roger B. Dannenberg

Abstract: Music applications must often run in real time, access many libraries for MIDI, audio, graphics, and networking, support concurrency, and run on multiple platforms. Although this can all be done in languages such as C, it would be nice to have a higher-level language for (hopefully) greater productivity and more reliability. Serpent is my answer to this problem. Serpent is inspired by Python, a widely used scripting language, but Serpent uses a real-time garbage collector and offers simple interfaces for MIDI, Open Sound Control, networking, standard MIDI files, and GUIs. In this talk, I will introduce Serpent and describe more broadly some goals for building interactive computer music components.

Friday, Nov 16, 3:30PM--4:30PM, GHC 6121

Topic: Recording & Processing Audio
Speaker: Anders Øland

Anders Øland will give a talk on "Music Production 101, part 1: Recording & Processing Audio". It will be very informal, and the aim is to give the audience a sense of the different steps involved in music production. He will also give some actual examples in the Lab (GHC 7208).

Friday, Nov 30, 3:30PM--4:30PM, GHC 4405

Topic: Web Audio API: Realtime Audio Processing in the Web Browser
Speaker: Kyle Verrier

Abstract: The Web Audio API is a high-level JavaScript API for processing and synthesizing audio in web applications. The goal of this API is to include capabilities found in modern game audio engines and some of the mixing, processing, and filtering tasks that are found in modern desktop audio production applications. In this talk, the API will be introduced along with various example works.
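For a flavor of the API, here is a minimal TypeScript example of the kind of node graph the talk will cover: an oscillator routed through a filter and a gain envelope to the speakers. (This uses only standard Web Audio calls and must run in a browser, typically in response to a user gesture because of autoplay policies.)

    // Minimal Web Audio example: a one-second filtered beep.
    // The API is graph-based: nodes are created from an AudioContext and
    // connected into a chain that ends at the destination (the speakers).
    const ctx = new AudioContext();

    const osc = ctx.createOscillator();      // sound source
    osc.type = "sawtooth";
    osc.frequency.value = 220;               // Hz

    const filter = ctx.createBiquadFilter(); // processing node
    filter.type = "lowpass";
    filter.frequency.value = 800;

    const gain = ctx.createGain();           // volume envelope
    gain.gain.setValueAtTime(0.5, ctx.currentTime);
    gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + 1.0);

    // Build the graph: oscillator -> filter -> gain -> speakers.
    osc.connect(filter);
    filter.connect(gain);
    gain.connect(ctx.destination);

    osc.start();
    osc.stop(ctx.currentTime + 1.0);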

Friday, Dec 7, 3:30PM--4:30PM, GHC 4405

Topic: TBA

Friday, Dec 14, 3:30PM--4:30PM, GHC 6121

Topic: TBA

FUTURE SEMINAR/EVENT DATES

If you would like to present a topic at our seminar, please send email to Haochuan Liu (haochual@andrew.cmu.edu).

Web page and seminar program managed by Tom Cortina, CSD