
About the Course

Description: Computers are used to synthesize sound, process signals, and compose music. Personal computers have replaced studios full of sound recording and processing equipment, completing a revolution that began with recording and electronics. In this course, students will learn the fundamentals of digital audio, basic sound synthesis algorithms, and techniques for digital audio effects and processing. Students will apply their knowledge in programming assignments using a very high-level programming language for sound synthesis and composition. In a final project, students will demonstrate their mastery of tools and techniques through a publicly performed music composition.

Prerequisites: Any introductory programming course, or permission of the instructor.

About the Instructor

Jesse Stiles (Professor) is an electronic composer, performer, installation artist, and software designer. Stiles’ work has been featured at internationally recognized institutions including the Smithsonian American Art Museum, Lincoln Center, the Whitney Museum of American Art, and the Park Avenue Armory. Stiles has appeared multiple times at Carnegie Hall, performing as a soloist with electronic instruments.

Roger B. Dannenberg (Course designer) is an internationally known researcher, composer, and performer specializing in Computer Music. His invention of computer accompaniment led to the creation of the SmartMusic product used by thousands of music students every day. His work on real-time techniques and software synthesis has influenced the design of many systems in use today. Dr. Dannenberg designed the programming language Nyquist, which serves as the scripting language of Audacity, a popular audio editor he co-designed with his student Dominic Mazzoni. Dr. Dannenberg is currently working on music understanding by computer and advanced programming techniques for interactive music. He serves as Chief Science Officer of Music Prodigy, an award-winning music education start-up. As a performer, Dannenberg plays trumpet in the Edgewood Symphony and various jazz groups in Pittsburgh, including the Capgun Quartet. As a composer, he has written works for computer, trumpet, and chamber groups, including commissions by the Wats:On? Festival, U3, and the Pittsburgh New Music Ensemble. He has also performed his compositions in Havana, Mexico City, Paris, Pisa, and Curitiba, Brazil, as well as across the United States.

Who Is This For?

This course is open to a wide range of students. It is a Computer Science course, but most of the content is orthogonal to programming and traditional computer science. If you are a strong computer science student or a skilled programmer, you will be able to use those skills to your advantage in this class.

On the other hand, if you are a musician with intro-level programming skills, you can get by without writing a lot of difficult programs. Your musical knowledge and intuitions will also be of great value. However, this course does have technical content. You will need to learn and apply basic concepts of sampling theory, frequency, amplitude, spectral content, modulation, and so on. These subjects will not be taught at the level of rigor I would expect to see in an EE course on linear systems, but they are technical nonetheless. I think these are essential skills for modern musicians and composers (and computer scientists!).

Course Content

Computer Music includes many things. This course focuses on using computers to create and manipulate sound. Things that will be covered:

You may notice that the topics covered all treat music as an audio signal: the course teaches how to manipulate audio signals to achieve musical goals. The topics not emphasized tend to deal with music as "events," such as notes, phrases, and other structures, which can be analyzed, generated, and manipulated by computer. A companion course, Computer Music Systems and Information Processing, explores those aspects of computer music. Techniques for real-time interactive systems are also not taught here.


The general plan starts with an examination of sound. What is it, how do we describe and measure it, and how do we store it on a computer? There are some simple but profound answers, and anyone working with sound on computers needs to know them.
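To make the idea of storing sound on a computer concrete: digital audio is just a sequence of amplitude measurements taken at a fixed sample rate. The short Python sketch below is not course code (the course itself uses Nyquist); it simply generates the samples of a sine tone to show what that sequence of numbers looks like.

```python
import math

def sine_samples(freq_hz, dur_secs, sample_rate=44100, amplitude=1.0):
    """Generate samples of a sine tone. A digital recording is nothing
    more than amplitude values measured sample_rate times per second."""
    n = int(dur_secs * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

# One second of A440 at CD-quality sample rate: 44100 numbers.
tone = sine_samples(440.0, 1.0)
print(len(tone))  # 44100
```

Every synthesis and processing technique in the course ultimately boils down to computing or transforming sequences like this one.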

We will immediately begin to learn and use Nyquist. Nyquist is probably the most powerful programming language for audio manipulation, sound synthesis, and computer music composition. Nyquist was designed and implemented by Roger Dannenberg and his students, and it has been in use for over a decade. It runs under Windows, macOS, and Linux, so you will be able to use it on your favorite machine (it is also free).

Nyquist will be used to experiment with what we learn in class. For example, when we learn about FM synthesis, rather than just listening to examples or playing with an FM synthesizer, we’ll program an FM synthesizer in Nyquist (maybe 10 lines of code) and use it to make music. We’ll spend most of our time learning about different techniques, always exploring three aspects:
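As a preview of how little code a synthesis technique can take, here is a rough sketch of FM synthesis in Python (the course assignments use Nyquist; the function name here is our own, not part of any course library). A modulator sine wave varies the phase of a carrier sine wave, and the modulation index controls how rich the resulting spectrum is.

```python
import math

def fm_tone(carrier_hz, mod_hz, index, dur_secs, sample_rate=44100):
    """Basic FM synthesis: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)).
    Larger modulation index I -> more sidebands -> brighter timbre."""
    n = int(dur_secs * sample_rate)
    out = []
    for i in range(n):
        t = i / sample_rate
        phase = (2 * math.pi * carrier_hz * t
                 + index * math.sin(2 * math.pi * mod_hz * t))
        out.append(math.sin(phase))
    return out

# A 1:1 carrier-to-modulator ratio gives a harmonic, brass-like spectrum.
samples = fm_tone(220.0, 220.0, index=5.0, dur_secs=0.5)
print(len(samples))  # 22050
```

With the index set to zero the modulator has no effect and the output reduces to a plain sine tone, which is a handy sanity check when experimenting.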

As the semester moves on, everyone will be expected to create music. We don’t expect masterpieces, nor do we require a musical background. We will require an appreciation for artistic intentions and a serious effort to create something interesting. Often, the students with the least musical baggage produce the best work. We hope everyone will hear music differently after this course. The main homework assignments require music compositions that demonstrate your mastery of technical material from the course...

The culmination of the creative side of the class is a composition and public performance. Students will integrate what they have learned, produce an original composition, and present their work in a public concert.