Acoustical Society of America Talk

This page provides further information and resources for the talk I will be giving at the Acoustical Society of America meeting in Baltimore, Maryland on April 20, 2010. I plan to add more details to this page over the first couple of weeks of April 2010.

Topics and Publications

The talk will be about using voice transformation techniques to synthesize speech from Surface Electromyography (EMG) and Acoustic Doppler Sonar (ADS). It will cover some material from the following two conference papers:
  • Arthur R. Toth, Michael Wand, Tanja Schultz. Synthesizing Speech from Electromyography using Voice Transformation Techniques. Proc. Interspeech 2009.
  • Arthur R. Toth, Bhiksha Raj, Kaustubh Kalgaonkar, Tony Ezzat. Synthesizing Speech From Doppler Signals. Proc. ICASSP 2010.

    Surface Electromyography

    The first publication came from my visit to Dr. Tanja Schultz's Cognitive Systems Lab at the University of Karlsruhe from February 2009 through April 2009. It represents only a small portion of a larger research project headed by Dr. Schultz. By the time I visited, she and her students had already worked with EMG data for several years and had built speech recognition systems based on it. For much more information about their work on Surface Electromyography and speech, including text and video demonstrations of the technology, please visit here.

    Acoustic Doppler Sonar

    The second publication came from working with Dr. Bhiksha Raj at Carnegie Mellon University in the summer of 2009. He and our coauthors had previously collected Acoustic Doppler Sonar (ADS) data and applied it to a number of speech-related tasks at Mitsubishi Electric Research Laboratories. I applied techniques similar to the ones I had used on the EMG data to the ADS data. For example sound files of speech synthesized from Acoustic Doppler Sonar, please visit here.

    Finally, although he did not work with us on these projects, I would like to acknowledge Dr. Tomoki Toda for making his voice transformation code available through the FestVox project and for demonstrating how such techniques can be used to transform Electro-Magnetic Articulograph and Non-Audible Murmur data to speech. I ported and modified his speech-to-speech voice transformation code to transform EMG to speech and ADS to speech.
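
    To give a feel for the kind of mapping involved, here is a minimal, hypothetical sketch (not the FestVox code itself). GMM-based voice transformation estimates target speech features from source features via a statistical mapping learned from time-aligned frame pairs; with a single Gaussian component, that mapping reduces to the conditional expectation E[y | x] under a joint Gaussian fit to the paired data. All data and dimensions below are made up for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical paired training frames: rows are time-aligned
    # source features (e.g. derived from EMG or ADS) and target
    # speech features. Real systems use spectral parameterizations.
    n, dx, dy = 500, 4, 4
    X = rng.normal(size=(n, dx))
    Y = X @ rng.normal(size=(dx, dy)) + 0.1 * rng.normal(size=(n, dy))

    # Fit a joint Gaussian to z = [x; y].
    Z = np.hstack([X, Y])
    mu = Z.mean(axis=0)
    cov = np.cov(Z, rowvar=False)
    mu_x, mu_y = mu[:dx], mu[dx:]
    S_xx, S_yx = cov[:dx, :dx], cov[dx:, :dx]

    def convert(x):
        """Map a source frame to an estimated target frame, E[y | x]."""
        return mu_y + S_yx @ np.linalg.solve(S_xx, x - mu_x)

    y_hat = convert(X[0])
    print(y_hat.shape)  # (4,)
    ```

    A full GMM-based converter applies this same component-wise conditional mapping, weighted by mixture posteriors, and typically adds refinements such as maximum-likelihood trajectory estimation with dynamic features.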