Newsgroups: comp.speech
Path: lyra.csx.cam.ac.uk!warwick!slxsys!pipex!howland.reston.ans.net!usc!cs.utexas.edu!swrinde!gatech!newsxfer.itd.umich.edu!nntp.cs.ubc.ca!alberta!quartz.ucs.ualberta.ca!acs.ucalgary.ca!cpsc.ucalgary.ca!hill
From: hill@cpsc.ucalgary.ca (David Hill)
Subject: Re: Help with DecTalk...please!
Message-ID: <CoJKI2.746@cpsc.ucalgary.ca>
Sender: news@cpsc.ucalgary.ca (News Manager)
Organization: University of Calgary Computer Science
References: <18APR199414190266@uhcl2>
Date: Wed, 20 Apr 1994 04:58:49 GMT
Lines: 42

In article <18APR199414190266@uhcl2> CSCI2424@CL.UH.EDU writes:
>I am looking for some information about DEC Talk...  The version
>I have only came with the hardware and the installation manual.
>
>In particular, I would like documentation of the interface
>to the board.  Like, what are the C calls to manipulate it,
>how do I manipulate it phonetically, etc.
>
>One of the things i am trying to do is the make a face move in
>sync with the sounds being made by the voice board.  Can anyone
>tell me if this is possible?  Can anyone make any suggestions
>about how?  
>
>thanks.
>-tim

With colleagues here at U of Calgary, I have worked on computer animated
speaking faces for several years now.  It is possible, but to do it
properly, you really need access to the internals of any synthetic-speech-
by-rules system so that you can extend the parameters generated to include
extra parameters to control the face (lips/jaw) model.  You then have to
lay the speech down on a black video tape, count how many frames it occupies,
and resample the face parameters to produce appropriate frames.  Sounds
complicated, but it isn't so bad with the right equipment.  If you have enough
processing power, you could of course do the whole thing on one computer in
real time, which would solve a lot of problems, except that you would still
find maintaining synchronisation a problem unless the software and operating
system had appropriate hooks/facilities.  This is not easy.  You could probably animate
a wire frame set of lips in real time, or fake something with precomputed
frames of facial postures.
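
The resampling step above can be sketched in a few lines.  This is a
hypothetical illustration (not our actual code, which predates it): it
linearly resamples one articulatory control track (say, jaw opening) from
the synthesiser's parameter update rate down to the video frame rate, so
you get one value per video frame.  The function name and rates are my
own choices for the example.

```python
def resample_track(values, src_rate_hz, dst_rate_hz):
    """Linearly resample a parameter track (e.g. jaw opening) sampled at
    src_rate_hz so that it yields one value per video frame at dst_rate_hz.
    A sketch only -- real systems must also worry about phase alignment
    between the audio and the first video frame."""
    if len(values) < 2:
        return list(values)
    duration = (len(values) - 1) / src_rate_hz   # seconds covered by the track
    n_frames = int(duration * dst_rate_hz) + 1   # video frames in that span
    out = []
    for f in range(n_frames):
        t = f / dst_rate_hz                # time of this video frame
        x = t * src_rate_hz                # fractional index into the track
        i = min(int(x), len(values) - 2)   # clamp to the last interval
        frac = x - i
        out.append(values[i] * (1 - frac) + values[i + 1] * frac)
    return out

# Example: a jaw-opening track updated at 200 Hz, resampled to 25 fps video.
jaw = [0.0, 0.2, 0.5, 0.8, 1.0, 0.7, 0.3, 0.0, 0.0]   # 9 samples = 40 ms
frames = resample_track(jaw, 200.0, 25.0)
```

The same loop is run once per face parameter; counting the frames the
recorded speech occupies on tape tells you what n_frames must come out to.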

There's a good write up of our work in:
 HILL, D.R., PEARCE, A. & WYVILL, B.L.M. (1988)  Animating speech: an
 automated approach using speech synthesised by rules.  The Visual
 Computer 3 (5), 277-289, Mar (J)
and a different slant in the SIGGRAPH Tutorials on the State of the Art in
Facial Animation for 1988 and 1989

-- 
david hill: hill@cpsc.ucalgary.ca	|	Imagination is more
voice: 403-282-6481, fax: 403-282-6778	|	important than knowledge.
nextmail: hill@trillium.ab.ca		|		(Albert Einstein)
