My final year project is about to get under way
with the supervision of Clive Souter, one of the
Natural Language Processing staff.
My aim is to create a system that takes English-language
input, either written or spoken, and produces an animated
face with appropriate facial gestures for the text.
This would include accurate lip-sync and, possibly,
eye reactions and emotional expression.
To achieve this, I need a way to map audio or textual
input to phonemes, or to some other index of pronunciation,
which would let my system select appropriate images from a
database of expressions.
Does such a mapping already exist?
If not, what would be the right way to go about creating one?
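For the text-to-phoneme half, one common starting point is a pronouncing dictionary (e.g. the CMU Pronouncing Dictionary, which lists ARPAbet phonemes per word), with each phoneme then bucketed into a viseme, i.e. a mouth-shape class that indexes into the image database. Here is a minimal sketch of that pipeline; the inline dictionary and the viseme groupings are illustrative stand-ins, not the real CMU data:

```python
# Sketch: word -> phonemes (CMU-style ARPAbet) -> viseme (mouth-shape) labels.
# The tiny inline dictionary and viseme groups below are illustrative only;
# a real system would load the full CMU Pronouncing Dictionary.

CMU_DICT = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}

# Hypothetical grouping of phonemes that share a similar mouth shape.
VISEME_GROUPS = {
    "closed_lips": {"P", "B", "M"},
    "rounded":     {"OW", "UW", "W"},
    "open":        {"AH", "AA", "AE"},
    "tongue":      {"L", "T", "D", "N"},
}

def phonemes(word):
    """Look up a word's phoneme sequence; None if out of vocabulary."""
    return CMU_DICT.get(word.lower())

def viseme(phoneme):
    """Return the viseme group for a phoneme, or 'neutral' if ungrouped."""
    for name, group in VISEME_GROUPS.items():
        if phoneme in group:
            return name
    return "neutral"

if __name__ == "__main__":
    for word in ["hello", "world"]:
        print(word, [viseme(p) for p in phonemes(word)])
```

Each viseme label would then pick a frame (or keyframe target) from the expression database, with timing taken from the audio alignment.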
Sincere thanks in advance to anyone who may provide assistance.
Dan Graf
--------------------------------------------------------
du da do. post-au-to
--------------------------------------------------------