Cosker, Darren P., Marshall, Andrew David ORCID: https://orcid.org/0000-0003-2789-1395, Rosin, Paul L. ORCID: https://orcid.org/0000-0002-4965-3884 and Hicks, Yulia Alexandrovna ORCID: https://orcid.org/0000-0002-7179-4587 2003. Speaker-independent speech-driven facial animation using a hierarchical model. Presented at: International Conference on Visual Information Engineering 2003, University of Surrey, UK, 7-9 July 2003.
Abstract
We present a system capable of producing video-realistic videos of a speaker given audio only. The audio input signal requires no phonetic labelling and is speaker independent. The system requires only a small training set of video to achieve convincing realistic facial synthesis. The system learns the natural mouth and face dynamics of a speaker, allowing new facial poses, unseen in the training video, to be synthesised. To achieve this we have developed a novel approach which utilises a hierarchical and non-linear PCA model that couples speech and appearance. We show that the model is capable of synthesising videos of a speaker using new audio segments from both previously heard and unheard speakers. The model is highly compact, making it suitable for a wide range of real-time applications in multimedia and telecommunications using standard hardware.
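The abstract's core idea of a PCA model that couples speech and appearance can be illustrated with a simplified sketch. The paper's model is hierarchical and non-linear; the snippet below is a one-level, linear approximation only, and all shapes, names, and the number of components are hypothetical: it concatenates per-frame audio and appearance features, fits a joint PCA basis, and then estimates appearance from audio alone by projecting onto the audio block of that basis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data (hypothetical shapes): per-frame audio features and
# facial appearance parameters for the same frames.
n_frames, n_audio, n_app = 200, 12, 20
audio = rng.normal(size=(n_frames, n_audio))
appearance = rng.normal(size=(n_frames, n_app))

# Couple the two modalities by concatenating them per frame, then fit a
# single linear PCA via SVD (a simplification of the paper's
# hierarchical, non-linear model).
joint = np.hstack([audio, appearance])
mean = joint.mean(axis=0)
_, _, vt = np.linalg.svd(joint - mean, full_matrices=False)
k = 10                 # number of retained components (assumed)
basis = vt[:k]         # shape (k, n_audio + n_app)

def synthesise_appearance(new_audio):
    """Estimate appearance parameters from audio alone: solve for the
    joint-model coefficients that best explain the audio block, then
    read off the appearance block of the same components."""
    a_basis = basis[:, :n_audio]   # audio part of each component
    p_basis = basis[:, n_audio:]   # appearance part of each component
    coeffs, *_ = np.linalg.lstsq(a_basis.T, new_audio - mean[:n_audio],
                                 rcond=None)
    return mean[n_audio:] + coeffs @ p_basis

pred = synthesise_appearance(audio[0])
print(pred.shape)  # (20,)
```

Because audio and appearance share one set of component coefficients, recovering those coefficients from audio alone transfers information to the appearance block, which is the essence of the coupled-model approach.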
Item Type: Conference or Workshop Item (Paper)
Date Type: Publication
Status: Published
Schools: Computer Science & Informatics; Engineering
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science; Q Science > QA Mathematics > QA76 Computer software
Uncontrolled Keywords: facial animation; facial modelling; videos; faces; talking heads; computer animation; speech processing
Additional Information: Organised by the IEE, Visual Information Engineering Professional Network
Last Modified: 17 Oct 2022 09:41
URI: https://orca.cardiff.ac.uk/id/eprint/5165