Wu, Zhenhua, Jiang, Linxuan, Li, Xiang, Fang, Chaowei, Qin, Yipeng
Abstract
Audio-driven talking head synthesis is a critical task in digital human modeling. While recent advances using diffusion models and Neural Radiance Fields (NeRF) have improved visual quality, they often require substantial computational resources, limiting practical deployment. We present a novel framework for audio-driven talking head synthesis, namely Hierarchically Controlled Deformable 3D Gaussians (HiCoDe), which achieves state-of-the-art performance with significantly reduced computational costs. Our key contribution is a hierarchical control strategy that effectively bridges the gap between sparse audio features and dense 3D Gaussian point clouds. Specifically, this strategy comprises two control levels: i) coarse-level control based on a 3D Morphable Model (3DMM) and ii) fine-level control using facial landmarks. Extensive experiments on the HDTF dataset and additional test sets demonstrate that our method outperforms existing approaches in visual quality, facial landmark accuracy, and audio-visual synchronization while being more computationally efficient in both training and inference.
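To make the hierarchical control idea concrete, the sketch below shows one plausible way a two-level controller could map per-frame audio features to coarse 3DMM expression coefficients and then to fine landmark offsets that deform a dense set of 3D Gaussian centres. All dimensions, network sizes, and the random soft skinning weights are illustrative assumptions, not the authors' actual HiCoDe implementation.

```python
import torch
import torch.nn as nn


class HierarchicalAudioController(nn.Module):
    """Minimal sketch of hierarchical audio-to-Gaussian control.

    Coarse level: audio feature -> 3DMM expression coefficients.
    Fine level: audio + coarse coefficients -> per-landmark 3D offsets.
    The sparse landmark offsets are blended onto the dense Gaussian
    point cloud via fixed (here random) soft assignment weights.
    """

    def __init__(self, audio_dim=256, n_exp=64, n_landmarks=68, n_gaussians=50000):
        super().__init__()
        # Coarse control head: predict 3DMM expression coefficients.
        self.coarse_head = nn.Sequential(
            nn.Linear(audio_dim, 256), nn.ReLU(), nn.Linear(256, n_exp))
        # Fine control head: predict 3D offsets for sparse facial landmarks,
        # conditioned on both the audio feature and the coarse coefficients.
        self.fine_head = nn.Sequential(
            nn.Linear(audio_dim + n_exp, 256), nn.ReLU(),
            nn.Linear(256, n_landmarks * 3))
        # Soft assignment of each Gaussian to landmarks (assumed fixed here),
        # used to propagate sparse landmark motion to the dense point cloud.
        weights = torch.rand(n_gaussians, n_landmarks)
        self.register_buffer("skin_weights", weights / weights.sum(-1, keepdim=True))

    def forward(self, audio_feat, gaussian_centers):
        exp_coeffs = self.coarse_head(audio_feat)                # (B, n_exp)
        fine_in = torch.cat([audio_feat, exp_coeffs], dim=-1)
        offsets = self.fine_head(fine_in)                        # (B, L*3)
        offsets = offsets.view(audio_feat.shape[0], -1, 3)       # (B, L, 3)
        # Dense deformation: blend landmark offsets onto every Gaussian centre.
        deform = torch.einsum("gl,bld->bgd", self.skin_weights, offsets)
        return exp_coeffs, gaussian_centers.unsqueeze(0) + deform


if __name__ == "__main__":
    ctrl = HierarchicalAudioController()
    audio = torch.randn(2, 256)          # batch of per-frame audio features
    centers = torch.randn(50000, 3)      # canonical Gaussian centres
    exp, deformed = ctrl(audio, centers)
    print(exp.shape, deformed.shape)     # torch.Size([2, 64]) torch.Size([2, 50000, 3])
```

The point of the sketch is the control hierarchy: a low-dimensional coarse signal (expression coefficients) constrains the global facial motion, while a sparse fine signal (landmark offsets) refines it before being propagated to the dense Gaussian representation.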
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Date Type: | Publication |
| Status: | Published |
| Schools: | Schools > Computer Science & Informatics |
| Publisher: | Association for the Advancement of Artificial Intelligence |
| ISBN: | 978-1-57735-897-8 |
| ISSN: | 2159-5399 |
| Date of First Compliant Deposit: | 17 January 2025 |
| Date of Acceptance: | 10 December 2024 |
| Last Modified: | 30 Apr 2025 14:57 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/174778 |