Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

NeRFFaceShop: learning a photo-realistic 3D-aware generative model of animatable and relightable heads from large-scale in-the-wild videos

Jiang, Kaiwen, Liu, Feng-Lin, Chen, Shu-Yu, Wan, Pengfei, Zhang, Yuan, Lai, Yu-Kun ORCID: https://orcid.org/0000-0002-2094-5680, Fu, Hongbo and Gao, Lin 2025. NeRFFaceShop: learning a photo-realistic 3D-aware generative model of animatable and relightable heads from large-scale in-the-wild videos. IEEE Transactions on Visualization and Computer Graphics. DOI: 10.1109/TVCG.2025.3560869

PDF - Accepted Post-Print Version
Available under License Creative Commons Attribution.
Download (3MB)

Abstract

Animatable and relightable 3D facial generation has fundamental applications in computer vision and graphics. Although animation and relighting are highly correlated, previous methods usually address them separately, and effectively combining animation and relighting methods is nontrivial. With explicit shading models, animatable methods cannot easily be extended to produce realistic relighting effects such as shadows, because the training cost is prohibitive. With implicit lighting representations, current animatable methods cannot be incorporated, because their animation representations, which deform spatial points, are incompatible. This paper, building on a lightweight yet effective lighting representation, presents a compatible animation representation to obtain a disentangled generative model of animatable and relightable 3D heads. Our animation representation allows realistic lighting effects to be updated and controlled during animation. Owing to the disentangled nature of our representations, we learn animation and relighting from large-scale, in-the-wild videos rather than relying on a morphable model. We show that our method synthesizes geometrically consistent, detailed motion together with disentangled control of lighting conditions. We further show that our method remains compatible with morphable models for driving generated avatars, and that it can be extended by domain transfer to domains without video data, enabling a broader range of animatable and relightable head synthesis. We will release the code to support reproducibility and future research.
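To make the disentanglement concrete, the following minimal sketch illustrates the general idea of conditioning a NeRF-style generator on separate identity, animation, and lighting latents so that each factor can be edited independently. This is not the authors' architecture: all module names, latent dimensions, and the 27-dimensional spherical-harmonics-style lighting code are illustrative assumptions.

    # Hypothetical sketch of disentangled conditioning (NOT the paper's model):
    # separate latents for identity, animation, and lighting are embedded
    # independently, then fused with each 3D sample point to predict a
    # radiance field (density + RGB).
    import torch
    import torch.nn as nn

    class DisentangledHeadGenerator(nn.Module):
        def __init__(self, id_dim=512, anim_dim=64, light_dim=27, hidden=256):
            super().__init__()
            # Separate embeddings keep identity, motion, and lighting apart.
            self.id_mlp = nn.Sequential(nn.Linear(id_dim, hidden), nn.ReLU())
            self.anim_mlp = nn.Sequential(nn.Linear(anim_dim, hidden), nn.ReLU())
            self.light_mlp = nn.Sequential(nn.Linear(light_dim, hidden), nn.ReLU())
            # Field head: a 3D point plus the fused condition -> (density, RGB).
            self.field = nn.Sequential(
                nn.Linear(3 + 3 * hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 4),
            )

        def forward(self, xyz, z_id, z_anim, z_light):
            # xyz: (N, 3) sample points; latents are broadcast to every point.
            n = xyz.shape[0]
            cond = torch.cat(
                [self.id_mlp(z_id), self.anim_mlp(z_anim), self.light_mlp(z_light)],
                dim=-1,
            ).expand(n, -1)
            out = self.field(torch.cat([xyz, cond], dim=-1))
            density, rgb = out[:, :1], torch.sigmoid(out[:, 1:])
            return density, rgb

    # Swapping z_light while fixing z_id and z_anim relights the same animated
    # head; swapping z_anim changes the motion under fixed lighting.
    gen = DisentangledHeadGenerator()
    pts = torch.rand(1024, 3)
    density, rgb = gen(pts, torch.randn(1, 512), torch.randn(1, 64), torch.randn(1, 27))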

Item Type: Article
Date Type: Published Online
Status: In Press
Schools: Computer Science & Informatics
Publisher: Institute of Electrical and Electronics Engineers
ISSN: 1077-2626
Date of First Compliant Deposit: 5 June 2025
Date of Acceptance: 28 March 2025
Last Modified: 12 Jun 2025 09:45
URI: https://orca.cardiff.ac.uk/id/eprint/178800
