Zhao, Hao, Zhang, Jinsong, Lai, Yukun ORCID: https://orcid.org/0000-0002-2094-5680, Zheng, Zerong, Xie, Yingdi, Liu, Yebin and Li, Kun 2022. High-fidelity human avatars from a single RGB camera. Presented at: 2022 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2022), New Orleans, LA, USA, 19-24 June 2022. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. doi: 10.1109/CVPR52688.2022.01544
PDF - Accepted Post-Print Version (5MB)
Abstract
In this paper, we propose a coarse-to-fine framework to reconstruct a personalized high-fidelity human avatar from a monocular video. To deal with the misalignment problem caused by the changed poses and shapes in different frames, we design a dynamic surface network to recover pose-dependent surface deformations, which help to decouple the shape and texture of the person. To cope with the complexity of textures and generate photo-realistic results, we propose a reference-based neural rendering network and exploit a bottom-up sharpening-guided fine-tuning strategy to obtain detailed textures. Our framework also enables photo-realistic novel view/pose synthesis and shape editing applications. Experimental results on both the public dataset and our collected dataset demonstrate that our method outperforms the state-of-the-art methods. The code and dataset will be available at http://cic.tju.edu.cn/faculty/likun/projects/HF-Avatar.
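The abstract outlines a three-stage coarse-to-fine pipeline: coarse shape estimation, pose-dependent surface deformation via a dynamic surface network, and reference-based neural rendering with sharpening-guided fine-tuning. The following is a minimal structural sketch of that flow; every class and function name here is illustrative only and is not taken from the authors' released code.

```python
# Illustrative sketch of the coarse-to-fine pipeline described in the abstract.
# All names (Avatar, fit_coarse_shape, ...) are hypothetical placeholders.
from dataclasses import dataclass, field


@dataclass
class Avatar:
    shape: list = field(default_factory=list)         # coarse body geometry
    deformations: list = field(default_factory=list)  # pose-dependent offsets
    texture: str = "none"


def fit_coarse_shape(frames):
    # Stage 1: estimate a coarse, pose-consistent body shape from all frames.
    return Avatar(shape=[0.0] * 10)


def recover_surface_deformations(avatar, frames):
    # Stage 2: the dynamic surface network would predict per-frame surface
    # deformations, decoupling shape from texture (stubbed here with zeros).
    avatar.deformations = [0.0] * len(frames)
    return avatar


def render_and_finetune(avatar, frames):
    # Stage 3: reference-based neural rendering plus bottom-up
    # sharpening-guided fine-tuning produces the detailed texture (stubbed).
    avatar.texture = "refined"
    return avatar


def reconstruct(frames):
    avatar = fit_coarse_shape(frames)
    avatar = recover_surface_deformations(avatar, frames)
    return render_and_finetune(avatar, frames)


avatar = reconstruct(frames=["frame_%03d" % i for i in range(4)])
print(avatar.texture)  # "refined"
```

The sketch only conveys the staged structure; each stub would be replaced by the corresponding learned network in the actual method.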
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Date Type: | Published Online |
| Status: | In Press |
| Schools: | Computer Science & Informatics |
| ISBN: | 9781665469470 |
| Date of First Compliant Deposit: | 2 April 2022 |
| Date of Acceptance: | 2 March 2022 |
| Last Modified: | 30 Nov 2022 13:24 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/149028 |