Li, Xiongzheng, Zhang, Jinsong, Lai, Yu-Kun (ORCID: https://orcid.org/0000-0002-2094-5680), Yang, Jingyu and Li, Kun 2024. High-quality animatable dynamic garment reconstruction from monocular videos. IEEE Transactions on Circuits and Systems for Video Technology 34 (6), pp. 4243-4256. doi: 10.1109/TCSVT.2023.3329972
Abstract
Much progress has been made in reconstructing garments from an image or a video. However, no existing work meets the expectations of digitizing high-quality animatable dynamic garments that can be adjusted to various unseen poses. In this paper, we propose the first method to recover high-quality animatable dynamic garments from monocular videos without depending on scanned data. To generate reasonable deformations for various unseen poses, we propose a learnable garment deformation network that formulates the garment reconstruction task as a pose-driven deformation problem. To alleviate the ambiguity of estimating 3D garments from monocular videos, we design a multi-hypothesis deformation module that learns spatial representations of multiple plausible deformations. Experimental results on several public datasets demonstrate that our method can reconstruct high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses. The code will be provided for research purposes.
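Since only the abstract is available on this record (the authors' code is promised separately for research purposes), the following is a minimal, hypothetical PyTorch sketch of the two ideas the abstract names: pose-driven garment deformation and a multi-hypothesis module that predicts several plausible deformations and blends them. All class names, dimensions, and the blending scheme are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch (not the authors' code): a pose-conditioned network
# that predicts K candidate per-vertex garment displacements and blends
# them with learned weights, echoing the abstract's "multi-hypothesis
# deformation module" at a high level.
import torch
import torch.nn as nn


class MultiHypothesisDeformation(nn.Module):
    """Maps a pose code to a blended per-vertex deformation field
    (assumed design; dimensions are placeholders)."""

    def __init__(self, pose_dim: int, num_vertices: int, num_hypotheses: int = 4):
        super().__init__()
        self.num_vertices = num_vertices
        hidden = 256
        self.backbone = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One head per hypothesis: a 3D offset for every garment vertex.
        self.heads = nn.ModuleList(
            nn.Linear(hidden, num_vertices * 3) for _ in range(num_hypotheses)
        )
        # Scalar blending weights over the K hypotheses.
        self.weight_head = nn.Linear(hidden, num_hypotheses)

    def forward(self, pose: torch.Tensor) -> torch.Tensor:
        feat = self.backbone(pose)  # (B, hidden)
        # Stack candidate deformations: (B, K, V, 3)
        offsets = torch.stack(
            [h(feat).view(-1, self.num_vertices, 3) for h in self.heads], dim=1
        )
        w = torch.softmax(self.weight_head(feat), dim=-1)  # (B, K)
        # Weighted blend of the K candidates into one deformation field.
        return (w[:, :, None, None] * offsets).sum(dim=1)  # (B, V, 3)


# Usage: deform a rest-pose template garment under an unseen pose.
model = MultiHypothesisDeformation(pose_dim=72, num_vertices=5000)
pose = torch.randn(2, 72)           # e.g. SMPL-style pose parameters
template = torch.randn(2, 5000, 3)  # rest-pose garment vertices
deformed = template + model(pose)   # posed garment vertices
print(deformed.shape)               # torch.Size([2, 5000, 3])
```

Treating the output as an additive offset on a fixed template keeps the garment topology constant across poses, which is one plausible reading of the abstract's "pose-driven deformation problem" formulation.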
| Item Type | Article |
|---|---|
| Date Type | Publication |
| Status | Published |
| Schools | Computer Science & Informatics |
| Publisher | Institute of Electrical and Electronics Engineers |
| ISSN | 1051-8215 |
| Date of First Compliant Deposit | 10 November 2023 |
| Date of Acceptance | 26 October 2023 |
| Last Modified | 09 Nov 2024 15:15 |
| URI | https://orca.cardiff.ac.uk/id/eprint/163784 |