Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

High-quality animatable dynamic garment reconstruction from monocular videos

Li, Xiongzheng, Zhang, Jinsong, Lai, Yu-Kun, Yang, Jingyu and Li, Kun 2024. High-quality animatable dynamic garment reconstruction from monocular videos. IEEE Transactions on Circuits and Systems for Video Technology 34 (6), pp. 4243-4256. 10.1109/TCSVT.2023.3329972

PDF - Accepted Post-Print Version (6MB)


Much progress has been made in reconstructing garments from an image or a video. However, no existing work meets the expectation of digitizing high-quality animatable dynamic garments that can be adjusted to various unseen poses. In this paper, we propose the first method to recover high-quality animatable dynamic garments from monocular videos without relying on scanned data. To generate reasonable deformations for various unseen poses, we propose a learnable garment deformation network that formulates the garment reconstruction task as a pose-driven deformation problem. To alleviate the ambiguity of estimating 3D garments from monocular videos, we design a multi-hypothesis deformation module that learns spatial representations of multiple plausible deformations. Experimental results on several public datasets demonstrate that our method reconstructs high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses. The code will be provided for research purposes.
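The abstract's core idea of pose-driven deformation with multiple hypotheses can be illustrated with a minimal sketch: a network maps a pose vector to K candidate per-vertex displacement fields and blends them with learned weights. This is only an assumed toy structure in NumPy, not the paper's actual architecture; all names (`deform`, `Wg`, the SMPL-style 72-dimensional pose) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

V, P, K, H = 500, 72, 4, 128  # vertices, pose dims (SMPL-style), hypotheses, hidden units

# Hypothetical two-layer network: pose -> K candidate displacement fields
W1 = rng.normal(0, 0.01, (P, H))
W2 = rng.normal(0, 0.01, (H, K * V * 3))
Wg = rng.normal(0, 0.01, (H, K))  # gating head: one blend weight per hypothesis

def deform(template, pose):
    """Return posed garment vertices: template + blended pose-driven offsets."""
    h = np.tanh(pose @ W1)                  # shared pose features
    hyps = (h @ W2).reshape(K, V, 3)        # K plausible deformation fields
    w = np.exp(h @ Wg)
    w /= w.sum()                            # softmax over hypotheses
    offset = np.tensordot(w, hyps, axes=1)  # weighted combination, shape (V, 3)
    return template + offset

template = rng.normal(0, 1, (V, 3))  # canonical garment mesh vertices
pose = rng.normal(0, 1, (P,))
posed = deform(template, pose)
print(posed.shape)  # (500, 3)
```

Blending several candidate deformations rather than regressing a single one is one way to represent the depth ambiguity of a monocular view: each hypothesis can capture a different plausible 3D interpretation.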

Item Type: Article
Date Type: Publication
Status: Published
Schools: Computer Science & Informatics
Publisher: Institute of Electrical and Electronics Engineers
ISSN: 1051-8215
Date of First Compliant Deposit: 10 November 2023
Date of Acceptance: 26 October 2023
Last Modified: 04 Jul 2024 18:30
