Alhamazani, Fahd and Rosin, Paul L.
PDF - Accepted Post-Print Version (3MB)
Restricted to Repository staff only until 19 August 2026 due to copyright restrictions.
Abstract
3D reconstruction from 2D inputs, especially for non-rigid objects like humans, presents unique challenges due to the significant range of possible deformations. Traditional methods often struggle with non-rigid shapes, which require extensive training data to cover the entire deformation space. This study addresses these limitations by proposing a canonical pose reconstruction model that transforms single-view depth images of deformable shapes into a canonical form. This alignment facilitates shape reconstruction by enabling the application of rigid object reconstruction techniques, and supports recovering the input pose in voxel representation as part of the reconstruction task, utilising both the original and deformed depth images. Notably, our model achieves effective results using a small dataset of only 300 samples, containing variations in shape (obese, slim, and fit bodies), gender (female and male), and size (child and adult). Experimental results on animal and human datasets demonstrate that our model outperforms other state-of-the-art methods.
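To make the described pipeline concrete, the sketch below shows one plausible way such a model could be structured: a 2D encoder maps a single-view depth image to a latent code, and two 3D decoders predict voxel occupancy grids for the canonical pose and the input pose. This is a minimal, hypothetical illustration of the general idea, not the authors' implementation; all module names, layer sizes, and resolutions are assumptions.

```python
# Hypothetical sketch (not the authors' code): a single-view depth-image encoder
# with two voxel decoders, one for the canonical pose and one for the input pose.
# All layer sizes, names, and resolutions are illustrative assumptions.
import torch
import torch.nn as nn


class CanonicalPoseReconstructor(nn.Module):
    def __init__(self):
        super().__init__()
        # 2D encoder: single-channel 128x128 depth image -> 256-d latent code
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 128 -> 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256),
        )
        # Two 3D decoders sharing the latent code: canonical-pose and input-pose voxels
        self.canonical_head = self._make_voxel_decoder()
        self.posed_head = self._make_voxel_decoder()

    @staticmethod
    def _make_voxel_decoder() -> nn.Module:
        # Latent code -> 32^3 occupancy logits
        return nn.Sequential(
            nn.Linear(256, 128 * 4 * 4 * 4), nn.ReLU(),
            nn.Unflatten(1, (128, 4, 4, 4)),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 4 -> 8
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),               # 16 -> 32
        )

    def forward(self, depth: torch.Tensor):
        z = self.encoder(depth)                             # (B, 256)
        canonical = torch.sigmoid(self.canonical_head(z))   # occupancy, canonical pose
        posed = torch.sigmoid(self.posed_head(z))           # occupancy, input pose
        return canonical, posed


if __name__ == "__main__":
    model = CanonicalPoseReconstructor()
    depth = torch.rand(2, 1, 128, 128)   # a batch of single-view depth images
    canonical, posed = model(depth)
    print(canonical.shape, posed.shape)  # both (2, 1, 32, 32, 32)
```

Under this reading, the canonical branch would be supervised against the shape in its canonical form (so rigid-object reconstruction techniques apply), while the posed branch recovers the input pose as an auxiliary voxel output; the details of the losses and training data are as described in the paper.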
| Item Type: | Article |
| --- | --- |
| Date Type: | Publication |
| Status: | Published |
| Schools: | Schools > Computer Science & Informatics |
| Publisher: | Elsevier |
| ISSN: | 0097-8493 |
| Funders: | The Royal Society |
| Date of First Compliant Deposit: | 27 September 2025 |
| Date of Acceptance: | 4 August 2025 |
| Last Modified: | 29 Sep 2025 11:30 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/181363 |