Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

DyCrowd: Towards dynamic crowd reconstruction from a large-scene video

Wen, Hao, Kang, Hongbo, Ma, Jian, Huang, Jing, Yang, Yuanwang, Lin, Haozhe, Lai, Yu-Kun ORCID: https://orcid.org/0000-0002-2094-5680 and Li, Kun 2025. DyCrowd: Towards dynamic crowd reconstruction from a large-scene video. IEEE Transactions on Pattern Analysis and Machine Intelligence. DOI: 10.1109/tpami.2025.3600465

PDF - Accepted Post-Print Version (20 MB)

Abstract

3D reconstruction of dynamic crowds in large scenes has become increasingly important for applications such as city surveillance and crowd analysis. However, existing methods reconstruct 3D crowds from a single static image, so they lack temporal consistency and cannot mitigate the occlusions typical of such scenes. In this paper, we propose DyCrowd, the first framework for spatio-temporally consistent 3D reconstruction of the poses, positions and shapes of hundreds of individuals from a large-scene video. We design a coarse-to-fine group-guided motion optimization strategy for occlusion-robust crowd reconstruction in large scenes. To address temporal instability and severe occlusions, we further incorporate a Variational Autoencoder (VAE)-based human motion prior along with segment-level group-guided optimization. The core of our strategy leverages collective crowd behavior to handle long-term dynamic occlusions: by jointly optimizing the motion sequences of individuals with similar motion segments, combined with the proposed Asynchronous Motion Consistency (AMC) loss, high-quality unoccluded motion segments guide the motion recovery of occluded ones, ensuring robust and plausible motion recovery even in the presence of temporal desynchronization and rhythmic inconsistencies. Additionally, to fill the gap left by the absence of a well-annotated large-scene video dataset, we contribute a virtual benchmark dataset, VirtualCrowd, for evaluating dynamic crowd reconstruction from large-scene videos. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on the large-scene dynamic crowd reconstruction task. The code and dataset will be made available for research purposes.
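The abstract describes an Asynchronous Motion Consistency (AMC) loss that lets an unoccluded motion segment supervise an occluded one even when the two motions are temporally desynchronized. The paper's actual formulation is not given here, so the following is only a minimal illustrative sketch of the underlying idea: compare two pose-parameter sequences under a window of temporal shifts and take the best-aligned discrepancy, so that rhythmically similar but out-of-phase motions are still matched. The function name `amc_loss_sketch` and the shift-search formulation are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def amc_loss_sketch(ref_seq, occ_seq, max_shift=5):
    """Illustrative (hypothetical) asynchronous motion consistency loss.

    ref_seq, occ_seq : (T, D) arrays of per-frame pose parameters for two
    individuals with similar motion. The loss is the minimum mean-squared
    error over a window of temporal shifts, so desynchronized but similar
    motions are penalized only by their shape difference, not their phase.
    """
    T = min(len(ref_seq), len(occ_seq))
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        # Overlapping region after shifting occ_seq by s frames.
        r0, o0 = max(0, s), max(0, -s)
        n = T - abs(s)
        if n <= 0:
            continue
        diff = ref_seq[r0:r0 + n] - occ_seq[o0:o0 + n]
        best = min(best, float(np.mean(diff ** 2)))
    return best
```

Under this sketch, a sequence compared against a time-shifted copy of itself scores zero (the shift search absorbs the desynchronization), while a genuinely different motion scores positive at every shift.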

Item Type: Article
Date Type: Published Online
Status: Published
Schools: Schools > Computer Science & Informatics
Publisher: Institute of Electrical and Electronics Engineers
ISSN: 0162-8828
Date of First Compliant Deposit: 29 September 2025
Date of Acceptance: 4 August 2025
Last Modified: 29 Sep 2025 09:45
URI: https://orca.cardiff.ac.uk/id/eprint/180888
