Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

Diverse motion in-betweening from sparse keyframes with dual posture stitching

Ren, Tianxiang, Yu, Jubo, Guo, Shihui, Ma, Ying, Ouyang, Yutao, Zeng, Zijiao, Zhang, Yazhan and Qin, Yipeng (ORCID: https://orcid.org/0000-0002-1551-9126) 2025. Diverse motion in-betweening from sparse keyframes with dual posture stitching. IEEE Transactions on Visualization and Computer Graphics 31 (2), pp. 1402-1413. 10.1109/TVCG.2024.3363457

PDF (Accepted Post-Print Version) - Download (9MB)

Abstract

In-betweening is a technique for generating transitions given start and target character states. The majority of existing works require multiple (often ≥ 10) frames as input, which are not always available. In addition, they produce results that lack diversity, which may not fulfill artists’ requirements. Addressing these gaps, our work deals with a focused yet challenging problem: generating diverse and high-quality transitions given exactly two frames (only the start and target frames). To cope with this challenging scenario, we propose a bi-directional motion generation and stitching scheme which generates forward and backward transitions from the start and target frames with two adversarial autoregressive networks, respectively, and stitches them midway between the start and target frames. In contrast to stitching at the start or target frames, where the ground truth cannot be altered, there is no strict midway ground truth. Thus, our method can capitalize on this flexibility and generate high-quality and diverse transitions simultaneously. Specifically, we employ conditional variational autoencoders (CVAEs) to implement our autoregressive networks and propose a novel stitching loss to stitch the bi-directional generated motions around the midway point. Extensive experiments demonstrate that our method achieves higher motion quality and more diverse results than existing methods on the LaFAN1, Human3.6M and AMASS datasets.
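
As an illustration of the scheme described above, the following is a minimal, hypothetical sketch (not the authors' code) of bi-directional generation with midway stitching: one autoregressive generator rolls forward from the start frame, another rolls backward from the target frame, and a stitching loss penalises the pose and velocity mismatch at the seam where the two halves meet. The pose dimension, the simple MLP step model and the exact form of the loss are illustrative assumptions only.

import torch
import torch.nn as nn

POSE_DIM = 66      # assumed flattened pose size (e.g. 22 joints x 3)
LATENT_DIM = 32    # assumed per-step latent size

class StepCVAE(nn.Module):
    """One autoregressive step: predict the next pose from the current pose
    and a latent sample (decoder-only view of a conditional VAE)."""
    def __init__(self):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(POSE_DIM + LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, POSE_DIM),
        )

    def forward(self, pose, z):
        # Predict a residual so small motions are easy to represent.
        return pose + self.decoder(torch.cat([pose, z], dim=-1))

def rollout(model, first_pose, num_steps):
    """Autoregressively generate num_steps poses, sampling a fresh latent
    per step (this is where diversity enters)."""
    poses, pose = [], first_pose
    for _ in range(num_steps):
        z = torch.randn(pose.shape[0], LATENT_DIM, device=pose.device)
        pose = model(pose, z)
        poses.append(pose)
    return torch.stack(poses, dim=1)            # (batch, num_steps, POSE_DIM)

def stitching_loss(forward_half, backward_half):
    """Penalise the discontinuity at the seam where the forward half ends
    and the (time-reversed) backward half begins: match pose and velocity."""
    backward_half = backward_half.flip(dims=[1])        # chronological order
    pose_gap = forward_half[:, -1] - backward_half[:, 0]
    fwd_vel = forward_half[:, -1] - forward_half[:, -2]
    bwd_vel = backward_half[:, 1] - backward_half[:, 0]
    return (pose_gap ** 2).mean() + ((fwd_vel - bwd_vel) ** 2).mean()

if __name__ == "__main__":
    forward_net, backward_net = StepCVAE(), StepCVAE()
    start = torch.zeros(4, POSE_DIM)                 # start keyframe (batch of 4)
    target = torch.ones(4, POSE_DIM)                 # target keyframe
    T = 30                                           # transition length in frames

    fwd = rollout(forward_net, start, T // 2)        # start  -> midway
    bwd = rollout(backward_net, target, T - T // 2)  # target -> midway (reverse time)
    loss = stitching_loss(fwd, bwd)                  # to be minimised during training
    transition = torch.cat([fwd, bwd.flip(dims=[1])], dim=1)
    print(transition.shape, loss.item())

In practice such a stitching term would be combined with reconstruction, KL and adversarial objectives when training the two networks; the sketch only shows where the midway stitching constraint would sit.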

Item Type: Article
Date Type: Publication
Status: Published
Schools: Computer Science & Informatics
Publisher: Institute of Electrical and Electronics Engineers
ISSN: 1077-2626
Date of First Compliant Deposit: 19 February 2024
Date of Acceptance: 23 January 2024
Last Modified: 12 Feb 2025 14:30
URI: https://orca.cardiff.ac.uk/id/eprint/166339

