Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

FRNeRF: Fusion and regularization fields for dynamic view synthesis

Jing, Xinyi, Yu, Tao, He, Renyuan, Lai, Yukun and Li, Kun 2024. FRNeRF: Fusion and regularization fields for dynamic view synthesis. Computational Visual Media.

PDF - Accepted Post-Print Version


Novel space-time view synthesis for monocular video is a highly challenging task: both static and dynamic objects usually appear in the video, but only a single view of the current scene is available, resulting in inaccurate synthesis results. To address this challenge, we propose FRNeRF, a novel space-time view synthesis method with a fusion regularization field. Specifically, we design a 2D-3D fusion regularization field for the original dynamic neural field, which helps reduce blurring of dynamic objects in the scene. In addition, we add image prior features to the hierarchical sampling to solve the problem that the traditional hierarchical sampling strategy cannot obtain sufficient sampling points during training. We evaluate our method extensively on multiple datasets and show the results of dynamic space-time view synthesis. Our method achieves state-of-the-art performance both qualitatively and quantitatively. Code is available for research purposes at
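For context, the "traditional hierarchical sampling strategy" the abstract refers to is NeRF's coarse-to-fine scheme: a coarse pass produces per-bin weights along each ray, and fine sample depths are then drawn by inverse-transform sampling of those weights. The sketch below illustrates only that standard baseline (the function name, array shapes, and NumPy implementation are illustrative assumptions, not the paper's code, which additionally incorporates image prior features):

```python
import numpy as np

def sample_pdf(bins, weights, n_fine, rng=None):
    """Draw n_fine depths along a ray by inverse-transform sampling
    of the coarse pass's per-bin weights (standard NeRF hierarchical
    sampling; illustrative sketch, not FRNeRF's augmented version).

    bins    : (n_bins + 1,) depth values bounding each coarse bin
    weights : (n_bins,) non-negative weights from the coarse pass
    """
    rng = rng or np.random.default_rng(0)
    weights = weights + 1e-5                      # avoid an all-zero pdf
    pdf = weights / weights.sum()                 # normalise to a pdf over bins
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])  # (n_bins + 1,) cdf at bin edges
    u = rng.uniform(size=n_fine)                  # uniform samples in [0, 1)
    idx = np.searchsorted(cdf, u, side="right") - 1
    idx = np.clip(idx, 0, len(bins) - 2)          # bin index for each sample
    # linear interpolation of depth within the selected bin
    denom = cdf[idx + 1] - cdf[idx]
    denom = np.where(denom < 1e-8, 1.0, denom)
    t = (u - cdf[idx]) / denom
    return bins[idx] + t * (bins[idx + 1] - bins[idx])
```

Because the fine samples concentrate where the coarse weights are large, few points land in empty space; the paper's stated issue is that this weight-only criterion can still yield too few useful samples during training, which its image-prior features are meant to address.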

Item Type: Article
Status: In Press
Schools: Computer Science & Informatics
Publisher: SpringerOpen
ISSN: 2096-0433
Date of First Compliant Deposit: 21 March 2024
Date of Acceptance: 29 December 2023
Last Modified: 29 Mar 2024 03:31
