Jing, Xinyi, Yu, Tao, He, Renyuan, Lai, Yukun (ORCID: https://orcid.org/0000-0002-2094-5680) and Li, Kun 2024. FRNeRF: Fusion and regularization fields for dynamic view synthesis. Computational Visual Media.
Abstract
Novel space-time view synthesis for monocular video is a highly challenging task: both static and dynamic objects usually appear in the video, but only a single view of the current scene is available, resulting in inaccurate synthesis results. To address this challenge, we propose FRNeRF, a novel space-time view synthesis method with a fusion regularization field. Specifically, we design a 2D-3D fusion regularization field for the original dynamic neural field, which helps reduce blurring of dynamic objects in the scene. In addition, we add image prior features to the hierarchical sampling to solve the problem that the traditional hierarchical sampling strategy cannot obtain sufficient sampling points during training. We evaluate our method extensively on multiple datasets and show the results of dynamic space-time view synthesis. Our method achieves state-of-the-art performance both qualitatively and quantitatively. Code is available for research purposes at https://cic.tju.edu.cn/faculty/likun/projects/FRNerf.
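For context on the sampling idea the abstract mentions, the sketch below shows standard NeRF-style hierarchical (coarse-to-fine) sampling via inverse-transform sampling, with the coarse pass's ray weights blended with a per-ray image-prior distribution before resampling. This is a minimal illustration of the general technique, not the FRNeRF implementation: `prior_weights`, the blend factor, and all shapes are assumptions for demonstration, since the abstract does not specify how the image prior features are fused.

```python
# A minimal sketch of NeRF-style hierarchical sampling where the coarse
# weights are mixed with a hypothetical image-prior term before fine
# resampling. Illustrative only; NOT the authors' implementation.
import torch

def sample_pdf(bin_edges, weights, n_samples):
    """Draw fine sample depths by inverse-transform sampling of the
    piecewise-constant PDF that `weights` defines over `bin_edges`.
    bin_edges: [..., M+1] depth interval edges; weights: [..., M]."""
    weights = weights + 1e-5                          # avoid empty bins
    pdf = weights / weights.sum(dim=-1, keepdim=True)
    cdf = torch.cumsum(pdf, dim=-1)
    cdf = torch.cat([torch.zeros_like(cdf[..., :1]), cdf], dim=-1)  # [..., M+1]

    u = torch.rand(*cdf.shape[:-1], n_samples)        # uniforms in [0, 1)
    idx = torch.searchsorted(cdf, u, right=True)      # bin index per sample
    below = (idx - 1).clamp(min=0)
    above = idx.clamp(max=cdf.shape[-1] - 1)

    # Linearly interpolate a depth within the selected bin.
    cdf_lo, cdf_hi = torch.gather(cdf, -1, below), torch.gather(cdf, -1, above)
    z_lo, z_hi = torch.gather(bin_edges, -1, below), torch.gather(bin_edges, -1, above)
    t = (u - cdf_lo) / (cdf_hi - cdf_lo).clamp(min=1e-5)
    return z_lo + t * (z_hi - z_lo)

# Usage: blend coarse weights with a (hypothetical) image-prior term so
# that fine samples also cover regions the prior marks as important.
coarse_weights = torch.rand(1024, 64)                 # from the coarse pass
prior_weights = torch.rand(1024, 64)                  # placeholder image prior
blended = 0.7 * coarse_weights + 0.3 * prior_weights  # assumed blend factor
bin_edges = torch.linspace(2.0, 6.0, 65).expand(1024, 65).contiguous()
fine_z = sample_pdf(bin_edges, blended, n_samples=128)  # [1024, 128] depths
```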
| Item Type: | Article |
|---|---|
| Status: | In Press |
| Schools: | Computer Science & Informatics |
| Publisher: | SpringerOpen |
| ISSN: | 2096-0433 |
| Date of First Compliant Deposit: | 21 March 2024 |
| Date of Acceptance: | 29 December 2023 |
| Last Modified: | 10 November 2024 22:30 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/167438 |