Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

3D snapshot: Invertible embedding of 3D neural representations in a single image

Lu, Yuqin, Deng, Bailin (ORCID: https://orcid.org/0000-0002-0158-7670), Zhong, Zhixuan, Zhang, Tianle, Quan, Yuhui, Cai, Hongmin and He, Shengfeng 2024. 3D snapshot: Invertible embedding of 3D neural representations in a single image. IEEE Transactions on Pattern Analysis and Machine Intelligence. DOI: 10.1109/TPAMI.2024.3411051

Files:
3DSnapshot.pdf - Accepted Post-Print Version (PDF, 9 MB)
supp.pdf - Supplemental Material (PDF, 14 MB)
supp_video.mp4 - Supplemental Material (Video, 39 MB)

Abstract

3D neural rendering enables photo-realistic reconstruction of a specific scene by encoding discontinuous inputs into a neural representation. Despite the remarkable rendering quality, the storage of network parameters is neither transmission-friendly nor extendable to metaverse applications. In this paper, we propose an invertible neural rendering approach that enables generating an interactive 3D model from a single image (i.e., a 3D Snapshot). Our idea is to distill a pre-trained neural rendering model (e.g., NeRF) into a visualizable image form that can then be easily inverted back to a neural network. To this end, we first present a neural image distillation method that optimizes three neural planes to represent the original neural rendering model. However, this representation is noisy and visually meaningless. We therefore propose a dynamic invertible neural network to embed this noisy representation into a plausible image representation of the scene. We demonstrate promising reconstruction quality, both quantitatively and qualitatively, by comparing against the original neural rendering model as well as video-based invertible methods. Furthermore, our method can store dozens of NeRFs with a compact restoration network (5 MB), and embedding each 3D scene takes only 160 KB of storage. More importantly, our approach is the first solution that allows embedding a neural rendering model into an image representation, which enables applications such as creating an interactive 3D model from a printed image in the metaverse.
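To make the two ingredients named in the abstract concrete, the sketch below (PyTorch) illustrates, under stated assumptions, (a) a tri-plane field of the kind that could be distilled from a pre-trained NeRF and (b) an exactly invertible additive coupling step whose forward pass hides the (noisy) plane features inside a plausible host image and whose inverse recovers them. The class names (TriPlaneField, AdditiveCoupling) and all hyper-parameters are hypothetical and are not taken from the authors' code; this is a minimal illustration of the general technique, not the paper's method.

```python
# Illustrative sketch only: hypothetical classes approximating the pipeline outlined
# in the abstract (tri-plane distillation target + invertible embedding step).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TriPlaneField(nn.Module):
    """Three learnable 2D feature planes (XY, XZ, YZ) standing in for a distilled NeRF."""

    def __init__(self, resolution: int = 256, channels: int = 3):
        super().__init__()
        self.planes = nn.Parameter(torch.randn(3, channels, resolution, resolution) * 0.01)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) points in [-1, 1]^3. Bilinearly sample each plane and sum features;
        # during distillation these features would be trained to match the teacher NeRF.
        uv_pairs = (xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]])
        feats = torch.zeros(xyz.shape[0], self.planes.shape[1], device=xyz.device)
        for plane, uv in zip(self.planes, uv_pairs):
            grid = uv.view(1, -1, 1, 2)                        # (1, N, 1, 2)
            sampled = F.grid_sample(plane.unsqueeze(0), grid,  # (1, C, N, 1)
                                    align_corners=True)
            feats = feats + sampled.squeeze(0).squeeze(-1).t()  # (N, C)
        return feats


class AdditiveCoupling(nn.Module):
    """Exactly invertible additive coupling over an (image, payload) pair of tensors."""

    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        self.f = nn.Sequential(nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(hidden, channels, 3, padding=1))
        self.g = nn.Sequential(nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(hidden, channels, 3, padding=1))

    def forward(self, image, payload):
        # Hide: fold the payload into the image branch, then update the payload branch.
        stego = image + self.f(payload)
        residual = payload + self.g(stego)
        return stego, residual

    def inverse(self, stego, residual):
        # Recover: exact inverse of forward(). In the paper's setting the residual branch
        # would not be transmitted; a compact restoration network approximates it instead.
        payload = residual - self.g(stego)
        image = stego - self.f(payload)
        return image, payload


if __name__ == "__main__":
    # Round-trip check of the coupling step on random tensors.
    coupling = AdditiveCoupling()
    host = torch.rand(1, 3, 64, 64)     # the "plausible image" of the scene
    planes = torch.rand(1, 3, 64, 64)   # noisy tri-plane features to be hidden
    stego, res = coupling(host, planes)
    host_rec, planes_rec = coupling.inverse(stego, res)
    print(torch.allclose(host, host_rec, atol=1e-5),
          torch.allclose(planes, planes_rec, atol=1e-5))
```

The round-trip check prints True/True because the additive coupling is invertible by construction; the interesting design question, which the paper addresses with its dynamic invertible network and restoration network, is how to keep that recovery accurate once the residual branch is discarded and only the single stego image survives.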

Item Type: Article
Date Type: Published Online
Status: In Press
Schools: Computer Science & Informatics
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Q Science > QA Mathematics > QA76 Computer software
Publisher: Institute of Electrical and Electronics Engineers
ISSN: 0162-8828
Funders: Guangdong Natural Science Funds for Distinguished Young Scholar, National Research Foundation Singapore
Date of First Compliant Deposit: 9 June 2024
Date of Acceptance: 29 May 2024
Last Modified: 17 Jul 2024 11:04
URI: https://orca.cardiff.ac.uk/id/eprint/169623
