Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

DE-NeRF: DEcoupled Neural Radiance Fields for view-consistent appearance editing and high-frequency environmental relighting

Wu, Tong, Bai, Jia-Mu, Lai, Yu-Kun ORCID: https://orcid.org/0000-0002-2094-5680 and Gao, Lin 2023. DE-NeRF: DEcoupled Neural Radiance Fields for view-consistent appearance editing and high-frequency environmental relighting. Presented at: ACM SIGGRAPH, Los Angeles, CA, USA, 6-10 August 2023. Published in: Brunvand, E., Sheffer, A. and Wimmer, M. eds. ACM SIGGRAPH 2023 Conference Proceedings. New York, NY, USA: Association for Computing Machinery, p. 74.

PDF - Published Version (76MB), available under a Creative Commons Attribution license.

Abstract

Neural Radiance Fields (NeRF) have shown promising results in novel view synthesis. While achieving state-of-the-art rendering quality, NeRF typically encodes all geometry and appearance properties of a scene together in several MLP (Multi-Layer Perceptron) networks, which hinders downstream manipulation of geometry, appearance and illumination. Recently, researchers have attempted to edit geometry, appearance and lighting for NeRF, but these methods fail to render view-consistent results after the appearance of the input scene is edited. Moreover, high-frequency environmental relighting is beyond their capability, because lighting is modeled with Spherical Gaussian (SG) or Spherical Harmonic (SH) functions, or with a low-resolution environment map. To address these problems, we propose DE-NeRF, which decouples view-independent and view-dependent appearance in the scene with a hybrid lighting representation. Specifically, we first train a signed distance function to reconstruct an explicit mesh for the input scene. A decoupled NeRF then learns to attach view-independent appearance to the reconstructed mesh by defining learnable, disentangled features representing geometry and view-independent appearance on its vertices. We approximate lighting with an explicit learnable environment map and an implicit lighting network, supporting both low-frequency and high-frequency relighting. Because edits modify only the view-independent appearance, rendered results stay consistent across viewpoints. Our method also supports high-frequency environmental relighting: the explicit environment map is replaced with a novel one, and the implicit lighting network is fitted to the novel map. Experiments show that our method achieves better editing and relighting performance, both quantitatively and qualitatively, than previous methods.
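To make the two ideas in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: per-vertex features split into disentangled geometry and view-independent appearance channels, and a hybrid lighting model that pairs an explicit learnable environment map with an implicit lighting MLP, which is re-fitted whenever the map is swapped for relighting. All class names, feature dimensions, the equirectangular parameterization and the fitting loop are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledVertexFeatures(nn.Module):
    """Learnable features attached to mesh vertices, split into disentangled
    geometry and view-independent appearance channels. Editing only the
    appearance channels leaves geometry untouched, which is what makes the
    edit consistent across viewpoints."""
    def __init__(self, num_vertices, geo_dim=16, app_dim=16):
        super().__init__()
        self.geometry = nn.Parameter(torch.randn(num_vertices, geo_dim) * 0.01)
        self.appearance = nn.Parameter(torch.randn(num_vertices, app_dim) * 0.01)

    def forward(self, vertex_ids):
        # Returns (geometry features, view-independent appearance features).
        return self.geometry[vertex_ids], self.appearance[vertex_ids]

class HybridLighting(nn.Module):
    """Hybrid lighting: an explicit learnable environment map queried by
    direction (carries high-frequency content) plus a small implicit MLP
    fitted to the same map. Resolution and layer sizes are guesses."""
    def __init__(self, env_h=256, env_w=512, hidden=64):
        super().__init__()
        self.env_map = nn.Parameter(torch.ones(1, 3, env_h, env_w))
        self.implicit = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),
        )

    def explicit_radiance(self, dirs):
        # Map unit directions to equirectangular (u, v) in [-1, 1] and
        # bilinearly sample the map (azimuthal wrap-around is ignored here).
        u = torch.atan2(dirs[:, 1], dirs[:, 0]) / torch.pi
        v = torch.asin(dirs[:, 2].clamp(-1.0, 1.0)) / (torch.pi / 2)
        grid = torch.stack([u, v], dim=-1).view(1, -1, 1, 2)
        rgb = F.grid_sample(self.env_map, grid, align_corners=True)
        return rgb.view(3, -1).t()  # (N, 3) radiance per direction

    def implicit_radiance(self, dirs):
        return self.implicit(dirs)

def fit_implicit_to_env(lighting, steps=200, lr=1e-3):
    """Relighting step sketched from the abstract: after replacing the
    explicit environment map with a novel one, refit the implicit lighting
    network by regressing it onto samples of the explicit map."""
    opt = torch.optim.Adam(lighting.implicit.parameters(), lr=lr)
    for _ in range(steps):
        dirs = F.normalize(torch.randn(1024, 3), dim=-1)  # random directions
        target = lighting.explicit_radiance(dirs).detach()
        loss = F.mse_loss(lighting.implicit_radiance(dirs), target)
        opt.zero_grad()
        loss.backward()
        opt.step()

if __name__ == "__main__":
    feats = DecoupledVertexFeatures(num_vertices=10_000)
    light = HybridLighting()
    fit_implicit_to_env(light)  # rerun after swapping light.env_map to relight

The design point the sketch tries to capture is the split of responsibilities: the explicit map can be edited or replaced directly (enabling high-frequency relighting), while the implicit network is cheaply refitted to whatever map is currently in place.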

Item Type: Conference or Workshop Item (Paper)
Date Type: Publication
Status: Published
Schools: Computer Science & Informatics
Publisher: Association for Computing Machinery
ISBN: 979-8-4007-0159-7
Funders: The Royal Society
Date of First Compliant Deposit: 13 May 2023
Date of Acceptance: 19 April 2023
Last Modified: 17 Mar 2025 15:02
URI: https://orca.cardiff.ac.uk/id/eprint/159466
