Wu, Tong, Sun, Jia-Mu and Lai, Yu-Kun
PDF - Accepted Post-Print Version (26MB)
Abstract
Neural Radiance Fields (NeRFs) have shown promising results in novel view synthesis. While achieving state-of-the-art rendering quality, NeRF typically encodes all properties related to the geometry and appearance of a scene together in several MLP (Multi-Layer Perceptron) networks, which hinders downstream manipulation of geometry, appearance and illumination. Researchers have recently attempted to edit geometry, appearance and lighting for NeRF, but these methods fail to render view-consistent results after the appearance of the input scene is edited. Moreover, many approaches model lighting with Spherical Gaussian (SG) or Spherical Harmonic (SH) functions or with low-resolution environment maps, and therefore struggle with high-frequency environmental relighting; approaches that do use high-resolution environment maps jointly optimize geometry, material and lighting, which introduces additional ambiguity. To address these problems, we propose VD-NeRF, a visibility-aware approach that decouples view-independent and view-dependent appearance in the scene using a hybrid lighting representation. Specifically, we first train a signed distance function to reconstruct an explicit mesh for the input scene. A decoupled NeRF then learns to attach view-independent appearance to the reconstructed mesh by defining learnable disentangled features, representing geometry and view-independent appearance, on its vertices. We approximate lighting with an explicit learnable environment map and an implicit lighting network to support both low-frequency and high-frequency relighting. Because edits modify the view-independent appearance, rendered results remain consistent across viewpoints. Our method also supports high-frequency environmental relighting: the explicit environment map is replaced with a novel one and the implicit lighting network is fitted to the novel map. We further take visibility into account when rendering and decomposing the input 3D scene, which improves the quality of the decomposition and relighting results and enables further downstream applications such as scene composition, where occlusions between scenes are common. Extensive experiments show that our method achieves better editing and relighting performance, both quantitatively and qualitatively, than previous methods.
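The hybrid lighting representation described in the abstract pairs an explicit learnable environment map with an implicit lighting network. The sketch below is a minimal PyTorch illustration of one plausible reading of that idea, not the authors' implementation: the class name `HybridLighting`, the map resolution, the MLP sizes and the nearest-neighbour equirectangular lookup are all assumptions made for the example.

```python
import torch
import torch.nn as nn


class HybridLighting(nn.Module):
    """Hypothetical sketch of a hybrid lighting representation: an explicit
    learnable environment map (low-frequency term) plus an implicit lighting
    network (high-frequency term). Names and sizes are illustrative only."""

    def __init__(self, env_h: int = 64, env_w: int = 128, hidden: int = 128):
        super().__init__()
        # Explicit, learnable equirectangular environment map (H x W x RGB).
        self.env_map = nn.Parameter(0.5 * torch.ones(env_h, env_w, 3))
        # Implicit lighting network: unit direction -> non-negative RGB radiance.
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),
        )

    def sample_env_map(self, dirs: torch.Tensor) -> torch.Tensor:
        # Map unit directions to equirectangular pixel coordinates
        # (nearest-neighbour lookup keeps the sketch short).
        theta = torch.acos(dirs[..., 2].clamp(-1.0, 1.0))  # polar angle
        phi = torch.atan2(dirs[..., 1], dirs[..., 0])      # azimuth
        h, w = self.env_map.shape[:2]
        u = ((phi / (2.0 * torch.pi) + 0.5) * (w - 1)).long()
        v = (theta / torch.pi * (h - 1)).long()
        return self.env_map[v, u]

    def forward(self, dirs: torch.Tensor) -> torch.Tensor:
        # Incident radiance = explicit low-frequency term + implicit residual.
        return self.sample_env_map(dirs) + self.mlp(dirs)


# Usage: query incident radiance for a batch of unit directions.
lighting = HybridLighting()
dirs = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
radiance = lighting(dirs)  # shape (1024, 3)
```

Under this reading, relighting with a novel high-frequency environment would amount to swapping the explicit `env_map` for the new map and refitting the implicit MLP to it, as the abstract describes.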
| Item Type: | Article |
|---|---|
| Date Type: | Published Online |
| Status: | In Press |
| Schools: | Computer Science & Informatics |
| Additional Information: | License information from publisher: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html (start date: 2025-01-01) |
| Publisher: | Institute of Electrical and Electronics Engineers |
| ISSN: | 0162-8828 |
| Date of First Compliant Deposit: | 3 February 2025 |
| Date of Acceptance: | 21 January 2025 |
| Last Modified: | 03 Feb 2025 10:30 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/175701 |