Hash encoded neural radiance field with view-dependent mapping
14 November 2023
Yuanzhen Zhou, Wen Cheng
Proceedings Volume 12934, Third International Conference on Computer Graphics, Image, and Virtualization (ICCGIV 2023); 1293406 (2023) https://doi.org/10.1117/12.3008019
Event: 2023 3rd International Conference on Computer Graphics, Image and Virtualization (ICCGIV 2023), 2023, Nanjing, China
Abstract
Neural Radiance Fields (NeRF) constructs a mapping from 3D position and 2D viewing direction to volume density and color by learning an implicit representation of the scene space with a multi-layer perceptron. It can output high-quality images in the task of novel view synthesis. Although NeRF performs well under the ideal conditions of static scenes with precise camera calibration, it struggles with freely shot images, and its low training efficiency further hinders its application to reconstructing real-world scenes. This paper proposes a NeRF model extended with hash position encoding and view-dependent mapping, which better handles image sets collected from the real world under complex lighting conditions while improving training speed and the recovery of scene details. Experiments show that it achieves better results than the classic NeRF and its variants.
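The hash position encoding mentioned in the abstract typically follows the multi-resolution hash-grid idea: a 3D point is mapped, at several grid resolutions, to trainable feature vectors stored in fixed-size hash tables, and the concatenated features feed the MLP. A minimal sketch of that lookup is shown below; the primes, table sizes, level count, and the nearest-vertex lookup (instead of full trilinear interpolation) are illustrative assumptions, not the paper's exact design.

```python
# Sketch of multi-resolution hash position encoding (Instant-NGP style).
# All sizes here are illustrative; real implementations also trilinearly
# interpolate the 8 surrounding grid vertices and train the tables by SGD.

PRIMES = (1, 2654435761, 805459861)  # primes commonly used for spatial hashing

def hash_index(vertex, table_size):
    """XOR-hash an integer 3D grid vertex into a feature-table slot."""
    h = 0
    for c, p in zip(vertex, PRIMES):
        h ^= c * p
    return h % table_size

def hash_encode(x, tables, base_res=16):
    """Encode a point x in [0,1]^3 as concatenated per-level features.

    tables[l] is the (trainable) feature table of level l; this sketch
    looks up only the nearest-lower grid vertex for brevity."""
    feats = []
    for lvl, table in enumerate(tables):
        res = base_res * 2 ** lvl                  # finer grid at each level
        vertex = tuple(int(xi * res) for xi in x)  # enclosing grid vertex
        feats.extend(table[hash_index(vertex, len(table))])
    return feats

# Tiny example: 4 levels, 2-dimensional features, 2**14 slots per level.
tables = [[(0.0, 0.0)] * 2**14 for _ in range(4)]
encoding = hash_encode((0.3, 0.6, 0.9), tables)
print(len(encoding))  # 8 features = 4 levels x 2 dims
```

Because the hash tables have fixed size regardless of scene resolution, the encoding keeps memory bounded while letting the MLP stay small, which is the source of the training-speed gains the abstract refers to.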
(2023) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Yuanzhen Zhou and Wen Cheng "Hash encoded neural radiance field with view-dependent mapping", Proc. SPIE 12934, Third International Conference on Computer Graphics, Image, and Virtualization (ICCGIV 2023), 1293406 (14 November 2023); https://doi.org/10.1117/12.3008019
KEYWORDS
3D modeling
Cameras
Light sources and illumination
Mathematical optimization
3D image reconstruction
Artificial intelligence