This paper presents ProLiF, a Progressively-connected Light Field network for novel view synthesis of complex forward-facing scenes, which allows rendering a large batch of rays in one training step for image- or patch-level losses.
Directly learning a neural light field from images struggles to render novel views with multi-view consistency, because the representation is unaware of the underlying 3D geometry.
To address this problem, we propose a progressive training scheme and regularization losses that help the neural light field infer the underlying geometry during training, enforcing multi-view consistency and thus greatly improving rendering quality. Experiments demonstrate that our method achieves significantly better rendering quality than baseline neural light fields and results comparable to NeRF-like rendering methods on the challenging LLFF and Shiny Object datasets. Moreover, we demonstrate better compatibility with the LPIPS loss, achieving robustness to varying lighting conditions, and with the CLIP loss, enabling control over the rendering style of the scene.
Progressive training scheme. We first predict the densities and colors of point samples with separate subnetworks, and then progressively densify the connections between the subnetworks to merge them. By the last training stage, we obtain a single fully-connected MLP that predicts all densities and colors of the point samples.
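The exact merging rule is not spelled out here; below is a minimal PyTorch sketch of one way to realize progressive connection densification. A `ProgressiveLinear` layer (the class name, `n_sub`, and the `alpha` schedule are all illustrative assumptions, not the paper's code) blends a block-diagonal weight mask, where each block is one subnetwork, toward a dense weight matrix as `alpha` ramps from 0 to 1 over the training stages:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveLinear(nn.Module):
    """Hypothetical sketch: a linear layer split into `n_sub` subnetwork blocks.
    Cross-block weights are scaled by `alpha`: alpha = 0 gives independent
    subnetworks, alpha = 1 gives a single fully-connected layer."""

    def __init__(self, in_dim, out_dim, n_sub):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # Block-diagonal mask: 1 inside a subnetwork's block, 0 across blocks.
        mask = torch.zeros(out_dim, in_dim)
        bi, bo = in_dim // n_sub, out_dim // n_sub
        for k in range(n_sub):
            mask[k * bo:(k + 1) * bo, k * bi:(k + 1) * bi] = 1.0
        self.register_buffer("mask", mask)

    def forward(self, x, alpha):
        # Blend block-diagonal (separate subnetworks) and dense connections.
        w = self.linear.weight * (self.mask + alpha * (1.0 - self.mask))
        return F.linear(x, w, self.linear.bias)
```

A training loop would stack such layers and increase `alpha` from 0 to 1 stage by stage, so the independently trained subnetworks are gradually merged into one fully-connected MLP.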
ProLiF is able to robustly fit scenes under varying lighting conditions using the LPIPS loss.
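Because ProLiF renders whole patches per step, a perceptual loss can be applied directly. A minimal sketch with the public `lpips` package (the patch shapes and value range follow that package's conventions; this is not the paper's training code):

```python
import lpips

# Perceptual distance network; expects (B, 3, H, W) tensors in [-1, 1].
lpips_fn = lpips.LPIPS(net="vgg")

def patch_loss(rendered, target):
    # LPIPS compares deep features rather than raw pixels, so it tolerates
    # per-image exposure and lighting shifts that would dominate an L2 loss.
    return lpips_fn(rendered, target).mean()
```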
Using the CLIP loss, ProLiF is able to control the style of the rendered scene guided by text prompts.
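A minimal sketch of such a text-guided style loss using OpenAI's CLIP package (the prompt is illustrative, and rendered patches are assumed to already be resized to 224x224 and normalized as CLIP expects; this is an assumption about the setup, not the paper's code):

```python
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
text = clip.tokenize(["a fern in the style of an oil painting"]).to(device)

def clip_style_loss(rendered_patch):
    # rendered_patch: (B, 3, 224, 224), CLIP-normalized (assumed upstream).
    img_feat = model.encode_image(rendered_patch)
    txt_feat = model.encode_text(text)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    # Minimizing this maximizes cosine similarity between render and prompt.
    return 1.0 - (img_feat * txt_feat).sum(dim=-1).mean()
```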
Other excellent works that also model neural light fields:
- Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering
- Learning Neural Light Fields with Ray-Space Embedding Networks
- NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field
@article{wang2022progressively,
title={Progressively-connected Light Field Network for Efficient View Synthesis},
author={Wang, Peng and Liu, Yuan and Lin, Guying and Gu, Jiatao and Liu, Lingjie and Komura, Taku and Wang, Wenping},
journal={arXiv preprint arXiv:2207.04465},
year={2022}
}