Progressively-connected Light Field Network for Efficient View Synthesis

1The University of Hong Kong, 2Zhejiang University, 3Meta AI, 4Max Planck Institute for Informatics, 5Texas A&M University

We present a compact light field representation, ProLiF, for efficient photo-realistic novel view synthesis (left). ProLiF is compatible with image-level losses such as the LPIPS loss, which brings robustness to varying lighting conditions in the input images (middle), and the CLIP loss, which enables multi-view consistent scene style editing (right).


This paper presents a Progressively-connected Light Field network (ProLiF) for novel view synthesis of complex forward-facing scenes, which allows rendering a large batch of rays in one training step so that image- or patch-level losses can be applied.

Directly learning a neural light field from images struggles to render novel views with multi-view consistency, because the representation is unaware of the underlying 3D geometry.

To address this problem, we propose a progressive training scheme and regularization losses that help the neural light field infer the underlying geometry during training; this enforces multi-view consistency and thus greatly improves rendering quality. Experiments demonstrate that our method achieves significantly better rendering quality than baseline neural light fields, and results comparable to NeRF-like rendering methods on the challenging LLFF dataset and Shiny Object dataset. Moreover, we demonstrate better compatibility with the LPIPS loss, which brings robustness to varying lighting conditions, and with the CLIP loss, which enables controlling the rendering style of the scene.
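As background for the comparison with NeRF-like methods, a minimal NumPy sketch (not ProLiF's actual renderer) of the standard volume-rendering equation that composites per-sample densities and colors along a ray into a pixel color; the function and variable names are illustrative:

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """Alpha-composite samples along one ray.

    sigmas: (n,) densities, colors: (n, 3) RGB, deltas: (n,) segment lengths.
    Returns the rendered RGB color, C = sum_i T_i * alpha_i * c_i, where
    alpha_i = 1 - exp(-sigma_i * delta_i) and T_i is the transmittance
    accumulated over all samples in front of sample i.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                        # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # transmittance T_i
    weights = trans * alphas                                        # sums to <= 1
    return (weights[:, None] * colors).sum(axis=0)
```

A single fully-opaque sample returns its own color, and zero density everywhere renders black, which is a quick sanity check on the weights.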



Progressive training scheme. We first predict the densities and colors of point samples separately with different subnetworks, and then progressively densify the connections between subnetworks to merge them. At the last training stage, we obtain a single fully-connected MLP that predicts all densities and colors of the point samples.
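A minimal NumPy sketch (not the authors' code) of one way to realize this densification: each hidden layer's weight matrix is masked so that, early in training, it is block-diagonal and each subnetwork only sees its own features; a scalar schedule `alpha` (an assumed training-progress variable going from 0 to 1) gradually enables the cross-subnetwork connections until the layer is a single fully-connected one:

```python
import numpy as np

def connection_mask(width, n_sub, alpha):
    """Mask for a square hidden-layer weight matrix of shape (width, width).

    alpha = 0.0 -> block-diagonal mask: n_sub independent subnetworks.
    alpha = 1.0 -> all-ones mask: a single fully-connected layer.
    Off-diagonal blocks are scaled by alpha, so cross-subnetwork
    connections are introduced progressively.
    """
    sub = width // n_sub
    mask = np.full((width, width), alpha)
    for i in range(n_sub):
        s = slice(i * sub, (i + 1) * sub)
        mask[s, s] = 1.0  # within-subnetwork connections are always active
    return mask

def masked_forward(x, weights, n_sub, alpha):
    """Run x through masked linear layers with ReLU between them."""
    for w in weights[:-1]:
        x = np.maximum(x @ (w * connection_mask(w.shape[0], n_sub, alpha)), 0.0)
    w = weights[-1]
    return x @ (w * connection_mask(w.shape[0], n_sub, alpha))
```

With `alpha = 0`, the first subnetwork's outputs are completely independent of the other subnetworks' inputs; with `alpha = 1`, the masks are all ones and the network behaves as one ordinary fully-connected MLP.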



Scene fitting under varying light conditions

ProLiF is able to robustly fit scenes under varying light conditions using the LPIPS loss.

Text-guided style editing

Using a CLIP loss, ProLiF is able to control the scene style as guided by text prompts.

More results


@article{wang2022prolif,
  title={Progressively-connected Light Field Network for Efficient View Synthesis},
  author={Wang, Peng and Liu, Yuan and Lin, Guying and Gu, Jiatao and Liu, Lingjie and Komura, Taku and Wang, Wenping},
  journal={arXiv preprint arXiv:2207.04465},
  year={2022}
}