¹ The University of York
² Friedrich-Alexander-Universität Erlangen-Nürnberg
Inverse rendering is an ill-posed problem. Previous work has sought to resolve this by focussing on priors for object or scene shape or appearance. In this work, we instead focus on a prior for natural illumination. Current methods rely on spherical harmonic lighting or other generic representations and, at best, a simplistic prior on the parameters. This limits the inverse setting in terms of the expressivity of the illumination conditions, especially when taking specular reflections into account. We propose a conditional neural field representation based on a variational auto-decoder and a transformer decoder. We extend Vector Neurons to build equivariance directly into our architecture and, leveraging insights from depth estimation via a scale-invariant loss function, enable the accurate representation of High Dynamic Range (HDR) images. The result is a compact, rotation-equivariant HDR neural illumination model capable of capturing complex, high-frequency features in natural environment maps. Training our model on a curated dataset of 1.6K HDR environment maps of natural scenes, we compare it against traditional representations, demonstrate its applicability to an inverse rendering task and show environment map completion from partial observations. We share our PyTorch implementation; code, trained models and the dataset are available at the links above.
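To make the two key ingredients concrete, the sketch below shows (i) a scale-invariant log-space loss of the kind introduced for depth estimation (Eigen et al.), which removes the global exposure/scale ambiguity of HDR values, and (ii) how rotation equivariance can act on a Vector Neurons-style latent made of 3D vectors, where rotating the latent rotates the decoded environment map. The function names, the (D, 3) latent shape and the exact loss form are illustrative assumptions, not the precise RENI++ implementation.

```python
import torch

def scale_invariant_log_loss(pred, target, lam=1.0, eps=1e-8):
    # Scale-invariant loss in log space (cf. Eigen et al., depth estimation).
    # A global scale on `pred` becomes an additive constant d in log space;
    # the -lam * mean(d)^2 term cancels it, so the loss ignores overall
    # exposure and only penalises relative (per-pixel) errors.
    d = torch.log(pred + eps) - torch.log(target + eps)
    return (d ** 2).mean() - lam * d.mean() ** 2

def rotate_latent(z, R):
    # Vector Neurons-style latent: z is a set of D three-dimensional vectors,
    # shape (D, 3). Applying a rotation R (3, 3) to every vector rotates the
    # decoded environment map by R -- this is what rotation equivariance of
    # the representation means. (Illustrative; the latent layout in the
    # released code may differ.)
    return z @ R.T

# Toy usage: an HDR prediction under an arbitrary exposure change incurs
# (near) zero loss, and a latent can be rotated without re-optimisation.
pred = torch.rand(8, 3) * 10.0 + 0.1
assert scale_invariant_log_loss(pred * 3.7, pred) < 1e-10
z = torch.randn(36, 3)  # hypothetical latent with D = 36 vectors
R = torch.matrix_exp(torch.tensor([[0., -1., 0.],
                                   [1.,  0., 0.],
                                   [0.,  0., 0.]]))  # 1-radian yaw rotation
z_rotated = rotate_latent(z, R)
```

With lam=1.0 the loss is fully invariant to a global rescaling of the prediction, which is what lets a single model represent environment maps whose absolute radiance spans several orders of magnitude.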
@misc{gardner2023reni,
  title={RENI++ A Rotation-Equivariant, Scale-Invariant, Natural Illumination Prior},
  author={James A. D. Gardner and Bernhard Egger and William A. P. Smith},
  year={2023},
  eprint={2311.09361},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}