Non-line-of-sight (NLOS) imaging is the problem of reconstructing properties of scenes occluded from a sensor, using measurements of light that travels indirectly from the occluded scene to the sensor through intermediate diffuse reflections. We introduce an analysis-by-synthesis framework that can reconstruct the complex shape and reflectance of an NLOS object. Our framework departs from prior work on NLOS reconstruction by directly optimizing for a surface representation of the NLOS object, instead of the commonly employed volumetric representations. At the core of our framework is a new rendering formulation that efficiently computes derivatives of radiometric measurements with respect to NLOS geometry and reflectance, while accurately modeling the underlying light transport physics. By coupling this formulation with stochastic optimization and geometry processing techniques, we are able to reconstruct NLOS surfaces at a level of detail significantly exceeding what is possible with previous volumetric reconstruction methods.
Most existing NLOS imaging techniques perform 3D reconstruction based on an approximate scene model called volumetric albedo. This model uses a voxelization of the NLOS scene, with each voxel acting as an isotropic reflector with an associated albedo value. This representation makes it possible to approximate image formation using only linear algebraic operations, which in turn allows recovering the NLOS scene by solving a linear least-squares system. Unfortunately, this mathematical tractability comes at the cost of reduced physical accuracy, as the model ignores effects such as occlusions, normal-dependent shading, and non-Lambertian reflectance. This constrains the fidelity at which the NLOS scene can be reconstructed.
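For concreteness, a common simplified instance of this model (written here in our own notation, with cosine foreshortening and visibility terms omitted) expresses each transient measurement as a linear functional of the voxel albedos, which can then be inverted as a non-negative least-squares problem:

```latex
% Simplified volumetric albedo model (sketch; cosine and visibility terms omitted).
% tau(l, s, t): transient for illumination point l and sensing point s on the visible wall,
% rho_v: albedo of voxel v centered at x_v, c: speed of light.
\tau(l, s, t) \;\approx\; \sum_{v} \rho_v \,
  \frac{\delta\!\left(\lVert x_v - l \rVert + \lVert x_v - s \rVert - c\,t\right)}
       {\lVert x_v - l \rVert^{2}\,\lVert x_v - s \rVert^{2}}
\quad\Longleftrightarrow\quad
\boldsymbol{\tau} \approx A \boldsymbol{\rho},
\qquad
\hat{\boldsymbol{\rho}} \;=\; \operatorname*{arg\,min}_{\boldsymbol{\rho} \,\ge\, 0}\;
  \lVert A \boldsymbol{\rho} - \boldsymbol{\tau} \rVert_2^{2}.
```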
Our approach is instead to use a physically accurate representation of the NLOS scene, in terms of opaque continuous surfaces with associated reflectance. The rendering equation from computer graphics provides a framework for solving the forward problem of simulating physically accurate radiometric measurements of such a scene. We show that a differentiable formulation of the rendering equation can be used to solve the inverse problem of reconstructing the NLOS surfaces and their reflectance from radiometric transient measurements. This allows us to obtain surface reconstructions at a level of detail comparable to what volumetric albedo methods achieve using two orders of magnitude more measurements.
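As a sketch of this formulation (again in our own notation, for a single interreflection on the hidden surface S, with the BRDF and foreshortening terms folded into a single factor f and V denoting binary visibility), the forward model and the resulting analysis-by-synthesis objective take the form:

```latex
% Three-bounce transient image formation over an opaque surface S (sketch in our notation),
% with r_l = ||x - l||, r_s = ||x - s||, and V the binary visibility between its arguments:
I(l, s, t; S) \;=\; \int_{S}
  f(x, l, s)\, V(l, x)\, V(x, s)\,
  \frac{\delta\!\left(r_l + r_s - c\,t\right)}{r_l^{2}\, r_s^{2}}
  \,\mathrm{d}A(x),
\qquad
\min_{S,\,f}\; \sum_{l,\, s,\, t}
  \left( I_{\mathrm{meas}}(l, s, t) - I(l, s, t; S) \right)^{2}.
```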
Our key technical result is that derivatives of radiometric measurements with respect to the NLOS surface can be expressed as a surface integral, similar to the one derived for the forward image formation model from the rendering equation. Given this, we can estimate these derivatives using Monte Carlo algorithms, in exact analogy to how images are rendered. We can then optimize for the NLOS surface using efficient stochastic gradient descent.
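The toy sketch below illustrates this analysis-by-synthesis loop under strong simplifications: the hidden surface is a height field, the forward model is a single colocated transient with 1/r^4 falloff and Gaussian time pulses, and the gradient is approximated with stochastic finite differences over a random vertex subset, in place of the Monte Carlo estimates of the analytic surface-integral derivative described above. None of this corresponds to the actual implementation; it only conveys the structure of the optimization.

```python
# Toy analysis-by-synthesis loop for NLOS surface optimization (illustration only;
# all names and models below are hypothetical stand-ins, not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)
n, n_bins, bin_size, sigma = 16, 128, 0.05, 0.05
xs, ys = np.meshgrid(np.linspace(-1.0, 1.0, n), np.linspace(-1.0, 1.0, n))
bin_centers = (np.arange(n_bins) + 0.5) * bin_size

def render_transient(z):
    """Simulated transient: per-sample round-trip Gaussian pulse weighted by 1/r^4."""
    r = np.sqrt(xs**2 + ys**2 + z**2).ravel()            # distance to each surface sample
    pulses = np.exp(-(bin_centers[None, :] - 2.0 * r[:, None])**2 / (2 * sigma**2))
    return (pulses / r[:, None]**4).sum(axis=0)

def loss(z, target):
    return np.sum((render_transient(z) - target) ** 2)

# Ground-truth surface and its simulated measurement.
z_true = 1.0 + 0.2 * np.sin(2.0 * xs) * np.cos(2.0 * ys)
measurement = render_transient(z_true)

# Stochastic gradient descent from a flat initialization.
z, step, eps = np.full((n, n), 1.0), 1e-4, 1e-3
for it in range(300):
    base = loss(z, measurement)
    grad = np.zeros_like(z)
    for k in rng.choice(n * n, size=32, replace=False):  # random vertex subset per iteration
        i, j = divmod(k, n)
        z_pert = z.copy()
        z_pert[i, j] += eps
        grad[i, j] = (loss(z_pert, measurement) - base) / eps  # finite-difference probe
    z -= step * grad
    if it % 50 == 0:
        print(f"iter {it:3d}  loss {base:.4e}")
```

In the actual system, the surface is represented as a triangle mesh whose vertices are updated with stochastic gradients, and geometry processing operations are interleaved with the optimization to keep the surface well-behaved.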
We evaluate the performance of our reconstruction framework on several simulated and real-world datasets. Below, we show a few surface optimization results for different ground-truth shapes and initialization schemes.
For an in-depth description of the technology behind this work, please refer to our paper, supplementary material, and the accompanying video.
Chia-Yin Tsai, Aswin C. Sankaranarayanan, and Ioannis Gkioulekas. "Beyond Volumetric Albedo—A Surface Optimization Framework for Non-Line-of-Sight Imaging", CVPR 2019
Our implementation is available at the following GitHub repository. It uses Embree for efficient ray tracing, and libigl and El Topo for geometry processing operations. In addition to the inverse rendering framework, the repository includes a fast forward renderer for non-line-of-sight imaging (the three-bounce "looking around the corner" setting). Finally, the repository provides a pre-configured Amazon Web Services (AWS) AMI for easily deploying our implementation.
This work was supported by DARPA REVEAL (HR0011-16-C-0025, HR0011-16-C-0028) and NSF Expeditions (CCF-1730147) grants. CYT gratefully acknowledges support from the Bertucci Graduate Fellowship and the Google PhD Fellowship.