A theory of volumetric representations for opaque solids

Bailey Miller, Hanyu Chen, Alice Lai, Ioannis Gkioulekas

arXiv preprint

Teaser figure. Left: Our theory explains why exponential volumetric light transport can model scenes as disparate as scattering media (microparticle geometry) and opaque objects (solid geometry). Right: Our theory suggests 3D reconstruction pipelines that learn meaningful representations for opaque objects better than previous neural rendering techniques.

Abstract

We develop a theory for the representation of opaque solids as volumetric models. Starting from a stochastic representation of opaque solids as random indicator functions, we prove the conditions under which such solids can be modeled using exponential volumetric transport. We also derive expressions for the volumetric attenuation coefficient as a functional of the probability distributions of the underlying indicator functions. We generalize our theory to account for isotropic and anisotropic scattering at different parts of the solid, and for representations of opaque solids as implicit surfaces. We derive our volumetric representation from first principles, which ensures that it satisfies physical constraints such as reciprocity and reversibility. We use our theory to explain, compare, and correct previous volumetric representations, as well as propose meaningful extensions that lead to improved performance in 3D reconstruction tasks.
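The exponential volumetric transport mentioned in the abstract boils down to a transmittance of the form T(t) = exp(-∫₀ᵗ σ(s) ds), where σ is the attenuation coefficient along a ray. As a minimal illustration (not the paper's implementation), the sketch below estimates this transmittance with trapezoidal quadrature; the helper name and the choice of a constant σ are hypothetical, chosen so the result can be checked against the closed form exp(-σ₀ t).

```python
import numpy as np

def transmittance(sigma, t, num_samples=256):
    """Estimate exponential transmittance T(t) = exp(-∫_0^t sigma(s) ds)
    along a ray, via trapezoidal quadrature.

    Hypothetical helper for illustration; `sigma` maps an array of ray
    distances to attenuation-coefficient values.
    """
    s = np.linspace(0.0, t, num_samples)
    vals = sigma(s)
    # Trapezoidal rule: sum of average of adjacent samples times step size.
    integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(s))
    return np.exp(-integral)

# Constant attenuation sigma0: T(t) = exp(-sigma0 * t) exactly,
# and the trapezoidal rule is exact for a constant integrand.
sigma0 = 2.0
T = transmittance(lambda s: np.full_like(s, sigma0), 1.5)
```

For a constant coefficient this reproduces the familiar Beer–Lambert falloff; the paper's contribution is deriving which attenuation coefficients σ are consistent with opaque solids modeled as random indicator functions.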

Visualization

A visualization of all our 3D reconstruction results is available at the interactive supplemental website.


Resources

Paper: Our paper and supplement are available here and also on arXiv.

Code: Our code is available on Github.

Data: The data to reproduce our experiments is available on Amazon S3: Blended MVS, NeRF Realistic Synthetic, DTU.

Citation

@article{Miller:VOS:2023,
	title={A theory of volumetric representations for opaque solids},
	journal={arXiv},
	author={Miller, Bailey and Chen, Hanyu and Lai, Alice and Gkioulekas, Ioannis},
	year={2023},
}

Acknowledgments

This work was supported by NSF awards 1900849 and 2008123, and a Sloan Research Fellowship for Ioannis Gkioulekas.