Physics-Based Rendering and Its Applications in Computational Photography and Imaging

CVPR 2023 tutorial

Physics-based rendering provides algorithms that efficiently and accurately simulate and invert light transport in settings involving complex physical phenomena and imaging systems. We highlight its applications in several areas of computational photography and imaging.

Presenters

Adithya Pediredla
Dartmouth College
Ioannis Gkioulekas
Carnegie Mellon University

Description

Physics-based rendering algorithms simulate photorealistic radiometric measurements captured by a variety of sensors, including conventional cameras, time-of-flight sensors, and lidar. They do so by computationally mimicking the flow of light through a mathematical representation of a virtual scene. This capability has made physics-based rendering a key ingredient in inferential pipelines for computational photography, computer vision, and computer graphics applications. For example, forward renderers can be used to simulate new camera systems or optimize the design of existing ones. They can also generate datasets for training tailored post-processing algorithms, or for optimizing such algorithms jointly with hardware in an end-to-end fashion. Differentiable renderers can be used to backpropagate through image losses involving complex light transport effects. This makes it possible to solve previously intractable analysis-by-synthesis problems, and to incorporate physics-based simulation modules into probabilistic inference, deep learning, and generative pipelines.

The goal of this tutorial is to introduce physics-based rendering, and to highlight relevant theory, algorithms, implementations, and current and future applications in computer vision and related areas. This material should equip computer vision researchers and practitioners with the background necessary to use state-of-the-art rendering tools in a variety of exciting applications in vision, graphics, computational photography, and computational imaging.
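
As a concrete illustration of the analysis-by-synthesis idea described above, the sketch below uses the open-source Mitsuba 3 renderer and its Dr.Jit autodiff backend; this is one possible toolchain, not necessarily the software featured in the tutorial. It renders a reference image of the bundled Cornell box scene, perturbs the albedo of one wall, and recovers it by gradient descent on a pixel-wise image loss. The scene-parameter key, learning rate, and sample counts are illustrative choices.

```python
import drjit as dr
import mitsuba as mi

# An autodiff-capable variant is required for differentiable rendering.
mi.set_variant('llvm_ad_rgb')

# Load the Cornell box scene that ships with Mitsuba 3.
scene = mi.load_dict(mi.cornell_box())

# Render a "measurement" with the true scene parameters.
image_ref = mi.render(scene, spp=128)

# Expose the scene's differentiable parameters and pick one to recover.
params = mi.traverse(scene)
key = 'red.reflectance.value'  # albedo of the red wall

# Perturb the parameter so the optimizer has something to do.
params[key] = mi.Color3f(0.01, 0.2, 0.9)
params.update()

# Gradient-based optimization of the perturbed parameter.
opt = mi.ad.Adam(lr=0.05)
opt[key] = params[key]
params.update(opt)

for it in range(50):
    # Differentiable rendering: passing `params` tracks gradients.
    image = mi.render(scene, params, spp=4)

    # Image-space loss between rendered and reference measurements.
    loss = dr.mean(dr.sqr(image - image_ref))

    # Backpropagate through the rendering process and take a step.
    dr.backward(loss)
    opt.step()

    # Keep the albedo in a physically valid range.
    opt[key] = dr.clamp(opt[key], 0.0, 1.0)
    params.update(opt)
```

The same pattern extends to any scene parameter exposed by mi.traverse, including geometry, materials, and emitters, which is what makes differentiable rendering useful for the inverse problems covered later in the tutorial.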

Agenda

Times are approximate. Slides for each tutorial section are available below.

Time (PT)        | Topic                                        | Presenter
1:30 - 1:35 pm   | Welcome and introduction                     |
1:35 - 1:45 pm   | Primer on physics-based rendering            | Ioannis Gkioulekas
1:45 - 2:15 pm   | Time-of-flight and non-line-of-sight imaging | Adithya Pediredla
2:15 - 2:40 pm   | Acousto-optics                               | Adithya Pediredla
2:40 - 3:00 pm   | Ultrafast lenses                             | Adithya Pediredla
3:00 - 3:30 pm   | Coffee break                                 |
3:30 - 3:50 pm   | Speckle and fluorescence imaging             | Ioannis Gkioulekas
3:50 - 4:10 pm   | Vision-based tactile sensors                 | Ioannis Gkioulekas
4:10 - 4:30 pm   | Differentiable rendering                     | Ioannis Gkioulekas
4:30 - 4:40 pm   | Inverse rendering problems                   | Ioannis Gkioulekas
4:40 - 4:50 pm   | Take-home messages                           | Adithya Pediredla
4:50 - 5:00 pm   | Wrap-up and Q & A                            |

Recording

Slides

The slides for this tutorial are available here.

References

The following is a list of references to papers and software that we highlight in each section of the tutorial. It is not an exhaustive, or even representative, bibliography of the corresponding research areas. For pointers to additional resources, we recommend perusing the bibliographies of these references.

Background on physics-based rendering

Time-of-flight rendering

Non-line-of-sight imaging

Acousto-optics

Ultrafast lenses

Speckle and fluorescence imaging

Vision-based tactile sensors

Differentiable rendering

Inverse rendering problems

Monte Carlo PDE simulation

Sponsors

This tutorial was supported by NSF awards 1730147, 1900849, 2008123, a Sloan Research Fellowship for Ioannis Gkioulekas, and a Burke Award for Adithya Pediredla. We are also grateful to the sponsors of the individual research projects presented in this tutorial (NSF, ONR, DARPA, Amazon Web Services).