Abstract
We introduce an interferometric technique for passive time-of-flight imaging and depth sensing at micrometer axial resolutions. Our technique uses a full-field Michelson interferometer, modified to use sunlight as the only light source. The large spectral bandwidth of sunlight makes it possible to acquire micrometer-resolution time-resolved scene responses, through a simple axial scanning operation. Additionally, the angular bandwidth of sunlight makes it possible to capture time-of-flight measurements insensitive to indirect illumination effects, such as interreflections and subsurface scattering. We build an experimental prototype that we operate outdoors, under direct sunlight, and under adverse environmental conditions such as machine vibrations and vehicle traffic. We use this prototype to demonstrate, for the first time, passive imaging capabilities such as micrometer-scale depth sensing robust to indirect illumination, direct-only imaging, and imaging through diffusers.
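As background intuition (this is the textbook white-light interferometry model, not a derivation taken from the paper), the intensity recorded at a pixel of a Michelson interferometer with the reference mirror at position $l$ and a scene point at effective pathlength $l_s$ is approximately
$$ I(l) \;\approx\; I_r + I_s + 2\sqrt{I_r I_s}\,\left|\gamma\!\left(\tfrac{2(l - l_s)}{c}\right)\right| \cos\phi(l), $$
where $\gamma$ is the source's complex degree of temporal coherence and $\phi$ is the fringe phase. The width of the envelope $|\gamma|$ is set by the coherence length $l_c \approx \lambda^2/\Delta\lambda$, which shrinks as the spectral bandwidth $\Delta\lambda$ grows; for sunlight, with hundreds of nanometers of visible bandwidth around $\lambda \approx 550\,\mathrm{nm}$, $l_c$ is on the order of a micrometer, so fringes appear only within micrometers of the zero-pathlength position $l = l_s$. This is what lets a simple axial scan localize depth at micrometer resolution.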
Hardware implementation
Data pipeline
Our method takes as input a video of a scene under sunlight, captured during the axial scanning operation; the video records minute speckle changes as the scan progresses. After some simple per-pixel processing (basic filtering operations, sketched below), our method produces as output a direct-only transient video and a depth map of the scene.
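As an illustration, the sketch below shows one simple way such per-pixel processing could be implemented: remove the per-pixel mean, extract the fringe envelope along the scan axis, and read off depth as the scan position where the envelope peaks. The array shapes, function name, and the Hilbert-transform envelope are illustrative assumptions, not the released code; see the data and code release below for the actual pipeline.

# Minimal sketch (assumptions: `frames` is an (N, H, W) NumPy array with one
# frame per mirror position, and `z` holds the N mirror positions in micrometers).
import numpy as np
from scipy.signal import hilbert

def transient_and_depth(frames, z):
    """Recover an envelope ("transient") stack and a depth map from an axial scan."""
    z = np.asarray(z)
    # Remove the per-pixel DC component so only the interference fringes remain.
    ac = frames - frames.mean(axis=0, keepdims=True)
    # Fringe envelope along the scan axis, via the analytic signal.
    envelope = np.abs(hilbert(ac, axis=0))
    # Depth: the scan position where the envelope peaks at each pixel.
    depth = z[np.argmax(envelope, axis=0)]
    return envelope, depth

Calling envelope, depth = transient_and_depth(frames, z) would return an (N, H, W) envelope stack, which plays the role of the transient video, and an (H, W) depth map in the same units as z.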
Video
Passive direct-only transient imaging
We show transient videos of various outdoor scenes, captured passively using only sunlight.
Resources
Paper: Our paper and supplement are available on CVF open access, on arXiv, and locally (paper, supplement).
Poster: Our poster is available here.
Presentation: Our presentation slides are available here.
Data and code: The data to reproduce our experiments, alongside processing code, is available here. The paper supplement includes more details about the code.
Citation
@InProceedings{Kotwal:2023:Passive,
    author    = {Kotwal, Alankar and Levin, Anat and Gkioulekas, Ioannis},
    title     = {Passive Micron-Scale Time-of-Flight With Sunlight Interferometry},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {4139-4149}
}
Acknowledgments
We thank Sudershan Boovaraghavan and Yuvraj Agrawal, who provided the samples for some of our experiments. This work was supported by NSF awards 1730147, 2047341, 2008123 (NSF-BSF 2019758), and a Sloan Research Fellowship for Ioannis Gkioulekas.