Kaleidoscopic Structured Light

Byeongjoo Ahn, Ioannis Gkioulekas, and Aswin C. Sankaranarayanan


Abstract

Full surround 3D imaging for shape acquisition is essential for generating digital replicas of real-world objects. Surrounding an object we seek to scan with a kaleidoscope, that is, a configuration of multiple planar mirrors, produces an image of the object that encodes information from a combinatorially large number of virtual viewpoints. This information is practically useful for the full surround 3D reconstruction of the object, but cannot be used directly, as we do not know the virtual viewpoint to which each image pixel corresponds (its pixel label). We introduce a structured light system that combines a projector and a camera with a kaleidoscope. We then prove that we can accurately determine the labels of projector and camera pixels, for arbitrary kaleidoscope configurations, using the projector-camera epipolar geometry. We use this result to show that our system can serve as a multi-view structured light system with hundreds of virtual projectors and cameras. This makes our system capable of scanning complex shapes precisely and with full coverage. We demonstrate the advantages of the kaleidoscopic structured light system by scanning objects that exhibit a large range of shapes and reflectances.

Overview


Main Challenge: Labeling

The key challenge when using a kaleidoscope lies in interpreting the captured image and decoding the numerous views of the object that it provides. Specifically, we need to identify the virtual viewpoint corresponding to each pixel of the captured image; we call this the labeling problem. The labeling information allows us to decompose the single captured image into multiple segments, one for each virtual viewpoint. Without this information, we cannot estimate the 3D shape by triangulating correspondences across different views. The labeling problem is particularly challenging because a kaleidoscope commonly produces hundreds of virtual views that are interwoven with one another in the captured image.
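To illustrate why the number of virtual viewpoints grows combinatorially, note that each sequence of mirror bounces creates a new virtual camera, obtained by reflecting the real camera's center across the corresponding mirror planes. The sketch below (not the paper's implementation; the mirror planes are toy values chosen for illustration) enumerates virtual camera centers up to a fixed bounce count:

```python
import numpy as np

def reflect_point(p, n, d):
    """Reflect point p across the plane {x : n.x + d = 0}, with n a unit normal."""
    return p - 2.0 * (np.dot(n, p) + d) * n

def virtual_viewpoints(center, mirrors, max_bounces):
    """Enumerate virtual camera centers created by up to max_bounces
    successive mirror reflections (skipping immediate re-reflections,
    since reflecting twice in a row across one plane undoes itself)."""
    views = [center]
    frontier = [(center, None)]  # (point, index of last mirror used)
    for _ in range(max_bounces):
        next_frontier = []
        for p, last in frontier:
            for i, (n, d) in enumerate(mirrors):
                if i == last:
                    continue
                q = reflect_point(p, n, d)
                views.append(q)
                next_frontier.append((q, i))
        frontier = next_frontier
    return np.array(views)

# Three toy mirror planes standing in for a kaleidoscope (unit normals).
mirrors = [
    (np.array([1.0, 0.0, 0.0]), 1.0),
    (np.array([0.0, 1.0, 0.0]), 1.0),
    (np.array([-np.sqrt(0.5), -np.sqrt(0.5), 0.0]), 1.0),
]
views = virtual_viewpoints(np.array([0.0, 0.0, 0.5]), mirrors, max_bounces=3)
print(len(views))  # → 22 viewpoints: 1 real + 3 + 6 + 12 virtual
```

With only three mirrors and three bounces, one real camera already yields 21 virtual viewpoints, and the count doubles with each additional bounce.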


Key Idea: Epipolar Geometry

Our main technical result is to show that we can correctly label the virtual projectors and virtual cameras for arbitrary kaleidoscope configurations, by using their epipolar geometry and other physical constraints arising from image formation for this setup. We prove a theoretical result that establishes the uniqueness and correctness of the labels we decode with our labeling technique.
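The basic mechanism can be sketched with standard two-view geometry (a minimal numpy sketch under toy intrinsics and poses, not the paper's implementation): each candidate virtual projector-camera pair induces a fundamental matrix, and a decoded camera-projector correspondence is assigned to the candidate pair whose epipolar constraint it best satisfies.

```python
import numpy as np

def skew(t):
    """Matrix form of the cross product: skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental(K_cam, K_proj, R, t):
    """Fundamental matrix for a camera/projector pair related by the
    pose x_proj = R @ x_cam + t (K_cam, K_proj are 3x3 intrinsics)."""
    E = skew(t) @ R  # essential matrix
    return np.linalg.inv(K_proj).T @ E @ np.linalg.inv(K_cam)

def epipolar_distance(F, x_cam, x_proj):
    """Pixel distance from x_proj to the epipolar line of x_cam."""
    l = F @ np.array([x_cam[0], x_cam[1], 1.0])
    return abs(l @ np.array([x_proj[0], x_proj[1], 1.0])) / np.hypot(l[0], l[1])

def label_pair(F_candidates, x_cam, x_proj):
    """Index of the candidate virtual pair that best satisfies the
    epipolar constraint for this correspondence."""
    return int(np.argmin([epipolar_distance(F, x_cam, x_proj)
                          for F in F_candidates]))

# Synthetic check: project a 3D point into two views, then recover
# which of two candidate poses generated the correspondence.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
th = np.deg2rad(10.0)
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t_true, t_wrong = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])

X = np.array([0.2, -0.1, 5.0])
x_cam = (K @ X)[:2] / X[2]            # pixel in the real camera
Xp = R @ X + t_true
x_proj = (K @ Xp)[:2] / Xp[2]         # pixel in the true virtual view

candidates = [fundamental(K, K, R, t_wrong), fundamental(K, K, R, t_true)]
print(label_pair(candidates, x_cam, x_proj))  # → 1 (the true pose)
```

In practice the epipolar test alone can be ambiguous for a single pixel, which is why the paper combines it with additional physical constraints from image formation and proves uniqueness of the resulting labels.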

[Figure: Epipolar labeling]


Results

[Figures: reconstruction results for five objects (elephant, skeleton, cat brush, treble clef, and skull). Each figure shows the photograph, camera label, projector label, virtual projectors and cameras, point cloud, point cloud after hidden point removal, surface, and normal.]



More Details

For an in-depth description of the technology behind this work, please refer to our paper.

Byeongjoo Ahn, Ioannis Gkioulekas, and Aswin C. Sankaranarayanan, "Kaleidoscopic Structured Light", ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 2021

SIGGRAPH Asia Talk


Code and Data

Code and data are available at the following GitHub repository.


Acknowledgements

We thank Jinmo Rhee and Vishwanath Saragadam for help with the prototype. This work was supported by the National Science Foundation (NSF) under awards 1652569, 1900849, and 2008464, as well as a Sloan Research Fellowship for Ioannis Gkioulekas.

Copyright © 2021 Byeongjoo Ahn