During the last decade, we have been witnessing the continued development of new time-of-flight imaging devices, and their increased use in numerous and varied applications. However, physics-based rendering techniques that can accurately simulate these devices are still lacking: while existing algorithms are adequate for certain tasks, such as simulating transient cameras, they are very inefficient for simulating time-gated cameras because of the large number of wasted path samples. We take steps towards addressing these deficiencies, by introducing a procedure for efficiently sampling paths with a predetermined length, and incorporating it within rendering frameworks tailored towards simulating time-gated imaging. We use our open-source implementation of the above to empirically demonstrate improved rendering performance in a variety of applications, including simulating proximity sensors, imaging through occlusions, depth-selective cameras, transient imaging in dynamic scenes, and non-line-of-sight imaging.
We broadly classify ToF rendering tasks into two categories. The first category includes tasks such as simulating continuous-wave time-of-flight cameras, which accumulate all photons with a weight that depends on their time of travel; as well as transient cameras, which aggregate all photons but separate them into a sequence of images, each recording contributions only from photons with a specific time of travel. Existing steady-state rendering algorithms remain efficient for simulation tasks in this category, as the majority of paths they generate will have a non-zero contribution, regardless of their path length. Tasks in this category have generally been the main focus of previous research on ToF rendering; for instance, Jarabo et al.~\cite{jarabo2014framework} improve rendering performance by introducing path-sampling schemes and reconstruction techniques tailored to the transient imaging setting.
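To make this distinction concrete, the sketch below shows how a first-category sensor might weight or bin a path's contribution by its time of travel τ: every path contributes something, so steady-state samplers waste no work. This is a minimal C++ illustration under our own assumptions; the cosine correlation model, the names, and the bin layout are illustrative, not the interface of the actual implementation.

```cpp
#include <cmath>
#include <vector>

// Continuous-wave ToF: every path contributes, weighted by the correlation
// between the modulated illumination and the sensor reference signal.
// (Illustrative cosine model; omega is the modulation frequency, phi the
// programmable phase offset.)
double cwTofWeight(double tau, double omega, double phi) {
    return 0.5 * (1.0 + std::cos(omega * tau + phi));
}

// Transient camera: every path contributes, but to the frame whose time
// bin contains its time of travel tau.
struct TransientImage {
    double tauMin, binWidth;
    std::vector<double> bins;  // one entry per transient frame

    void record(double tau, double contribution) {
        int b = static_cast<int>((tau - tauMin) / binWidth);
        if (b >= 0 && b < static_cast<int>(bins.size()))
            bins[b] += contribution;  // the sample is re-binned, never wasted
    }
};
```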
The second type of ToF rendering tasks involves simulating images that accumulate contributions only from a small subset of photons, whose time of travel is within some narrow interval. These time-gated rendering tasks arise in a large number of practical situations; examples include time-gated sensors used as proximity detectors, gated laser ranging cameras, as well as situations where transient imaging is performed in dynamic scenes such as outdoor environments or tissue with blood flow. Unfortunately, existing rendering algorithms cannot be used for efficient time-gated rendering: the vast majority of the paths generated by these algorithms end up being rejected, for having length outside the narrow range accumulated by the simulated sensor.
Given sampled source and camera subpaths, standard BDPT forms complete paths by directly connecting every vertex in one subpath to every vertex in the other. Unfortunately, baseline BDPT becomes very inefficient for rendering time-gated images: the time gate acts as a rejection test on path length, applied only after each connection is formed, so for a narrow gate almost every direct connection is wasted.
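The following hypothetical C++ fragment makes the inefficiency concrete: every direct connection is formed at full cost, and only then tested against the gate [tauMin, tauMax]. The Vertex type and helper functions are illustrative sketches, not the paper's implementation.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vertex { double p[3]; };

double dist(const Vertex& a, const Vertex& b) {
    double d = 0.0;
    for (int i = 0; i < 3; ++i) {
        double e = a.p[i] - b.p[i];
        d += e * e;
    }
    return std::sqrt(d);
}

// Total length of a path, which equals its time of travel in
// unit-speed-of-light units.
double pathLength(const std::vector<Vertex>& path) {
    double len = 0.0;
    for (std::size_t i = 1; i < path.size(); ++i)
        len += dist(path[i - 1], path[i]);
    return len;
}

// Baseline BDPT with time gating: form every direct connection, then reject
// paths whose length falls outside the gate [tauMin, tauMax].
long countAcceptedConnections(const std::vector<Vertex>& source,
                              const std::vector<Vertex>& camera,
                              double tauMin, double tauMax) {
    long accepted = 0;
    for (std::size_t s = 1; s <= source.size(); ++s) {
        for (std::size_t t = 1; t <= camera.size(); ++t) {
            // Prefix of the source subpath joined to the reversed prefix
            // of the camera subpath (camera[0] is the sensor vertex).
            std::vector<Vertex> path(source.begin(), source.begin() + s);
            path.insert(path.end(), camera.rend() - t, camera.rend());
            double tau = pathLength(path);
            if (tau < tauMin || tau > tauMax)
                continue;  // rejected: all work on this path is wasted
            ++accepted;    // in practice: accumulate the BDPT contribution
        }
    }
    return accepted;
}
```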
We introduce a path sampling algorithm that helps ameliorate the inefficiency of baseline BDPT for time-gated rendering tasks. We first select a path length τ, either by importance sampling or by stratified sampling of the narrow path-length importance function. Complete paths are then formed by connecting every pair of vertices in the source and camera subpaths through an additional vertex, rather than directly as in standard BDPT, with the new vertex selected so that the total path length equals τ. As illustrated in the sketch below, the set of valid locations for this vertex is an ellipsoid whose foci are the two vertices being connected.
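For a fixed pair of subpath endpoints a and b and a residual length r (τ minus the lengths already spent on the two subpath prefixes), any valid connecting vertex v satisfies |a - v| + |v - b| = r, so it lies on a prolate spheroid with foci a and b. The C++ sketch below (reusing the Vertex and dist helpers above) samples such a vertex in free space; it illustrates only the geometry. Our actual sampler must place the vertex on scene surfaces, which amounts to sampling the intersection of this ellipsoid with the scene geometry.

```cpp
#include <cmath>
#include <random>

const double kPi = 3.14159265358979323846;

// Sample a connecting vertex v with |a - v| + |v - b| = r, i.e., a point on
// the prolate spheroid with foci a and b and major-axis length r. Returns
// false when no such spheroid exists (r too short to bridge the endpoints).
bool sampleEllipsoidVertex(const Vertex& a, const Vertex& b, double r,
                           std::mt19937& rng, Vertex& v) {
    double f = dist(a, b);
    if (r <= f) return false;                    // degenerate or empty spheroid
    double A = 0.5 * r;                          // semi-axis along the foci
    double B = std::sqrt(A * A - 0.25 * f * f);  // the two equal semi-axes

    // Angles uniform in the sphere parametrization (area-uniform sampling
    // on the spheroid would need an extra correction, omitted here).
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double cosTheta = 1.0 - 2.0 * u(rng);
    double sinTheta = std::sqrt(1.0 - cosTheta * cosTheta);
    double phi = 2.0 * kPi * u(rng);

    // Point in a local frame whose z axis joins the two foci.
    double local[3] = { B * sinTheta * std::cos(phi),
                        B * sinTheta * std::sin(phi),
                        A * cosTheta };

    // Orthonormal frame {t0, t1, w}, with w the unit focal axis.
    double w[3], t0[3], t1[3];
    for (int i = 0; i < 3; ++i) w[i] = (b.p[i] - a.p[i]) / f;
    double h[3] = { 1.0, 0.0, 0.0 };             // any vector not parallel to w
    if (std::fabs(w[0]) > 0.9) { h[0] = 0.0; h[1] = 1.0; }
    t0[0] = w[1] * h[2] - w[2] * h[1];           // t0 = normalize(cross(w, h))
    t0[1] = w[2] * h[0] - w[0] * h[2];
    t0[2] = w[0] * h[1] - w[1] * h[0];
    double n = std::sqrt(t0[0]*t0[0] + t0[1]*t0[1] + t0[2]*t0[2]);
    for (int i = 0; i < 3; ++i) t0[i] /= n;
    t1[0] = w[1] * t0[2] - w[2] * t0[1];         // t1 = cross(w, t0)
    t1[1] = w[2] * t0[0] - w[0] * t0[2];
    t1[2] = w[0] * t0[1] - w[1] * t0[0];

    // The spheroid is centered at the midpoint of the foci; one can verify
    // that |a - v| = A + 0.5 * f * cosTheta and |b - v| = A - 0.5 * f * cosTheta,
    // so the two distances always sum to r.
    for (int i = 0; i < 3; ++i)
        v.p[i] = 0.5 * (a.p[i] + b.p[i])
               + local[0] * t0[i] + local[1] * t1[i] + local[2] * w[i];
    return true;
}
```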
We evaluate the performance of BDPT with ellipsoidal connections and compare it against baseline BDPT (direct connections) on various time-gated rendering scenarios, keeping rendering time equal for both methods. A few example renderings are shown below.
For an in-depth description of the technology behind this work, please refer to our paper, supplementary material, ICCP poster, and SIGGRAPH talk video.
Adithya Pediredla, Ashok Veeraraghavan, and Ioannis Gkioulekas. "Ellipsoidal Path Connections for Time-gated Rendering." SIGGRAPH 2019.
Our implementation is available on the following GitHub repository, and you can also download our Docker file. It is based on the Mitsuba renderer and supports a large variety of time-of-flight cameras, including transient, time-gated, and continuous-wave amplitude-modulated cameras. We also provide an Amazon Web Services AMI for easily deploying the renderer. For any questions, please contact Adithya Pediredla.
This work was supported by DARPA Reveal (HR0011-16-C-0028), NSF Expeditions (CCF-1730574, CCF-1730147) and NSF CAREER (IIS-1652633) grants.