Depth Fields

Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

Technical Paper, Code/Data

Figure 1: Digital refocusing of TOF depth maps using depth field information


Figure 2: Using depth fields, imaging past a partial occluder (the plant) allows the monkey's depth to be recovered.


Suren Jayasuriya, Adithya Pediredla, Sriram Sivaramakrishnan, Alyosha Molnar, Ashok Veeraraghavan


A variety of techniques, such as light fields, structured illumination, and time-of-flight (TOF), are commonly used for depth acquisition in consumer imaging, robotics, and many other applications. Unfortunately, each technique suffers from its own limitations that prevent robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages, such as synthetic aperture refocusing, with TOF imaging advantages, such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single-frequency TOF phase unwrapping. By utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function.
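To give a flavor of the digital refocusing shown in Figure 1, the following is a minimal sketch (not the paper's implementation) of classic shift-and-add synthetic aperture refocusing applied to a stack of per-view TOF depth images. The function name, array layout, and `slope` parameter are illustrative assumptions; real pipelines use sub-pixel interpolation rather than integer shifts.

```python
import numpy as np

def refocus_depth_field(views, positions, slope):
    """Shift-and-add synthetic aperture refocusing over a depth field.

    views:     (N, H, W) array of per-view TOF depth (or phase) images
    positions: (N, 2) array of (u, v) viewpoint offsets on the aperture plane
    slope:     refocusing parameter, pixels of shift per unit baseline,
               chosen so the desired depth plane aligns across views
    """
    N, H, W = views.shape
    out = np.zeros((H, W))
    for img, (u, v) in zip(views, positions):
        # Integer shifts via np.roll approximate the sub-aperture alignment
        dy, dx = int(round(slope * v)), int(round(slope * u))
        out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / N
```

Averaging the aligned views keeps content on the chosen focal plane sharp while blurring out-of-plane content, which is what lets the monkey's depth emerge behind the plant in Figure 2 when occluded pixels disagree across views.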


Oral presentation at International Conference on 3D Vision (3DV) 2015, Lyon, France


  1. Paper
  2. Presentation (coming soon)
  3. One-slide summary (coming soon)


To download, please go to the Github page.

Citation (BibTeX)

  @inproceedings{jayasuriya2015depthfields,
    author    = {Jayasuriya, Suren and Pediredla, Adithya and Sivaramakrishnan, Sriram and Molnar, Alyosha and Veeraraghavan, Ashok},
    title     = {Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging},
    booktitle = {International Conference on 3D Vision (3DV)},
    year      = {2015},
  }


Suren Jayasuriya, sj498 “at” cornell “dot” edu


The authors gratefully acknowledge Achuta Kadambi for insights into modeling depth field capture and the nanophotography setup, and Ryuichi Tadano for coded TOF experiments and improving phase unwrapping algorithms. This work was partially supported by NSF CAREER-1150329 and NSF grant CCF-1527501, and S.J. was supported by an NSF Graduate Research Fellowship.