
Depth of Field Ray Tracing

This was a mini-project I developed for my Advanced Graphics and Image Processing course, in which I built a ray tracer that takes RGBD images as input, instead of a full 3D scene, to render depth of field effects. The paper can be found here.

Summary

Rendering depth of field requires a compromise between accuracy and efficiency, and a variety of techniques and image-capturing devices have been developed to span this spectrum. Real-time applications such as video games and user interfaces favor efficiency and therefore use convolution techniques to quickly blur the out-of-focus areas. In contrast, ray tracing more closely simulates the physics of light, producing more accurate results at the cost of longer computation times.
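As a rough sketch of the convolution-based approach (not the paper's ray-traced method), a gather-style filter can scale a per-pixel blur radius with distance from the focal plane. The function name and parameters below are hypothetical:

```python
import numpy as np

def defocus_blur(image, depth, focal_depth, scale=4.0):
    """Approximate DoF with a per-pixel box filter whose radius grows with
    the defocus |depth - focal_depth|. Illustrative gather-style sketch;
    real-time engines use faster separable or scatter passes.
    image: H x W x 3 array, depth: H x W array.
    """
    h, w, _ = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            # Blur radius proportional to how far the pixel is from focus.
            r = int(scale * abs(depth[y, x] - focal_depth))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean(axis=(0, 1))
    return out
```

Pixels at the focal depth get a zero radius and pass through unchanged, which is why this family of techniques is cheap but cannot reproduce lens effects such as diffraction or chromatic aberration.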

Virtual reality (VR) poses a difficult problem: its real-time nature demands efficiency, but high accuracy is also required because we want to match the viewer's visual perception as closely as possible to minimize discomfort. In particular, the human eye's lens does not perfectly conform to the thin lens model typically used for rendering DoF. Accommodation, the mechanism the eye uses to adjust focus, relies on cues taken from the diffraction and chromatic aberration produced by the lens. We therefore want to simulate these effects when rendering DoF in VR applications. Previous methods of rendering DoF with aberrations have either used full 3D scenes or been restricted to flat 2D textures of constant depth. Using RGBD images thus fills the desired sweet spot in the accuracy-efficiency spectrum.
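For context, the thin lens model mentioned above can be sketched as follows: each camera ray is jittered across a circular aperture while still passing through a shared point on the focal plane, so geometry away from that plane receives defocus blur. This is a generic illustration with hypothetical names, not code from the paper:

```python
import numpy as np

def thin_lens_ray(pixel_dir, aperture_radius, focal_distance,
                  rng=np.random.default_rng()):
    """Sample one DoF camera ray under the thin lens model.

    pixel_dir: unit direction through the pixel from a pinhole at the origin,
               with the camera looking down +z.
    aperture_radius: lens radius; 0 reduces to a pinhole camera.
    focal_distance: distance to the plane of perfect focus.
    """
    # Point where the pinhole ray crosses the focal plane; all rays for this
    # pixel converge here, which keeps that depth sharp.
    focus_point = pixel_dir * (focal_distance / pixel_dir[2])
    # Uniformly sample a point on the circular lens aperture at z = 0.
    r = aperture_radius * np.sqrt(rng.uniform())
    theta = rng.uniform(0.0, 2.0 * np.pi)
    origin = np.array([r * np.cos(theta), r * np.sin(theta), 0.0])
    direction = focus_point - origin
    return origin, direction / np.linalg.norm(direction)
```

Averaging many such rays per pixel produces the depth of field; the model's limitation, as noted above, is that it ignores the diffraction and chromatic aberration of a real lens.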

In this paper we introduce a method for rendering depth of field as viewed by the human eye, using ray tracing for accuracy and RGBD images for efficiency. We incorporate the effects of diffraction and chromatic aberration in our implementation, the first to do so on RGBD data, providing a proof of concept to be optimized for real-time rendering. We thus hope to pave the way for visually accurate depth of field in VR applications once efficiency concerns are addressed.
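Ray tracing an RGBD image amounts to intersecting lens rays with the height field defined by the depth channel. Below is a minimal ray-marching sketch, assuming a simple orthographic mapping from ray position to pixel coordinates; the paper's actual projection, sampling, and aberration handling differ:

```python
import numpy as np

def intersect_rgbd(origin, direction, depth, step=0.05, t_max=20.0):
    """March a ray against an RGBD depth map treated as a height field
    (illustrative sketch, not the paper's exact algorithm).

    depth: H x W array of scene depth, viewed from the z = 0 plane with
           x, y in [-1, 1] mapped orthographically onto the image.
    Returns (row, col) of the first sample the ray passes behind,
    or None if the ray exits without hitting the surface.
    """
    h, w = depth.shape
    t = 0.0
    while t < t_max:
        p = origin + t * direction
        # Map x, y in [-1, 1] to pixel coordinates (assumed image frame).
        col = int((p[0] * 0.5 + 0.5) * (w - 1))
        row = int((p[1] * 0.5 + 0.5) * (h - 1))
        if 0 <= row < h and 0 <= col < w and p[2] >= depth[row, col]:
            return row, col  # ray has crossed behind the stored surface
        t += step
    return None
```

Each lens ray that hits returns a pixel whose RGB value contributes to the final color, so the full 3D scene is never needed, which is the efficiency argument made above.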

Paper