Computational Cameras and Displays
Recent advances in both computational photography and displays have given rise to a new generation of computational devices. These computational cameras and displays provide a visual experience that goes beyond the capabilities of traditional systems by adding computational power to optics, lights, and sensors. They are also breaking new ground in the consumer market: light-field cameras that redefine our understanding of pictures (Lytro), displays that show 3D content without special eyewear (Nintendo 3DS), motion-sensing devices that use light coded in space or time to detect motion and position (Kinect, Leap Motion), and a movement toward ubiquitous computing with wearable cameras and displays (Google Glass).
This introduction to the state of the art in computational cameras and displays provides a broad overview of key concepts and work in the field. Unlike previous courses and papers that focus on either imaging or displays, this course combines both subjects to highlight the duality of the underlying principles and to show how such cameras and displays can be combined to infer properties of real, unknown, and complex scenes.
The course begins with the three key components of any computational camera or display (lights, sensors, and optics), then focuses on computational imaging, where the objective is to capture new forms of visual information by adding computation to cameras. A detailed overview of displays, focusing on auto-stereoscopic displays and light-field projectors, leads into a discussion of computational light transport, an area that combines computational cameras and lights to analyze the light transport of real-world scenes in a broad range of applications, from image-based relighting to capturing geometry and visualizing light transport phenomena.
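The image-based relighting mentioned above rests on a simple principle: light transport is linear, so an image under any combination of light sources is a weighted sum of "basis" images, each captured with a single source turned on. A minimal sketch of this idea (with random arrays standing in for captured photographs, and all sizes chosen purely for illustration):

```python
import numpy as np

# Hypothetical sizes: a 4-pixel image captured under 3 individual light sources.
num_pixels, num_lights = 4, 3

# Light transport matrix T: column j is the image observed with only light j on.
# Here random values stand in for real captured photographs.
rng = np.random.default_rng(0)
T = rng.random((num_pixels, num_lights))

# A novel lighting condition: the intensity of each light source.
l = np.array([0.5, 1.0, 0.25])

# Relit image: by linearity of light transport, c = T @ l.
c = T @ l

# Superposition check: the relit image equals the weighted sum of the
# per-light basis images.
expected = 0.5 * T[:, 0] + 1.0 * T[:, 1] + 0.25 * T[:, 2]
assert np.allclose(c, expected)
```

In practice the columns of the transport matrix come from photographs taken under individually activated lights (or coded illumination patterns that are later decoded), after which any new lighting condition can be synthesized without re-photographing the scene.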
Introduction and Overview
Computational Light Transport
Summary; Questions and Answers
University of Toronto
MIT Media Lab