Technical Papers

Video Applications

Tuesday, 12 August, 2:00 PM - 3:30 PM | Vancouver Convention Centre, East Building, Exhibit Hall A
Session Chair: Floraine Berthouzoz, Adobe Systems, Inc.

VideoSnapping: Interactive Synchronization of Multiple Videos

This method enables video clips in a timeline to snap to one another in a content-aware manner when dragged by a user. It computes optimal nonlinear synchronizations for arbitrary numbers of videos, enabling new applications and interfaces.
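A "nonlinear synchronization" of two clips is a monotonic mapping between their frames rather than a single constant offset. The paper computes such alignments from content-based frame similarity; as a rough illustration of the core idea (not the authors' method or API), the following sketch runs classic dynamic time warping over a hypothetical per-frame cost matrix:

```python
# Illustrative sketch only: dynamic-time-warping alignment over a
# frame-to-frame cost matrix. A nonlinear synchronization is the
# lowest-cost monotonic path through this matrix, mapping frames of
# one clip to frames of the other.
import numpy as np

def dtw_path(cost):
    """cost: (n, m) array of dissimilarities between frames of two clips.
    Returns the optimal alignment as a list of (frame_a, frame_b) pairs."""
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)   # accumulated-cost table
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1])
    # Backtrack from the end to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Hypothetical example: clip_b plays the same "content" as clip_a but
# with an extra in-between frame, so the alignment must warp nonlinearly.
clip_a = np.array([0.0, 1.0, 2.0, 3.0])
clip_b = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
alignment = dtw_path(np.abs(clip_a[:, None] - clip_b[None, :]))
```

The actual system builds the cost matrix from visual feature matches and extends this to arbitrary numbers of clips; the sketch only shows the pairwise alignment step.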

Oliver Wang
Disney Research Zürich

Christopher Schroers
Disney Research Zürich

Henning Zimmer
Disney Research Zürich

Markus Gross
Disney Research Zürich, ETH Zürich

Alexander Sorkine-Hornung
Disney Research Zürich

First-Person Hyper-Lapse Videos

A method for converting first-person videos (for example, captured with a helmet camera during activities such as rock climbing or bicycling) into hyper-lapse videos (time-lapse videos with a smoothly moving camera).

Johannes Kopf
Microsoft Research

Michael Cohen
Microsoft Research

Richard Szeliski
Microsoft Research

The Visual Microphone: Passive Recovery of Sound from Video

When sound causes an object to vibrate, the movement of that object creates a visual signal. This paper shows that by measuring the small motions of an object vibrating with sound, audio signals can be recovered from high-speed video, turning visible surfaces into visual microphones.
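The paper recovers motion with phase-based analysis of high-speed video; purely as a toy illustration of the underlying idea, that sub-pixel vibrations modulate pixel values over time, the sketch below (all names hypothetical, not the authors' method) reads a "sound" back out of a synthetic vibrating patch:

```python
# Toy sketch only: recover a 1-D signal from per-frame brightness
# fluctuations. The actual paper uses phase variations in a complex
# steerable pyramid; here a simple mean-intensity trace stands in for
# that motion signal.
import numpy as np

def recover_signal(frames):
    """frames: (T, H, W) grayscale video; returns a normalized 1-D trace."""
    trace = frames.reshape(len(frames), -1).mean(axis=1)  # per-frame mean
    trace = trace - trace.mean()                          # remove DC offset
    peak = np.max(np.abs(trace))
    return trace / peak if peak > 0 else trace            # scale to [-1, 1]

# Synthetic demo: a 440 Hz vibration filmed at a 2200 fps high-speed rate.
fps, tone = 2200, 440.0
t = np.arange(fps) / fps                                  # one second of frames
vibration = 0.5 * np.sin(2 * np.pi * tone * t)
frames = 128.0 + vibration[:, None, None] * np.ones((fps, 8, 8))
audio = recover_signal(frames)
```

A Fourier transform of `audio` peaks at 440 Hz, mirroring how the recovered signal carries the frequency of the original sound; real footage additionally requires the spatially localized, sub-pixel motion analysis the paper develops.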

Abe Davis
MIT CSAIL

Michael Rubinstein
Microsoft Research, MIT CSAIL

Neal Wadhwa
MIT CSAIL

Gautham J. Mysore
Adobe Research

Frédo Durand
MIT CSAIL

William T. Freeman
MIT CSAIL

Intrinsic Video and Applications

A method to decompose a video into its intrinsic components of reflectance and shading, plus a number of example applications in video editing such as segmentation, material editing, recolorization, and color transfer.
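The intrinsic model factors each frame as reflectance times shading. The paper solves this with an optimization propagated over time; the fragment below is only a toy single-frame stand-in (a luminance/chromaticity split, with all names illustrative) that shows the multiplicative model the decomposition targets:

```python
# Toy sketch, not the paper's optimization: treat per-pixel luminance as
# shading and per-channel chromaticity as reflectance, so that
# image ≈ reflectance * shading holds by construction.
import numpy as np

def decompose(image, eps=1e-6):
    """image: float array (H, W, 3) with values in (0, 1]."""
    shading = image.mean(axis=-1, keepdims=True)   # (H, W, 1) grayscale shading
    reflectance = image / (shading + eps)          # per-channel chromaticity
    return reflectance, shading

rng = np.random.default_rng(0)
img = rng.uniform(0.1, 1.0, size=(4, 4, 3))        # hypothetical frame
reflectance, shading = decompose(img)
```

Applications such as recolorization then edit the reflectance layer and re-multiply by shading, which is why a faithful decomposition matters.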

Genzhi Ye
Tsinghua University

Elena Garces
Universidad de Zaragoza

Yebin Liu
Tsinghua University

Qionghai Dai
Tsinghua University

Diego Gutierrez
Universidad de Zaragoza

Automatic Editing of Footage from Multiple Social Cameras

This paper presents an approach that takes multiple videos captured by social cameras (cameras carried or worn by members of a group involved in an activity) and produces a coherent "cut" video of the activity by tracking the participants' focus of attention and applying cinematographic rules.

Ido Arev
The Interdisciplinary Center Herzliya, Disney Research Pittsburgh

Hyun Soo Park
Carnegie Mellon University

Yaser Sheikh
Carnegie Mellon University, Disney Research Pittsburgh

Jessica Hodgins
Carnegie Mellon University, Disney Research Pittsburgh

Ariel Shamir
The Interdisciplinary Center Herzliya, Disney Research Pittsburgh