Talks

Capture in Depth

Monday, 11 August, 9:00 AM - 10:30 AM | Vancouver Convention Centre, West Building, Rooms 109-110
Session Chair: Bill Polson, Pixar Animation Studios

Rapid Avatar Capture and Simulation Using Commodity Depth Sensors

This system captures a human figure with a Microsoft Kinect without assistance, creates a 3D model of the subject, and then automatically rigs, skins, and simulates it in a 3D environment in a matter of minutes.

Andrew Feng
USC Institute for Creative Technologies

Ari Shapiro
USC Institute for Creative Technologies

Ruizhe Wang
University of Southern California

Hao Li
University of Southern California

Mark Bolas
USC Institute for Creative Technologies

Gerard Medioni
University of Southern California

Evan Suma
USC Institute for Creative Technologies

Live Real-Time Animated Content: Leveraging Machine Learning and Game-Engine Technology

Machine learning and modern game engines are opening the field to real-time creation of animated content for both film and games. With a simple webcam and the Unity3D game engine, facial animation for a 3D character can be created while the film is being rendered.

Stefano Corazza
Mixamo, Inc.

Charles Piña
Mixamo, Inc.

Emiliano Gambaretto
Mixamo, Inc.

Alternative Strategies for Run-Time Facial Motion Capture

Facial motion capture is an effective, albeit costly, means of delivering performances for game characters. This talk explores a Kinect-based pipeline for delivering game-ready performances in Unity, enlisting the talents of actors and game developers.

Izmeth Siddeek
Vancouver Institute of Media Arts

Real-Time Motion Capture of the Human Tongue

A technique for using motion capture to animate a model of the human tongue in real time, intended for speech therapists treating people with speech disorders that affect tongue movement.

Eric Farrar
University of Texas at Dallas

Coleman Eubanks
University of Texas at Dallas

Arvind Balasubramanian
University of Texas at Dallas