
Courses

Digital Ira and Beyond: Creating Photoreal Real-Time Digital Characters

Sunday, 10 August 9:00 AM - 12:15 PM | Vancouver Convention Centre, East Building, Ballroom A

This course summarizes the process of creating Digital Ira, presented at SIGGRAPH 2013 Real-Time Live! It covers the complete set of technologies, from high-resolution facial scanning, blendshape rigging, video-based performance capture, and animation compression to real-time skin and eye shading and hair rendering. The course also presents and explains late-breaking results and refinements, and points the way to future directions that may increase the quality and efficiency of this kind of digital-character pipeline.

For this project, an actor was scanned in 30 high-resolution expressions, from which eight were chosen for real-time performance rendering. Performance clips were captured using multi-view video. Expression UVs were interactively correlated with the neutral expression, then retopologized to an artist mesh. An animation solver built a performance graph of dense GPU optical flow between the video frames and the eight expressions; from this flow and 3D triangulation, it computed per-frame, spatially varying blendshape weights approximating the performance.
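
To make the idea of spatially varying blendshape weights concrete, the sketch below (illustrative Python with hypothetical array names, not the course's actual solver) reconstructs one frame of the performance by giving every vertex its own weight vector over the eight expression scans.

    import numpy as np

    def reconstruct_frame(neutral, expressions, weights):
        # neutral:     (V, 3) neutral-expression vertex positions
        # expressions: (E, V, 3) the E scanned expressions (here E = 8)
        # weights:     (V, E) per-vertex blendshape weights solved for this frame
        deltas = expressions - neutral[None, :, :]           # offset of each scan from neutral
        blended = np.einsum('ve,evc->vc', weights, deltas)   # per-vertex weighted sum of deltas
        return neutral + blended                             # (V, 3) approximated frame geometry

Because the weights vary per vertex rather than per mesh, different regions of the face can draw on different expression scans within the same frame.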

The performance was converted to standard bone animation on a 4k mesh using a bone-weight and transform solver. Surface stress values were used to blend albedo, specular, normal, and displacement maps from the high-resolution scans per vertex at run time. DX11 rendering includes subsurface scattering, translucency, eye refraction and caustics, physically based two-lobe specular reflection with microstructure, depth of field, antialiasing, and grain.
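
As an illustration of the stress-driven blending described above, the following sketch (a simplified CPU-side formulation with assumed names, not the production DX11 shader) derives a per-vertex stress value from edge-length change relative to the neutral pose and uses it to blend toward maps sampled from a compressed or stretched expression scan.

    import numpy as np

    def per_vertex_stress(neutral, deformed, edges):
        # neutral, deformed: (V, 3) vertex positions; edges: (K, 2) vertex index pairs
        rest = np.linalg.norm(neutral[edges[:, 0]] - neutral[edges[:, 1]], axis=1)
        cur = np.linalg.norm(deformed[edges[:, 0]] - deformed[edges[:, 1]], axis=1)
        strain = (cur - rest) / np.maximum(rest, 1e-8)          # < 0 compressed, > 0 stretched
        stress = np.zeros(len(neutral))
        counts = np.zeros(len(neutral))
        np.add.at(stress, edges.ravel(), np.repeat(strain, 2))  # accumulate onto both endpoints
        np.add.at(counts, edges.ravel(), 1.0)
        return stress / np.maximum(counts, 1.0)                 # average strain of incident edges

    def blend_scan_maps(neutral_map, compressed_map, stretched_map, stress):
        # Each map: (V, C) per-vertex values sampled from the corresponding scan.
        w = np.clip(np.abs(stress), 0.0, 1.0)[:, None]
        target = np.where(stress[:, None] < 0.0, compressed_map, stretched_map)
        return (1.0 - w) * neutral_map + w * target

In the course's pipeline this blending runs per vertex at run time across maps from the high-resolution scans; the two-target case here is only to keep the example short.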

The course explains each of these processes, why each design choice was made, and alternative components that could replace any of the steps. It also covers emerging technologies in performance capture and facial rendering. Attendees receive a solid understanding of the techniques used to create photoreal digital characters in video games and other applications, and the confidence to incorporate some of the techniques into their own pipelines.

Course Schedule

9:00 am
Introduction/Overview
von der Pahlen

9:10 am
Facial Scanning and Microgeometry Capture
Debevec

9:25 am
Facial Scan Correspondence With Vuvuzela (Live Demo)
Debevec

9:40 am
Performance Capture and Animation Solving
Fyffe

10:05 am
Vertex Animation Pipeline
Danvoye

10:25 am
Questions and Answers - First Half
All

10:30 am
Break

10:45 am
Pixel Animation Pipeline
Danvoye

10:55 am
Shading of Skin, Eyes, and Hair
Jimenez

11:30 am
Live Demo, Latest Results and Future Work
von der Pahlen

11:45 am
Future Directions in Real-Time Capture and Hair
Li

12:05 pm
Questions and Answers
All

Level

Intermediate

Prerequisites

Some experience with video-game pipelines, facial animation, and shading models.

Intended Audience

Digital character artists, game developers, texture painters, and researchers working on performance capture, facial modeling, and real-time shading.

Instructor(s)

Javier von der Pahlen
Activision, Inc.

Jorge Jimenez
Activision, Inc.

Etienne Danvoye
Activision, Inc.

Paul Debevec
USC Institute for Creative Technologies

Graham Fyffe
USC Institute for Creative Technologies

Hao Li
University of Southern California