Real-Time Live!

Tuesday, 12 August 5:30 PM - 7:15 PM | Vancouver Convention Centre, West Building, Ballroom C/D

Destruction Sequences in Call of Duty: Ghosts

In an effort to raise the bar for destruction sequences in a real-time game engine, common shortcomings such as non-localized particles, "instant pop", low density, and generic fracturing were identified and addressed through several artistic techniques and technical advancements. Fracturing was improved with artist-authored fracture cutting. Density was addressed by introducing a new file format for (relatively) dense particle caches that are properly localized over time. Lighting and smoke were addressed through pre-rendered offline smoke simulations played back on a single card. Progressive damage over time, stress rigs, and other artistic techniques were introduced for a full, dense, beautiful result.
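The presentation itself is described only at a high level; as a rough illustration of what "a dense particle cache localized over time" can mean, the sketch below interpolates between baked frames of particle data at playback time. The cache layout, field names, and interpolation scheme are assumptions for the sketch, not Infinity Ward's actual file format.

```python
# Hypothetical sketch: playing back a baked particle cache by blending
# between stored frames. The layout (per-frame positions, radii, opacities)
# is an assumption, not the format used in Call of Duty: Ghosts.
import numpy as np

class ParticleCache:
    def __init__(self, frames, fps=30.0):
        # frames: list of dicts with "pos" (N,3), "radius" (N,), "alpha" (N,)
        self.frames = frames
        self.fps = fps

    def sample(self, t):
        """Return the interpolated particle state at time t (seconds)."""
        f = t * self.fps
        i = int(np.clip(np.floor(f), 0, len(self.frames) - 2))
        w = min(max(f - i, 0.0), 1.0)        # blend weight between frames i and i+1
        a, b = self.frames[i], self.frames[i + 1]
        return {
            "pos":    (1 - w) * a["pos"]    + w * b["pos"],
            "radius": (1 - w) * a["radius"] + w * b["radius"],
            "alpha":  (1 - w) * a["alpha"]  + w * b["alpha"],
        }

# Usage: two baked frames of four particles, sampled between them.
rng = np.random.default_rng(0)
frames = [{"pos": rng.random((4, 3)), "radius": np.full(4, 0.1), "alpha": np.ones(4)}
          for _ in range(2)]
state = ParticleCache(frames).sample(1.0 / 60.0)
print(state["pos"].shape)  # (4, 3)
```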

David Johnson
Infinity Ward

Alessandro Nardini
Infinity Ward

Fantasia: Music Evolved

This demonstration creatively merges art, audio, and interactivity to show how players can interact in three dimensions with virtual environments, how they can transform music in tandem with their physical motions, and how the developers married art and audio in non-photorealistic settings.

Mike Fitzgerald
Harmonix Music Systems, Inc.

David Battilana
Harmonix Music Systems, Inc.

Freeform: Digital Sculpting With Adaptive Surface Topology and Seamless 3D Coordinate Systems

Tools in the world of digital sculpting have traditionally fallen into two limiting categories:

• Complex, unintuitive tools that require many hours of training
• Primitive tools that succeed in accessibility but fail at providing enough power and flexibility for creating complex shapes

Using a combination of novel and emerging techniques, Freeform is a native 3D sculpting tool that represents a significant step forward in the delicate balance between usability and power. The Leap Motion SDK provides highly accurate markerless and gloveless 3D hand interaction for both sculpting and the user interface. Workflow improvements are achieved through an intuitive radial menu system that is seamlessly and unobtrusively integrated into the 3D scene. The underlying mesh engine is both flexible and highly performant, so it elegantly handles variable-detail resolution and arbitrary changes in surface topology.

Freeform supports several sculpting interactions (growing, smoothing, and painting, for example) inside an extensible framework for performing many types of surface deformations while preserving topological correctness. HDR image-based lighting enhances the visual richness of the background environments and the sculpted surface, which can be further customized through material selection (clay, glass, etc.) or painting. Camera control is implemented through a novel isopotential-based system, in which camera panning moves along equipotentials generated from the mesh geometry, and the camera zoom moves perpendicularly to these equipotentials. This control scheme intuitively allows smooth traversal over the surface at large distances, while seamlessly enabling finely controlled surface crawling at closer distances.
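As a rough illustration of the isopotential camera idea (the field definition, gains, and code below are assumptions, not Leap Motion's implementation): treat the mesh vertices as sources of a potential field, zoom along the field's gradient, and pan within the plane perpendicular to it.

```python
# Hypothetical sketch of isopotential-based camera control: mesh vertices
# generate a potential field; zooming moves along the field's gradient
# (across the isopotentials, toward the mesh), while panning is confined to
# the tangent plane, so the camera slides over the surface at a steady
# "distance". Illustration only, not Leap Motion's implementation.
import numpy as np

def potential_gradient(cam_pos, vertices, eps=1e-6):
    """Gradient of phi(x) = sum_i 1 / |x - v_i| at the camera position."""
    d = cam_pos - vertices                      # (N, 3) offsets from each vertex
    r = np.linalg.norm(d, axis=1) + eps         # (N,) distances
    return (-d / r[:, None] ** 3).sum(axis=0)   # grad of 1/r, summed over vertices

def move_camera(cam_pos, pan, zoom, vertices):
    """Apply a 2D pan and a scalar zoom while respecting the isopotentials."""
    g = potential_gradient(cam_pos, vertices)
    n = g / (np.linalg.norm(g) + 1e-12)         # unit direction toward the mesh
    # Build a tangent basis (t1, t2) spanning the plane perpendicular to n.
    t1 = np.cross(n, [0.0, 1.0, 0.0])
    if np.linalg.norm(t1) < 1e-6:               # n nearly parallel to the up axis
        t1 = np.cross(n, [1.0, 0.0, 0.0])
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    # Pan slides along the isopotential; zoom > 0 moves across it, toward the mesh.
    return cam_pos + pan[0] * t1 + pan[1] * t2 + zoom * n

# Usage: pan and zoom around a small point cloud.
verts = np.random.default_rng(1).random((100, 3))
cam = np.array([3.0, 3.0, 3.0])
cam = move_camera(cam, pan=np.array([0.1, 0.0]), zoom=0.05, vertices=verts)
print(cam)
```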

Raffi Bedikian
Leap Motion, Inc.

Adrian Gasinki
Leap Motion, Inc.

David Holz
Leap Motion, Inc.

Make Your Own Avatar

This near-automatic pipeline captures a human subject and, in just a few minutes, simulates the person in a virtual scene. The process can be fully managed by the capture subject, who operates a single Microsoft Kinect. No additional assistance is required. The speed and accessibility of this process fundamentally changes the economics of avatar capture and simulation in 3D. Because the avatar-capture cost is near zero, and the technology to perform this capture has been deployed in millions of households worldwide, this technology has the potential to significantly expand the use of realistic-looking avatars.

The short capture time allows frequent, even daily, avatar creation. The avatars are of sufficient resolution to be recognizable to those familiar with the human subject, and they are suitable for use at a medium distance (such as third-person perspectives) and in crowd scenes.

The pipeline consists of three stages: capture and 3D reconstruction, automatic rigging and skinning, and animation and simulation. The capture process requires the subject to remain steady for about 15 seconds at each of four angles, offset 90 degrees from one another. The automatic rigging and skinning uses voxel-based approaches that do not require watertight meshes and are thus suitable for capture methods that reconstruct from point data. The animation and simulation stage performs online retargeting of a large variety of behaviors, ranging from locomotion to reaching and gazing.
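The structural sketch below mirrors the three stages just described. The class and function names, data shapes, and stub bodies are illustrative assumptions, not the ICT implementation.

```python
# Structural sketch of the three-stage avatar pipeline: capture and 3D
# reconstruction, voxel-based rigging and skinning, then animation and
# simulation. Names and stubs are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Mesh:
    vertices: List[tuple] = field(default_factory=list)   # reconstructed surface points
    faces: List[tuple] = field(default_factory=list)

@dataclass
class RiggedAvatar:
    mesh: Mesh
    joints: List[str]            # skeleton joint names
    skin_weights: List[list]     # per-vertex joint weights

def capture_and_reconstruct(depth_frames) -> Mesh:
    """Stage 1: fuse ~15 s of Kinect depth frames from each of four 90-degree views."""
    return Mesh()                # reconstruction omitted in this sketch

def rig_and_skin(mesh: Mesh) -> RiggedAvatar:
    """Stage 2: voxel-based auto-rigging and skinning; no watertight mesh required."""
    joints = ["root", "spine", "head", "l_arm", "r_arm", "l_leg", "r_leg"]
    return RiggedAvatar(mesh=mesh, joints=joints, skin_weights=[])

def animate(avatar: RiggedAvatar, behavior: str) -> None:
    """Stage 3: online retargeting of behaviors such as locomotion, reaching, gazing."""
    print(f"retargeting '{behavior}' onto avatar with {len(avatar.joints)} joints")

# Usage: the whole pipeline, end to end.
avatar = rig_and_skin(capture_and_reconstruct(depth_frames=[]))
animate(avatar, "locomotion")
```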

Ari Shapiro
USC Institute for Creative Technologies

Andrew Feng
USC Institute for Creative Technologies

Ruizhe Wang
University of Southern California

Hao Li
University of Southern California

Gerard Medioni
University of Southern California

Mark Bolas
USC Institute for Creative Technologies

Evan Suma
USC Institute for Creative Technologies

NVIDIA FlameWorks: Real-Time Fire Simulation

FlameWorks is the first system to bring cinema-quality volumetric fire, smoke, and explosion effects to real-time graphics. The demo simulates and renders more than 32 million voxels per frame, at around 30 frames per second. The system includes an advanced combustion model and is highly customizable, supporting user-defined emitters, force fields, and collision objects.
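The demo description does not include implementation details; as a generic illustration of the kind of grid-based simulation involved, the 2D sketch below performs semi-Lagrangian advection, temperature-driven buoyancy, and a short Jacobi pressure solve. It is not the FlameWorks API or NVIDIA's combustion model.

```python
# Generic 2D Eulerian smoke/fire step: semi-Lagrangian advection, buoyancy
# from temperature, and a few Jacobi iterations of pressure projection.
# A sketch of the underlying idea only, not FlameWorks itself.
import numpy as np

N = 64          # grid resolution (N x N cells; row index increases downward)
DT = 0.1        # time step
BUOYANCY = 1.0  # strength of the upward force from hot cells

def advect(field, u, v):
    """Semi-Lagrangian advection: sample each cell where its material came from."""
    ys, xs = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    x_back = np.clip(xs - DT * u, 0, N - 1)
    y_back = np.clip(ys - DT * v, 0, N - 1)
    # Nearest-neighbour lookup keeps the sketch short; real solvers interpolate.
    return field[y_back.round().astype(int), x_back.round().astype(int)]

def project(u, v, iterations=20):
    """Remove divergence from the velocity field with a Jacobi pressure solve."""
    div = np.zeros((N, N))
    div[1:-1, 1:-1] = 0.5 * (u[1:-1, 2:] - u[1:-1, :-2] +
                             v[2:, 1:-1] - v[:-2, 1:-1])
    p = np.zeros((N, N))
    for _ in range(iterations):
        p[1:-1, 1:-1] = (p[1:-1, 2:] + p[1:-1, :-2] +
                         p[2:, 1:-1] + p[:-2, 1:-1] - div[1:-1, 1:-1]) * 0.25
    u[1:-1, 1:-1] -= 0.5 * (p[1:-1, 2:] - p[1:-1, :-2])
    v[1:-1, 1:-1] -= 0.5 * (p[2:, 1:-1] - p[:-2, 1:-1])
    return u, v

def step(u, v, temperature, density):
    v = v - DT * BUOYANCY * temperature          # hot gas rises (negative row velocity)
    u, v = advect(u, u, v), advect(v, u, v)      # self-advect the velocity field
    u, v = project(u, v)
    temperature = advect(temperature, u, v)
    density = advect(density, u, v)
    return u, v, temperature, density

# Usage: a small hot emitter near the bottom of the grid, stepped a few times.
u = np.zeros((N, N)); v = np.zeros((N, N))
temperature = np.zeros((N, N)); density = np.zeros((N, N))
temperature[-8:-4, N // 2 - 4:N // 2 + 4] = 1.0
density[-8:-4, N // 2 - 4:N // 2 + 4] = 1.0
for _ in range(10):
    u, v, temperature, density = step(u, v, temperature, density)
print("total smoke density:", round(float(density.sum()), 2))
```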

Simon Green
NVIDIA Corporation

Nuttapong Chentanez
NVIDIA Corporation

Aron Zoellner
NVIDIA Corporation

Johnny Costello
NVIDIA Corporation

Kevin Newkirk
NVIDIA Corporation

Dane Johnston
NVIDIA Corporation

Real Time is Now

The internet is a frontier. It incubates a wilderness of unexplored insanity, but it also contains a fundamentally untapped potential for interacting with graphics. Some communities are working to bring GPU graphics to the web, but even with these powerful advances, it is difficult to find pages that contain anything but hyperlinks, HTML, and CSS. Technology advances, but storytelling stays stagnant. A link is a link is a link.

There are many explorers who want to change this. Moving beyond blue underlined text, these pioneers want to create an internet where we can fly from page to page. Armed with fragment and vertex shaders, the Web Audio API, and AJAX calls, these adventurers are setting out not to make new web stores, but rather to create destinations for people to discover and explore.

With each new page built in WebGL, we as a community take one small step toward this new land, overflowing with infinite untold stories. With each new normal-mapped model or generative landscape, we make the internet more and more of an experience. Every new GPGPU calculation helps us reach the place where each time we open our browser, we get to experience the magic of real time, the magic of now.

Isaac Cohen
Cabbibo

Real-Time Animation of Cartoon Character Faces

This live demo allows a viewer to stand in front of a webcam and animate a 3D character in real time. The video stream is processed in real time by Mixamo's proprietary Face Plus technology: the user's facial expression is extracted, and the emotional facial information is transferred to the 3D character. The non-linear nature of the mapping is designed to emphasize hallmarks of cartoon animation such as exaggeration, squash, and stretch.

The technology leverages machine learning and computer vision, and it does not require calibration, as it uses an extensive training set to recognize facial shapes, appearances, and illumination levels.
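To make the non-linear mapping concrete, the sketch below remaps tracked blendshape weights through an ease-out curve so that mild expressions become exaggerated on the character. The curve, gain, and blendshape names are illustrative assumptions, not Mixamo's Face Plus mapping.

```python
# Hypothetical non-linear expression mapping: tracked blendshape weights are
# remapped so small inputs are boosted (exaggeration). Not the Face Plus curve.
def exaggerate(weight, strength=2.0):
    """Map a tracked weight in [0, 1] to a cartoon weight in [0, 1].

    strength > 1 boosts small inputs; strength = 1 is a pass-through.
    """
    w = min(max(weight, 0.0), 1.0)
    return 1.0 - (1.0 - w) ** strength

def retarget(tracked):
    """Apply the non-linear curve per blendshape channel."""
    return {name: round(exaggerate(w), 3) for name, w in tracked.items()}

# Usage: a faint smile on the user becomes a broad smile on the character.
print(retarget({"smile": 0.3, "brow_raise": 0.1, "jaw_open": 0.8}))
# approximately {'smile': 0.51, 'brow_raise': 0.19, 'jaw_open': 0.96}
```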

Emiliano Gambaretto
Mixamo, Inc.

Charles Piña
Mixamo, Inc.

Maturing the Virtual Production Workflow: Interactive Path Tracing for Filmmakers

Chaos Group and filmmaker Kevin Margo have leveraged the latest GPU hardware to prototype V-Ray for Autodesk MotionBuilder, the industry-standard software central to the motion-capture experience. The results posit a mature virtual-production workflow that more closely replicates a live-action shoot. Camera and lighting creative direction have been introduced into the motion-capture volume concurrently with live-actor performances for a more accurate representation of a final production frame. The scalability of path tracing, from interactive frame rates progressively refining over time to a photorealistic image, will have a huge impact on any film production with a heavy virtual, VFX, or animation component.
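The progressive refinement described above boils down to accumulating a running average of samples per pixel, frame after frame. The sketch below illustrates only that accumulation scheme (the actual tracing is stubbed out with noise); it is not V-Ray's implementation.

```python
# Sketch of progressive refinement: each frame contributes one more
# path-traced sample per pixel and the running average converges toward the
# final image. The renderer is replaced by noise; only the accumulation
# scheme is the point here.
import numpy as np

def render_one_sample(frame_index, shape=(270, 480, 3)):
    """Stand-in for tracing one new sample per pixel (here: a noisy constant)."""
    rng = np.random.default_rng(frame_index)
    return 0.5 + 0.1 * rng.standard_normal(shape)

accum = None
for n in range(1, 65):                       # 64 progressive frames
    sample = render_one_sample(n)
    if accum is None:
        accum = sample
    else:
        accum += (sample - accum) / n        # running mean of all samples so far
    # displaying accum each frame would show the image sharpening over time
print(float(accum.std()))                    # noise shrinks roughly as 1/sqrt(n)
```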

Kevin Margo
Blur Studio

Vladimir Koylazov
Chaos Group

Christopher Nichols
Chaos Group

Augmented/Virtual Reality Contest Winner Presentations

SIGGRAPH 2014's Augmented/Virtual Reality Contest encouraged developers to create and showcase the best augmented/virtual reality experiences possible using today's technologies. Each of the three finalists receives an Oculus Rift Development Kit 2 and a Sixense STEM. The first-prize winner also receives Full Conference registration for SIGGRAPH 2015.

The finalists demonstrate their work during Real-Time Live! (Tuesday, 12 August, 5:30-7:15 pm, West Building, Ballroom C-D) and Appy Hour (Wednesday, 13 August, 5:30-7:30 pm, West Building, Exhibit Hall A):

Birdly

Birdly is an immersive installation that explores the experience of a bird in flight. It attempts to capture the mediated flying experience through several methods. Unlike a common flight simulator, users do not control a machine. Instead, they embody a bird, the Red Kite. Evocation of this embodiment relies mainly on sensory-motor coupling: users control the simulator with their hands and arms, which correlate directly to the wings and the primary feathers of the bird.
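Purely as an illustration of such sensory-motor coupling, the tiny sketch below maps an arm pose to bank and thrust for a simple flight model. The axes, gains, and control scheme are assumptions, not Birdly's actual model.

```python
# Illustrative arm-to-wing mapping in the spirit of sensory-motor coupling.
# Axes and gains are assumptions for the sketch, not the installation's model.
from dataclasses import dataclass

@dataclass
class ArmPose:
    left_height: float    # vertical position of the left hand, metres
    right_height: float   # vertical position of the right hand, metres
    flap_speed: float     # downward flapping speed, metres / second

def wing_controls(pose: ArmPose):
    """Map arm pose to roll (bank) and thrust for a simple flight model."""
    roll = pose.right_height - pose.left_height   # tilt the arms to bank and turn
    thrust = max(pose.flap_speed, 0.0)            # only the downstroke propels
    return roll, thrust

# Usage: right arm raised, gentle downstroke -> positive roll, modest thrust.
print(wing_controls(ArmPose(left_height=0.1, right_height=0.3, flap_speed=0.5)))
```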

Max Rheiner
Zürcher Hochschule der Künste

Fabian Troxler
Zürcher Hochschule der Künste

MixPerceptions

The MixPerceptions project proposes a novel approach for artistic expression based on augmented reality. It merges the technical capabilities of modern smartphones and tablets with scientific advances in image analysis to provide an innovative interactive experience with paintings and murals.

Jose San Pedro
Independent New Media Artist

Aurelio San Pedro
Independent Artists

Juan Pablo Carrascal
Independent Artists

Matylda Szmukier
Independent Artists

Smart Specs: Real-Time Augmented Vision for the Sight-Impaired

This set of smart glasses augments the vision of severely sight-impaired people in real time, highlighting the most relevant parts of the visual scene by combining information from a depth camera and an edge-detection algorithm operating on the output of an RGB camera.
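As a simplified illustration of combining depth information with RGB edges, the sketch below brightens nearby objects and their outlines. The thresholds and the central-difference edge detector are stand-ins, not the Smart Specs algorithm.

```python
# Simplified sketch: combine a depth camera's nearby-object information with
# edges from the RGB image to produce a high-contrast highlight map.
# Thresholds and the edge detector are assumptions, not the Smart Specs method.
import numpy as np

def highlight(rgb, depth, near=2.0):
    """Return a brightness map emphasising close objects and their edges."""
    gray = rgb.mean(axis=2)                              # luminance, H x W
    # Central-difference gradients as a minimal edge detector.
    gx = np.zeros_like(gray); gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    edges = np.hypot(gx, gy)
    edges /= edges.max() + 1e-9
    near_mask = (depth > 0) & (depth < near)             # valid and close pixels
    closeness = np.where(near_mask, 1.0 - depth / near, 0.0)
    # Bright where an object is close, brighter still along its edges.
    return np.clip(0.7 * closeness + 0.3 * edges * near_mask, 0.0, 1.0)

# Usage with synthetic data: a bright square 1 m away on a 4 m background.
rgb = np.zeros((120, 160, 3)); depth = np.full((120, 160), 4.0)
rgb[40:80, 60:100] = 1.0; depth[40:80, 60:100] = 1.0
out = highlight(rgb, depth)
print(out.max(), out.min())
```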

Stephen Hicks
University of Oxford

Joram van Rheede
University of Oxford

Iain Wilson
University of Oxford

Stuart Golodetz
University of Oxford