Studio Projects

Registration Categories

F Full Conference
S Select Conference
E+ Exhibits Plus 
Ex Exhibitors

Hack. Tinker. Make. In Studio Projects, attendees interact with innovative specialists in informal, drop-in workshops. Spend as little or as much time as you like creating something awesome in the Studio!

2D Print Area

Bring in your work or pull up a chair and make something new in the Studio to print! Epson is in the Studio again, and there are many 2D printing options to explore. Come see what your high-resolution work looks like printed on a large-format printer. Make a design to print directly onto a t-shirt! Print a lustrous 13x36-inch panorama on canvas. Print one of your characters or images on glossy, ready-to-frame 13x19-inch sheets.

3D Print Area 

Bring your files to create a unique model for 3D printing on Stratasys, Formlabs, or even Roland's brand-new 3D printing systems. There will also be CNC milling and 2D printing directly onto 3D objects! And Universal Laser is providing laser-cutting services!

3D Data Capture and Scanning

The scanning and data-capture area is well represented this year with small table-top scanners, face scanning, and even full-body scanning! Take your scan data home or work directly in the Studio to create a model for 3D printing or CNC milling! 

Equipment Donations for the Studio, SIGGRAPH 2014:

3dMD
4DDynamics
Epson
Formlabs
NextEngine, Inc.
Roland DG
Stratasys, Ltd.
Universal Laser Systems

Arduino Drawing Machines


Automated drawing machines are mechanical devices that make drawings, typically on paper using pens, pencils, charcoal, or other traditional drawing implements. They are sculptures that produce artwork (drawings) as they move and react to their environment.

This workshop blends computer control with simple mechanical and motorized mechanisms to build drawing machines. It begins with a short history of drawing machines, including examples of contemporary approaches in the arts, and their educational possibilities. The hands-on component of the workshop uses an Arduino microcontroller, a small, inexpensive system used by “makers” all over the world, plus light sensors, potentiometers (knobs), and hobby servos to construct mechanisms that make marks on paper. Initial designs are prototyped using foam core and masking tape. These machines, which can be constructed in an hour or two, demonstrate the potential of simple drawing machines and serve as a starting point for further investigation.
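
As a taste of the hands-on component, here is a minimal Arduino-style sketch of the kind of machine described above: a light sensor steers a pen-holding hobby servo while a potentiometer sets the pace. Pin assignments and the light-to-angle mapping are illustrative assumptions, not the workshop's actual code.

    #include <Servo.h>

    Servo penArm;                  // hobby servo that swings the pen
    const int LIGHT_PIN = A0;      // photoresistor in a voltage divider
    const int KNOB_PIN  = A1;      // potentiometer (knob)

    void setup() {
      penArm.attach(9);            // servo signal wire on digital pin 9
    }

    void loop() {
      int light = analogRead(LIGHT_PIN);  // 0..1023; brighter reads higher
      int knob  = analogRead(KNOB_PIN);   // 0..1023; sets the dwell time

      // React to the environment: ambient light picks the servo angle,
      // so shadows and flashlights change the marks on the paper.
      penArm.write(map(light, 0, 1023, 0, 180));

      // The knob scales how long the pen dwells before the next mark.
      delay(map(knob, 0, 1023, 10, 200));
    }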

Attendees who want to pursue the topic can construct final designs during SIGGRAPH 2014 and enter their creations in a drawing-machine competition. Entries are judged by attendees and a celebrity guest. Judging criteria: overall design, function, and drawing quality.

Erik Brunvand
University of Utah

Paul Stout
University of Utah

Ginger Alford
Trinity Valley School, Fort Worth Museum of Science and History

Arduino Drawing Machines LIVE Contest


Enter the Studio’s drawing-machine competition. Entries are judged by the crowd and a celebrity guest. Judging criteria: overall design, function, and drawing quality. This event features a cash bar, and prizes will be awarded.

BitCube: A New Kind of Physical Programming Interface With Embodied Programming


BitCube offers a new, simplified way to interact with art and technology. Users can learn programming logic without coding literacy and easily create artworks. The system has the potential to change existing programming paradigms by introducing a totally new way of constructing algorithms: embodied programming.

Most programming languages are composed of digits, characters, and logical expressions. But children tend to think in images and relationships when they try to understand algorithms. Embodied programming is a new approach to learning algorithms through body, mind, and physical objects, including computers.

BitCube is a set of tangible blocks with three functional types: power blocks, rechargeable batteries with micro-USB connectors; sensor blocks, which perceive data such as light and sound; and action blocks, which receive data from sensor blocks and actuate LEDs, motors, and other outputs. Data cables connect the blocks. Users select modules, connect them, arrange them, and decorate them. Then they set algorithms by clicking and rotating rotary encoders on the modules.
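
BitCube's firmware is not public, so the plain C++ model below is only a conceptual sketch, with invented names, of the data flow just described: a sensor block produces a reading, a cable carries it to an action block, and a rotary-encoder setting selects how the action block responds.

    #include <cstdio>
    #include <functional>

    struct SensorBlock {              // e.g. a light or sound sensor
        std::function<int()> read;    // returns a raw reading, 0..1023
    };

    struct ActionBlock {              // e.g. an LED or motor driver
        int encoderSetting;           // set by rotating the knob:
                                      // 0 = follow, 1 = invert, 2 = threshold
        int actuate(int input) const {
            switch (encoderSetting) {
                case 0:  return input;                    // follow the sensor
                case 1:  return 1023 - input;             // invert it
                default: return input > 512 ? 1023 : 0;   // threshold it
            }
        }
    };

    int main() {
        SensorBlock light{[] { return 700; }};   // stand-in for a real sensor
        ActionBlock led{2};                      // knob turned to "threshold"
        std::printf("LED level: %d\n", led.actuate(light.read()));
    }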

Jaeyoung Kim
HELLO!GEEKS Inc.

Byongsue Kang
HELLO!GEEKS Inc.

Shinyoung Rhee
HELLO!GEEKS Inc.

Byeongwol Kim
HELLO!GEEKS Inc.

Hyeonjin Yun
HELLO!GEEKS Inc.

Junghwan Sung
Soongsil University

Computational Bead Design


Throughout history, humans have invested beads with symbolic and sacred knowledge. They can communicate social standing, political history, and religious beliefs. Beads have been indicators of a civilization's technological advancement. The technical sophistication of bead manufacturing often mirrors the general technological level of the society (Dubin).

Computational Bead Design is an interdisciplinary project designed to introduce students to beginning programming, digital modeling, and additive manufacturing techniques. The project highlights the bead, once again, as a marker of our current technological advancements in computing and 3D printing. It introduces students to logical thinking and computing through creative exercises in bead modeling.

Marguerite Doman
Winthrop University

Courtney Starrett
Winthrop University

Christopher Smalls
Winthrop University

Lauren Copley
Winthrop University

Chelsea Arthur
Winthrop University

Creation Station


This installation demystifies the process of creating hybrid alternative digital fine art prints by demonstrating important challenges and successful outcomes in collaborative projects. Topics include: moisture, archivability, tools, products, printers, color management, and related issues.

Lyn Bishop
Art Farm

Nance Paternoster
Digital Artist

Draco: Sketching Animated Drawings With Kinetic Textures


While previous systems have introduced sketch-based animations for individual objects, this installation demonstrates a unified framework of motion controls that allows users to seamlessly add coordinated motions to object collections. The framework is built around kinetic textures, which provide continuous animation effects while preserving the unique timeless nature of still illustrations. This enables many dynamic effects that are difficult or impossible with previous sketch-based tools, such as a school of fish swimming, tree leaves blowing in the wind, or water rippling in a pond.

Draco provides motion controls across multiple scales. Global motion controls direct the whole collection, while granular motion controls contribute variation and randomness to the motion within a collection. This technique simultaneously achieves generality, control, and ease of use. Draco capitalizes on the free-form nature of sketching and direct manipulation to seamlessly author and control coordinated motions of object collections. It pushes the boundary of an emerging form of visual media that lies between static illustration and video.
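
The two-scale idea can be pictured with a short sketch. The C++ fragment below is a structure guessed from the description, not Draco's implementation: one global velocity coordinates the whole collection, while per-object noise supplies the granular variation.

    #include <cstdio>
    #include <random>
    #include <vector>

    struct Sprite { float x, y; };

    int main() {
        std::vector<Sprite> school(5, {0.0f, 0.0f});   // a "school of fish"
        std::mt19937 rng(42);
        std::normal_distribution<float> jitter(0.0f, 0.05f);

        const float globalVx = 1.0f, globalVy = 0.2f;  // global motion control

        for (int frame = 0; frame < 3; ++frame) {
            for (auto& s : school) {
                s.x += globalVx + jitter(rng);   // coordinated drift...
                s.y += globalVy + jitter(rng);   // ...plus granular variation
            }
            std::printf("frame %d: first fish at (%.2f, %.2f)\n",
                        frame, school[0].x, school[0].y);
        }
    }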

Rubaiat Habib Kazi
Autodesk Research

Fanny Chevalier
INRIA

Tovi Grossman
Autodesk Research

Shengdong Zhao
National University of Singapore

George Fitzmaurice
Autodesk Research

GIGAmacro


GIGAmacro 3D introduces a unique method of producing 3D content for education, research, and scientific study. The robotic camera systems use multi-axis image capture, focal stacking, image stitching, and photogrammetry to produce highly detailed microscopic 3D data from complex subjects. This new technique has potential applications in several areas, including entomology, paleontology, life sciences, medical diagnostics, and manufacturing quality control.

Traditional photogrammetry techniques and laser scanning have a very limited range of resolution for small subjects such as a beetle, a small fossil, or a circuit board. By integrating multiple processes, such as focal stacking, photogrammetry, and precision robotics, GIGAmacro 3D can precisely capture detail as small as one micron.
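
Focal stacking itself can be illustrated in a few lines. The toy C++ sketch below, using a crude gradient-based sharpness measure on 1-D "slices" (real systems are far more sophisticated), keeps, for each pixel, the value from whichever slice is locally sharpest.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Local sharpness at column x: absolute horizontal gradient.
    float sharpness(const std::vector<float>& row, int x) {
        int xl = x > 0 ? x - 1 : x;
        int xr = x + 1 < (int)row.size() ? x + 1 : x;
        return std::fabs(row[xr] - row[xl]);
    }

    int main() {
        // Two 1-D "slices" of one scene, each in focus over a different half.
        std::vector<std::vector<float>> stack = {
            {0, 9, 0, 9, 5, 5, 5, 5},   // sharp on the left
            {5, 5, 5, 5, 0, 9, 0, 9},   // sharp on the right
        };
        std::vector<float> fused(stack[0].size());
        for (int x = 0; x < (int)fused.size(); ++x) {
            float best = -1.0f;
            for (const auto& slice : stack) {
                float s = sharpness(slice, x);
                if (s > best) { best = s; fused[x] = slice[x]; }
            }
            std::printf("%g ", fused[x]);
        }
        std::printf("\n");
    }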


Gene Cooper
Four Chambers Studio, GIGAmacro

Graffiti Fur: Turning Your Carpet Into a Computer Display


This new display technology, Graffiti Fur, utilizes the fact that the shading properties of fur change as the fibers are raised or flattened. Users can erase drawings by sweeping the surface with a hand to flatten the fibers, then draw lines by moving their fingers in the opposite direction to raise the fibers. These material properties are found in everyday items such as the carpets in our living environments.

The Graffiti Fur demonstration allows users to draw patterns on two different "fur displays" using a roller device and a pen device. The technology converts ordinary objects into rewritable displays without requiring or creating any irreversible modifications. In addition, it can present large-scale images without glare, and displaying images incurs no operating cost.
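
A tiny state model conveys the draw/erase logic: each patch of fur is either raised (dark) or flattened (light), and the stroke direction decides which. The C++ below is purely illustrative; the actual display is the fur itself, with no software state.

    #include <cstdio>

    enum class Fiber { Flattened, Raised };   // light vs. dark shading

    int main() {
        Fiber row[10] = {};                       // all flattened (erased)
        auto stroke = [&](int from, int to) {     // direction picks the op
            int step = to > from ? 1 : -1;
            for (int x = from; x != to; x += step)
                row[x] = step > 0 ? Fiber::Raised      // raise: draw
                                  : Fiber::Flattened;  // flatten: erase
        };
        stroke(0, 10);                            // draw a full line
        stroke(7, 3);                             // sweep back: erase part
        for (Fiber f : row) std::putchar(f == Fiber::Raised ? '#' : '.');
        std::putchar('\n');
    }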

Yuta Sugiura
Keio University

Koki Toda
Keio University

Takayuki Hoshi
Nagoya Institute of Technology

Masahiko Inami
Keio University

Takeo Igarashi
The University of Tokyo

Hyve-3D: A New Embodied Interface for Immersive Collaborative 3D Sketching


Hyve-3D is an interface for 3D content creation via collaborative 3D sketching. It introduces a semi-spherical, immersive 3D sketching environment based on spherical panoramas and uses 2D drawing planes that are intuitively manipulated in 3D space with tracked handheld tablets.

Tomás Dorta
Université de Montréal

Gokce Kinayoglu
Université de Montréal

Michael Hoffmann
codemacher UG

(In)visible Light Communication: Combining Illumination and Communication


Communication with light enables a true “Internet of Everything”. Consumer devices transform into interactive communication interfaces when visible light is used to transmit data. Light bulbs, toys, or other electronics and accessories can be used as environmental sensors and act as user interfaces based on their location, play pattern, or other context provided by the internet.

This project demonstrates visible light communication based on light-emitting diodes (LEDs) and low-cost, off-the-shelf microcontrollers. LED-based lighting can be used for the Internet of Things and related wireless communication services by modulating the intensity of the emitted light. LEDs can also be used as receivers, just like photodiodes. This approach provides the foundation for ubiquitous networking using visible light as a communication medium. Such networks consist of consumer devices with LEDs and light bulbs that can also serve as access points (to connect to other networks) or as fixed points for localization. The technology enables reliable communication over a distance of a few meters.

LED-to-LED communication is a feasible method of bringing low-cost, low-complexity connectivity to a large number of LED bulbs and consumer devices. Using the visible light spectrum not only combines communication with illumination, but also makes it possible to hide the data exchange within the lighting: the communication produces no light effects or flickering that human eyes can perceive. The data flow is visible and therefore steerable toward potential receivers.
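
As a hedged illustration of the transmit side, here is a minimal Arduino-style on-off-keying sketch: each bit holds the LED on or off for one short symbol period, too fast for the eye to follow. Pin and timing choices are assumptions; the actual protocol also keeps average brightness constant so the lamp does not visibly dim.

    const int LED_PIN = 13;
    const unsigned int SYMBOL_US = 500;   // 2 kbit/s, far above flicker fusion

    void sendByte(byte b) {
      for (int i = 7; i >= 0; --i) {      // most significant bit first
        digitalWrite(LED_PIN, ((b >> i) & 1) ? HIGH : LOW);
        delayMicroseconds(SYMBOL_US);
      }
      digitalWrite(LED_PIN, HIGH);        // idle bright: it is still a lamp
    }

    void setup() {
      pinMode(LED_PIN, OUTPUT);
      digitalWrite(LED_PIN, HIGH);
    }

    void loop() {
      sendByte('A');                      // repeatedly beacon one byte
      delay(10);
    }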

Stefan Schmid
Disney Research Zürich, ETH Zürich

Josef Ziegler
ETH Zürich

Thomas R. Gross
ETH Zürich

Manuela Hitz
Disney Research Zürich

Afroditi Psarra
Disney Research Zürich

Giorgio Corbellini
Disney Research Zürich

Stefan Mangold
Disney Research Zürich

MaD: Mapping by Demonstration for Continuous Sonification


MaD supports simple and intuitive design of continuous sonic gestural interaction. The system automatically learns motion-sound mapping when movement and sound examples are jointly recorded. In this demonstration, applications focus on using vocal sounds – recorded in performance – as primary material for interaction design.

The system integrates specific probabilistic models with hybrid sound-synthesis models. It is not tied to any particular motion- or gesture-sensing device, and it can directly accommodate sensors such as cameras, contact microphones, and inertial measurement units. Potential applications include the performing arts, computer games, and medical uses such as auditory-aided rehabilitation.
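
To make the mapping idea concrete, the C++ sketch below substitutes a naive nearest-neighbour lookup for MaD's probabilistic models: paired (motion, sound-parameter) frames are recorded during a demonstration, and performance-time motion retrieves the closest recorded sound parameter. All names are invented.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Example { float motion; float pitch; };   // one demonstrated frame

    // Return the sound parameter of the nearest recorded motion.
    float mapMotion(const std::vector<Example>& demo, float motion) {
        const Example* best = &demo.front();
        for (const auto& e : demo)
            if (std::fabs(e.motion - motion) < std::fabs(best->motion - motion))
                best = &e;
        return best->pitch;                          // drives the synthesizer
    }

    int main() {
        // Movement and vocal pitch recorded jointly, as in the demonstration.
        std::vector<Example> demo = {{0.0f, 220.0f}, {0.5f, 330.0f}, {1.0f, 440.0f}};
        std::printf("motion 0.6 -> %.0f Hz\n", mapMotion(demo, 0.6f));
    }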

Jules Françoise
Institut de Recherche et Coordination Acoustique/Musique

Norbert Schnell
Institut de Recherche et Coordination Acoustique/Musique

Frédéric Bevilacqua
Institut de Recherche et Coordination Acoustique/Musique

Mag-B: Tactile Sand Play Using an Interactive Magnetic Display


In this installation, attendees create and experience interaction technology to explore tactile expression and haptic communication.

A 50-inch display consisting of 192 individual electromagnetic pieces arranged in a 12 x 16 configuration is interactively controlled by a touch screen. Steel balls with a diameter of 1 mm mimic the texture of sand as they are manipulated by electromagnets controlled by hand movements.
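
The control mapping implied by the description can be sketched briefly: a normalized touch coordinate is quantized to one of the 12 x 16 electromagnets. The grid-to-driver interface below is an assumption, not the installation's code.

    #include <cstdio>

    const int ROWS = 12, COLS = 16;   // 192 electromagnets in total

    // Map a normalized touch position (u, v in 0..1) to a magnet index.
    int magnetIndex(float u, float v) {
        int col = (int)(u * COLS); if (col >= COLS) col = COLS - 1;
        int row = (int)(v * ROWS); if (row >= ROWS) row = ROWS - 1;
        return row * COLS + col;      // 0..191, one per electromagnet
    }

    int main() {
        std::printf("touch at (0.5, 0.5) -> magnet %d of %d\n",
                    magnetIndex(0.5f, 0.5f), ROWS * COLS);
    }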

Attendees are invited to create an interactive sandbox experience with the magnetic display.

Kumiko Kushiyama
Tokyo Metropolitan University

Yuya Kikukawa
Tokyo Metropolitan University

Tetsuaki Baba
Tokyo Metropolitan University

Paul Haimes
Edith Cowan University

Shinji Sasada
Nippon Electronics College

Physical Painting With a Digital Airbrush


This augmented airbrush acts as both a physical spraying device and an intelligent digital guiding tool, maintaining simultaneous manual and computerized control. Custom-designed hardware and control algorithms support a human-computer collaboration in physical painting. The system uses a pistol-style airbrush relieved of its paint-volume control knob and fitted with a custom-made augmentation: a 6-degree-of-freedom magnetic tracker integrated with a mechanical actuation system composed of a servo and multiple gears, a potentiometer (POT), an LED, and a two-state switch. Onboard electronics drive the servo and LED, query the POT, and include a 6-degree-of-freedom inertial measurement unit.

A comprehensive driver loads a reference image to paint, registers and calibrates the tool and canvas, and controls the tool’s mechanical operation. The GPU-implemented control algorithms use the tool’s tracking data to determine whether the painter is at risk of spraying in the wrong direction or location, and they issue control commands to the tool at ∼100 Hz. The demonstration includes live painting with the airbrush and a portfolio of artwork created with the tool.
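
A schematic version of that ~100 Hz loop might look like the C++ sketch below; every function in it is a hypothetical stand-in for the system's real tracker, reference-image lookup, and valve driver.

    #include <chrono>
    #include <cstdio>
    #include <thread>

    struct Pose { float x, y; };   // simplified; the real tracker is 6-DOF

    // Hypothetical stand-ins for the tracker, reference image, and valve.
    Pose readTracker() { return {0.4f, 0.6f}; }
    float referenceDarkness(float, float) { return 0.8f; }  // 0 = blank, 1 = ink
    void setValve(bool open) { std::printf("%s\n", open ? "spray" : "hold"); }

    int main() {
        using namespace std::chrono;
        for (int tick = 0; tick < 3; ++tick) {   // a few 10 ms control cycles
            auto t0 = steady_clock::now();
            Pose p = readTracker();
            // Open the valve only where the reference image wants paint; the
            // real system also checks spray direction, distance, and coverage.
            setValve(referenceDarkness(p.x, p.y) > 0.5f);
            std::this_thread::sleep_until(t0 + milliseconds(10));  // ~100 Hz
        }
    }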

Roy Shilkrot
MIT Media Laboratory

Pattie Maes
MIT Media Lab

Amit Zoran
MIT Media Lab

SprBlender: Creation Environment for Touchable Characters


What if you could build your own virtual characters and pretend they are your pets? SprBlender is a novel environment for creating and interacting with direct-touch characters. It is an add-on for Blender that provides real-time rigid-body simulation and an attention-driven behavior engine to generate realistic behaviors under touch interaction.

This installation displays touchable interactive characters and invites attendees to create their own. If you bring a rigged 3D model, you can make it touchable and take it home with you.

Hironori Mitake
Tokyo Institute of Technology

Takahiro Harano
Tokyo Institute of Technology

Shingo Fujinaga
Tokyo Institute of Technology

Shunsuke Matsuyama
Tokyo Institute of Technology

Shinichi Shibata
Tokyo Institute of Technology

Shoichi Hasegawa
Tokyo Institute of Technology

Tangible and Modular Input Device for Character Articulation


Articulation of 3D characters requires control over many degrees of freedom: a difficult task with standard 2D interfaces. This project demonstrates a tangible input device composed of interchangeable, hot-pluggable parts. Embedded sensors measure the device's pose at rates suitable for real-time editing and animation. Splitter parts allow branching to accommodate any skeletal tree. During assembly, the device continuously recognizes topological changes as individual parts or pre-assembled subtrees are plugged and unplugged. A novel semi-automatic registration approach helps the user quickly map the device's degrees of freedom to a virtual skeleton inside the character. The device provides input for character rigging and automatic weight computation, direct skeletal deformation, interaction with physical simulations, and handle-based variational geometric modeling.

User studies report favorable comparisons to mouse and keyboard interfaces.
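
A rough data-structure sketch of the topology recognition described above, in plain C++ with invented types: parts form a tree, splitters add branches, and plugging in a pre-assembled subtree is a single tree edit, after which the device re-enumerates its parts.

    #include <cstdio>
    #include <memory>
    #include <vector>

    struct Part {
        const char* kind;        // "joint" or "splitter"
        float angle = 0.0f;      // measured by the part's embedded sensor
        std::vector<std::unique_ptr<Part>> children;
    };

    // Re-enumerate the tree after every plug or unplug event.
    int countParts(const Part& p) {
        int n = 1;
        for (const auto& c : p.children) n += countParts(*c);
        return n;
    }

    int main() {
        Part root{"joint"};
        auto splitter = std::make_unique<Part>(Part{"splitter"});
        splitter->children.push_back(std::make_unique<Part>(Part{"joint"}));
        splitter->children.push_back(std::make_unique<Part>(Part{"joint"}));
        root.children.push_back(std::move(splitter));   // plug in a subtree
        std::printf("device now has %d parts\n", countParts(root));
    }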

Alec Jacobson
ETH Zürich

Daniele Panozzo
ETH Zürich

Oliver Glauser
ETH Zürich

Cedric Pradalier
GeorgiaTech Lorraine

Otmar Hilliges
ETH Zürich

Olga Sorkine-Hornung
ETH Zürich