Summer 2011

Tangible digital (case studies)

How can we make the virtual more physical while bringing emotion into the digital domain?

‘In one sense, my work is totally aesthetic,’ explains Karsten Schmidt, ‘but my real interest is in the development of the underlying tools.’ Which is interesting, not least because he’s talking about his contribution to the V&A’s Aestheticism exhibition, ‘The Cult of Beauty’ (2 April – 17 July 2011). Schmidt’s main piece was a Victorian-style room divider, on show in the museum’s Sackler Centre from 30 May to 3 June, and reappearing at the London Design Festival this September.

Two wooden panels were laser-cut with intricate patterns, reminiscent of Arabic tiling but digitally generated, consisting of ever so slightly irregular tetragons, pentagons, hexagons, heptagons and octagons. Just eighteen unique shapes are needed to create the full pattern. These two panels were joined together, and covered in 620 protruding paper polyhedrons that slot into the shapes of the wooden frame. The geometric nets for these 3D paper forms were created with the help of Schmidt’s own open-source ‘unwrapping tools’, software designed to aid digital sheet fabrication.
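Schmidt’s unwrapping tools themselves are not reproduced here, but the core operation behind any such software — laying a 3D face flat in the plane while preserving its edge lengths, so it can be cut from sheet material — can be sketched as follows. This is a minimal illustration of the principle, not Schmidt’s actual code:

```python
import math

def flatten_triangle(A, B, C):
    """Lay a 3D triangle flat in 2D, preserving its edge lengths.

    This is the basic step behind 'unwrapping' a polyhedron into a
    paper net: each face is unfolded into the plane in turn.
    """
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    c = dist(A, B)   # length of edge AB
    b = dist(A, C)   # length of edge AC
    a = dist(B, C)   # length of edge BC

    # Place A at the origin and B on the x-axis; solve for C's position.
    x = (b * b + c * c - a * a) / (2 * c)
    y = math.sqrt(max(b * b - x * x, 0.0))
    return (0.0, 0.0), (c, 0.0), (x, y)

# One face of a regular tetrahedron, flattened ready for laser-cutting.
net_face = flatten_triangle((0, 0, 0), (1, 0, 0), (0.5, 0.5, math.sqrt(2) / 2))
```

A full unwrapper repeats this face by face, hinging each new face off a shared edge so the whole net stays connected.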

Visitors interacted via an iPad-based interface: patterns could be allocated to each of the eighteen repeating shapes, which were then projected on to the individual paper shapes held by the frame. The projection software was calibrated to match on-screen polygons with their physical counterparts.

The only tangible reference to Aestheticism is found in the colour palette for these kaleidoscopic patterns, taken from original wallpaper designs by William Morris. But in a curious way, Schmidt’s deeply modern approach chimes with the Victorian artist’s own opinion of Aestheticism: ‘Any decoration is futile ... when it does not remind you of something beyond itself.’


Evan Roth / Capturing the solid geometry of a spray-can graffiti tag

From the colourful New York City documented in Martha Cooper and Henry Chalfant’s Subway Art (1984) to the Banksy-esque stencils that have become a 21st-century cliché, graffiti’s raw power has often found a place in the graphic mainstream. Yet one form in particular, the quickly scrawled tag, has tended to evade widespread approval. These hard-to-decipher squiggles, sprayed or scribbled in a few seconds, might seem crude and careless – wanton, even – but, for this fast-moving form, context is king.

‘If you’re in the community, you’re not just looking at how the person wrote the letter “s”,’ says Paris-based artist Evan Roth. ‘How high off the ground is it? How did they get access? What’s the neighbourhood like?’ His recent digital artwork stems partly from a desire to challenge the misunderstandings (and misappropriations) of graffiti, and partly from a huge passion for this diverse community and its art.

‘Graffiti Analysis: Sculptures’ sees Roth combine two of his other developments: Graffiti Markup Language (GML), a way of encoding tags in an XML-based open file format; and Graffiti Analysis, a technique to capture a tag in GML by motion-tracking a light source attached to the artist’s spray can or marker. For his 3D sculptures, he uses tracked GML data to create 3D geometry – time is extruded in the Z-dimension, and pen / spray speed is represented by the thickness of the model at any given point. The results are 3D-printed, with each sculpture reproduced at about the size of a shoe-box.
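The mapping from captured tag to solid form can be sketched roughly as below. This is a simplified illustration assuming a GML-like list of timestamped (x, y, time) samples, not Roth’s actual pipeline: time becomes the Z-axis, and the stroke’s radius shrinks wherever the pen moved fast.

```python
import math

# Hypothetical captured stroke: (x, y, time) samples, GML-style.
stroke = [(0.0, 0.0, 0.00), (1.0, 0.2, 0.05), (3.0, 0.5, 0.10), (3.5, 0.6, 0.20)]

def extrude_stroke(points, base_radius=1.0):
    """Turn 2D timestamped samples into a 3D centreline with per-point radius.

    Time is extruded along Z, and faster pen movement yields a thinner
    model - echoing how 'Graffiti Analysis: Sculptures' encodes speed
    as thickness.
    """
    out = []
    for i, (x, y, t) in enumerate(points):
        if i == 0:
            speed = 0.0
        else:
            px, py, pt = points[i - 1]
            d = math.hypot(x - px, y - py)
            speed = d / (t - pt) if t > pt else 0.0
        radius = base_radius / (1.0 + speed)   # thinner where the hand moved fast
        out.append(((x, y, t), radius))        # Z coordinate = time
    return out

geometry = extrude_stroke(stroke)
```

A real implementation would then sweep a circular cross-section of that radius along the centreline to produce a printable mesh.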

‘This is about taking graffiti motion and turning it into something you can see ... people don’t get to see that dance happen,’ says Roth. When he exhibits these pieces, it is not always immediately obvious they have anything to do with graffiti at all. People come to them intrigued by their strange, smooth, organic forms – and free from any preconceptions.

Roth often uses direct lighting when exhibiting: if it is shone at the right angle, the shadow cast onto the wall leaves a subtle trace of the artist’s original mark. People have already adapted his software – openly released – to read calligraphy. ‘There are applications for anything that creates motion,’ he adds, ‘but my heart’s in the graffiti.’


Kyle McDonald / 3D models, made from a cloud of open-source data

The Janus Machine, on show at Austria’s Ars Electronica Center until September 2011, invites visitors to sit in front of a scanner and push a yellow button. They are bathed in structured light for a few seconds, capturing their (real) face as a series of light points, which are then projected on to the gallery wall. Slowly the particles appear, collide and reassemble. The participant’s (digital) face re-emerges; shifting facial expressions made during the scan are replayed before their eyes. The vision spins and rotates, before atomising once more, disappearing into a cloud of digital data.

Kyle McDonald, who created The Janus Machine in collaboration with Zachary Lieberman, Theodore Watson and Daito Manabe, started working with 3D scanning in 2009: ‘I accidentally re-invented a technique called “grey code structured light”, which is a way to quickly triangulate the distance to every pixel in an image using a projector instead of a line laser.’
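Gray-code structured light works by projecting a sequence of black-and-white stripe patterns: each projector column is identified by the bits it shows across the frames, and because adjacent columns differ by only one bit, a misread stripe displaces a point by at most one column. A minimal sketch of the encoding and decoding — an illustration of the principle, not McDonald’s actual tools:

```python
def gray(n):
    """Binary-reflected Gray code: neighbouring values differ by exactly one bit."""
    return n ^ (n >> 1)

def stripe_patterns(columns, bits):
    """One black/white stripe image per bit: is column c lit in frame b?"""
    return [[bool(gray(c) >> b & 1) for c in range(columns)] for b in range(bits)]

def decode(observed):
    """Recover the projector column a camera pixel saw, from its per-frame bits."""
    g = sum(int(bit) << b for b, bit in enumerate(observed))
    n = g          # invert the Gray code back to a plain binary index
    g >>= 1
    while g:
        n ^= g
        g >>= 1
    return n

# A pixel lit by projector column 5 across three frames decodes back to 5.
frames = stripe_patterns(columns=8, bits=3)
column = decode([frames[b][5] for b in range(3)])
```

With the column known for every camera pixel, triangulating the projector–camera pair gives the depth — the step McDonald describes doing with a projector instead of a line laser.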

That summer, Radiohead’s ‘House of Cards’ music video came out, which used a technique that captured 30 frames per second rather than McDonald’s one frame every three seconds. ‘There were about five years of academic research on the technique, but no open-source or easy-to-use implementations. That’s when I really got excited and started developing tools and tutorials for people to make real-time 3D scans.’

‘Democratising technology’, as McDonald puts it, has already led to his work being put to varied use. Though he sees his own projects as experimental and primarily artistic, archaeologists have used his tools to scan Greek artefacts, and artists have adapted them to share 3D models of their sculptures.

For McDonald, open platforms are a natural hotbed for innovation: ‘It’s hard for private and proprietary software to compete with thousands of people who dedicate all their time to a project because they love what they’re doing.’


Robert Hodgin / Experiments in body-mapping take the Kinect to a new plane

‘I first started craving something like a Kinect when I started messing around with augmenting a live webcam feed,’ says Robert Hodgin (see ‘Magic box’, Eye 70). ‘I had to draw my depth maps by hand, which was very inaccurate and time-consuming.’

Hodgin’s recent work, showcased online at flight404.com, has seen him explore different types of Kinect-based webcam augmentation, all played out in real time. ‘Body Dysmorphia’ makes the subject’s body swell as if by some digital allergic reaction, morphing into a puffy marshmallow man in lurid colours. Another rendition makes the participant become translucent, in an effect reminiscent of the eponymous extraterrestrial of Hollywood’s Predator films. Others see heads explode in brightly coloured particles and bodies turn to shimmering chrome.

Hodgin’s use of the Kinect is purely experimental (he says that he hasn’t even hooked it up to his Xbox yet). Variations of ‘Body Dysmorphia’ were used for the live visuals at Aphex Twin’s 2010 / 11 New Year’s Eve show in Rome, but otherwise his work has been largely artistic and self-funded so far (‘easily done, since the Kinect has such a low price point’). At the heart of Hodgin’s experiments is an interest in the technology’s power to map environments rapidly. From there, he adds, ‘it is a pretty short leap to being able to use it to allow robots to do real-time obstacle avoidance.’



BlablabLAB / ‘Be Your Own Souvenir’: fifteen minutes from pose to plastic figurine

Earlier this year, visitors to La Rambla in Barcelona had a chance to become their own souvenir. As the participant stood on a tiny podium, a 360-degree ‘point cloud’ of their pose was captured by three Kinects. This data was sent to a RepRap 3D printer, from which a miniature figurine was output – all in about fifteen minutes.
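Merging the three Kinects’ views into one 360-degree point cloud means expressing each camera’s points in a shared world frame. The sketch below assumes, purely for illustration, three cameras spaced 120 degrees around the podium and aimed at a common origin; BlablabLAB’s real calibration would also solve for each camera’s exact position.

```python
import math

def register(points, yaw_deg):
    """Rotate one camera's points about the vertical (y) axis into a
    shared world frame. Illustrative rigid registration only - real
    calibration would recover the full pose of each Kinect.
    """
    a = math.radians(yaw_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for (x, y, z) in points]

# Three Kinects spaced 120 degrees apart, each seeing (part of) the sitter.
views = {0: [(0.0, 1.0, 2.0)], 120: [(0.0, 1.0, 2.0)], 240: [(0.0, 1.0, 2.0)]}
cloud = [p for yaw, pts in views.items() for p in register(pts, yaw)]
```

The merged cloud is then meshed and handed to the RepRap for printing.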

Developed by tongue-twistingly titled studio BlablabLAB (Raul Nieves, Jordi Bari and Gerard Rubio), the project was selected in an art-technology-communication contest organised by Arts Santa Mònica, the youth bureau ACJ and the Barcelona public transport service, TMB.

There is definitely a certain charm to the crude, bright yellow, low-res aesthetic of the figurines – but the real magic is in the immediacy with which the physical turns to digital, then back to physical. In the eyes of the BlablabLAB co-founders, this kind of production offers a ‘true revolution’: ‘3D printing could connect the “open” concept, for a long time present in software development, with the hardware sphere, the physical world.’

Kinect: capture gestures

Kinect is an Xbox 360 interface with 3D depth sensors, RGB camera and microphone. It is capable of motion-tracking, facial recognition and voice recognition, enabling users to play games with just speech and gestures. Within weeks of its release in November 2010, a hacker released drivers to the open-source community. Since then, academics, hackers and artists have created applications that use the Kinect for everything from 3D scanning to making robots.
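The depth image those sensors produce can be turned into a 3D point cloud with standard pinhole-camera back-projection, which is what most of the open-source Kinect applications build on. A minimal sketch — the focal lengths and optical centre below are illustrative values, not calibrated Kinect intrinsics:

```python
def depth_to_points(depth, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
    """Back-project a depth image (metres per pixel) into 3D points.

    fx, fy, cx, cy are assumed pinhole intrinsics; a real Kinect would
    be calibrated. A zero depth value means 'no reading' and is skipped.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:
                x = (u - cx) * z / fx
                y = (v - cy) * z / fy
                points.append((x, y, z))
    return points

# A tiny 2x2 'depth image': one missing reading, three valid ones.
cloud = depth_to_points([[0.0, 1.0], [2.0, 1.5]])
```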

3D printers: fabricate objects

3D printers build objects as a series of horizontal layers, deposited one on top of another. They have been around for a while, but now they are faster, cheaper and capable of better quality. It is possible to fabricate in production-grade plastics and metal, so the technology is increasingly being used to make finished objects rather than for rapid prototyping.
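Generating those layers is essentially a slicing problem: intersect the model’s triangles with a stack of horizontal planes. A toy sketch of that intersection step — real slicers additionally chain the segments into closed outlines and generate toolpaths:

```python
def slice_triangle(tri, z):
    """Intersect one 3D triangle with the horizontal plane at height z.

    Returns the crossing points (usually two, forming one segment of
    that layer's outline), or [] if the plane misses the triangle.
    """
    pts = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        if (z1 - z) * (z2 - z) < 0:            # this edge crosses the plane
            t = (z - z1) / (z2 - z1)           # interpolation parameter
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return pts

def slice_mesh(triangles, layer_height):
    """Slice a triangle mesh into per-layer lists of line segments."""
    zs = [z for tri in triangles for (_, _, z) in tri]
    layers, z = [], min(zs) + layer_height / 2
    while z < max(zs):
        segs = [s for s in (slice_triangle(t, z) for t in triangles) if s]
        layers.append(segs)
        z += layer_height
    return layers
```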

First published in Eye no. 80 vol. 20.
