Created by digital design studio FIELD, Spectra-3 is an audio-visual light installation premiering at London’s Lumiere light festival on 14th January, 2016. It is a physical-digital sculpture that tells three stories of communication through a choreography of movement, animated lights and spatialised sound.
The project continues FIELD's ongoing exploration of 'making things come to life' through the power of code, and of the new aesthetics and visual languages that can emerge at the intersection of audio-visual immersive spaces and abstract narratives. Building on their experience with keyframed animation, motion capture, emergent and procedural animation systems, physics simulation, and light-, motion- and sound-reactive animation, Spectra-3 brings many of these strands together into an immersive "live" sculptural form.
In the last few years, our interest in the fusion of digital and physical systems has grown. We're fascinated by sculpture as a medium – a semi-time-based, abstract narrative medium which interacts with its environment through light, perspective, etc. We're approaching these new physical and sculptural projects with the same generative and code-based thinking that we've developed as our workflow over the years:
Spectra-3 was designed and pre-visualised in SideFX Houdini + Octane. Based on FIELD's 3D models, the manufacturers Studio Makecreate developed a refined CAD model for the manufacture of a structurally sound, moveable object. Finding the right software solution was a long discussion in the FIELD studio: for the detailed, part-handcrafted and part-procedural choreographies they wanted to create, they needed to build their own animation design tool – one that would run in realtime, offer a useful timeline system, give a good sense of preview, simulate, and ultimately control every element of the installation.
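At the heart of such a tool is a keyframed timeline sampled in realtime. As a rough illustration – this is not FIELD's code, and the channel values are hypothetical – a minimal keyframe channel with linear interpolation might look like this:

```python
from bisect import bisect_right
from dataclasses import dataclass

@dataclass
class Keyframe:
    time: float   # seconds on the show timeline
    value: float  # e.g. a motor position or light intensity, 0..1

def sample(keys, t):
    """Sample a keyframed channel at time t with linear interpolation."""
    if t <= keys[0].time:
        return keys[0].value
    if t >= keys[-1].time:
        return keys[-1].value
    # Find the first keyframe after t, then blend between its neighbours
    i = bisect_right([k.time for k in keys], t)
    a, b = keys[i - 1], keys[i]
    u = (t - a.time) / (b.time - a.time)
    return a.value + u * (b.value - a.value)

# Hypothetical channel: ramp up over 2s, then ease back down
channel = [Keyframe(0.0, 0.0), Keyframe(2.0, 1.0), Keyframe(4.0, 0.25)]
print(sample(channel, 1.0))  # 0.5
```

A realtime show loop would simply call `sample` for every channel on every frame; FIELD's system supports various interpolation types beyond the linear one sketched here.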
Some of the team argued for a New-Jersey-style approach (“Worse is Better”), while others favoured an MIT-style approach (“Do the Right Thing”), FIELD’s managing director Vera-Maria Glahn tells CAN. The result is a well-functioning, efficient hybrid: a clean, structured architecture with clear network interfaces via OSC, connecting motor control, theatre lights, multichannel audio and audio analysis, a haze machine and a weather station to the show control.
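OSC lends itself to this kind of glue because its wire format is simple: a null-padded address string, a type-tag string, and big-endian arguments, all aligned to 4-byte boundaries. A minimal encoder sketch (the address `/motor/1/target` is hypothetical – the installation's actual address space isn't published):

```python
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a basic OSC message (int32 / float32 / string arguments)."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, bool):
            raise TypeError("bool not handled in this sketch")
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
        elif isinstance(a, str):
            tags += "s"
            payload += _pad(a.encode())
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return _pad(address.encode()) + _pad(tags.encode()) + payload

# Hypothetical message a show-control logic might send to the motor bridge
msg = osc_message("/motor/1/target", 0.5)
```

The resulting bytes would typically be sent as a single UDP datagram to whichever controller owns that part of the address space.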
In the end the team settled on Unity as a framework and, through a large number of editor extensions, reshaped it into a sophisticated animation and show-control system playing directly off the timeline – in effect, hacking an offline game editor into a realtime show controller. The approach is similar to, and inspired by, d3, the tool UVA developed for their complex shows.
For motion, the installation uses industry-standard motors and drivers, communicating over the CANopen protocol via serial RS-232, with a bridge translating Unity data to the physical sculpture. Light is managed over Ethernet through a DMX node using the Art-Net protocol, addressing 24 separately dimmable lights and one motorised iris; digital light tests were carried out in Cinema 4D, Houdini, Unity and a custom-written C++ raytracer.

For safety, and to react to weather conditions, the installation measures wind speed, wind direction and rainfall with sensors on an Arduino Yún, which transmits weather data via OSC when plugged into the network; the control logic can respond by moving the sculpture into a safe position.

A combination of Logic and Unity serves as a preview for movement, speed, the light setup and the resulting reflections, and includes a precise model of the sculpture, the space, and the light and speaker positions. On top of this, FIELD built a custom GUI and an animation system based on keyframes with various interpolations between them. The central Logic calculates everything, sends it on to audio, motors and lights, and communicates with all the controllers and other parts via OSC.

Finally, the sound analysis uses an FFT (Fast Fourier Transform) with 1024 samples divided into 24 bins, which are mapped onto the 24 lights. It is written in C++ using PortAudio, called from Unity, and is multichannel and spatialised.
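The 1024-sample FFT and the 24-light mapping are stated in the source, but how the 24 bins are spaced isn't; as a sketch under that assumption, the following simply splits the magnitude spectrum into 24 equal-width bands and normalises them to 0..1 light levels:

```python
import numpy as np

SAMPLES = 1024   # FFT window size, as in the installation
N_LIGHTS = 24    # one frequency band per dimmable light

def band_levels(frame: np.ndarray) -> np.ndarray:
    """Map one 1024-sample audio frame onto 24 light intensities (0..1)."""
    assert frame.shape == (SAMPLES,)
    # Windowed magnitude spectrum; drop the DC bin
    mags = np.abs(np.fft.rfft(frame * np.hanning(SAMPLES)))[1:]
    # Average magnitude per band; equal-width banding is an assumption
    bands = np.array([b.mean() for b in np.array_split(mags, N_LIGHTS)])
    peak = bands.max()
    return bands / peak if peak > 0 else bands

# A 440 Hz tone at 44.1 kHz concentrates its energy in the lowest band
t = np.arange(SAMPLES) / 44100.0
levels = band_levels(np.sin(2 * np.pi * 440.0 * t))
```

A real analysis chain would run this per audio callback and smooth the levels over time before handing them to the DMX output; a perceptually-spaced (e.g. logarithmic) banding would likely look better than the equal-width split used here.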
The installation is on view at Lumiere London from 14–17 January 2016, 6.30pm – 10.30pm
King’s Cross, London N1C / West Handyside Canopy, via Granary Square