A version of this text is published in Performance Research 11(4), pp. 30–38. © Taylor & Francis Ltd 2006
Will-o’-the-Wisp, Irrlicht, Candelas: nearly every culture has a name for the mysterious blue-white lights seen drifting through marshes and meadows. Whether they are the lights of trooping faeries, wandering souls, or glowing swamp gas, they all exhibit the same behavior. They dance ahead of people, but when approached, they vanish and reappear just out of reach. Will.0.W1sp creates new dances for these mysterious lights, but just as with the originals, when a viewer ventures too close, the lights scatter, spin, and spiral, then reform and continue the dance just beyond the viewer’s reach.
Will.0.W1sp is based on real-time particle systems that move dots, like fireflies, smoothly around an environment. The particles have their own drifting, flowing movement, but they also follow digitized human movement. They shift from one captured sequence to another – performing 30 seconds of one sequence, scattering, then reforming into one minute of another sequence by another dancer. In addition to generating the particle systems, the computer watches the positions of viewers around the installation. If an audience member comes too close to the screen, the movement either shifts to another part of the screen or scatters completely.
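The particle behavior described above can be sketched as a spring-like pull toward a moving target (such as a motion-captured joint position) combined with random drift. This is a minimal illustration under assumed dynamics, not the installation’s actual code; the parameter names and values are invented:

```python
import random

class Particle:
    """One firefly-like dot: it drifts on its own but is pulled toward a
    moving target point (e.g. a motion-captured joint position)."""

    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx = self.vy = 0.0

    def step(self, target_x, target_y, pull=0.05, drift=0.4, damping=0.9):
        # Spring-like attraction toward the target plus random drift,
        # with damping so the motion stays smooth rather than oscillating.
        self.vx += (target_x - self.x) * pull + random.uniform(-drift, drift)
        self.vy += (target_y - self.y) * pull + random.uniform(-drift, drift)
        self.vx *= damping
        self.vy *= damping
        self.x += self.vx
        self.y += self.vy

# Usage: a swarm gradually converging on a single stationary target.
swarm = [Particle(random.uniform(0, 100), random.uniform(0, 100))
         for _ in range(50)]
for _ in range(200):
    for p in swarm:
        p.step(50.0, 50.0)
```

Tuning `pull`, `drift`, and `damping` trades off how tightly the swarm tracks the captured motion against how much firefly-like wandering it shows.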
The human visual system is fine-tuned to recognize both movement and other human beings. However, the entire human perceptual process attempts to categorize sensations. Once sensations have been placed in their proper categories, most are ignored and only a select few pass into consciousness. This is why we can walk through a city and be only scarcely aware of the thousands of people we pass, but immediately recognize someone who looks or ‘walks like’ a close friend.
Will.0 plays with human perception by giving visitors something that moves like a human being but denies categorization. It grabs and holds visitors’ attention at a very deep level. In order to trigger the parts of the visual system tuned to human movement, the movement driving the particles is captured from live dancers using motion capture techniques. The choreography is arranged into series of movement sequences that flow through specific poses. The dancers are given instructions about the quality of movement flow and tempo, but are encouraged to experiment with ways of connecting the postures. Each series of transformations between postures is distilled into 30- or 60-second sequences.
Once the movement sequences are translated into raw data (as explained in the motion data section below), a collection of sequences is given to the Will.0 system. The system runs through the motion to generate overall information such as tempo, fluidity, and spatial positioning. The system then sets transition points every three seconds. At each transition point, the system calculates the current posture from the relative angles between the 14 control points. It checks this posture against the postures at transition points in each of the other movement sequences and builds a movement tree it can use to flow the movement from one sequence into another.
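The posture matching and movement-tree construction described above can be sketched as follows, assuming each posture is reduced to a list of joint angles. The distance measure, the threshold, and the data layout are all assumptions, and the toy postures in the usage example use two angles rather than the 14 control points:

```python
def posture_distance(a, b):
    """Sum of absolute differences between two postures, each given as a
    list of joint angles (radians) at the control points."""
    return sum(abs(x - y) for x, y in zip(a, b))

def build_movement_tree(sequences, threshold=1.0):
    """For every transition point in every sequence, list the transition
    points in *other* sequences whose posture is close enough to flow into.
    `sequences` maps a sequence name to its postures sampled every 3 s."""
    tree = {}
    for name, postures in sequences.items():
        for i, pose in enumerate(postures):
            links = []
            for other, other_postures in sequences.items():
                if other == name:
                    continue  # only transitions into other sequences
                for j, other_pose in enumerate(other_postures):
                    if posture_distance(pose, other_pose) < threshold:
                        links.append((other, j))
            tree[(name, i)] = links
    return tree

# Usage with two toy sequences: only the near-identical opening
# postures of phrase_a and phrase_b are close enough to link.
sequences = {
    "phrase_a": [[0.0, 0.0], [1.0, 1.0]],
    "phrase_b": [[0.1, 0.0], [3.0, 3.0]],
}
tree = build_movement_tree(sequences, threshold=0.5)
```

At playback time, such a tree lets the system flow into another sequence only at moments where the two postures nearly coincide, rather than at arbitrary frames.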
While the installation is running, the system decides whether to smoothly flow from one sequence into another, make an abrupt change in movement, or switch to pedestrian motions such as sitting, walking off screen, etc. These decisions are based on the position and movement of observers in the space, the time of day, or any other variables agreed upon before the installation is started.
The final choreography is a mix of shifting approaches and retreats as the system plays through its sequences and responds to the presence and movements of the audience.
While the installation presents human movement without human beings, it also pulls visitors into a sonic atmosphere somewhere between an installation space and an outdoor space at night. The sound has underlying melodies punctuated by crickets, goat bells, and scruffing sounds from heavy creatures moving in the dark. All of this is generated and mixed live by software watching the flow and positions of the particles in the space. The sound software is a Max/MSP patch with custom plug-ins written by Carlos Guedes. Because generating the sound takes so much CPU power, it runs on a separate computer. Particle motion data, overall tempo, and x, y, and z offsets are all transmitted from the computer controlling the particles to the sound computer as Open Sound Control streams.
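Open Sound Control messages have a simple binary layout: a null-padded address string, a type-tag string, then big-endian arguments. A minimal encoder for float-only messages is sketched below, purely as illustration; the address pattern `/wisp/offset` is invented, and in practice an OSC library (or Max/MSP’s own OSC objects) would be used rather than hand-encoding packets:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate a string and pad it to a 4-byte boundary, as OSC requires."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message carrying 32-bit floats, e.g. particle
    x/y/z offsets sent to the sound computer over UDP."""
    msg = osc_pad(address.encode("ascii"))
    # Type-tag string: a comma followed by one 'f' per float argument.
    msg += osc_pad(("," + "f" * len(args)).encode("ascii"))
    for a in args:
        msg += struct.pack(">f", a)  # 32-bit big-endian float
    return msg

# Usage: one packet carrying hypothetical x, y, z offsets.
packet = osc_message("/wisp/offset", 0.5, -1.25, 2.0)
```

Each such packet would then be sent over UDP to the sound computer, which unpacks the values and feeds them into the Max/MSP patch.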
When many visitors walk into an ‘interactive’ environment, the first thing they do is walk up to the screen and start waving their arms madly to see the piece ‘interact’. Will.0 responds the same way most humans or animals would. It scatters and waits for the person to calm down and stay in one place for a minute. Then it comes to investigate, and possibly perform for them . . . until the person moves too quickly again. Like true will-o’-the-wisps, the digital performers attempt to strike a balance: close to the audience, but just out of reach. Will.0 uses custom software to track the relationships between audience members and the screen. If viewers walk too close to the left side of the screen, all the movement shifts to the right. If the digital performers become ‘trapped’, i.e., forced to the edges of the screen or caught between two or more viewers, the particles either scatter and flow around the screen, or collapse into a ball and retreat until they have enough space to return to human form.
As audience members approach the particle dancers, the dancers move to the side and scatter. As a viewer moves closer to the screen, the flow of the particles becomes more random. If the viewer walks directly up to the image, the particles scatter randomly and reform somewhere on the screen as far away from the viewer as possible. If the audience does not move, the system follows its branching instructions to decide how to flow from sequence to sequence.
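The ‘as far away from the viewer as possible’ rule can be sketched as a one-dimensional search along the screen width for the point whose nearest viewer is farthest away. The 6-metre width, the sampling resolution, and the idea of projecting viewers onto the screen axis are all assumptions for this sketch:

```python
def reform_position(viewer_xs, screen_width=6.0, samples=100):
    """Pick the horizontal position along the screen farthest from the
    nearest viewer -- the spot where scattered particles reform.
    Positions are in metres; `viewer_xs` are viewers' positions
    projected onto the screen axis."""
    if not viewer_xs:
        return screen_width / 2.0  # no one watching: reform at center
    best_x, best_gap = 0.0, -1.0
    for i in range(samples + 1):
        x = screen_width * i / samples
        gap = min(abs(x - v) for v in viewer_xs)  # distance to nearest viewer
        if gap > best_gap:
            best_x, best_gap = x, gap
    return best_x
```

With a single viewer at the left edge the particles reform at the far right; with viewers at both edges they reform in the middle, mirroring the ‘trapped’ behavior described above.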
The project is presented on a custom-designed screen. The screen is slightly taller than a human and curves slightly from the center to the edges. This keeps the images at a human scale while completely filling the audience’s field of vision. At the same time, it allows movement at the edges to shift and disperse more quickly than movement in the center. The screen is much wider than standard high-definition format (2.2 m × 6.0 m) to create more space for the image to shift and flow. It is driven by two synchronized video projectors.
Oct 19-Nov 10, 2007: “Digital Arts and Magical Moments Ars Electronica Exhibition”, Shanghai eArts, Shanghai, China
Sep 25-27, 2007: ACM Multimedia Interactive Arts Program, ACM Multimedia 2007, Augsburg, DE
May 10-12, 2007: Futurevisual, Futuresonic Festival, Manchester, UK
Apr 27-May 25, 2007: Inspiration to Order, the gallery at Wimbledon College of Art, London, UK
Oct 9-Nov 5, 2006: Inspiration to Order, California State University, Stanislaus, US
Sep 3-6, 2006: Digital Resources for the Humanities & Arts, Dartington, UK
Feb 9-13, 2006: Vida 8.0 Art and Artificial Life, ARCO’06 Madrid, ES
Sep 4, 2005: “Listening Between the Lines”, live performance, Ars Electronica, Linz, AT
July 14, 2005: Dance Shorts, Overtom301, Amsterdam, NL
March 16-18, 2005: Waag Society for Old and New Media, Amsterdam, NL
July 4, 2004: Mediamatic Salon, Mediamatic, Amsterdam, NL
Dec 16-20, 2002: Monaco Danse Forum, Monte Carlo, Monaco
Concept/direction: Kirk Woolford
Sound: Carlos Guedes
Movement: Ailed Izurieta,