Motion in Place Platform

 

Work continues on the Motion in Place Platform Project with more capture sessions planned for this summer and a workshop, including one day open to the public on July 9th at the University of Sussex Creativity Zone.

For more info, see: http://www.motioninplace.org/

New Vimeo Channel

Posted on Mar 25, 2013 in News

The video galleries have been moved to a new Vimeo channel. For more information, please see:

http://vimeo.com/user17271503

Motion in Place presentation at ISEA

Posted on Sep 17, 2011 in News

 

Kirk Woolford will present a paper on the application of motion capture technologies to the understanding of human movement in archaeological research as part of the International Society of Electronic Arts (ISEA) annual conference being held in Istanbul. The paper is part of the Locative Media and Interaction panel on September 20, 2011.

A version of the paper can be found online at:

http://isea2011.sabanciuniv.edu/paper/motion-place-platform-virtual-representations-iron-age-movement

Sally Jane Norman and Kirk Woolford keynote at Digital Futures in Dance

Posted on Sep 6, 2011 in News


 

Sally Jane Norman and Kirk Woolford will be delivering a keynote presentation on Motion in Place during the “New Technology and Choreographic Thinking” sessions at the Digital Futures in Dance conference.

http://digitalfuturesindance.org.uk/?page_id=25#top

Motion in Place
with Sally Jane Norman & Kirk Woolford
Friday 9th September

Human motion tracking need no longer be confined to normative lab or studio spaces; it is henceforth accommodated by systems deployed “in the field”. Motion capture platforms assembling distributed and hybrid resources are expanding the dance scene into new territories and prompting novel interdisciplinary encounters. The AHRC-funded Motion in Place Platform draws together dancers, media and sound artists and theorists, hard- and software designers, and archaeologists to collaboratively develop technological prototypes and creative insights into the site-specific spatial and temporal layerings of movement in place.

Biographies:
Sally Jane Norman is a theorist (Doctorat d’état, Paris III) and practitioner, Professor of Performance Technologies and founding Director of the Attenborough Centre for the Creative Arts. Her research on embodiment and technologies in the performing arts has involved collaborations with the Institut International de la Marionnette in Charleville-Mézières (Motion Capture e-Motion Capture workshop), Studio for Electro-Instrumental Music in Amsterdam (Touch Festival and STEIM Artistic Co-Direction), and Zentrum für Kunst und Medientechnologie in Karlsruhe (EU Extended Performance project). Sally Jane launched an interdisciplinary motion capture research strand as founding Director of Culture Lab at Newcastle University (2004-09), before moving to Sussex in 2010.

Kirk Woolford is an artist/designer and software developer who works closely with digital and creative industries and has taught in Media Arts, Design, Fine Art, and Choreography programs in Germany, Holland, the US and UK. Kirk’s practice-led research has been exhibited in international venues including Shanghai eArts, ARCO Madrid, Art Cologne, P.S.1. (MoMA), Venice Biennale, Ars Electronica, ISEA, and SIGGRAPH. He has collaborated on performances with Diller+Scofidio, Charleroi Danses, igloo, Susan Kozel, Frederique Flamand, Fabrizio Plessi, and others. He has worked with various forms of electromagnetic, optical, and inertial motion capture since 1995, and is currently pursuing this research as Principal Investigator on the AHRC Motion in Place Platform.

University of Sussex

Posted on May 26, 2011 in Uncategorized

Please see my School of Media, Film and Music page at the University of Sussex:


http://www.sussex.ac.uk/mfm/people/mediaandfilm/person/228851

Motion in Place

Posted on May 26, 2011 in Uncategorized


 

Please look at the Motion in Place Platform website for more information:

http://www.motioninplace.org/

Echo Locations

Posted on Sep 13, 2009 in Installations


The Echo Locations project is a series of site-specific installations utilising motion sensing to invite observers to slow down, give the site their attention, and be still long enough for ghostly images to form, showing how people have moved through the site in the past.

MOVING IMAGES

The Echo Locations projects build on the motion capture, particle systems, and slow interaction techniques developed for Will.0.w1sp. However, whereas the Will.0.w1sp characters moved through motion sequences captured in a studio, Echo Locations makes a stronger link to specific locations by capturing motion on site. The characters recreated by the particle software then become similar to ghosts, repeating movements which once occurred in the site. The software driving the particle systems creates chains of movement sequences and randomly drops in a sequence from either the original Will.0.w1sp performances or from an earlier Echo Location. These chance movement sequences form a link between the location, the history of the location, and the history of the project.

Just as with the original Will.0.w1sp installations, if a visitor chases after the ghosts or “Echos”, they flee from that particular location and either reemerge in another location on the site or scatter into seemingly random forms. Only when visitors to the site are still and quiet will they reform and return to their movements. The intention of the piece is to use interaction to make visitors reflect on their personal impact on an environment as they move through a location and to hint at its history. The movement choreography and styling of the echo characters are intended to hold visitors’ attention long enough for them to become aware of the environment, and the locations of individual screens are chosen as much for visual impact as for their ability to communicate the former and current life of the location.
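The chaining logic can be pictured in a few lines of Python. This is a minimal sketch only: the sequence names, pool sizes, and drop-in probability below are assumptions for illustration, not values from the installation’s actual software.

import random

# Hypothetical pools of movement clips: material captured on site, plus
# archive clips from the original Will.0.w1sp performances and from
# earlier Echo Locations (all names invented for illustration).
site_sequences = ["site_walk_01", "site_pause_02", "site_turn_03"]
archive_sequences = ["will0w1sp_solo_04", "echo_location_ruin_01"]

DROP_IN_CHANCE = 0.2  # assumed probability of splicing in an archive clip

def build_chain(length):
    """Chain site-specific sequences, randomly dropping in archive clips."""
    chain = []
    for _ in range(length):
        pool = archive_sequences if random.random() < DROP_IN_CHANCE else site_sequences
        chain.append(random.choice(pool))
    return chain

print(build_chain(8))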

MOVING AUDIO

The installations use sound in an attempt to awaken curiosity and invite visitors to various locations on the site. The audio environment mixes samples recorded on site together with simple melodies to create a feeling of past inhabitants – whether quiet and contemplative, as in the case of the ruins of a 6th-century Catholic church overlooking Morecambe Bay, or very loud, as in the case of the Storey Institute when the building was owned by the Mechanics’ Guild. The installations also use Will.0.w1sp’s granular synthesis code to generate audio from the movement data. If visitors to the site are calm and still, this data is played out very melodically, but if visitors move around or make sounds of their own, the sound from the particle flows becomes sharp, aggressive scratches and hisses. Just as the motion of the particle dancers evokes the site’s past history, so does the audio environment.

 

Text from SIGGRAPH’08 catalog

 

 

EXHIBITIONS:

Echo Locations was included in the SIGGRAPH 2008 Art and Design “Slow Art” exhibition, and an Austin, Texas version was commissioned specifically for the SxSWi (South by Southwest Interactive) Frog Design Party in 2009.

 

Credits:

Direction and Graphics: Kirk Woolford

Music: Carlos Guedes

Dance: Patrizia Penev and Charlotte Griffin

Will.0.W1sp (Will-o-wisp)

Posted on Sep 13, 2007 in Installations


A version of this text was published in Performance Research 11(4), pp. 30–38, © Taylor & Francis Ltd 2006.

Will-o’-the-wisp, Irrlicht, Candelas: nearly every culture has a name for the mysterious blue-white lights seen drifting through marshes and meadows. Whether they are the lights of trooping faeries, wandering souls, or glowing swamp gas, they all exhibit the same behaviour. They dance ahead of people, but when approached, they vanish and reappear just out of reach. Will.0.w1sp creates new dances for these mysterious lights, but just as with the originals, when a viewer ventures too close, the lights scatter, spin, and spiral, then reform and continue the dance just beyond the viewer’s reach.

 

Overview

Will.0.W1sp is based on real-time particle systems that move dots, like fireflies, smoothly around an environment. The particles have their own drifting, flowing movement, but also follow the movements of digitised human motion. They shift from one captured sequence to another – performing 30 seconds of one sequence, scattering, then reforming into one minute of another sequence by another dancer. In addition to generating the particle systems, the computer watches the positions of viewers around the installation. If an audience member comes too close to the screen, the movement either shifts to another part of the screen or scatters completely.
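The core of this behaviour – free drift plus a weak pull toward captured motion – can be sketched as follows. The pull strength, drift amount, and damping are assumed parameters for illustration, not the installation’s actual code.

import random

class Particle:
    """A single firefly-like dot: free drift plus a pull toward a mocap target."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx = self.vy = 0.0

    def update(self, target_x, target_y, pull=0.05, drift=0.5, damping=0.9):
        # Random drift keeps the swarm alive even when the target is still.
        self.vx += random.uniform(-drift, drift)
        self.vy += random.uniform(-drift, drift)
        # Weak spring toward the current motion-capture control point.
        self.vx += (target_x - self.x) * pull
        self.vy += (target_y - self.y) * pull
        # Damping so particles settle around the target instead of oscillating.
        self.vx *= damping
        self.vy *= damping
        self.x += self.vx
        self.y += self.vy

# One frame: every particle is drawn toward a (hypothetical) captured joint.
swarm = [Particle(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
for p in swarm:
    p.update(target_x=50.0, target_y=50.0)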

 

Movement

The human visual system is fine-tuned to recognise both movement and other human beings. However, the entire human perceptual process attempts to categorise sensations. Once sensations have been placed in their proper categories, most are ignored and only a select few pass into consciousness. This is why we can walk through a city and be only scarcely aware of the thousands of people we pass, but immediately recognise someone who looks or ‘walks like’ a close friend.

Will.0 plays with human perception by giving visitors something that moves like a human being, but denies categorisation. It grabs and holds visitors’ attention at a very deep level. In order to trigger the parts of the visual system tuned to human movement, the movement driving the particles is captured from live dancers using motion capture techniques. The choreography is arranged into a series of movement sequences that flow through specific poses. The dancers are given instructions about the quality of movement flow and tempo, but are encouraged to experiment with ways of connecting the postures. Each series of transformations between postures is distilled into 30- or 60-second sequences.

Once the movement sequences are translated into raw data (as explained in the motion data section below), a collection of sequences is given to the Will.0 system. The system runs through the motion to generate overall information such as tempo, fluidity, and spatial positioning. The system then sets transition points every three seconds. For each of these transition points, the system calculates the current posture by looking at the relative angles between each of the 14 control points. It checks the current posture against postures at control points for each of the other movement sequences and builds a movement tree which it can use to flow the movement from one sequence into another.
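In outline, the tree-building pass might look like the sketch below. The three-second transition step and the 14 control points come from the description above; the angle representation, matching tolerance, and data layout are assumptions.

import math

TRANSITION_STEP = 3.0   # seconds between transition points (from the text)
ANGLE_TOLERANCE = 0.3   # assumed tolerance in radians for "matching" postures

def posture_at(sequence, t):
    """Describe the posture at time t as relative angles between control points.
    `sequence(t)` is assumed to return the 14 (x, y) control points."""
    points = sequence(t)
    ox, oy = points[0]
    return [math.atan2(y - oy, x - ox) for x, y in points[1:]]

def postures_match(a, b):
    """Two postures match when every relative angle is within tolerance."""
    return all(abs(ai - bi) < ANGLE_TOLERANCE for ai, bi in zip(a, b))

def build_movement_tree(sequences, durations):
    """For each transition point in each sequence, record which points in the
    other sequences have postures close enough to flow into."""
    tree = {}
    for i, seq in enumerate(sequences):
        t = 0.0
        while t < durations[i]:
            here = posture_at(seq, t)
            for j, other in enumerate(sequences):
                if j == i:
                    continue
                u = 0.0
                while u < durations[j]:
                    if postures_match(here, posture_at(other, u)):
                        tree.setdefault((i, t), []).append((j, u))
                    u += TRANSITION_STEP
            t += TRANSITION_STEP
    return tree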

While the installation is running, the system decides whether to smoothly flow from one sequence into another, make an abrupt change in movement, or switch to pedestrian motions such as sitting, walking off screen, etc. These decisions are based on the position and movement of observers in the space, the time of day, or any other variables agreed upon before the installation is started.

The final choreography is a mix of shifting approaches and retreats as the system plays through its sequences and responds to the presence and movements of the audience.

 

Sound

While the installation attempts to present human movement without human beings, it also pulls visitors into a sonic atmosphere somewhere between an installation space and an outdoor space at night. The sound has underlying melodies punctuated by crickets, goat bells, and scruffing sounds from heavy creatures moving in the dark. All of this is generated and mixed live by software watching the flow and positions of the particles in the space. The sound software is a Max/MSP patch with custom plug-ins written by Carlos Guedes. Because it takes so much CPU power to generate the sound, it is run on a separate computer. Particle motion data, overall tempo, and x, y, and z offsets are all transmitted from the computer controlling the particles to the sound computer as Open Sound Control streams.
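The link between the two machines can be sketched with the python-osc package (pip install python-osc). The original system used its own OSC implementation and Max/MSP on the receiving end; the address, port, and message layout below are assumptions.

from pythonosc.udp_client import SimpleUDPClient

# Address of the dedicated sound computer (assumed for illustration).
client = SimpleUDPClient("192.168.0.2", 9000)

def send_particle_state(tempo, offsets, particles):
    """Stream one frame of particle state to the sound software."""
    client.send_message("/wisp/tempo", tempo)
    client.send_message("/wisp/offset", list(offsets))  # x, y, z offsets
    for i, (x, y) in enumerate(particles):
        client.send_message(f"/wisp/particle/{i}", [x, y])

send_particle_state(0.8, (0.0, 0.0, 0.0), [(12.5, 40.2), (13.1, 39.8)])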

 

Interaction

When many visitors walk into an ‘interactive’ environment, the first thing they do is walk up to the screen and start waving their arms madly to see the piece ‘interact’. Will.0 responds the same way most humans or animals would. It scatters and waits for the person to calm down and stay in one place for a minute. Then it comes to investigate, and possibly perform for them . . . until the person moves too quickly again. Like true will-o-wisps, the digital performers attempt to strike a balance: close to the audience, but just out of reach. Will.0 uses custom software to track the relationships between audience members and the screen. If viewers walk too close to the left side of the screen, all the movement shifts to the right. If the digital performers become ‘trapped’, i.e., forced to the edges of the screen or caught between two or more viewers, the particles either scatter and flow around the screen, or collapse into a ball and retreat until they have enough space to return to human form.

As audience members approach the particle dancers, the dancers move to the side and scatter. As the viewer moves closer to the screen, the flow of the particles becomes more random. If the viewer walks directly up to the image, the pixels scatter randomly and reform somewhere on the screen as far away from the viewer as possible. If the audience does not move, the system follows its branching instructions to decide how to flow from sequence to sequence.
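A toy version of the avoidance rule, reduced to a single horizontal axis: the threshold, units, and retreat rate here are invented for illustration, not taken from the tracking software.

def respond_to_viewer(viewer_x, dancer_x, screen_width, flee_threshold=1.5):
    """Shift the dancer away from a close viewer; scatter if cornered."""
    if abs(viewer_x - dancer_x) > flee_threshold:
        return dancer_x, "continue"      # viewer far enough away: keep dancing
    # Head for whichever edge of the screen is farther from the viewer.
    target = screen_width if viewer_x < screen_width / 2 else 0.0
    if abs(target - dancer_x) < flee_threshold:
        return dancer_x, "scatter"       # trapped against the edge: disperse
    return dancer_x + (target - dancer_x) * 0.1, "retreat"

print(respond_to_viewer(viewer_x=1.0, dancer_x=2.0, screen_width=6.0))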

 

Presentation

The project is presented on a custom-designed screen. The screen is slightly taller than a human and curves slightly from the centre to the edges. This keeps the images at a human scale while completely filling the audience’s field of vision. At the same time, it allows movement at the edges to shift and disperse more quickly than movement in the centre. The screen is much wider than standard high-definition format (2.2 m × 6.0 m) to create more space for the image to shift and flow. It is driven by two synchronised video projectors.

 

 

 

 

EXHIBITIONS:

Oct 19-Nov 10, 2007:   “Digital Arts and Magical Moments Ars Electronica Exhibition”, Shanghai eArts, Shanghai, China

Sep 25-27, 2007:          ACM Multimedia Interactive Arts Program, ACM Multimedia 2007, Augsburg, DE

May  10-12, 2007:         Futurevisual, Futuresonic Festival, Manchester, UK

Apr 27-May 25, 2007:    Inspiration to Order, the gallery at Wimbledon College of Art, London, UK

Oct 9-Nov 5, 2006:       Inspiration to Order, California State University, Stanislaus, US

Sep  3-6, 2006:             Digital Resources for the Humanities & Arts, Dartington, UK

Feb 9-13, 2006:            Vida 8.0 Art and Artificial Life, ARCO’06 Madrid, ES

Sep 4, 2005:                “Listening Between the Lines”, live performance, Ars Electronica, Linz, AT

July 14, 2005:               Dance Shorts, Overtom301, Amsterdam, NL

March 16-18, 2005:       Waag Society for Old and New Media, Amsterdam, NL

 

Preview:

July 4, 2004:                 Mediamatic Salon, Mediamatic, Amsterdam, NL

Dec 16-20, 2002:          Monaco Danse Forum, Monte Carlo, Monaco

 

Credits:

Concept/direction:      Kirk Woolford

Sound:                         Carlos Guedes

Movement:                 Ailed Izurieta, Patrizia Penev, Marjolein Vogels

CÔR: Um Projecto audiovisual interactivo

Posted by on Jun 10, 2003 in Installations | 0 comments


CÔR: Um Projecto audiovisual interactivo (an interactive audiovisual project). Instalação Cibermúsica (cybermusic installation), Festival em Obra Aberta 2003.

The Casa da Musica Foundation in Oporto, Portugal commissioned an interactive installation to introduce its new building to the public during the Obra Aberta festival in June 2003. The installation was developed by a team comprising a composer and three visual artists, with technical support from the National Institute of Engineering and Computer Science (INESC).

“La Harpe Sans Cordes” (the harp without strings) was the focal point of the installations. The harp used custom software to track the movements of visitors’ hands and convert them into music. The harp was also connected to an array of high-intensity lights which changed the colour of the theatre based on the movement of guests’ hands over the harp. The harp lit the building from within and could be seen from several blocks away. In order to reach the harp, visitors walked through a hall lined with custom-built LED tubes. The tubes changed their sound and colour as visitors approached. After seeing the harp, visitors walked through a black-and-white video installation which exploded into colour as they approached.
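One way to picture the mapping from tracked hands to notes and light is the sketch below. The normalised coordinates, the pentatonic “strings”, and the height-to-hue rule are all invented assumptions; the original tracking and calibration were custom work not documented here.

def harp_response(hand_height, hand_lateral):
    """Map a tracked hand over the stringless harp to a note and a colour.
    Inputs are assumed to be normalised to 0.0-1.0 by the tracking system."""
    scale = [60, 62, 65, 67, 70]  # assumed pentatonic 'strings' (MIDI notes)
    note = scale[min(int(hand_lateral * len(scale)), len(scale) - 1)]
    hue = max(0.0, min(1.0, hand_height)) * 360.0  # height drives light colour
    return note, hue

print(harp_response(hand_height=0.7, hand_lateral=0.4))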

 


 

EXHIBITIONS:

Jun 12-Jul 6, 2003:   Festival em Obra Aberta, Porto

 

Credits:

Carlos Guedes, Ula Li, Kirk Woolford, INESC (National Institute of Engineering and Computer Science) Porto

CyberSM

Posted by on Sep 14, 1994 in Installations | 0 comments


CyberSM™, Stahl Stenslie and Kirk Woolford

1993-1994

 

CyberSM was based on sado-masochistic role play; after all, VR equipment has always looked similar to S&M fetish fashion. The first motion tracking suit was a full-bodied black leather suit covered with shiny chrome plates. Through the CyberSM system, participants built a virtual body linked to their own body. They then handed this virtual body, and to a degree, control of their physical body, to the other participant while gaining control of the other participant’s body.

The CyberSM system included two suits worn by the participants, a database of 3-D scans used to build the virtual bodies, and two computers connected with international ISDN telephone lines. Once the participants put on one of the suits, they built a virtual body by picking from the database. When ready, the computers exchanged these proxies, and each participant received the other participant’s body on the screen in front of them. By rotating and zooming the virtual body, participants could place a pointer over regions of the virtual body. When the participant clicked a button, the computer transmitted this “touch” over the ISDN lines to the owner of the virtual body. The remote computer translated this “touch” into a physical impulse (vibration or electric shock) generated by the remote participant’s suit. In order to give a greater sense of the other person’s presence, we included a voice connection between the sites, allowing participants to speak to each other.

We quickly learned that transmitting a single touch, even one as strong as an electric shock, was no match for the communicative abilities of a telephone. The system functioned amazingly well as long as both participants spoke the same language. However, during the first exhibition of CyberSM, we had a participant in Paris who spoke only French, and a participant in Cologne who spoke German and English. After they pushed a couple of buttons and mumbled to each other, they got bored and asked somebody to take the “silly suits” off them. I then realised that we had created a method of illustrating a telephone conversation with touch. The project allowed people to “touch” each other over a network, but the form of the touch was too limited to allow them to carry on a dialogue. They fell back upon the skills they had learned through years of talking on telephones.

 

In order to create a more fulfilling touch over a network, Stahl and I began work on CyberSM III. Aside from many technical changes, CyberSM III changed the interface from a pointer and a model of the body to the physical body itself. In CyberSM III, the participants touched their own bodies in order to touch the other participant. The suits built for CyberSM III included various touch zones. When the participant touched one of these zones, their computer measured how long the zone was touched and translated this into an intensity for the stimulation at that point. To further enhance the sensation, CyberSM III stored the last three touches and played all three out simultaneously through the suit.

The CyberSM projects confronted not only the problems of using the human body as an interface, but cultural perceptions of touch as well. The first, and most obvious, cultural problem is our relationship to touch between two people. Because the most common touch between people – indeed, for many people, the only touch between people – is sexual, CyberSM was immediately named the first functional cybersex system, and magazines and television programmes were full of the distorted capabilities of the amazing CyberSM system (“4,000 volts of electricity pulsing up and down your legs,” claimed one magazine). However, aside from the media hype, everyone who tried on the suits agreed they were interesting, but far from fulfilling. The suits could only control a very limited number of stimulators, so we placed these stimulators on either the most sensitive regions of the body or the regions which best fit the construction of the suit. When we touch each other, the location and quality of the touch convey more meaning than the simple act of touching. The suits’ design never allowed this kind of subtlety.
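The duration-to-intensity rule and the three-touch replay could be sketched like this. The intensity scale and scaling factor are assumptions; the real suits’ stimulator drivers are not documented here.

from collections import deque

MAX_INTENSITY = 10                 # assumed scale for the suit's stimulators
recent_touches = deque(maxlen=3)   # CyberSM III replayed the last three touches

def register_touch(zone, duration_seconds):
    """Longer contact with a touch zone yields a stronger remote impulse."""
    intensity = min(MAX_INTENSITY, int(duration_seconds * 4))  # assumed scaling
    recent_touches.append((zone, intensity))

def play_out():
    """All stored touches are played simultaneously on the remote suit
    (in the real system, after transmission over the ISDN link)."""
    return list(recent_touches)

register_touch("left_arm", 0.5)
register_touch("shoulder", 2.0)
print(play_out())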

 

EXHIBITIONS:

1993: Voyages Virtuels, Paris

1994: Planet Sex Ball, London

1994: Boulevard Biolek (live broadcast)

 

Credits:

Concept and Direction: Kirk Woolford and Stahl Stenslie