This paper illustrates how the interface between the human and the interactive audiovisual space affects the dancer’s choreography in two projects.
Eunsu Kang, University of Akron
Donald Craig, University of Washington
Diana Garcia-Snyder, University of Washington Bothell
Reference this essay: Kang, Eunsu, Donald Craig, and Diana Garcia-Snyder. “Dancing With Interactive Space.” In Leonardo Electronic Almanac 22, no. 2, edited by Senior Editor Lanfranco Aceti, and Editors Candice Bancheri, Ashley Daugherty, and Michael Spicher. Cambridge, MA: LEA / MIT Press, 2017.
Published Online: January 15, 2018
Published in Print: To Be Announced
This paper illustrates how the interface between the human and the interactive audiovisual space affects the dancer’s choreography in two projects: Shin’m (S1) and Shin’m 2.0 (S2). The S1 interface is worn like a jacket and wired to the ceiling with elastic tubes. The S2 human-space interface has no “physical” presence; it consists of invisible layers of ranges and contours detected by the Kinect sensor. The S1 interface offered the dancer a tangible attachment but limited mobility. The S2 interface allowed the dancer to explore the full range of speeds and levels, while its loss of tangibility was, at first, challenging for building a palpable and instant connection.
Interactive, installation, dance, performance, media art, sound space, Kinect, interface
The world is composed of matter, some of which our bodies can pass through and some they cannot. Living in this world, our bodies constantly draw lines, spots, and shapes, building a relationship with the space as they move through and around this matter. Carefully observing the drawings our bodies create with space, we might ask ourselves: can we call our relationship with the space a dance? Our team, an interdisciplinary group consisting of a media artist, a choreographer/dancer, and a software developer/composer, investigates how we communicate with the space, how we connect to it, and how we and the space reshape each other.
This team’s first work was PuPaa (2008) (see Figure 1), a multimedia dance concert. The dancers wore rather uncomfortable costumes with embedded multimedia devices, including a projector, several small speakers, and a small camera. Their struggle and frustration at being held captive by the constraints of the costumes were, at first, enormous. As professional dancers, they ultimately embraced their costumes as extensions of their bodies and progressively created unique and extraordinary movements with them. Through this experience, we learned: (1) what the dancer wears greatly affects the dancer’s movements; and (2) for non-professionals, a wearable interface could be too big an obstacle to overcome in building an intuitive communication with the space.
Our team’s goal is to make an interactive space not only for dancers but also for general audience members, who turn into participants upon walking into the interactive space. Thus, we experimented with what we call the human-space interface in two different ways: (1) let them wear a flexible wearable jacket interface (Shin’m, 2009); (2) let them wear nothing, using the Kinect sensor (Shin’m 2.0, 2011).
Media art is a versatile field that integrates multiple disciplines. There has undoubtedly been significant growth in the number of interdisciplinary projects hybridizing dance idioms and interactive technologies. Searching for “interactive dance,” Google shows 170,000,000 results and YouTube shows 20,500 results. With “dance technology,” one will find 40,600 results on YouTube and 640,000,000 on Google (accessed August 15, 2012). Blogs such as dance-tech.net post frequent updates showing new activities from around the world. The integral relationship between the abundance of these activities and the development of computer vision, interactive technologies, and especially gesture recognition is undeniable. Myron Krueger pioneered computer visuals interacting with human body shape and movement with works such as “Videoplace” in the 1970s. More recently, Golan Levin has shown another way of using interactive computer vision technology for stage performance, utilizing human voice and movement as inputs in his work “Messa di Voce,” and the Australian dance company Chunky Move has demonstrated how these technologies can be applied to professional dance performances, such as “Glow,” which premiered in 2006 and is still being performed. The appearance of the Kinect sensor made similar approaches cheaper and easier to install. Furthermore, projection mapping and dynamic projection surface research, such as the dynamic projection surfaces project at Rensselaer Polytechnic Institute, is providing even more possibilities for the interactive relationship between the human body and the space. As part of this ocean and flow, our team intends to share, in this paper, our experiences of experimenting with two different types of human-space interface.
Our aim is to illustrate and compare how the interface between human (dancer) and the interactive audiovisual space affected the dancer’s choreography in two projects: Shin’m (S1) and Shin’m 2.0 (S2). S1 and S2 share the same fundamental concept and core design elements. However, their human-space interfaces and technical approaches have been strikingly different due to the engagement of the Kinect sensor in the development of S2 in 2011.
First, we will briefly introduce the concept and design of S1 and S2. In the following section, technical details will be reviewed. Section 4 will describe the choreographer/dancer’s experience of the two projects, focusing on the differences incurred by two types of human-space interface: the custom-made wearable jacket interface and the Kinect sensor interface. Our conclusion will be presented at the end of the paper.
The Interactive Space
Shin’m (S1) and Shin’m 2.0 (S2) are interactive spaces. They interact with the dancer or the participant and transform themselves as a result of the conversation between the body and the space. The space consists of a light (video) space delivered by one or multiple video projections and a sound space shaped by three-dimensional sound movement technology. They are designed to function both as a dance performance partner and as an interactive installation that anyone can experience.
a.) Shared Concept and Basic Elements
Shin’m 2.0 (S2), despite its technical dissimilarity, inherited from the Shin’m project (S1) the core concept and basic design elements: nebula-bubbles and computer-generated water-like sounds. The title Shin’m means body (shin) and sound (um) in Korean. The project focused on sound space interaction at the beginning of its development and expanded into an audiovisual system with more dynamic visual interactions. Its initial concept was that the body “wears” its “sound body,” extends its limbs into the space as they move, and over time eventually fills the space like a sound-web with accumulated traces of the extended limbs of the “sound body.” This concept has evolved into the body that entangles with the space and reshapes itself as the space reshapes; it is a metamorphosis of the body perceptually merged with the space.
The nebula-bubbles, seen as the bright particles in Figure 2, are common to both versions of the interactive spaces. They represent a circulating view from the micro to the macro level of our universe and its fluidity. The fluid space of S2 is filled with nebula-bubbles constantly circulating through a “black hole” in the center. The nebula-bubbles of S1 stream in a spiral as the default form of this “world” and disappear when the participant’s gesture triggers other “worlds.” (In S1, six arm gestures activate six different interactions. One of the gestures opens one of two hidden worlds; which one depends on the location of the participant at the moment of the gesture.)
S1 and S2 use different sets of digitally generated or recorded sounds to enhance the illusion of submerging into the fluid space. However, they share the same water-like sound as the default, indicating their shared definition of the space: a fluid space organically interacting with the body, like the water in which we swim.
b.) Components and Stages
The basic components of S1 and S2 are the space, the people (the audience, the dancer, the participant), and the interface connecting them.
S1 progresses over three stages. First, the dancer appears in the middle of the audience waiting outside the room where the interactive space is installed. The audience activates the dancer by pushing her, touching her, and blowing air on her, and then brings her into the room of the interactive space. The second stage is the dancer’s performance of interacting with the space. In the third stage, the audience members become participants and interact with the space by themselves (Figure 3).
S2 also starts with an outside performance. Unlike S1, S2 is often installed in the same room as other interactive spaces, such as Membranes (2011) or Fluid Cave (2011). Its choreography integrates all of them into one dance performance that concludes with S2. After the dance performance, S2 remains as an interactive installation where participants can dive in and “swim.”
c.) Human-Space Interface
There are two interfaces used in S1: a wireless jacket interface worn as a costume by the dancer and a wired jacket interface for both the dancer and the participant (see Figure 4). At the first stage of the performance, the dancer wears the wireless jacket interface with two LED light sources and a hat with an embedded Bluetooth speaker. The Bluetooth speaker projects the default sound of the interactive space when the dancer is activated by the audience-participants. The two LED light sources on the wireless jacket interface let the interactive space react to her movements once she jumps into the room. The wired jacket interface in the room hangs from the ceiling by thin latex tubes, which allow anyone to easily drag it to the end of the room. In conjunction with the surround sound system of six speakers mounted on the walls, two small speakers are also hidden at the ends of the jacket’s sleeves. If one stands in the center of the space and moves an arm away from the body, the sound appears to fly away into space. The wired jacket interface is made of lycra fabric so that it can easily fit most people.
There is no visible or touchable human-space interface in S2. By adopting the Kinect sensor, our team was able to conduct the experiment we had hoped to run since the PuPaa project: having no “physical” interface worn or touched by the participant. Our tentative conclusion was that this would better encourage the general audience-participant’s intuitive connection with the interactive space. The Kinect sensor was installed on the ceiling, looking down diagonally. In this way, a single sensor could detect both the height of the target object (the dancer or the participant) and its change in distance from the sensor. More technical details of these interfaces are described in the following section.
Technologies of the Human-Space Interface
a.) Technical Overview
The Shin'm (S1) application is written in openFrameworks and uses camera vision to sense two bright LEDs attached near the palms of the jacket interface's sleeves. The dancer or participant wears the jacket interface, and their location and arm movements are calculated from the positions of, and distance between, the two LED lights. openFrameworks already includes the needed libraries, including openCV and OSCPack, plus a simple particle system added by one of our team members. A webcam captures an image of the space. This image is converted to greyscale, and each pixel is compared against a threshold: pixels brighter than the threshold are set to the maximum value and darker ones to the minimum. The threshold is tunable so that it can be adjusted for the performance space. The resulting black-and-white image is processed with an openCV function to extract the contours of the brightest areas of the webcam image. If the system is tuned and the space set up properly, the LEDs should be the only bright spots the system sees.
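The greyscale-threshold step described above can be reduced to a short sketch. The following Python code is an illustrative reconstruction, not the project's openFrameworks/openCV code; the function names and the toy 4×4 frame are ours, and in the real system openCV's contour extraction would replace the naive bright-pixel scan shown here.

```python
def threshold_image(gray, threshold):
    """Binarize a greyscale image: pixels brighter than the threshold
    become white (255), all others black (0)."""
    return [[255 if px > threshold else 0 for px in row] for row in gray]

def bright_spots(binary):
    """Collect coordinates of white pixels; in the real pipeline each
    connected cluster's contour (via openCV) locates one LED."""
    return [(x, y) for y, row in enumerate(binary)
                   for x, px in enumerate(row) if px == 255]

# A toy 4x4 frame with two bright "LED" pixels against a dim background.
frame = [[10, 12, 11, 10],
         [10, 250, 12, 11],
         [11, 12, 10, 240],
         [10, 11, 12, 10]]
binary = threshold_image(frame, 200)
print(bright_spots(binary))  # → [(1, 1), (3, 2)]
```

Raising or lowering the `threshold` argument corresponds to the tuning step mentioned above: it is adjusted until only the LEDs survive binarization in the given performance space.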
Shin'm 2.0 (S2) is a GLUT application written in C++. It uses the Kinect sensor, capturing its depth map to detect the participant’s shape and movements. The location of the Kinect sensor must be carefully considered based on the desired detection range and any possible obstacles near the sensor. The application uses the freenect library for straightforward access to the Kinect depth map. The continuously updated depth map is converted so that distances are expressed in meters. Three distance ranges are of interest. For each range, a greyscale image is made in which all depth values inside the range are set to 255 and values outside it are set to zero. This image is analyzed by an openCV function to extract the contours of whatever lies within that range. A particle system manages the nebula-bubbles. It uses a simple "gravity" model in which the bubbles are attracted to all sources of "gravity." When nobody is in the space, the single point of "gravity" is at the center, and the particles spiral in toward it. In the range farthest from the Kinect sensor, when somebody first enters the space, the nebula-bubbles are attracted to the contour but not constrained by it. In the next zone, nearer to the Kinect sensor, the contour is drawn sparsely with bubbles, and newly spawned bubbles are constrained to stay within the drawn contours. The zone nearest the Kinect sensor also spawns nebula-bubbles within the measured contour, but they are pushed rapidly away from the contour’s center.
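The "gravity" model for the nebula-bubbles can be sketched as a simple attraction-plus-damping update per frame. The Python below is our own minimal reconstruction under assumed constants (`strength`, `damping`), not the C++ particle system used in S2.

```python
import math

def step_particle(pos, vel, gravity_sources, strength=0.05, damping=0.95):
    """Advance one nebula-bubble: accelerate toward every gravity source,
    apply damping, then integrate velocity into position."""
    ax = ay = 0.0
    for gx, gy in gravity_sources:
        dx, dy = gx - pos[0], gy - pos[1]
        dist = math.hypot(dx, dy) or 1e-9  # avoid division by zero
        ax += strength * dx / dist  # unit vector toward the source
        ay += strength * dy / dist
    vx = (vel[0] + ax) * damping
    vy = (vel[1] + ay) * damping
    return (pos[0] + vx, pos[1] + vy), (vx, vy)

# With nobody in the space there is a single gravity source at the center;
# a bubble released at rest drifts toward it (a tangential initial velocity
# would make it spiral inward instead of oscillating through the center).
pos, vel = (1.0, 0.0), (0.0, 0.0)
for _ in range(50):
    pos, vel = step_particle(pos, vel, [(0.0, 0.0)])
print(math.hypot(*pos) < 1.0)  # → True: the bubble ends up nearer the center
```

Adding the participant's contour points as further gravity sources reproduces the behavior of the farthest zone, where bubbles are attracted to the contour but not constrained by it.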
b.) Comparison of S1 and S2
For effecting changes in the audiovisual space, the most significant difference between S1 and S2 is the use of bright LEDs versus the Kinect sensor. The LEDs in S1 require that the space be dark so that the points of light are easily recognized, and the software is easily confused by spurious light sources (e.g., an "Exit" sign). Since the Kinect sensor uses infrared to create a depth map, neither excess light nor an absence of light affects it.
Along with these lighting issues, another difference between S1 and S2 lies in their controls. The LEDs are in the hands of the participant, who can hide one or both from the camera by turning their hands. Because the participant must use the lights to exercise control over or interact with the space, this interface requires a certain degree of precision and practice. Triggering the video/multi-touch mode requires effort, and the appearance of the visual space in these modes is significantly different. The Kinect sensor, on the other hand, captures the "contour" of the participant, allowing greater control over the precise, momentary appearance of the visual space; the different modes in this case differ much less.
The sound system for S2 is a non-standard arrangement of speakers, with two on the floor and two mounted on the walls; simple intensity panning is used for sound spatialization. S1 has a fairly typical speaker arrangement, in which the spatial location of sound, other than that coming from the embedded speakers, maps roughly onto the visual field. In S2, this mapping does not align as closely.
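Intensity panning distributes a sound's gain across a speaker pair according to its position. The sketch below is a generic constant-power formulation of intensity panning, an illustrative assumption on our part rather than the exact pan law used in S2.

```python
import math

def pan_gains(p):
    """Constant-power intensity panning between two speakers.
    p = 0.0 places the sound fully in the first speaker, p = 1.0 fully
    in the second; total radiated power stays constant across the sweep."""
    theta = p * math.pi / 2
    return math.cos(theta), math.sin(theta)

# A centered sound feeds both speakers equally, and the summed power
# (g1^2 + g2^2) remains 1 for any pan position.
g1, g2 = pan_gains(0.5)
print(round(g1 ** 2 + g2 ** 2, 6))  # → 1.0
```

Extending this pairwise law across the four speakers of S2 (two on the floor, two on the walls) would interpolate between the pair nearest the sound's intended location; the non-standard speaker placement is why the sound field maps less directly onto the visual field than in S1.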
Dancing with the Interactive Space
The choreographer of S1 and S2 took into account not only the dancer’s movements but also every detail of the technology, the performance space, the allotted performance time, and the group dynamics of the audience. The audience’s perspective was also considered, because they were active participants in the dance. Keeping all these elements in mind, the key points for building a basic structure for the choreography were the constraints of the costume in S1 and the detection range of the sensor in S2. The choreography was developed as a highly structured improvisation: it keeps a basic structure but leaves enough room for changes and adaptations in detail during the performance.
In S1, the dancer was attached to cables that limited the working area and restrained the dancer's mobility. In terms of the creation of dance movement, this apparent weakness of the S1 interface was at the same time a strength of the choreography, because the challenge set clear physical limits and boundaries on the body of the dancer. These boundaries, which limited the movement of the dancer in the space, provided clear kinesthetic instructions to the choreographer and created a solid structure for movement. The wired jacket installed inside the room became the focus of attention for the performance as well as a magical instrument, making the performance more palpable and tangible.
The dancer sometimes pushed the elasticity of the jacket interface to its limit by moving farther from the center of the room. This created a moment of great tension, and of relief when the dancer was dragged back to the center. In one of those moments, the dancer also used the LEDs as a means to undermine the limits of the dance: she flashed them at the audience and revealed their faces with the light. This moment brought the audience into the dance and expanded its perimeter. When the dancer ran clockwise and entangled herself in the cables, it also created a tension that was released by running counterclockwise, quickly unwinding the cables. At the end of the dance, in a last tense, dramatic gesture, she held the cables stretched to their limit and paused, waiting for them to snap back, pulling the jacket off and setting her free.
Without the focal point that the jacket interface created in S1, S2 offered an open and nearly limitless space. This posed another challenge to the choreographer, as the absence of tangibility could make the dancer feel lost among limitless possibilities with no solid ground on which to start. Along with this freedom, the dancer’s character was no longer pre-defined by the physical shape of a wearable interface, so the dancer was able to morph into various imaginable shapes.
In S2, the dancer was able to play with the full range of speed (tempo) and contrasts. In comparison to the S1 dance, situated around the medium level, the limitless space of S2 allowed the dancer to explore all three levels: low, medium, and high, including jumps. She was able to run rapidly without inhibition throughout the performance space. The performance ended with a subtle passage in which she slowly hid in the blind spot of the Kinect sensor, crouching on the floor in the corner with a cloud of nebula-bubbles moving over her skin. The choreographer felt the space became "an intimate space of meditation, a place where the dancer can access a deeper sense of herself" (Figure 5).
This paper has given an overview of the Shin’m (S1) and Shin’m 2.0 (S2) projects, focusing on the development and use of their human-space interfaces. Our conclusion is that the wearable interface of S1 brought a certain tangibility, sometimes pushing the choreography toward unique inventions, yet limited the space and mobility of the dancer. The Kinect sensor interface of S2 opened a nearly limitless space for the dancer to explore the full range of speeds and levels; however, its loss of tangibility at first challenged the dancer. For the general audience-participant, the critical moment for immersion seems to be the time between entering the room and engaging the interface: wearing the jacket interface in S1, or stepping into the Kinect sensor’s detection area in S2. They spent less time hesitating in S2 than we anticipated. Once they were in, we could observe no substantial difference in the intensity of their interaction behaviors. There were a few more unexpected behaviors in S1, such as wearing the jacket interface like trousers, or two people each wearing one sleeve and moving like conjoined twins. Our team intends to continue experimenting with the Kinect sensor to build a human-space interface that does not limit the range of human movement or imagination, while hoping to bring the lost tangibility back into the relationship between the human and the interactive space by enhancing the perceptual touch in their audiovisual experiences.
References and Notes
 Eunsu Kang, Diana Garcia-Snyder, and Donald Craig, “PuPaa,” Eunsu Kang’s official website, 2008, http://kangeunsu.com/pupaa/index.htm (accessed August 15, 2012).
 Eunsu Kang, Donald Craig, and Diana Garcia-Snyder, “Shin’m,” Eunsu Kang’s official website, 2009, http://kangeunsu.com/shinm/index.htm (accessed August 15, 2012).
 Microsoft News Center, "New Xbox 360, Kinect Sensor and ‘Kinect Adventures’ - Get All Your Controller-Free Entertainment in One Complete Package," Microsoft.com, July 20, 2010, http://www.microsoft.com/en-us/news/press/2010/jul10/07-20KinectPackagePR.aspx (accessed August 15, 2012).
 Eunsu Kang, Donald Craig, and Diana Garcia-Snyder, “Shin’m 2.0,” Eunsu Kang’s official website, 2011, http://kangeunsu.com/data/page/2.html (accessed August 15, 2012).
 Dance-Tech.Net, “Dance-Tech.Net,” http://dance-tech.net (accessed August 15, 2012).
 “Videoplace,” Wikipedia, last modified November 28, 2013, http://en.wikipedia.org/wiki/Videoplace.
 Golan Levin and Zachary Lieberman, “Messa di Voce,” Golan Levin’s official website, 2003, http://www.flong.com/projects/messa (accessed February 1, 2014).
 Chunky Move, “Chunky Move,” Chunky Move’s official website, http://chunkymove.com.au (accessed August 15, 2012).
 Chunky Move, “Glow,” Chunky Move’s official website, 2006, http://chunkymove.com.au/Our-Works/Current-Productions/Glow.aspx (accessed August 15, 2012).
 Computer Graphics @ RPI, “Research Project: Dynamic Projection Surfaces in EMPAC,” the website of Rensselaer Computer Science, n.d., http://graphics.cs.rpi.edu/empac (accessed August 15, 2012).
 Eunsu Kang, Donald Craig, and Diana Garcia-Snyder, “Membranes,” Eunsu Kang’s official website, 2011, http://kangeunsu.com/data/page/1.html (accessed August 15, 2012).
 Eunsu Kang, Donald Craig, and Diana Garcia-Snyder, “Fluid Cave,” Eunsu Kang’s official website, 2011, http://kangeunsu.com/data/page/3.html (accessed August 15, 2012).
 openFrameworks’ official website, http://www.openframeworks.cc/ (accessed August 15, 2012).
 OpenCV’s official website, http://opencv.org (accessed August 15, 2012).
 The official website of Open Sound Control, http://opensoundcontrol.org (accessed August 15, 2012).
 OpenKinect’s official website, http://openkinect.org (accessed August 15, 2012).
Eunsu Kang is an international media artist from Korea. She creates interactive spaces that embrace people, penetrate them, and transform them using interactive video, spatialized sound, site-specific installation, and performing art idioms. In creating interdisciplinary projects, her signature has been the seamless integration of art disciplines and innovative techniques. Her work has been invited to numerous places around the world including Japan, China, Switzerland, Sweden, France, Germany, and the US. All nine of her solo shows, consisting of individual or collaborative projects, were invited or awarded. She has won the Korean National Grant for Arts three times. Her research has been presented at conferences such as ACM, ICMC, and ISEA. Kang earned her Ph.D. in Digital Arts and Experimental Media from DXARTS at the University of Washington. She received an MA in Media Arts and Technology from UCSB and an MFA from Ewha Womans University. She is currently an Assistant Professor of New Media Art at the University of Akron in Ohio, USA. Her website is here: http://KangEunsu.Com
Diana García-Snyder is originally from Mexico and now lives in Seattle. Her interest is the synthesis and integration of butoh dance, somatic practices, collaboration and community building techniques, Eastern and Western spirituality, and neuropsychology research. She holds a Master of Fine Arts degree in Dance Research and Pedagogy from the University of Washington and a Bachelor’s degree in Graphic Design from Universidad Autónoma Metropolitana in Mexico City; she received her ballet training with honors at the Royal Academy of Dancing (London, UK / Mexico City) and modern dance training at Columbia College in Chicago. She is also a Pilates and yoga instructor, and both of these somatic practices are very important in her daily training and teaching practices. Diana has performed with various dance companies touring Mexico, the US, and Central America for over 20 years, and has taught various forms of dance including modern/contemporary dance, butoh, ballet, improvisation, video dance, and choreography, among others. Her website is here: http://diana-garcia.com
Donald Craig earned his DMA in Music Composition from the University of Washington in 2009. He has studied with Joel Durand, Kenneth Benshoof, Richard Karpen, and Juan Pampin. He also plays guitar and has studied with Steven Novacek. His dissertation "Symphony By Numbers" was a large visual music work, for which he developed his own software. He developed the software for the latest artworks of Eunsu Kang (http://kangeunsu.com), recently shown in Seoul and New York. He won Honorable Mention at the 2011 Punto y Raya Festival (http://www.puntoyrayafestival.com/premiados11_eng.php) for his work of visual music "Midnight at Loch Ness." He has a strong interest in equal temperaments and plans to use them in his ongoing visual music projects. He can be contacted at email@example.com and his website (still under construction) is here: http://realizedsound.net/rhomboid