V’Oct(Ritual): The Anatomy of an Interactive Composition


Mark A. Bokowiec
University of Huddersfield
Department of Music
Huddersfield, UK
Email: [email protected]
Web: www.bodycoder.com
Reference this essay: Bokowiec, Mark. “V’Oct(Ritual): The Anatomy of an Interactive Composition.” In Leonardo Electronic Almanac 22, no. 2, edited by Senior Editor Lanfranco Aceti and Editors Candice Bancheri, Ashley Daugherty, and Michael Spicher. Cambridge, MA: LEA / MIT Press, 2017.
Published Online: January 15, 2018
Published in Print: To Be Announced
ISSN: 1071-4391
ISBN: Forthcoming
https://contemporaryarts.mit.edu/pub/voct


Abstract

This paper looks at the technical and compositional methodologies used in the realization of V’Oct(Ritual) (2011), with particular reference to the choices made in the mapping of sensor elements in various spatialization functions. Kinaesonics [1] will be discussed in relation to the coding of real-time one-to-one mapping of sound to gesture and its expression in terms of hardware and software design. 

Keywords

Live interaction, bodycoder, kinaesonics, collaboration, embodiment


Introduction

Composing for kinaesonic interaction is an interdisciplinary activity that is not confined to music alone. In terms of my own work with the Bodycoder System, composition extends to the framing of the physicality of the performer—their kinaesonic gestural control of live sound processing, spatialization, and navigation of a Max/MSP environment in performance. Other compositional layers include the live automation of sound diffusion (the physical movement of sound within a multi-channel speaker system), the programming of a range of evolving real-time instances initiated by the performer, and the design of a large palette of sound processing objects.

The Bodycoder System comprises a sensor array worn on the body of a performer, combining up to sixteen channels of switched and proportional sensor inputs that send control data via radio to a Max/MSP environment. The sensor array enables a performer to generate, affect, manipulate, and control all aspects of a multimedia performance composed of both audio and image material. The Bodycoder is a flexible system that can be reconfigured at both the hardware/sensor and software levels according to creative and aesthetic needs. Bend sensors placed on the levers of the performer’s body enable acute and sensitive sound manipulation, while switch elements provide the performer with the means of navigating, orchestrating, and determining the nature of certain pre-defined compositional structures. Switches can be assigned a variety of functions from software patch to software patch and from preset to preset. Similarly, the expressivity/sensitivity and range of each bend sensor can be changed, in a predetermined manner, from preset to preset. Composing for the Bodycoder System is therefore concerned with both sound composition and the construction and orchestration of multiple strands and layers of live interaction, the qualities, complexities, and interrelations of which change from moment to moment within a piece. At the heart of the composition is a performer, situated in a Vitruvian manner at the center of a live and interactive performance environment. The Bodycoder System is not an instrument as such; rather, it facilitates an instrumental use of the body. The performer is not a puppet, prisoner, or servant of the environment, but a creative entity whose physical presence is encoded in degrees and ratios for virtuosic interaction with the world in which they exist.


Figure 1 - Illustrative diagram for V’Oct(Ritual), 2011, showing multiple compositional layers. © Bodycoder, 2011. Used with permission.


V’Oct(Ritual) places the audience inside a sonic space that is a mirror of that inhabited by the performer. Placing the audience inside this meta-Vitruvian space puts them inside the dialogic center of the composition, where the correlation between movement (seen) and sound processing (heard) is experienced simultaneously with the performer. This sensual experience of direct correlation to kinaesonic events is deepened by the more ambiguous sense of there being another unseen presence perceived as an automating entity working symbiotically with the distinctly live operations of the performer. The dimensionality of what is directly perceived and what is felt as present but remains veiled within the performance is important for the aesthetics of V’Oct(Ritual). This centers on the transforming nature of the dialogic relationship that is formed through the interaction of the analogue and the digital—the live and the programmed. What is veiled, but nevertheless present, is the digital ‘ground’ or environment of the work.


MSP Design/Patcher Anatomy

The Bodycoder performer operates within an interactive environment that is largely composed within Max/MSP. The Max/MSP design for V’Oct(Ritual) is based around the principles of granular sampling and compression looping. The main DSP patcher includes two eight-channel compression loopers (each including an eight-channel lowpass filter), two eight-channel granulators, and three eight-channel spatializers.

The first compression looping patcher consists of eight recording/playback buffers; the size of each buffer is variably preset via message boxes stored in patch presets, which are recalled by the performer. This patcher is designed so that with the onset of a recording command, generated by the activation of a dedicated finger switch, the eight buffers are sequentially loaded with the vocal input. The buffer sizes are chosen so that various rhythmic and pulsating effects are achieved, with each buffer output routed to an individual output channel. Additionally, the eight looping buffers are connected to eight lowpass filters that can operate in one of two modes. In the first mode, the filter cut-off frequencies are controlled by the left wrist sensor, while the mix of live to filtered signal is controlled by the right wrist sensor. The second mode, activated by a dedicated finger switch, routes a sample-and-hold function to control the cut-off frequencies of the eight lores~ filters. The sample-and-hold timing is designed to provide syncopated filter cut-off frequencies to the eight output channels.

The second compression looper operates by recording into pairs of recording/playback buffers designated front narrow, front wide, rear wide, and rear narrow. In this case, the live vocal signal is sequentially loaded from the front pair of recording buffers through to the rear pair. The size of each pair of buffers is variably preset via message boxes stored in patch presets, which are recalled by the performer.

The two granulators each output eight equally spaced grain phases that are either connected to a discrete output channel or mixed and fed to one of the three spatialization processors. Granular preset message boxes, recalled by the activation of a dedicated finger switch, set such values as grain duration, pitch, pitch randomization, and pitch quantization. Additionally, these message boxes contain sensor scaling values that enable various ranges of granular scrolling. A master patcher handles all of the signal routing and the activation and muting of the processing patchers.
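To make the looper behavior concrete, the following minimal Python sketch models the sequential buffer loading and the sample-and-hold filter mode described above. It is an illustration only, not the actual Max/MSP patch: the buffer sizes, sample rate, and cutoff range are invented for the example.

    import random

    # Illustrative buffer sizes in milliseconds; the real sizes are stored
    # in patch preset message boxes and recalled by the performer.
    BUFFER_SIZES_MS = [125, 250, 375, 500, 625, 750, 875, 1000]

    class CompressionLooper:
        """Sketch of the first compression looper: a record command loads
        the eight buffers in sequence, and each buffer loops to its own
        output channel, producing staggered rhythmic pulsations."""

        def __init__(self, sizes_ms, sample_rate=44100):
            self.buffers = [[0.0] * int(sample_rate * ms / 1000) for ms in sizes_ms]
            self.record_index = 0  # which buffer the next record command fills

        def on_record_command(self, live_input):
            # Triggered by the dedicated finger switch: fill the next
            # buffer in sequence with the live vocal input.
            n = len(self.buffers[self.record_index])
            self.buffers[self.record_index] = list(live_input[:n])
            self.record_index = (self.record_index + 1) % len(self.buffers)

    def sample_and_hold_cutoffs(low_hz=200.0, high_hz=4000.0, channels=8):
        # Filter mode two: at syncopated intervals, a sample-and-hold stage
        # picks a fresh cutoff for each of the eight lowpass (lores~) filters.
        return [random.uniform(low_hz, high_hz) for _ in range(channels)]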

The sensor array for V’Oct(Ritual) employs twelve switched inputs: four finger switches on the right-hand data glove provide individual bend sensor activation and deactivation, while eight finger switches mounted on the left-hand glove provide utility functions such as Max/MSP patch/preset selection and granular sampling and recording (see Figure 2). There is one bend sensor located on each elbow and one on each wrist; the mapping and programmed expressivity (sensor scaling) of each sensor element can be changed during the course of the work. As in all previous works created for the Bodycoder System, the performer is required to control all aspects of the performance with no off-stage intervention from the mixing desk/computer system. In V’Oct(Ritual) this includes patch/preset navigation; the initiation of granular sampling and compression recording; the activation, routing, and control of filter and pitch processes; and the initiation and gestural (kinaesonic) control of various spatialization routines.
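The division of labor between the two gloves can be sketched as a simple dispatch, shown below in Python. The switch numbering, sensor names, and method names are hypothetical; in the actual system this routing is handled within the Max/MSP environment.

    class PatcherStub:
        """Stand-in for the Max/MSP environment; these method names are
        invented for illustration and are not the actual patch objects."""
        def toggle_bend_sensor(self, name): print("toggle", name)
        def next_preset(self): print("advance patch/preset")
        def record_granular_sample(self): print("granular sampling")
        def record_compression_loop(self): print("compression recording")

    BEND_SENSORS = ["left_wrist", "right_wrist", "left_elbow", "right_elbow"]

    def handle_switch(switch_id, patcher):
        if 0 <= switch_id <= 3:
            # Right-hand glove: four switches arm/disarm the four bend
            # sensors (one on each wrist and each elbow).
            patcher.toggle_bend_sensor(BEND_SENSORS[switch_id])
        elif switch_id == 4:
            # Left-hand glove: utility functions such as preset selection...
            patcher.next_preset()
        elif switch_id == 5:
            # ...granular sampling...
            patcher.record_granular_sample()
        elif switch_id == 6:
            # ...and compression recording; the remaining switches carry
            # further utilities (routing, mode changes, and so on).
            patcher.record_compression_loop()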


Figure 2 - V’Oct(Ritual), 2011. Right and left hands (detail), showing expressive and utilitarian functions. © Bodycoder, 2011. Used with permission.


Mapping Strategies for Spatialization

One unusual feature of V’Oct(Ritual) is the combination of automated (programmed) and live (performer-controlled) spatialization, with the performer deciding the appropriate mode of spatialization and when it is appropriate to take control of sonic diffusion. First, there is automated spatialization, which operates in two modes, each unique to one of the two granulator abstractions. The first mode operates by randomly positioning each granulator phase signal across individual output channels; the width and speed of panning are preset and stored for recall by the performer. The second mode moves the granulator phases through a sequence of preset trajectories that are again recalled by the performer as part of the patch preset recall sequence.
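A minimal Python sketch of the first automated mode might look like the following, where each grain phase is given a random position across the eight outputs; the width value and repositioning interval stand in for the preset width and speed (the numbers are illustrative, not taken from the piece).

    import random

    def reposition_grain_phases(num_phases=8, num_channels=8, width=1.0):
        """Mode one: assign each granulator phase a random pan position.
        'width' (0-1, recalled from a preset) limits how far from the
        center of the speaker array a phase may be placed."""
        center = (num_channels - 1) / 2.0
        spread = width * center
        return [center + random.uniform(-spread, spread) for _ in range(num_phases)]

    # The preset 'speed' value would determine how often this function
    # is called, e.g. once every 250 ms for rapid scattering.
    positions = reposition_grain_phases(width=0.5)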

Second, there is gesturally controlled spatialization, which operates in three modes. The first mode is enabled by the simultaneous activation of both wrist sensors. This effectively mixes the eight grain phases of the active granulator into a pair of signals, each composed of four grain phases. These mixed granular pairs are routed so that one pair can be gesturally panned between the right-hand-side front and rear channels and the other between the left-hand-side front and rear channels, using the sensor elements located on the right and left wrists respectively. The second spatialization mode is enabled by the activation of an individual wrist sensor, which routes a mix of all grain phases to one of two rotational spatializers: the right wrist controls panning in a counterclockwise direction and the left wrist controls panning in a clockwise direction. The remaining spatializer is selected by the operation of a dedicated finger switch. Once this switch has been detected, a mix of all eight granulator phases is routed to a triggered panner. Each subsequent detection of this finger switch pans the combined signal from its current location to a randomly selected output channel. The duration of each pan trajectory is dynamically controllable by the right wrist sensor, operating in a range of 0 to 2500 ms.
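The triggered panner of the third mode can be sketched as follows. The normalization of the wrist sensor to a 0-1 range is an assumption for the example; the 0 to 2500 ms span is taken from the piece.

    import random

    class TriggeredPanner:
        """Sketch of the third gestural mode: each detection of the
        dedicated finger switch pans the mixed granular signal from its
        current channel to a randomly selected output channel."""

        def __init__(self, num_channels=8):
            self.num_channels = num_channels
            self.current_channel = 0

        def on_finger_switch(self, wrist_value):
            # The right wrist sensor (assumed normalized 0-1) sets the
            # duration of the pan trajectory within 0 to 2500 ms.
            duration_ms = wrist_value * 2500.0
            target = random.randrange(self.num_channels)
            print(f"pan: ch {self.current_channel} -> ch {target} "
                  f"over {duration_ms:.0f} ms")
            self.current_channel = target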


Kinaesonics

The relationship between gesture and sound is programmed in the Max/MSP environment, where gesture is mapped and scaled to sound processing, whether that is filtering, modulating, or scrolling through a granular loop buffer. By changing the scaling ranges for a particular gesture-to-sound event, the physical control of a particular kinaesonic relationship is altered. Large gestural ranges can be scaled to a small arc of sonic effect, and small gestural ranges can be scaled to a large sonic effect. A huge choice of kinaesonic physicalities is thus made available through the use of scaling permutations. Changes in mapping and scaling within individual message boxes produce different kinaesonic sensitivities from moment to moment within a piece. Scaling calculations equate to the size or range of sound processing that can be controlled and affected across, for example, the 180-degree bend of the arm. A range of scalings across a variety of presets provides a blend of different types and physical qualities of kinaesonic expression; Figure 3 shows a basic idea of this.


Figure 3 - Notional graphic representation of the effects of scaling for kinaesonic expression. © Bodycoder, 2011. Used with permission.
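In code, this kind of scaling reduces to a linear mapping in the spirit of Max’s [scale] object. The Python sketch below shows the two extremes described above; the sensor range, cutoff values, and buffer positions are invented for illustration.

    def scale(value, in_lo, in_hi, out_lo, out_hi):
        # Linear mapping of a sensor reading in [in_lo, in_hi] to a
        # processing value in [out_lo, out_hi].
        return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

    elbow_degrees = 90.0  # a hypothetical elbow bend reading, 0-180

    # Large gesture, small arc of effect: the full 180-degree bend
    # sweeps a filter cutoff across only 400 Hz.
    cutoff_hz = scale(elbow_degrees, 0.0, 180.0, 800.0, 1200.0)

    # Small gesture, large effect: a 30-degree portion of the bend
    # scrolls the entire granular loop buffer (0.0 to 1.0).
    buffer_pos = scale(min(elbow_degrees, 30.0), 0.0, 30.0, 0.0, 1.0)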


A larger scaling range gives the performer a broader range of processing to control. This potentially requires more physical precision and/or offers the opportunity for the performer to selectively ‘play’ areas of processing. Choices are, of course, subject to compositional preferences, so the scaling of gestural expressivity in live sound processing is always a negotiation between what the composition requires at that moment and the practicality of physical control from the point of view of the performer. This negotiation of gestural expressivity evolves throughout the development and rehearsal stages of a piece. Wilson-Bokowiec states that:


…sometimes, even though the arm is scaled for a massive range of gestural control, I’m only going to a small portion of that range. This is something that evolves through the development of a piece—the initial intention might be to manipulate a sound across the full range, but when you [sic] hear it, you [sic] might only want me to articulate a portion of it. If we go on working on a section/expression like this, it very quickly gets into my body memory (my kinaesthetic memory), then, rather than re-scaling that event for a small range (which would change the quality of the gesture for me), it’s easier for me to keep the range wide—even though it’s technically much harder to hit a specific portion or pitch within a wider range. [2]

Since the inception of the Bodycoder System in 1995, [3] I have worked collaboratively with a single performer, Julie Wilson-Bokowiec. Working collaboratively is not only a conscious artistic choice, but one that is necessitated by the live interactive nature of the work. In terms of the interactive vocal works, V’Oct(Ritual) and those of the Vox Circuit Trilogy (2007), the acoustic vocalizations of the performer form the raw input material of the pieces—this is difficult to simulate or indeed imagine without the presence of the performer. The composition of kinaesonic expression—sensor scaling, mapping, and response within the Max/MSP environment—impacts upon the physicality of the performer. It is therefore necessary that the performer participates in decisions that prescribe her physicality. Because of the level of real-time control and responsibilities for both the initiation and navigation of the Max/MSP environment that is part of the realization of the live performance, it is also necessary that the performer has a complete understanding of the larger hardware and software architecture of the piece. This knowledge can only really be established through the performer’s participation in the compositional/developmental phases of the creation of a piece. Such knowledge affords the performer both security within the live performance/composition and a level of autonomy that excludes the need for outside intervention. This, I believe, produces a truer level of virtuosity, not simply in terms of quality of gestural and vocal expressivity, but also in terms of the performer’s self-determined control within the precomposed structures.


Vocal and Live Processing

The interactive relationship between the analog body and the digital environment of the piece finds a second syntactical intimacy in the voice. The acoustic voice is important not only with respect to its unprocessed presence within the sonic landscape, but more crucially in the manner in which it interacts with live processing. The timbre, pitch, and energy of the acoustic voice enliven, activate, and articulate certain electro-acoustic processes. A key part of the development of our interactive vocal works is concerned with identifying the qualities of acoustic vocal input that result in sonically rich interactions. The same concerns inform the choice of phrasing, melody construction, the quality of accents, and the use of natural forms of vocal filtering, executed by changing the shape of the mouth and the muscular use of the throat and larynx. These are generically thought of as extended vocal techniques, which also include a range of ethnic modes of vocal production.

In V’Oct(Ritual) the performer is responsible for the articulation and consistent production of two simultaneously intertwining and interactive sound elements. The first is the acoustic vocal source that also acts as a kind of carrier (raw timbre), a catalyst (initiator), and a participant (part of a co-existent duality)—to characterize just a few of its roles. The second is the live—often multifaceted—electro-acoustic soundscape, part of the articulation of which is gesturally embodied by the performer. This soundscape is often constructed out of layers of sonic voices built compositionally through the use of granular samples, pedal notes, and multiple live looping recorders, in combination with additional real-time DSP processing. Here the interaction between the analog and the digital is at its most complex, as Wilson-Bokowiec describes:


The qualities of interaction between my acoustic voice and the electroacoustic processes that build into active sonic landscapes is, I think, best described as a relationship of ratios. What I mean by this is that there is a kind of live negotiation that goes on between the acoustic voice and the processed. In performance I am acutely aware of what is activating and therefore exerting the most influence over the other. At points in the work I sometimes have to listen more intently to the electroacoustic consequences of that activation than I do to my own acoustic vocalization—listening to the other electroacoustic score and adjusting the acoustic voice almost intuitively. At such moments it’s almost as if my own voice acts as an instrumentalist for the electroacoustic processing instrument as it literally plays the processing. At other moments the ratio flips and I feel the processing working more in counterpoint or as a duality with the acoustic voice. [4]

Conclusion: Anatomy and Embodiment

The composition for the Bodycoder System requires the articulation of a range of interrelated elements and types of expression, including the body (gestures) of the performer. The making process therefore involves the configuration and simultaneous negotiation of all elements and expressive types, including the gestural and control parameters of the performer. The process of composition effectively collapses aspects of music composition, programming, sound design, gestural manipulation, spatialization, and performance into one multi-dimensional activity. Theories of embodied practice, in terms of both digital and analog music making, have been explored by Leman (2008) from the standpoint of cognition and digital interface design [5], and by Gritten and King (2006) [6] in an edited volume that examines the semantics of gesture from the experiential perspectives of both the musician and the audience. However, few studies have revealed the innate syntactical interrelatedness of gesture, sound, and space as it is configured within an inter-medial compositional process, or how this is embodied and encoded in the flesh, software, and hardware, within compositional architectures and scores in general.

Within the limited space of this paper I have tried to stress how such an embodied practice is itself configured within a collaborative artistic process. Such a process enables the performer to develop his or her performance skills alongside the technological development of the system environment and my own compositional ambitions. The importance and, indeed, necessity of working inside such a process to practically negotiate levels and qualities of interaction cannot be overstated. Composing for kinaesonic interaction is an interdisciplinary activity, working across levels of expression: vocal, sonic, programmed, and gestural (kinaesonic). It has necessarily led to the development of innovative new methodologies and approaches to composition that strive to move beyond the ‘trigger and response’ model of interaction to more acute, intimate, [7] complex, and precise modes of live electroacoustic and kinaesonic expression. It has necessitated a step-change in the way we view interactive composition and the making process, suggesting new aesthetic models and philosophies.

Design and compositional strategies for V’Oct(Ritual), as in previous works, ensure that the performer has total structural, navigational, and expressive control without third-party intervention from the mixing desk or computer, in order to facilitate a high and exacting level of virtuosity. Advancing into the area of performer-controlled spatialization in V’Oct(Ritual) is a new development in our work with the Bodycoder System that extends our interactive aesthetic and poses some interesting technical challenges. It is an area of interactive and electro-acoustic music practice that has for a number of years been generating debate with regard to the authority of performers over the diffusion of their own instruments. Simon Emmerson suggests, “we might consider giving the performer some say over what happens in projection of field information,” in order to “complete our idealized control revolution.” [8] For us this is not simply an aesthetic concern, but extends into more fundamental areas of performer identity and self-determination that connect strongly with an ethics of control, one that challenges the subjectification and reduction of the individual to the status of user: a rationalized, coherent, and stable actor.

The issue of agency is a fundamental principle that has shaped the nature of our practice and use of technology from its inception in 1995. That the performer is ‘enmeshed’ as opposed to ‘enslaved’ within the system, composition, and situation of performance is important for us. A primary concern for practically embodied agency offers a strong lens through which to actively work against the preemptive loss of physical agency that pre-configures technology in general and that, if solutions are not carefully negotiated, can in performance reinforce a master/slave ontology that diminishes the very concept of interaction. Agency is not strictly conceived of as the personal attributes or capacities of individuals, but is an effect that exists fundamentally in its operations. It could be said that, in a small way, the Bodycoder System is configured as a micro-performance world in which such agency is enacted and embraced. The liveness of our work, as well as the use of voice and gesture, exposes the vagaries of the human body and opens the work up to the possibility of failure. The choice to work principally with one-to-one kinaesonic control and live processing is a strategy that allows audiences to more easily sense the hyper-presence of the body/gesture, extended as sonic effect and movement, and expanded within the space of the diffusion system. Situating the audience at the center of the diffusion system, in effect, enfolds and brings them into the gestural/sonic field of the performer’s extended body, breaking the boundary between performer and audience.


References and Notes

[1] The term kinaesonic is derived from a composite of two words: ‘kinaesthetic’ meaning the movement principles of the body and ‘sonic’ meaning sound. Kinaesonics therefore refers to the mapping of sonic effects to bodily movements and is used to describe a particular form of interactive arts practice associated with the gestural manipulation and real-time processing of electro-acoustic music.

[2] Interview with Julie Wilson-Bokowiec, Dartington College, 2010 (unpublished).

[3] A more complete picture of the evolution of the Bodycoder System technology, practice, and aesthetics can be traced through various publications: M. Bromwich, “A Single Performer Controlled Interface For Electronic Dance/Music Theatre,” in Proceedings of the International Computer Music Conference (1995), 491-492; D. Hemment, “Bodycoder and the Music of Movement,” Mute Magazine 10 (1998): 34-39; M. Bromwich and J. Wilson, “Bodycoder: A Sensor Suit and Vocal Performance Mechanism for Real-time Performance,” in Proceedings of the International Computer Music Conference (1998), 292-295; J. Wilson-Bokowiec and M. Bromwich, “Lifting Bodies: Interactive Dance – Finding New Methodologies in the Motifs Prompted by New Technology – A Critique and Progress Report with Particular Reference to the Bodycoder System,” Organised Sound 5, no. 1 (2000): 9-16; M. A. Bokowiec and J. Wilson-Bokowiec, “Spiral Fiction,” Organised Sound 8, no. 3 (2003): 279-287; and J. Wilson-Bokowiec and M. A. Bokowiec, “Kinaesonics: The Intertwining Relationship of Body and Sound,” in “Bodily Instruments and Instrumental Bodies,” special issue, Contemporary Music Review 25, nos. 1-2 (2006): 47-58.

[4] Interview with Julie Wilson-Bokowiec.

[5] Marc Leman, Embodied Music Cognition and Mediation Technologies (Cambridge, MA: MIT Press, 2008).

[6] Anthony Gritten and Elaine King, Music and Gesture (Hampshire and Burlington: Ashgate Publishing Limited, 2006).

[7] F. Richard Moore uses the term ‘Control Intimacy’: “determining the match between the variety of musically desirable sounds produced and the psycho-physiological capabilities of a practiced performer. It is based on the performer’s subjective impression of the feedback control lag between the moment a sound is heard, a change is made by the performer, and the time when the effect of that control change is heard.” F. Richard Moore, “The Dysfunctions of MIDI,” Computer Music Journal 12, no. 1 (1988): 21.

[8] Simon Emmerson, Living Electronic Music (Hampshire and Burlington: Ashgate Publishing Limited, 2007), 96.


Author Biography

Dr. Mark Bokowiec is the manager of the electro-acoustic music studios and SPIRAL (Spatialization and Interactive Research Lab) at the University of Huddersfield, UK. Mark lectures in interactive performance, interface and system design, and composition. Interactive instrument and installation commissions include the LiteHarp for the Science Museum, London. Large works for the Bodycoder System include Spiral Fiction, for the cultural programme of the Commonwealth Games, Manchester; Cyborg Dreaming, for the Science Museum, London; Zeitgeist, for the KlangArt Festival; and Lifting Bodies, premiered at the Trafo Theatre, Budapest, as featured artists at the Hungarian Computer Music Foundation Festival NEW WAVES, supported by the British Council. Interactive vocal works for soloist and Bodycoder System include The Suicided Voice, created in residency at the Banff Centre, Canada, and Hand-to-Mouth and Etch, created in residency at the Confederation Centre of the Arts, Prince Edward Island, Canada. The current repertoire of large-scale multi-channel works includes V’Oct(Ritual), created in residency at Dartington College of Arts, and PythiaDelphine:21, developed in Athens in 2016 and premiered at the International Animart Festival in Delphi, Greece, in the same year.
