“Sensing Voice”*
*a longer version of this piece is forthcoming in Senses & Society 6 (2), July 2011.
In 2007, I received an invitation to a recital that would take place in my bathroom; the artist offered to present an underwater concert in my tub. My reaction? “Crazy,” I thought. “Why go to the trouble of singing in an element so far from ideal?” After a year of mulling it over, though, I finally realized that what I had dismissed for its hopeless impracticality might—precisely because it was impractical—offer fresh perspectives on singing and listening by resituating these familiar activities in vastly unfamiliar territory.
The underwater singing practiced by contemporary American soprano and performance artist Juliana Snapper challenges audiences to confront their unexamined assumptions about the relationships between the voice and materiality, the sensed and the singular. How do the physical and sensory properties of singers’ and listeners’ bodies affect and participate in the music we create and the sounds we hear? How do the physical space within which, and the matter through which, sound travels shape what we hear? And how do the relations between these aspects affect what it feels like to sing, and what it is possible to hear?
During the spring of 2010, while I was working on an article about Snapper’s project and teaching a seminar on the multi-sensory aspects of music, Snapper offered to mount a participatory version of her project for my class. She took us through some exercises in a large swimming pool in downtown Los Angeles. The first exercise paired us up; one person gently held the other under water, while the person underwater made sounds. I was paired with Natalia, who shouted––but with my ears above the water I couldn’t hear her voice.
So we tried another strategy: one person made sounds underwater while the rest of us put our heads and ears in – and then we could hear her. We found that the deeper into the water we descended, the more difficult it was to sing high notes. Fast tempi were also difficult to maintain; Natalia’s attempt resulted in muddled sounds.
Surprisingly, while sung sounds didn’t seem very loud, small internal throat sounds were incredibly powerful. These exercises demonstrate how much the medium through which sound waves flow affects their characteristics: their speed, direction, and so on. They also show that in order to register sound, the listening body (including the head) must be immersed in the material through which the sound flows.
The next exercise linked the six of us together by the arms; three participants stood in a line, with their backs against three others. We sang in a drone-like manner, playing with our voices above the water, at its surface, and then slowly sinking into it. We felt the sonic vibrations largely through direct contact with each other’s bodies. Of course sound also passed through the air and water, but because the most immediate path was from one body to another, this was the sensation that overpowered us.
At the end of the day, gathered around the poolside fireplace, we discussed how different singing felt in a liquid environment. We’d discovered that aural experience is predicated on physical contact with sound waves through shared media, in this case water and air, flesh and bone. We noted that the shared medium makes a great difference to how we experience the voice, and that the sound we ultimately hear depends partly on what is sung, and partly on the medium through which it passes (and how our bodies interact with that medium). In other words, in Snapper’s workshop we discovered that sound is a multi-sensory experience, tactile as well as aural. It also became clear that sound and music involve much more than traditional theories and notation can capture. (For a more thorough discussion of the differences between singing and sound underwater and in air, please see my forthcoming article.)
- At the Standard poolside fireplace.
- Hyunjong Lee and Zachary Wallmark.
- Sam Baltimore and Alexandra Apolloni.
- Jill Rogers.
I would like to underscore here that the character of a given sound source is not stable. Instead it is dependent on specific material conditions, and on particular relationships between the elements involved. A sound signal will move with a given speed depending on the material––air, water, metal, glass, etc.––through which it is propelled. As humans register the sound, it will move more or less directly through the eardrum or bones (and then transfer to the inner ear), depending on the relationship between the material through which it is propelled and the materiality of the ear. The part of the body that registers sound also plays a role in its apparent directionality.
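The claim that a sound signal’s speed depends on the material it moves through can be made concrete with the Newton–Laplace relation, c = √(K/ρ), where K is the medium’s bulk modulus and ρ its density. The sketch below uses rough textbook values (at roughly room temperature); the exact figures are illustrative assumptions, not measurements from the workshop.

```python
from math import sqrt

def speed_of_sound(bulk_modulus, density):
    """Newton-Laplace relation: c = sqrt(K / rho), in metres per second."""
    return sqrt(bulk_modulus / density)

# Approximate values at ~20 degrees C: (bulk modulus in Pa, density in kg/m^3)
media = {
    "air":   (1.42e5, 1.204),  # adiabatic bulk modulus of air
    "water": (2.2e9,  998.0),
}
for name, (K, rho) in media.items():
    print(f"{name}: ~{speed_of_sound(K, rho):.0f} m/s")
# air comes out near 343 m/s, water near 1,480 m/s --
# sound travels roughly four times faster in the pool than above it.
```
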
For example, our ability to hear in “stereo”––two distinct signals, left and right––is the result of sound entering our bodies from two directions (two ears). Because we most frequently deal with sound as it is propelled through air, we take this as a given and adjust our musical and acoustic research (and thus our concert halls and performance spaces) accordingly.
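One way to see how the medium reshapes stereo hearing is the interaural time difference: the maximum delay between a sound reaching one ear and then the other, roughly head width divided by the speed of sound. The head width below is an assumed round figure, and this is only a first-order sketch (underwater, bone conduction further blurs directionality), but the arithmetic shows why direction is harder to judge in the pool.

```python
# Maximum interaural time difference (ITD) for a sound arriving from the side:
# itd = d / c, where d is the ear-to-ear distance and c the speed of sound.
def max_itd_us(head_width_m, sound_speed_m_s):
    return head_width_m / sound_speed_m_s * 1e6  # in microseconds

HEAD_WIDTH = 0.18  # assumed ear-to-ear distance in metres
print(f"in air:   ~{max_itd_us(HEAD_WIDTH, 343):.0f} microseconds")
print(f"in water: ~{max_itd_us(HEAD_WIDTH, 1482):.0f} microseconds")
# Faster propagation in water shrinks the delay to roughly a quarter,
# giving the brain far less timing information to localize the source.
```
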
By highlighting the material aspects of sound and their reception, Snapper reminds us that what we hear depends as much on our materiality, physicality, and cultural and social histories as it does on so-called objective measurements (decibel level, soundwave count, or score), which are themselves mere images. Our experience of sound is a triangulation of events in which physical impulses (sonic vibrations), our bodies’ encultured capacity to receive these vibrations, and how we have been taught to understand them are at constant play, and subject to negotiation. In the experience of sound, what becomes clear is not a stable explanation of what sound or music is. Instead we are led to understand that each such account is a composite manifestation of our perception of sound at a given moment in time and place.
Listening to Robots Sing: GarageBand on the iPad
I recently had the opportunity to fool around with the iPad 2’s new GarageBand suite. Enticed by the intuitive touch interface, I soon found myself lost within the device’s labyrinthine architecture. Every poke, prod, and press brought me to a new screen with a bevy of exciting options: a touch to create a drum loop, a tickle to evoke some reverb, a brush to strum a guitar. I was one with the machine; it was a truly cybernetic, kinesthetic moment. This may sound naïve, but I had never realized how many tools were available to electronic musicians, or how intuitive using these tools could be. As digital tools for creating music become more accessible and more intuitive, what is the role of the human in understanding their use? Further, what strategies can we adopt when listening to these creations?
This question may seem a bit outdated to those who have been researching post-humanist phenomena since the digital boom of the mid-nineties. Often conflicting perspectives on the negotiation of the human and the digital have been considered over the last decade or so. Some, like Donna Haraway, Pierre Lévy, and even Ray Kurzweil, offer particularly optimistic readings of the post-human (although for radically different reasons), while scholars like Nancy Baym and Jaron Lanier have offered decidedly more sober readings of the problematic. They argue that splits between the human and post-human, or analog and digital, are false dichotomies. Truth be told, none of the theorists above adequately addresses my feelings on this topic. Producing music with a digital audio suite makes me defensive of my humanism; it is, by its very nature, a project of preservation.
The algorithmic tools packaged within digital audio suites encourage a sense of aesthetic preservation. Tools like GarageBand’s Smart Guitar, Smart Drums, Smart Bass, various arpeggiators, and Apple Loops encourage the user to program music at a high level, where the nuance of serendipity and improvisation plays second fiddle to the overall sonic contours of a piece. Although the user is provided the tools to intervene and program music in a very specific way, it is by default a distinctly different experience from that of playing a guitar or piano. The ghost of the algorithm haunts such performances, reminding the user that these acts of spontaneous creation are no longer the default but deliberate. This sense of deliberate improvisation forces me into a reflexive space where I am acutely aware of the mediations occurring within my performance. Succinctly, I must defend a sense of self within my creation. If I yield to the algorithms that seek to help me compose, I destroy all sense of the human within my work––simply turning on robots and watching them sing.
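To make the “ghost of the algorithm” less abstract: an arpeggiator, one of the aids named above, is at bottom a very small program that turns a held chord into a repeating sequence of single notes. The sketch below is a generic illustration of the idea; the function name, patterns, and MIDI note numbers are my own assumptions, not GarageBand’s actual implementation.

```python
# Minimal arpeggiator sketch: given the MIDI notes of a held chord,
# emit them one at a time in a repeating pattern.
def arpeggiate(chord_notes, steps, pattern="up"):
    notes = sorted(chord_notes)          # "up": ascending pitch order
    if pattern == "down":
        notes = notes[::-1]              # descending order
    elif pattern == "updown":
        notes = notes + notes[-2:0:-1]   # up, then back down (no repeats at the turns)
    return [notes[i % len(notes)] for i in range(steps)]

c_major = [60, 64, 67]  # MIDI note numbers for C4, E4, G4
print(arpeggiate(c_major, 8))  # [60, 64, 67, 60, 64, 67, 60, 64]
print(arpeggiate(c_major, 8, "updown"))
```

The performer supplies only the chord and a pattern choice; the machine supplies every individual note. That division of labor is precisely what makes any note chosen by hand feel deliberate rather than default.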
For this reason, I propose an aesthetic of preservation as a way to understand how we listen to works created by digital audio suites. As algorithmic aids become more advanced and commonplace in music, the human becomes a less essential aspect of the form. Understanding what has been deliberately included despite the seductive algorithmic environment is ultimately a project that seeks to recover the human in the machine; perhaps even a project doomed from the start, as we grow ever closer to the means of our artistic production.
AT
Magnasanti – Check out the results of my collaboration with Colin Germain on GarageBand!