Today the SO! Thursday stream inaugurates a four-part series entitled Hearing the UnHeard, which promises to blow your mind by way of your ears. Our Guest Editor is Seth Horowitz, a neuroscientist at NeuroPop and author of The Universal Sense: How Hearing Shapes the Mind (Bloomsbury, 2012), whose insightful work brings us directly to the intersection of the sciences and the arts of sound.
That’s where he’ll be taking us in the coming weeks. Check out his general introduction just below, and his own contribution for the first piece in the series. — NV
Welcome to Hearing the UnHeard, a new series of articles on the world of sound beyond human hearing. We are embedded in a world of sound and vibration, but the limits of human hearing only let us hear a small piece of it. The quiet library screams with the ultrasonic pulsations of fluorescent lights and computer monitors. The soothing waves of a Hawaiian beach are drowned out by the thrumming infrasound of underground seismic activity near “dormant” volcanoes. Time, distance, and luck (and occasionally really good vibration isolation) separate us from the explosive sounds of world-changing impacts between celestial bodies. And vast amounts of information, ranging from the songs of auroras to the sounds of dying neurons, can be made accessible and understandable by translating them into human-perceivable sounds through data sonification.
Four articles will examine how this “unheard world” affects us. My first post below will explore how our environment and evolution have constrained what is audible, and what tools we use to bring the unheard into our perceptual realm. In a few weeks, sound artist China Blue will talk about her experiences recording the Vertical Gun, a NASA asteroid impact simulator which helps scientists understand the way in which big collisions have shaped our planet (and is very hard on audio gear). Next, Milton A. Garcés, founder and director of the Infrasound Laboratory at the University of Hawaii at Manoa, will talk about volcano infrasound, and how acoustic surveillance is used to warn about hazardous eruptions. And finally, Margaret A. Schedel, composer and Associate Professor of Music at Stony Brook University, will help readers explore the world of data sonification, letting us listen in and gain greater intellectual and emotional understanding of the world of information by converting it to sound.
– Guest Editor Seth Horowitz
Although light moves much faster than sound, hearing is your fastest sense, operating about 20 times faster than vision. Studies have shown that we think at the same “frame rate” as we see, about 1-4 events per second. But the real world moves much faster than this, and doesn’t always place things important for survival conveniently within your field of view. Think about the last time you were driving when suddenly you heard the blast of a horn from the previously unseen truck in your blind spot.
Hearing also occurs prior to thinking, with the ear itself pre-processing sound. Your inner ear responds to changes in pressure that directly move tiny hair cells, organized by frequency, which then send signals about what frequency was detected (and at what amplitude) toward your brainstem, where things like location, amplitude, and even how important the sound may be to you are processed, long before they reach the cortex, where you can think about it. And since hearing sets the tone for all later perceptions, our world is shaped by what we hear (Horowitz, 2012).
But we can’t hear everything. Rather, what we hear is constrained by our biology, our psychology, and our position in space and time. Sound is really about how the interaction between energy and matter fills space with vibrations. This makes the size of the sender, the listener, and the environment one of the primary features that define your acoustic world.
You’ve heard about how much better your dog’s hearing is than yours. I’m sure you got a slight thrill when you thought you could actually hear the “ultrasonic” dog-training whistles that are supposed to be inaudible to humans (sorry, but every one I’ve tested puts out at least some energy in the upper range of human hearing, even if it does sound pretty thin). But it’s not that dogs hear better. Actually, dogs and humans show about the same sensitivity to sound in terms of sound pressure, with humans’ most sensitive region running from 1-4 kHz and dogs’ from about 2-8 kHz. The difference is a question of range, and range is tied closely to size.
Most dogs, even big ones, are smaller than most humans, and their auditory systems are scaled similarly. A big dog is about 100 pounds, smaller than most adult humans. And since body parts tend to scale in a coordinated fashion, one of the first places to search for a link between size and frequency is the tympanum, or ear drum, the earliest structure that responds to pressure information. An average dog’s eardrum is about 50 mm², whereas an average human’s is about 60 mm². In addition, while a human’s cochlea is a spiral of 2.5 turns that holds about 3,500 inner hair cells, your dog’s has 3.25 turns and about the same number of hair cells. In short: dogs probably have better high frequency hearing because their eardrums are better tuned to shorter wavelength sounds and their sensory hair cells are spread out over a longer distance, giving them a wider range.
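The link between size and frequency comes down to simple arithmetic: a sound’s wavelength is the speed of sound divided by its frequency, so smaller structures pair naturally with higher frequencies. A back-of-the-envelope sketch, using the peak-sensitivity figures quoted above:

```python
# Wavelength of a sound in air: lambda = c / f.
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature


def wavelength_m(freq_hz: float) -> float:
    """Return the wavelength, in meters, of a tone at freq_hz."""
    return SPEED_OF_SOUND / freq_hz


# Peak-sensitivity regions quoted above: humans ~1-4 kHz, dogs ~2-8 kHz.
for label, freq in [("human peak (4 kHz)", 4000.0), ("dog peak (8 kHz)", 8000.0)]:
    print(f"{label}: ~{wavelength_m(freq) * 100:.1f} cm")
```

Doubling the frequency halves the wavelength, which is why an eardrum a bit smaller than ours pairs with a sensitivity peak an octave higher.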
Then again, if hearing were just about the size of the ear components, you’d expect that yappy 5-pound Chihuahua to hear much higher frequencies than the lumbering 100-pound St. Bernard. Yet hearing sensitivity at the two ends of the dog spectrum doesn’t vary by much. This is because there’s a big difference between what the ear can mechanically detect and what the animal actually hears. Chihuahuas and St. Bernards are both breeds derived from a common wolf-like ancestor that probably didn’t have as much variability as we’ve imposed on the domesticated dog, so their brains are still largely tuned to hear what a medium-to-large pseudo-wolf-like animal should hear (Heffner, 1983).
But hearing is more than just detection of sound. It’s also important to figure out where a sound is coming from. A sound’s location is calculated in the superior olive – nuclei in the brainstem that compare the difference in time of arrival of low frequency sounds at your two ears and, for higher frequency sounds, the difference in amplitude between your ears (because your head gets in the way, casting a sound “shadow” on the side furthest from the source). This means that animals with very large heads, like elephants, will be able to figure out the location of longer wavelength (lower pitched) sounds, but will probably have trouble localizing high pitched sounds, because the shorter wavelengths will not even reach the other side of their heads at a useful level. On the other hand, smaller animals, which often have large external ears, are under greater selective pressure to localize higher pitched sounds, but have heads too small to pick up the very low infrasonic sounds that elephants use.
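To put rough numbers on the head-size argument: the largest time-of-arrival difference an animal can ever get is roughly its head width divided by the speed of sound. A back-of-the-envelope sketch (the head widths here are illustrative guesses for the comparison, not measurements):

```python
# Maximum interaural time difference (ITD), approximated as the time sound
# takes to cross the width of the head. This ignores diffraction around the
# skull, so it is a rough lower-bound illustration, not a precise model.
SPEED_OF_SOUND = 343.0  # m/s in air


def max_itd_us(head_width_m: float) -> float:
    """Approximate maximum interaural time difference, in microseconds."""
    return head_width_m / SPEED_OF_SOUND * 1e6


# Illustrative head widths (rough guesses):
for animal, width_m in [("mouse", 0.01), ("human", 0.18), ("elephant", 0.5)]:
    print(f"{animal}: up to ~{max_itd_us(width_m):.0f} microseconds")
```

A mouse gets only tens of microseconds of timing difference to work with, while an elephant gets well over a millisecond, which is part of why big heads do so well with long, low sounds.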
But you as a human are a fairly big mammal. If you look up “Body Size Species Richness Distribution” which shows the relative size of animals living in a given area, you’ll find that humans are among the largest animals in North America (Brown and Nicoletto, 1991). And your hearing abilities scale well with other terrestrial mammals, so you can stop feeling bad about your dog hearing “better.” But what if, by comic-book science or alternate evolution, you were much bigger or smaller? What would the world sound like? Imagine you were suddenly mouse-sized, scrambling along the floor of an office. While the usual chatter of humans would be almost completely inaudible, the world would be filled with a cacophony of ultrasonics. Fluorescent lights and computer monitors would scream in the 30-50 kHz range. Ultrasonic eddies would hiss loudly from air conditioning vents. Smartphones would not play music, but rather hum and squeal as their displays changed.
And if you were larger? For a human scaled up to elephantine dimensions, the sounds of the world would shift downward. While you could still hear (and possibly understand) human speech and music, the fine nuances from the upper frequency ranges would be lost, voices audible but mumbled and hard to localize. But you would gain the infrasonic world, the low rumbles of traffic noise and thrumming of heavy machinery taking on pitch, color and meaning. The seismic world of earthquakes and volcanoes would become part of your auditory tapestry. And you would hear greater distances as long wavelengths of low frequency sounds wrap around everything but the largest obstructions, letting you hear the foghorns miles distant as if they were bird calls nearby.
But these sounds are still in the realm of biological listeners, and the universe operates on scales far beyond that. The sounds from objects, large and small, have their own acoustic world, many beyond our ability to detect with the equipment evolution has provided. Weather phenomena, from gentle breezes to devastating tornadoes, blast throughout the infrasonic and ultrasonic ranges. Meteorites create infrasonic signatures through the upper atmosphere, trackable using a system devised to detect incoming ICBMs. Geophones, specialized low frequency microphones, pick up extremely low frequency signals foretelling volcanic eruptions and earthquakes. Beyond the earth, we translate electromagnetic frequencies into the audible range, letting us listen to the whistlers and hoppers that signal the flow of charged particles and lightning in the atmospheres of Earth and Jupiter and to the microwave remains of the Big Bang, and we send listening devices on our spacecraft to let us hear the winds on Titan.
Here is a recording of whistlers captured by the Van Allen Probes, currently orbiting high above the Earth:
When the computer freezes or the phone battery dies, we complain about how much technology frustrates us and complicates our lives. But our audio technology is also the source of wonder, not only letting us talk to a friend around the world or listen to a podcast from astronauts orbiting the Earth, but letting us listen in on unheard worlds. Ultrasonic microphones let us listen in on bat echolocation and mouse songs, geophones let us wonder at elephants using infrasonic rumbles to communicate long distances and find water. And scientific translation tools let us shift the vibrations of the solar wind and aurora or even the patterns of pure math into human scaled songs of the greater universe. We are no longer constrained (or protected) by the ears that evolution has given us. Our auditory world has expanded into an acoustic ecology that contains the entire universe, and the implications of that remain wonderfully unclear.
Exhibit: Home Office
This is a recording of my home office made with standard stereo microphones. Aside from the usual typing, mouse clicking, and computer sounds, there are a couple of 3D printers running and some music playing. It’s largely an environment you don’t pay much attention to while you’re working in it, yet it is acoustically very rich if you do.
This sample was made by pitch-shifting the frequencies of sonicoffice.wav down so that the ultrasonic content moves into the normal human range, cutting off at about 1-2 kHz, as if you were hearing with mouse ears. Sounds that are normally inaudible, like the squeal of the computer monitor cycling on and the high-pitched whine of the stepper motors in the 3D printer, suddenly become much louder, while the familiar sounds are mostly gone.
This recording of the office was made with a Clarke Geophone, a seismic microphone used by geologists to pick up underground vibration. Its primary sensitivity is around 80 Hz, although its range runs from 0.1 Hz up to about 2 kHz. All you hear in this recording are very low frequency sounds and impacts (footsteps, keyboard strikes, vibration from printers, some fan vibration) that you usually ignore, since your ears are not well tuned to frequencies under 100 Hz.
Finally, this sample was made by pitch-shifting the frequencies of infrasonicoffice.wav up, as if you had grown to elephantine proportions. Footsteps and computer fan noise (usually almost undetectable at 60 Hz) become loud and tonal, and the normal pitch of music and typing disappears, aside from the bass. (WARNING: the fan noise is really annoying.)
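You can approximate these exhibits yourself. The crudest pitch shift is simply to rewrite a recording’s samples at a different nominal sample rate, which lowers (or raises) every frequency by the same factor, though unlike the exhibits above it also stretches or compresses the duration. The sketch below synthesizes its own stand-in “office” recording rather than using the originals, so office.wav here is a made-up file, not one of the recordings from this post:

```python
import math
import struct
import wave

RATE = 96_000  # Hz: high enough to represent "ultrasonic" content digitally


def write_wav(path, samples, rate):
    """Write a mono 16-bit WAV from float samples in [-1, 1]."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))


# Stand-in office: an audible 1 kHz hum plus a 30 kHz "monitor squeal"
# that human ears (and most speakers) would miss entirely.
samples = [0.5 * math.sin(2 * math.pi * 1_000 * i / RATE)
           + 0.2 * math.sin(2 * math.pi * 30_000 * i / RATE)
           for i in range(RATE)]  # one second
write_wav("office.wav", samples, RATE)

# Crude pitch shift: copy the samples but declare a lower sample rate.
# Played back, every frequency drops by the same factor (and the clip
# plays 8x slower, unlike the time-preserving shift used for the exhibits).
FACTOR = 8
with wave.open("office.wav", "rb") as w:
    rate, frames = w.getframerate(), w.readframes(w.getnframes())
with wave.open("office_down.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(rate // FACTOR)  # the 30 kHz squeal lands at 3.75 kHz
    w.writeframes(frames)
```

Open office_down.wav in any player and the formerly ultrasonic squeal sits comfortably in the audible range; a time-preserving shift (as used for the actual exhibits) needs a phase vocoder or a tool like Audacity’s pitch-shift effect instead.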
The point is: a space can sound radically different depending on the frequency ranges you hear. Different elements of the acoustic environment pop out depending on the type of recording instrument you use (ultrasonic microphone, regular microphone, or geophone) and the size and sensitivity of your ears.
Featured image by Flickr User Jaime Wong.
Seth S. Horowitz, Ph.D. is a neuroscientist whose work in comparative and human hearing, balance and sleep research has been funded by the National Institutes of Health, National Science Foundation, and NASA. He has taught classes in animal behavior, neuroethology, brain development, the biology of hearing, and the musical mind. As chief neuroscientist at NeuroPop, Inc., he applies basic research to real world auditory applications and works extensively on educational outreach with The Engine Institute, a non-profit devoted to exploring the intersection between science and the arts. His book The Universal Sense: How Hearing Shapes the Mind was released by Bloomsbury in September 2012.
REWIND! If you liked this post, check out …
Learning to Listen Beyond Our Ears– Owen Marshall
This is Your Body on the Velvet Underground– Jacob Smith
World Listening Day took place last week, and as I understand it, it is all about not taking sound for granted – an admirable goal indeed! But it is worth taking a moment to consider what sorts of things we might be taking for granted about sound as a concept when we decide that listening should have its own holiday.
One gets the idea that soundscapes are like giant pandas on Endangered Species Day – precious and beautiful and in need of protection. Or perhaps they are more like office workers on Administrative Professionals’ Day – crucial and commonplace, but underappreciated. Does an annual day of listening imply an interruption of the regularly scheduled three hundred and sixty-four days of “looking”? I don’t want to undermine the valuable work of the folks at the World Listening Project, but I’d argue it’s equally important to consider the hazards of taking sound and listening for granted as premises of sensory experience in the first place. As WLD has passed, let us reflect upon ways we can listen beyond our ears.
At least since R. Murray Schafer coined the term, people have been living in a world of soundscapes. Emily Thompson provides a good definition of the central concept of the soundscape as “an aural landscape… simultaneously a physical environment and a way of perceiving that environment; it is both a world and a culture constructed to make sense of that world.”(117) As an historian, Thompson was interested in using the concept of soundscape as a way of describing a particular epoch: the modern “machine age” of the turn of the 20th century.
Anthropologist Tim Ingold has argued that, though the concept that listening is primarily something that we do within, towards, or to “soundscapes” usefully counterbalanced the conceptual hegemony of sight, it problematically reified sound, focusing on “fixities of surface conformation rather than the flows of the medium” and simplifying our perceptual faculties as “playback devices” that are neatly divided between our eyes, ears, nose, skin, tongue, etc.
Stefan Helmreich took Ingold’s critique a step further, suggesting that soundscape-listening presumes a particular kind of listener: “emplaced in space, [and] possessed of interior subjectivities that process outside objectivities.” Or, in less concise but hopefully clearer words: When you look at the huge range of ways we experience the world, perhaps we’re limiting ourselves if we confine the way we account for listening experiences with assumptions (however self-evident they might seem to some of us) that we are ‘things in space’ with ‘thinking insides’ that interact with ‘un-thinking outsides.’ Jonathan Sterne and Mitchell Akiyama, in their chapter for the Oxford Handbook of Sound Studies, put it most bluntly, arguing that
Recent decades of sensory history, anthropology, and cultural studies have rendered banal the argument that the senses are constructed. However, as yet, sound scholars have only begun to reckon with the implications for the dissolution of our object of study as a given prior to our work of analysis.(546)
Here they are referring to the problem of the technological plasticity of the senses suggested by “audification” technologies that make visible things audible and vice-versa. SO!’s Jennifer Stoever-Ackerman has also weighed in on the social contingency of the “listening ear,” invoking Judith Butler to describe it as “a socially-constructed filter that produces but also regulates specific cultural ideas about sound.” In various ways, here, we get the sense that not only is listening a good way to gain new perspectives, but that there are many perspectives one can have concerning the question of what listening itself entails.
But interrogating the act of listening and the sounds towards which it is directed is not just about good scholarship and thinking about sound in a properly relational and antiessentialist way. It’s even less about tsk-tsking those who find “sound topics” intrinsically interesting (and thus spend inordinate amounts of time thinking about things like, say, Auto-Tune.) Rather, it’s about taking advantage of listening’s potential as a prying bar for opening up some of those black boxes to which we’ve grown accustomed to consigning our senses. Rather than just celebrating listening practices and acoustic ecologies year after year, we should take the opportunity to consider listening beyond our current conceptions of “listening” and its Western paradigms.
For example, when anthropologist Kathryn Linn Geurts first tried to understand the sensory language of the West African Anlo-Ewe people, she found a rough but ready-enough translation for “hear” in the verb se or sese. The more she spoke with people about it, however, the more she felt the limitations of her own assumptions about hearing being, simply, the way we sense sounds through our ears. As one of her informants put it, “Sese is hearing – not hearing by the ear but a feeling type of hearing”(185). As it turns out, according to many Anlo-Ewe speakers, our ability to hear the sounds of the world around us is by no means an obviously discrete element of some five-part sensorium, but rather a sub-category of a feeling-in-the-body, or seselelame. Geurts traces the ways in which the prefix se combines with other sensory modes, opening up the act of hearing as it goes along: sesetonume, for example, is a category that brings together sensations of “eating, drinking, breathing, regulation of saliva, sexual exchanges, and also speech.” Whereas English speakers are more inclined to contrast speech with listening as an act of expression rather than perception, for the Anlo-Ewe they can be joined together into a single sensory experience.
The ways of experiencing the world intimated by Geurts’ Anlo-Ewe interlocutors play havoc with conventionally “transitive,” western understandings of what it means to “sense” something (that is, to be a subject sensing an object) let alone what it means to listen. When you listen to something you like, Geurts might suggest to us that liking is part of the listening. Similarly, when you listen to yourself speak, who’s to say the feeling of your tongue against the inside of your mouth isn’t part of that listening? When a scream raises the hairs on the back of your neck, are you listening with your follicles? Are you listening to a song when it is stuck in your head? The force within us that makes us automatically answer “no” to questions of this sort is not a force of our bodies (they felt these things together after all), but a force of social convention. What if we tried to protest our centuries-old sensory sequestration? Give me synaesthesia or give me death!
Indeed, synaesthesia, or the bleeding-together of sensory modes in our everyday phenomenological experience, shows that we should loosen the ear’s hold on the listening act (both in a conceptual and a literal sense – see some of the great work at the intersections of disability studies and sound studies). In The Phenomenology of Perception, Maurice Merleau-Ponty put forth a bold thesis about the basic promiscuity of sensory experience:
Synaesthetic perception is the rule, and we are unaware of it only because scientific knowledge shifts the centre of gravity of experience, so that we have unlearned how to see, hear, and generally speaking, feel, in order to deduce, from our bodily organization and the world as the physicist conceives it, what we are to see, hear and feel. (266)
Merleau-Ponty, it should be said, is not anti-science so much as he’s interested in understanding the separation of the senses as an historical accomplishment. This allows us to think about and carry out the listening act in even more radical ways.
Of course all of this synaesthetic exuberance requires a note to slow down and check our privilege. As Stoever-Ackerman pointed out:
For women and people of color who are just beginning to decolonize the act of listening that casts their alternatives as wrong/aberrant/incorrect—and working on understanding their listening, owning their sensory orientations and communicating them to others, suddenly casting away sound/listening seems a little like moving the ball, no?
To this I would reply: yes, absolutely. It is good to remember that gleefully dismantling categories is by no means always the best way to achieve wider conceptual and social openness in sound studies. There is no reason to think that a synaesthetic agenda couldn’t, in principle, turn fascistic. The point, I think, is to question the tools we use just as rigorously as the work we do with them.
Owen Marshall is a PhD candidate in Science and Technology Studies at Cornell University. His dissertation research focuses on the articulation of embodied perceptual skills, technological systems, and economies of affect in the recording studio. He is particularly interested in the history and politics of pitch-time correction, cybernetics, and ideas and practices about sensory-technological attunement in general.
REWIND!…If you liked this post, you may also dig:
Snap, Crackle, Pop: The Sonic Pleasures of Food–Steph Ceraso
After a rockin’ (and seriously informative) series of podcasts from Leonard J. Paul, a Drrty South banger dropped by SO! Regular Regina Bradley, and a screamtastic meditation from Yvon Bonenfant, our summer Sound and Pleasure series serves up some awesomeness on a platter this week with the return of Steph Ceraso, who makes us wish all those food pics on instagram came with recordings. Take a big bite out of this! --JS, Editor-in-Chief
Lightly I tap the burnt surface with a cold metal spoon until it cracks; it fractures like a fine layer of sugary glass; silent, smooth custard mixes with the sticky sweet crunch of the caramelized shards.
An otherwise bland and unmemorable dessert, crème brûlée is always my go-to treat. The sonic pleasures of this indulgence keep me coming back: the tapping, cracking, crunching.
Though the taste and visual presentation of food usually get most of the hype, it’s no secret that sound can amplify the enjoyment and delight of eating. Indeed, sound has become an increasingly important ingredient in the design, advertising, and experience of food: from “junk” food to gourmet dining. What is especially fascinating and disconcerting about this strategic use of sound is the powerful connection between pleasure and sensory manipulation. To my mind, the myriad ways sound is employed to manipulate perceptions of food underscores the need to pay more attention to when, how, and why sound influences our thoughts, feelings, and sensory experiences.
* * *
Food engineers and marketing teams have been taking advantage of the pleasures of sound for years. Rice Krispies’ “Snap, Crackle, Pop” trademark has been around since the late 1920s. And of course there are Pop Rocks, my favorite sounding retro product. The carbonated sugar crystals were invented in the 1950s, but thanks to commercials that celebrated the candy in all of its sonic glory, Pop Rocks’ popularity reached a fever pitch in the 1970s and it’s still going strong today. The official Pop Rocks website boasts that the product continues to be the “leading popping candy brand worldwide.”
Sound is a crucial part of the pleasurable experience of food’s packaging, too. Consider Pringles’ famous “Once you pop you can’t stop” slogan. A neatly stacked chip cylinder with a pleasant-sounding lid is marketed as a refreshing alternative to crinkly chip bags.
Designing sound for the things that contain food may seem like a silly marketing gimmick, but the sounds of packaging can make or break the product. For instance, in an attempt to make its SunChips brand more environmentally friendly, in 2010 Frito-Lay introduced a compostable chip bag. Consumers found it to be ridiculously noisy and complained. The bag had so many haters, in fact, that a Facebook group called “SORRY I CAN’T HEAR YOU OVER THIS SUN CHIPS BAG” attracted nearly 30,000 fans. Sales fell, and the financial loss caused Frito-Lay to go back to the un-environmentally friendly bags. Just this year, the company introduced yet another version of the compostable bag. It’s too early to tell if consumers will deem its sound acceptable.
While many companies strive to hit the right note when it comes to the pleasurable sounds of food and its packaging, recent research on taste and sound has been more focused on how external sounds affect the experience of eating. In a noteworthy study, the food company Unilever and the University of Manchester found that the experience of sweetness and saltiness in food decreased in relation to high levels of background noise (perhaps one of the reasons that airplane food generally sucks). They also identified a correlation between the increased volume of background noise and the eater’s perception of crunchiness and freshness.
Additionally, the Crossmodal Laboratory at Oxford University run by professor Charles Spence got a lot of press for discovering that low-pitched sounds tend to bring out bitter flavors while high-pitched sounds heighten the sweetness of food. Go grab a snack (chocolate or coffee work best) and you can try this experiment for yourself.
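If you’d rather generate your own test tones than hunt for recordings, a few lines of Python will do it. The specific frequencies below are my own guesses at “low” and “high,” not the values used in Spence’s experiments:

```python
import math
import struct
import wave

RATE = 44_100    # samples per second (CD quality)
DURATION = 3.0   # seconds per tone


def write_tone(path: str, freq_hz: float) -> None:
    """Write a mono 16-bit sine tone of the given frequency to a WAV file."""
    n_samples = int(RATE * DURATION)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)  # 16-bit samples
        wav.setframerate(RATE)
        frames = bytearray()
        for i in range(n_samples):
            sample = 0.3 * math.sin(2 * math.pi * freq_hz * i / RATE)
            frames += struct.pack("<h", int(sample * 32767))
        wav.writeframes(bytes(frames))


write_tone("bitter_low.wav", 100.0)    # low pitch: taste for bitterness
write_tone("sweet_high.wav", 1500.0)   # high pitch: taste for sweetness
```

Loop each file while nibbling the same piece of chocolate and see whether the flavor seems to tilt bitter or sweet.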
Armed with scientific knowledge, many chefs and entrepreneurs have been teaming up to put these ideas into practice. For a limited time London restaurant House of Wolf served what they called a “sonic cake pop.” The treat came with a phone number that presented callers with the choice of pushing 1 for sweet (to hear a high-frequency sound) and 2 for bitter (to hear a low-frequency sound). The experiment was a success. People seemed to want to hear their cake and eat it too. The same Guardian article reports that Ben and Jerry’s plans to put QR codes on its packaging so that customers can use their smartphones to access sounds that complement the flavor of ice cream they are eating.
For some, making sound a more prominent feature of eating experiences is more than a fun experiment or savvy marketing strategy: it’s a full-blown artistic performance. World-renowned chef Heston Blumenthal uses sound to draw attention to the holistic sensory experience of dining. His dish “Sound of the Sea,” for example, consists of seafood, edible seaweed, tapioca that looks like sand, decorative shells, and an iPod so that diners can listen to the sounds of the ocean.
Blumenthal has also performed sound experiments while eaters spooned up his bacon and egg ice cream (Yep. That’s a thing!). When the sound of bacon frying in a pan was played, people rated the bacon flavor of the ice cream to be more intense than the egg flavor, and vice versa when the sound was clucking chickens.
In a similar vein, Boston chef Jason Bond and composer Ben Houge have paired up to create food operas, or what they call “audio-gustatory events.” They use real-time musical scoring techniques based on Houge’s work in video games to design eating experiences that explicitly link sound and taste.
Clearly, when it comes to the pleasures (and displeasures) of eating, sound matters. I’ll admit that I’m a fan of the more imaginative, experimental uses of sound in experiences like the food opera or Blumenthal’s edible sonic creations. There is a sense of play and discovery in these designed experiences; and, people know what they are signing up for and willingly choose to participate. Such endeavors have the potential to heighten participants’ sensitivity to how sound figures into eating and other kinds of everyday activities.
Yet, along with the sonic branding and marketing of edible products, these experiments raise some troubling questions about the relationship between pleasure and sensory manipulation: When is it wrong or unethical to use sensory manipulation to create pleasurable experiences? At what point does manipulation become pleasurable? Is all pleasure a form of manipulation?
Perhaps more significantly, the ways that people are applying scientific knowledge about sound and taste opens up another can of worms: What are the implications of trying to standardize pleasurable sounds via commercial products? What kinds of bodies are invited to participate in pleasurable sensory experiences, or not? I’m thinking particularly of individuals who are deaf and hard-of-hearing, or who have different cultural cues when it comes to recognizing a sound as “pleasurable.”
The sounds of food do not necessarily have to be engineered to be pleasurable. However, because new information about the relationship between sound and other senses is being used to explicitly and implicitly manipulate our experiences, it seems that there is a real need for cultivating a keener, more critical sensory awareness. This means questioning when, how, and why sound is being employed to create pleasurable experiences in a range of products and environments; it means paying careful attention to the ways that sound interacts with all of our senses to influence everyday experiences. So, the next time you’re having what seems to be a simple “feel good” eating experience, be sure to open your ears along with your mouth.
Featured image by Flickr user Wizetux, CC BY 2.0
Steph Ceraso received her doctorate in 2013 from the University of Pittsburgh, specializing in rhetoric and composition, pedagogy, sound studies, and digital media. In addition to being a three-peat guest writer for Sounding Out!, her work has been featured in Currents in Electronic Literacy, HASTAC, and Fembot Collective. She is also the coeditor of a special “Sonic Rhetorics” issue of Harlot. Her current book project, Sounding Composition, Composing Sound, examines how expansive, consciously embodied listening and sonic composing practices can deepen our knowledge of multimodal engagement and production. Steph will be joining the faculty in the English department at the University of Maryland, Baltimore County this fall. You can find more about her research, media projects, and teaching at http://www.stephceraso.com.
REWIND! . . .If you liked this post, you may also dig:
On Sound and Pleasure: Meditations on the Human Voice– Yvon Bonenfant
After a rockin’ (and seriously informative) series of podcasts from Leonard J. Paul–a three part “Inside the Game Sound Designer’s Studio”– and a post on sound and black women’s sexual freedom from SO! Regular Regina Bradley, our summer Sound and Pleasure series keeps doin’ it and doin’ it and doin’ it well, this week with a beautiful set of meditations from scholar, artist, performer, and voice activist, Yvon Bonenfant. EVERYBODY SCREAM!!!--JS, Editor-in-Chief
What I have to say about sound and pleasure can mostly be summed up this way: everyone deserves to take profound pleasure in their body’s sound.
Not only this, everyone deserves to both engage passionately with social sound and negotiate the exchange of social sound on pleasurable terms.
Like other expressive systems, however, these inalienable sonic human rights are mostly ignored, curtailed, or otherwise ‘disciplined and punished’ in the Foucauldian sense by our social systems. So we are mostly neurotic about, or otherwise hung up on, what kinds of sounds we make, and where and when we make them. We fetishise sound, particularly virtuosically framed sound, because it is part of a series of sublimated impulses; or we repress it because we think we aren’t supposed to emit it; or we ignore it.
In any given human relationship within which all parties can vocalize, the voice is an evident, key relational tool. It is full of gesture and meaning and text and sends rapid-fire, complex, layered, even self-contradictory or oxymoronic messages. It is a truly tangled web, and of course, for those who can use speech, transmits language.
However, I’d like to disentangle our sound from our language for a moment. Indeed, sound is not necessary in order to develop and transmit linguistically carried ideas, information and impulses. It has long been accepted that sign languages are fully developed languages, with intricate grammatical systems, vocabularies, and all of the other features of spoken languages. It is thus not necessary to use sound as a carrier of language. Yet if we have a voice, we almost always use sound to carry our language. And we force deaf people to try to fake having a voice and to fake listening to voices through lip reading and gesturing.
The last twenty years have seen a real boom in speculation and even scientific experiments that theorise why human bodily sound – the most evident aspect of which is our vocal sound – is so important to us. Musicology, biomusicology, evolutionary psychology, neuropsychology, and cultural studies of many kinds have tried to account for this. I have my own favorite reason, one I’ve tried to describe in a number of scholarly articles. This is that sound is much like touch. Like, yet unalike. It reaches and vibrates bodies, but at a distance. It voyages through space in other ways, but it evokes haptic responses.
Sound isn’t solid, but it takes up space. This is expressed by Steven Connor in his concept of the vocalic body. When we sound, there is a resonant field of vibration that moves through matter, which behaves according to the laws of physics – it vibrates molecules. This vibratory field leaves us, but is of us, and it voyages through space. Other people hear it. Other people feel it.
I’ve said that sound is like touch. However, one key way that it is not like touch is that it can do this thing. It can leave our bodies and travel away from us. We don’t need to grip it. We don’t need to hold on. And once emanated, it is out of our control.
More than one emanation can co-exist within matter. Their vibrations interact with one another, waves colliding and travelling in similar or different directions, and the vocalic bodies that they represent are morphed, hybridized: they intersect and invent composite bodies.
We hear the resulting harmonies. These harmonies have historically been policed into ‘consonances’ and ‘dissonances’, but we have the power to let the negativizing connotations of either of these words go and simply hear the results of the collisions. Voices sounding simultaneously create choreographies of gesture that can be jubilant, depressing, assertive, aggressive, delightful, morose… or many of these simultaneously and in rapid alternation.
The fields of human sound in which we bathe are a continually self-knitting web of sensation. They are full of gestures pregnant with intention, filled with improvisatory spontaneity, success, failure and experimentation. They are filled with a desire to act upon matter, and to reach and engage one another.
My Ukrainian-origin mother was ‘loud’, I guess, at least by Anglo-Saxon standards, and her voice was timbrally very rich. And my father was a radio announcer (he immensely disliked being called a DJ, even though he worked in commercial radio and worked on shows that spun discs – he preferred being associated with talking). His voice was also very rich, as well as extremely crafted. It could be pointed and severe: a weapon. He had professional command of its qualities. We were not a quiet family; none of us were vocal wallflowers. But were our soundings pleasure-filled? Certainly, we were allowed to make lots of sound in some circumstances. However, just being allowed to be loud – though it might sometimes be a pleasure – does not necessarily lead to a pleasure-filled dynamic. Weightlifting makes us stronger, but it doesn’t necessarily feel good.
The amount of sound and whether ‘lots’ of it, or heightenings of its qualities – lots of amplitude, or lots of other kinds of distinctness, let’s say things like pitch or emotional timbre – are key variable features of family life in our cultures. Sound takes us directly into the meatiest of interpersonal dynamics – the dynamics of space and gesture, the dynamics of who takes up space with their sound and when. Families are, of course, microcosms of this sonic dynamic, but any group within which we generate relationships and encounters is subject to this dynamic, too. Our very own bodies end up developing what Thomas Csordas might call a ‘somatic mode’ that embodies our experience of these dynamics.
Whether we start from psychodynamic, neuropsychiatric, or even habitus-based models, it’s clear that repressing the expression of bodily sound regulates breathing impulses and other metabolic processes in ways that might become, well, habits.
Let’s put this in other ways.
The classic, Freudian, psychodynamic model of neurosis – as disputed as it is, and with all of its colonial, sexist, homophobic, racist and even abuse-denying overtones – did at least one thing for our understanding of what repressed emotion does. Repressed emotion affects the body.
Today, a popular understanding of this kind of emotional repression from a biophysical perspective might be: the use of the conscious mind to hold back emotional flow, and along with it, the emotional qualities of certain associations, memories, or even the content of the memories themselves.
Repressing this thing we might call emotional flow represses the voice. The literal, physical voice. Now, this kind of repression of the voice can become what Freudians would call unconscious. To allow it out isn’t any longer a choice that can be made, because we’re so used to holding back, that we don’t realize we’re doing it any more.
Somatics have taught us, through the contested practices of the body psychotherapies descended from Wilhelm Reich’s work, or Bonnie Bainbridge Cohen’s Body-Mind Centering, or numerous other somatic practices – from certain styles of yoga through to Zen meditation and beyond – that emotional flow is at least partly dependent on how we breathe. And neuropsychology and physiology bear this out.
Whatever might ‘cause’ an emotion – and the roots of the causes of emotion are a source of debate – once it gets going, it isn’t just a thought process. Emotion is meaty and full of pumping hormones and breath pattern alterations and gestures and rushes of fluid. Chemicals get released. Chemicals get washed away. Heart rates speed up and slow down. Our breath rises and falls and its patterns change. Digestion patterns speed up or slow down or get interrupted. What happens in the body affects the body. What happens in the body affects the voice. Ever heard that kind of voice that seems hardened against the world? Or that media voice – the voice that is carefully shaped to invoke reason? Maybe these vocalisers can never let go of that sound: maybe it’s the only sound they can do, now. It’s just too habitual to let it change.
So, these habits can become so habitual that we don’t notice them anymore. We might change our breathing in some way to modify our expressive states. Because the exact nature of the sound our voices make is exquisitely dependent on how we breathe, and on everything else we do with our bodies, it then changes as well. Our choices to not let impulses flow – and the breath is only one bodily impulse among many – get caught up in this web. What were once choices can become embedded, difficult, and stubborn. To go far beyond the psychoanalytic and neurophysiological models, we can end up embodying a culture of these choices, and invent together a cultural body that regulates vocal sound based on groups of people making similar choices or playing by similar rules of sonic exchange.
This can end up perpetuating itself within our very tissues, and it can be an incredibly subtle dynamic to identify and shift. The way we embody the complexities of how we structure our physical and psychological engagement with the world – the ways we breathe, look, move, gesture… the ensemble of these is how Bourdieu defined the habitus. Where these complexities start and end is perhaps an infinite loop, a continual cycle of turning and exchange and influence flowing from ourselves to our culture and back again. Our bodies are cultural, counter-cultural, infra-cultural, extra-cultural bodies: we react to culture; we interact with it; we take positions.
Sound – who gets to do it, and when and how – is negotiated, with others, but also, within our own bodies. The traces that others leave there, the things we might call sonic and vocal inhibitions, tensions, these held-back-nesses, eventually become ours to carry, live with, and/or dissolve. They are gifted to us by our culture…. by our environment… by our experience … and by our bodies themselves.
We negotiate sounding.
Pleasure is negotiated, too.
We do this to our children: we shut them up. Oh, of course, we also facilitate their sound, and some do this more than others. But even if we give them sonic liberty at home, someone will shut them up, somewhere. We all know and we all remember being silenced as children by somebody, or at least, made to raise our hands in a classroom to ensure one speaker at a time, chosen by the authority in question. Later, teenagers, more often girls than boys, are called mouthy. The mouth: implicitly loud, and if too active, implicitly offensive. The term has been used against feminists, every identity we might include within LGBTI+, African-Americans, and the list goes on.
The wet, open, loud, loud mouth, just ready to mouth off, just ready to make trouble with its irritating, nasty, and above all, bothersome noise – bothersome because it makes us have to react – to have to consider the existence, the needs, the demands of those we might otherwise ignore – that moist orifice can be a source of great pleasure.
And on the score of that poor mouthy mouth, let’s consider some other colloquial terms, like ‘sucker’. Sucking is bad, apparently. It expresses need. Thumb out of the mouth! Stop wanting intimacy, reassurance, warmth, contact, and above all stop wanting to satisfy your hard-wired, biological need to suck for comfort and food (my little child). And you there, you sexually active adult! You fucking cocksucker. You ass-licker. That gaping mouth should shut itself up: its gooey pleasures are disgusting. These pleasures involve direct skin-to-skin contact.
Perhaps there is a revolution to be had, in the simple facilitation of gape-mouthed drool.
The vocal tract – that long tunnel surrounded by tongue and palates and teeth and various bits of throat, with at its bottom, the resonant buzz of elastic membranes, through which air is squeezed – also grips the world with direct contact. It’s not just a resonating and sound-shaping cave.
I’m making some artworks for children and families right now, and I group them together under the project moniker “Your Vivacious Voice” [See SO! Amplifies post from 6/19/14 to learn more about the free Voice Bubbles App aspect of YB’s project—ed]. I’m collaborating with some scientists and clinician-scientists on this project. They all work with the voice – in psycholinguistics, in understanding infant language acquisition, in voice medicine, and even in laryngeal surgery. We interview these scientists, and use inspiration from our conversations as sources of metaphors for art-making.
One of these is the head Speech and Language Therapist at the Royal National Ear, Nose and Throat Hospital in London, Dr Ruth Epstein. She sees and/or oversees some of the most difficult cases of vocal problems in the whole of the UK. When we asked her what concerns she’d most like us to address in artworks for children and families, she responded along the lines of: please, find a way to get through to them that voice is contact, human contact. She has begun teaching communication skills, such as eye contact and turn-taking exercises, alongside vocal skills, to families with children who have injured voices – because she realized at some point that in many of these families, the near-exclusive modality of contact was yelling: yelling without contact – without relationship.
The contactless yell is the thrashing arm that somehow remains alone in a void. It’s a yell that might strike if it lands on other flesh, but somehow doesn’t grip, and can’t convert to a caress. It can’t hold… it only punches.
This reminds me of a rockish tune by Carole Pope and Rough Trade from the Canadiana of my childhood – the refrain went:
It hit me like, it hit me like, it hit me like a slap, oh-oh-oh, all touch…
All touch and all touch and no contact…..
Back to our children, and to us.
Bodily sound can be a pointed weapon. It can be violent, in that it can frighten, dominate, attack, evoke deep fear, and engage other mechanisms of terror and control and subjugation, and that it can attempt to annihilate our ability to recognize the existence of others. We can drown out others’ sounds. We can drown out their gesture. We can drown their vocalic bodies in our own through amplitude and clashes of timbral spectra. We can shut them up.
Let us consider, here, the desire for amplification and how amplified sound represents an exaggeration of this power, a cybernetic enhancement of the ability to dominate with our emanating waves. We can drown out the social ability for whole groups to hear anyone but ourselves.
However, if, in our cultural environments, everyone is allowed to sound – if, indeed, we facilitate social environments in which everyone’s sound is welcome, then those who are subjected to vocal and sonic violence have an incredible counter-power to this power: they have the power to make sound too.
Although making sound back to violent sound, back to annihilating sound, is not always easy, possible or permitted, it is a power that can’t be easily erased. And we can almost always feel, if not cognitively hear, our own sound vibrate within our own skulls and through our own bones, no matter what is coming from the outside, no matter what waves of vocalic body are streaming toward us. Our sound waves continue to exist, even if transformed.
We can give voice to ourselves. We can change our habits. We can expand away from them.
It isn’t even necessary to fight back. It’s only necessary to vibrate.
And we can take it further.
We can actively encourage each other’s sound. We can actively encourage our children’s sound. We can actively encourage social sound. We can actively encourage a dance with others’ voices. We can facilitate, make space for, enjoy being touched by, the uniqueness of other voices. We can play with how our voices collide and create children with the vocalic bodies of others. After all, our composite vocal bodies are the products of our intensive exchange. We can jubilate in the massages we receive by making our own sound, by vibrating our own skulls, flesh, blood, lymph, interstitial fluid, and the air near us, and we can make it so that we can engage in passionate exchange with the vibrations of others.
This might be something like music. Or other kinds of art. Or it might be simple conversation. Or it might be cooing with a baby. Or it might be making comforting sounds while a toddler cries. Or it might be screaming with rage together.
What it always is, though, is focusing on, opening up to, enjoying the dynamics of the dance of individual, idiosyncratic, messy, fleshly, bodily, sonic emanations reacting with one another.
In the end, the policing of our sound is under our control. We can find ways to unpolice, and enjoy the unbridledness of our sound.
Our bodily sound is a means of engaging passionately with relationship and of glorying in its results.
Featured image: “Faces 529″ by Flickr user Greg Peverill-Conti, CC BY-NC-ND 2.0
Yvon Bonenfant is Reader in Performing Arts at the University of Winchester. He likes voices that do what voices don’t usually do, and he likes bodies that don’t do what bodies usually do. He makes art starting from these sounds and movements. These unusual, intermedia works have been produced in 10 countries in the last 10 years, and his writing published in journals such as Performance Research, Choreographic Practices, and Studies in Theatre and Performance. He currently holds a Large Arts Award from the Wellcome Trust and funding from Arts Council England to collaborate with speech scientists on the development of a series of participatory, extra-normal voice artworks for children and families; see www.yourvivaciousvoice.com. Despite his air of Lenin, he does frighteningly accurate vocal imitations of both Axl Rose and Jon Bon Jovi. www.yvonbonenfant.com.
REWIND! . . .If you liked this post, you may also dig:
This Is Your Body on the Velvet Underground– Jacob Smith