CLICK HERE TO DOWNLOAD: Sounding Out! Podcast #32: The World Listening Update – 2014 Edition
Listen in as Eric Leonardson and Monica Ryan celebrate World Listening Day 2014 by reflecting on the work of R. Murray Schafer and the World Soundscape Project. Interviewees Professor Sabine Breitsameter of Hochschule Darmstadt (Germany) and Professor Barry Truax of Simon Fraser University (Canada) discuss the impact of Schafer’s ideas and offer commentary on contemporary threads within the field of Acoustic Ecology. How does Acoustic Ecology help us to think through today’s complex environments, and how can listeners like you make a difference?
Co-Authors of this podcast:
Eric Leonardson is a Chicago-based audio artist and teacher. He has devoted a majority of his professional career to unorthodox approaches to sound and its instrumentation with a broad understanding of texture, atmosphere and microtones. He is President of the World Forum for Acoustic Ecology and founder of the Midwest Society for Acoustic Ecology, and Executive Director of the World Listening Project. Leonardson is an Adjunct Associate Professor in the Department of Sound at The School of the Art Institute of Chicago.
Monica Ryan is an instructor and audio artist from Chicago. Currently her work explores spatialized sound recording and playback techniques along with interactive sound environments. She teaches in several institutions in Chicago, including The School of the Art Institute of Chicago and Columbia College.
Tom Haigh is a British post production sound mixer, composer, and phonography enthusiast, now residing in Chicago. As a staff engineer at ARU Chicago, he works with clients in advertising, media, and independent film.
Featured image: Used through a CC BY license. Originally posted by Ky @Flickr.
REWIND! . . .If you liked this post, you may also dig:
Sounding Out! Podcast #7: Celebrate World Listening Day with the World Listening Project- Eric Leonardson, Monica Ryan, and Tom Haigh
SO! Amplifies: Eric Leonardson and World Listening Day 18 July 2014- Eric Leonardson
Sounding Out! Podcast (#18): Listening to the Tuned City of Brussels, Day 3: “Ephemeral Atmospheres”- Felicity Ford and Valeria Merlini
After a rockin’ (and seriously informative) series of podcasts from Leonard J. Paul, a Drrty South banger dropped by SO! Regular Regina Bradley, and a screamtastic meditation from Yvon Bonenfant, our summer Sound and Pleasure series serves up some awesomeness on a platter this week with the return of Steph Ceraso, who makes us wish all those food pics on Instagram came with recordings. Take a big bite out of this! --JS, Editor-in-Chief
Lightly I tap the burnt surface with a cold metal spoon until it cracks; it fractures like a fine layer of sugary glass; silent, smooth custard mixes with the sticky sweet crunch of the caramelized shards.
An otherwise bland and unmemorable dessert, crème brûlée is always my go-to treat. The sonic pleasures of this indulgence keep me coming back: the tapping, cracking, crunching.
Though the taste and visual presentation of food usually get most of the hype, it’s no secret that sound can amplify the enjoyment and delight of eating. Indeed, sound has become an increasingly important ingredient in the design, advertising, and experience of food: from “junk” food to gourmet dining. What is especially fascinating and disconcerting about this strategic use of sound is the powerful connection between pleasure and sensory manipulation. To my mind, the myriad ways sound is employed to manipulate perceptions of food underscore the need to pay more attention to when, how, and why sound influences our thoughts, feelings, and sensory experiences.
* * *
Food engineers and marketing teams have been taking advantage of the pleasures of sound for years. Rice Krispies’ “Snap, Crackle, Pop” trademark has been around since the late 1920s. And of course there are Pop Rocks, my favorite sounding retro product. The carbonated sugar crystals were invented in the 1950s, but thanks to commercials that celebrated the candy in all of its sonic glory, Pop Rocks’ popularity reached a fever pitch in the 1970s and it’s still going strong today. The official Pop Rocks website boasts that the product continues to be the “leading popping candy brand worldwide.”
Sound is a crucial part of the pleasurable experience of food’s packaging, too. Consider Pringles’ famous “Once you pop you can’t stop” slogan. A neatly stacked chip cylinder with a pleasant-sounding lid is marketed as a refreshing alternative to crinkly chip bags.
Designing sound for the things that contain food may seem like a silly marketing gimmick, but the sounds of packaging can make or break the product. For instance, in an attempt to make its SunChips brand more environmentally friendly, in 2010 Frito-Lay introduced a compostable chip bag. Consumers found it to be ridiculously noisy and complained. The bag had so many haters, in fact, that a Facebook group called “SORRY I CAN’T HEAR YOU OVER THIS SUN CHIPS BAG” attracted nearly 30,000 fans. Sales fell, and the financial loss caused Frito-Lay to go back to the un-environmentally friendly bags. Just this year, the company introduced yet another version of the compostable bag. It’s too early to tell if consumers will deem its sound acceptable.
While many companies strive to hit the right note when it comes to the pleasurable sounds of food and its packaging, recent research on taste and sound has been more focused on how external sounds affect the experience of eating. In a noteworthy study, the food company Unilever and the University of Manchester found that the experience of sweetness and saltiness in food decreased in relation to high levels of background noise (perhaps one of the reasons that airplane food generally sucks). They also identified a correlation between the increased volume of background noise and the eater’s perception of crunchiness and freshness.
Additionally, the Crossmodal Laboratory at Oxford University, run by Professor Charles Spence, got a lot of press for discovering that low-pitched sounds tend to bring out bitter flavors while high-pitched sounds heighten the sweetness of food. Go grab a snack (chocolate or coffee work best) and you can try this experiment for yourself.
Armed with scientific knowledge, many chefs and entrepreneurs have been teaming up to put these ideas into practice. For a limited time, London restaurant House of Wolf served what they called a “sonic cake pop.” The treat came with a phone number that presented callers with the choice of pushing 1 for sweet (to hear a high-frequency sound) and 2 for bitter (to hear a low-frequency sound). The experiment was a success. People seemed to want to hear their cake and eat it too. The same Guardian article reports that Ben and Jerry’s plans to put QR codes on its packaging so that customers can use their smartphones to access sounds that complement the flavor of ice cream they are eating.
For some, making sound a more prominent feature of eating experiences is more than a fun experiment or savvy marketing strategy: it’s a full-blown artistic performance. World-renowned chef Heston Blumenthal uses sound to draw attention to the holistic sensory experience of dining. His dish “Sound of the Sea,” for example, consists of seafood, edible seaweed, tapioca that looks like sand, decorative shells, and an iPod so that diners can listen to the sounds of the ocean.
Blumenthal has also performed sound experiments while eaters spooned up his bacon and egg ice cream (Yep. That’s a thing!). When the sound of bacon frying in a pan was played, people rated the bacon flavor of the ice cream to be more intense than the egg flavor, and vice versa when the sound was clucking chickens.
In a similar vein, Boston chef Jason Bond and composer Ben Houge have paired up to create food operas, or what they call “audio-gustatory events.” They use real-time musical scoring techniques based on Houge’s work in video games to design eating experiences that explicitly link sound and taste.
Clearly, when it comes to the pleasures (and displeasures) of eating, sound matters. I’ll admit that I’m a fan of the more imaginative, experimental uses of sound in experiences like the food opera or Blumenthal’s edible sonic creations. There is a sense of play and discovery in these designed experiences, and people know what they are signing up for and willingly choose to participate. Such endeavors have the potential to heighten participants’ sensitivity to how sound figures into eating and other kinds of everyday activities.
Yet, along with the sonic branding and marketing of edible products, these experiments raise some troubling questions about the relationship between pleasure and sensory manipulation: When is it wrong or unethical to use sensory manipulation to create pleasurable experiences? At what point does manipulation become pleasurable? Is all pleasure a form of manipulation?
Perhaps more significantly, the ways that people are applying scientific knowledge about sound and taste open up another can of worms: What are the implications of trying to standardize pleasurable sounds via commercial products? What kinds of bodies are invited to participate in pleasurable sensory experiences, or not? I’m thinking particularly of individuals who are deaf and hard-of-hearing, or who have different cultural cues when it comes to recognizing a sound as “pleasurable.”
The sounds of food do not necessarily have to be engineered to be pleasurable. However, because new information about the relationship between sound and other senses is being used to explicitly and implicitly manipulate our experiences, it seems that there is a real need for cultivating a keener, more critical sensory awareness. This means questioning when, how, and why sound is being employed to create pleasurable experiences in a range of products and environments; it means paying careful attention to the ways that sound interacts with all of our senses to influence everyday experiences. So, the next time you’re having what seems to be a simple “feel good” eating experience, be sure to open your ears along with your mouth.
Featured image by Flickr user Wizetux, CC BY 2.0
Steph Ceraso received her doctorate in 2013 from the University of Pittsburgh, specializing in rhetoric and composition, pedagogy, sound studies, and digital media. In addition to being a three-peat guest writer for Sounding Out!, her work has been featured in Currents in Electronic Literacy, HASTAC, and Fembot Collective. She is also the coeditor of a special “Sonic Rhetorics” issue of Harlot. Her current book project, Sounding Composition, Composing Sound, examines how expansive, consciously embodied listening and sonic composing practices can deepen our knowledge of multimodal engagement and production. Steph will be joining the faculty in the English department at the University of Maryland, Baltimore County this fall. You can find more about her research, media projects, and teaching at http://www.stephceraso.com.
REWIND! . . .If you liked this post, you may also dig:
On Sound and Pleasure: Meditations on the Human Voice– Yvon Bonenfant
Welcome to our new series Sculpting the Film Soundtrack, which brings you new perspectives on sound and filmmaking. We’re honored and delighted to have Katherine Spring, Associate Professor of Film Studies at Wilfrid Laurier University, as our Guest Editor. Spring is the author of an exciting and important new book, Saying It With Songs: Popular Music and the Coming of Sound to Hollywood Cinema. Read it! You’ll find an impeccably researched work that’s the definition of how the history of film sound and media convergence ought to be written.
But before rushing back to the early days, stick around here on SO! for the first of our three installments in Sculpting the Film Soundtrack.
It’s been 35 years since film editor and sound designer Walter Murch used the sounds of whirring helicopter blades in place of an orchestral string section in Apocalypse Now, in essence blurring the boundary between two core components of the movie soundtrack: music and sound effects. This blog series explores other ways in which filmmakers have treated the soundtrack as a holistic entity, one in which the traditional divisions between music, effects, and speech have been disrupted in the name of sculpting innovative sonic textures.
In three entries, Benjamin Wright, Danijela Kulezic-Wilson, and Randolph Jordan will examine the integrated soundtrack from a variety of perspectives, including technology, labor, aesthetic practice, and theoretical frameworks, and suggest that the dissolution of the boundaries between soundtrack categories can prompt us to apprehend film sound in new ways. If, as Murch himself once said, “Listening to interestingly arranged sounds makes you hear differently,” then the time is ripe for considering how and what we might hear across the softening edges of the film soundtrack.
- Guest Editor Katherine Spring
Composing a sound world for Man of Steel (2013), Zack Snyder’s recent Superman reboot, had Hans Zimmer thinking about telephone wires stretching across the plains of Clark Kent’s boyhood home in Smallville. “What would that sound like?” he said in an interview last year. “That wind making those telephone wires buzz – how could I write a piece of music out of that?” The answer, as it turned out, was not blowing in the wind, but sliding up and down the scale of a pedal steel guitar, the twangy lap instrument of country music. In recording sessions, Zimmer instructed a group of pedal steel players to experiment with sustains, reverb, and pitches that, when mixed into the final track, accompany Superman leaping over tall buildings in a single bound.
His work on Man of Steel, just one of his most recent films in a long and celebrated career, exemplifies his unique take on composing for cinema. “I would have been just as happy being a recording engineer as a composer,” remarked Zimmer last year in an interview to commemorate the release of a percussion library he created in collaboration with Spitfire Audio, a British sample library developer. “Sometimes it’s very difficult to stop me from mangling sounds, engineering, and doing any of those things, and actually getting me to sit down and write the notes.” Dubbed the “HZ01 London Ensembles,” the library consists of a collection of percussion recordings featuring many of the same musicians who have performed for Zimmer’s film scores, playing everything from tamtams to taikos, buckets to bombos, timpani to anvils. According to Spitfire’s founders, the library recreates Zimmer’s approach to percussion recording by offering a “distillation of a decade’s worth of musical experimentation and innovation.”
In many ways, the collection is a reminder not just of the influence of Zimmer’s work on contemporary film, television, and video game composers but also of his distinctive approach to film scoring, one that emphasizes sonic experimentation and innovation. Having spent the early part of his career as a synth programmer and keyboardist for new wave bands such as The Buggles and Ultravox, then as a protégé of English film composer Stanley Myers, Zimmer has cultivated a hybrid electronic-orchestral aesthetic that uses a range of analog and digital oscillators, filters, and amplifiers to twist and augment solo instrument samples into a synthesized whole.
Zimmer played backup keyboards on “Video Killed the Radio Star.”
In a very short time, Zimmer has become a dominant voice in contemporary film music with a sound that blends melody with dissonance and electronic minimalism with rock and roll percussion. His early Hollywood successes, Driving Miss Daisy (1989) and Days of Thunder (1990), combined catchy themes and electronic passages with propulsive rhythms, while his score for Black Rain (1989), which featured taiko drums, electronic percussion, and driving ostinatos, laid the groundwork for an altogether new kind of action film score, one that Zimmer refined over the next two decades on projects such as The Rock (1996), Gladiator (2000), and The Pirates of the Caribbean series.
What is especially intriguing about Zimmer’s sound is the way in which he combines the traditional role of the composer, who fashions scores around distinct melodies (or “leitmotifs”), with that of the recording engineer, who focuses on sculpting sounds. Zimmer may not be the first person in the film business to experiment with synthesized tones and electronic arrangements – you’d have to credit Bebe and Louis Barron (Forbidden Planet, 1956), Vangelis (Chariots of Fire, 1981), Jerry Goldsmith (Logan’s Run, 1976), and Giorgio Moroder (Midnight Express, 1978) for pushing that envelope – but he has turned modern film composing into an engineering art, something that few other film composers can claim.
One thing that separates Zimmer’s working method from that of other composers is that he does not confine himself to pen and paper, or even keyboard and computer monitor. Instead, he invites musicians to his studio or a sound stage for an impromptu jam session to find and hone the musical syntax of a project. Afterwards, he returns to his studio and uses the raw samples from the sessions to compose the rest of the score, in much the same way that a recording engineer creates the architecture of a sound mix.
“There is something about that collaborative process that happens in music all the time,” Zimmer told an interviewer in 2010. “That thing that can only happen with eye contact and when people are in the same room and they start making music and they are fiercely dependent on each other. They cannot sound good without the other person’s part.”
Zimmer facilitates the social and aesthetic contours of these off-the-cuff performances and later sculpts the samples into the larger fabric of a score. In most cases, these partnerships have provided the equivalent of a pop hook to much of Zimmer’s output: Lebo M’s opening vocal in The Lion King (1994), Johnny Marr’s reverb-heavy guitar licks in Inception, Lisa Gerrard’s ethereal vocals in Gladiator and Black Hawk Down (2001), and the recent contributions of the so-called “Magnificent Six” musicians to The Amazing Spider-Man 2 (2014).
The melodic hooks are simple but infectious – even Zimmer admits he writes “stupidly simple music” that can often be played with one finger on the piano. But what matters most are the colors that frame those notes and the performances that imbue those simple melodies with a personality. Zimmer’s work on Christopher Nolan’s Dark Knight trilogy revolves around a deceptively simple rising two-note motif that often signifies the presence of the caped crusader, but the pounding taiko hits and bleeding brass figures that surround it do as much to conjure up images of Gotham City as cinematographer Wally Pfister’s neo-noir photography. The heroic aspects of the Batman character are muted in Zimmer’s score except for the presence of the expansive brass figures and taiko hits, which reach an operatic crescendo in the finale, where the image of Batman escaping into the blinding light of the city is accompanied by a grand statement of the two-note figure backed by a driving string ostinato. Throughout the series, a string ostinato and taikos set the pace for action sequences and hint at the presence of Batman who lies somewhere in the shadows of Gotham.
Zimmer’s expressive treatment of musical colors also characterizes his engineering practices, which are more commonly used in the recording industry. Music scholar Paul Théberge has noted that the recording engineer’s interest in an aesthetic of recorded musical “sound” led to an increased demand for control over the recording process, especially in the early days of multitrack rock recording where overdubbing created a separate, hierarchical space for solo instruments. Likewise for Zimmer, it’s not just about capturing individual sounds from an orchestra but also layering them into a synthesized product. Zimmer is also interested in experimenting with acoustic performances, pushing musicians to play their instruments in unconventional ways or playing his notes “the wrong way,” as he demonstrates here in the making of the Joker’s theme from The Dark Knight:
The significance of the cooperative aspects of these musical performances and their treatment as musical “colors” to be modulated, tweaked, and polished rests on a paradoxical treatment of sound. While he often finds his sound world among the wrong notes, mistakes, and impromptu performances of world musicians, Zimmer is also often criticized for removing traces of an original performance by obscuring it with synth drones and distortion. In some cases, like in The Peacemaker (1997), the orchestration is mushy and sounds overly processed. But in other cases, the trace of a solo performance can constitute a thematic motif in the same way that a melody serves to identify place, space, or character in classical film music. Compare, for instance, Danny Elfman’s opening title theme for Tim Burton’s Batman (1989) and Zimmer’s opening title music for The Dark Knight. While Elfman creates a suite of themes around a central Batman motif, Zimmer builds a sparse sound world that introduces a sustained note on the electric cello that will eventually be identified with the Joker. It’s the timbre of the cello, not its melody, that carries its identifying features.
To texture the sounds in Man of Steel, Zimmer also commissioned Chas Smith, a Los Angeles-based composer, performer, and exotic instrument designer to construct instruments from “junk” objects Smith found around the city that could be played with a bow or by hand while also functioning as metal art works. The highly abstract designs carry names that give some hint to their origins – “Bertoia 718” named after modern sculptor and furniture designer Harry Bertoia; “Copper Box” named for the copper rods that comprise its design; and “Tin Sheet” that, when prodded, sounds like futuristic thunderclaps.
Smith’s performances of his exotic instruments are woven into the fabric of the score, providing it with a sort of musical sound design. Consider General Zod’s suite of themes and motifs, titled “Arcade” on the 2-disc version of the soundtrack. The motif is built around a call-and-answer ostinato for strings and brass that is interrupted by Smith’s sculptural dissonance. It’s the sound of an otherworldly menace, organic but processed, sculpted into a conventional motif-driven sound world.
Zimmer remains a fixture in contemporary film music partly because, as music critic Jon Burlingame has pointed out, he has a relentless desire to search for fresh approaches to a film’s musical landscape. This pursuit begins with his extracting of sounds and colors from live performances and electronically engineering them during the scoring process. Such heightened attention to sound texture and color motivated the creation of the Spitfire percussion library, but can only hint at the experimentation and improvisation that go into Zimmer’s work. In each of his film scores, the music tells a story that is tailored to the demands of the narrative, but the sounds reveal Zimmer’s urge to manipulate sound samples until they are, in his own words, “polished like a diamond.”
Ben Wright holds a Provost Postdoctoral Fellowship from the University of Southern California in the School of Cinematic Arts. In 2011, he received his Ph.D. in Cultural Studies from the Institute for Comparative Studies in Literature, Art and Culture at Carleton University. His research focuses on the study of production cultures, especially exploring the industrial, social, and technological effects of labor structures within the American film industry. His work on production culture, film sound and music, and screen comedy has appeared in numerous journals and anthologies. He is currently completing a manuscript on the history of contemporary sound production, titled Hearing Hollywood: Art, Industry, and Labor in Hollywood Film Sound.
All images creative commons.
After a rockin’ (and seriously informative) series of podcasts from Leonard J. Paul–a three part “Inside the Game Sound Designer’s Studio”– and a post on sound and black women’s sexual freedom from SO! Regular Regina Bradley, our summer Sound and Pleasure series keeps doin’ it and doin’ it and doin’ it well, this week with a beautiful set of meditations from scholar, artist, performer, and voice activist, Yvon Bonenfant. EVERYBODY SCREAM!!!--JS, Editor-in-Chief
What I have to say about sound and pleasure can mostly be summed up this way: everyone deserves to take profound pleasure in their body’s sound.
Not only this, everyone deserves to both engage passionately with social sound and negotiate the exchange of social sound on pleasurable terms.
Like other expressive systems, however, these inalienable sonic human rights are mostly ignored, curtailed, or otherwise ‘disciplined and punished’ in the Foucauldian sense by our social systems. So, we are mostly neurotic about, or otherwise hung up on, what kinds of sounds we make, where, and when. We fetishise sound, particularly virtuosically framed sound, because it is part of a series of sublimated impulses, or we repress it because we think we aren’t supposed to emit it, or we ignore it.
In any given human relationship within which all parties can vocalize, the voice is an evident, key relational tool. It is full of gesture and meaning and text and sends rapid-fire, complex, layered, even self-contradictory or oxymoronic messages. It is a truly tangled web, and of course, for those who can use speech, transmits language.
However, I’d like to disentangle our sound from our language for a moment. Indeed, sound is not necessary in order to develop and transmit linguistically carried ideas, information and impulses. It has long been accepted that sign languages are fully developed languages, with intricate grammatical systems, vocabularies, and all of the other features of spoken languages. It is thus not necessary to use sound as a carrier of language. Yet if we have a voice, we almost always use sound to carry our language. And we force deaf people to try to fake having a voice and to fake listening to voices through lip reading and gesturing.
The last twenty years have seen a real boom in speculation and even scientific experiments that theorise why human bodily sound – the most evident aspect of which is our vocal sound – is so important to us. Musicology, biomusicology, evolutionary psychology, neuropsychology, and cultural studies of many kinds have tried to account for this. I have my own favorite reason, one I’ve tried to describe in a number of scholarly articles. This is that sound is much like touch. Like, yet unalike. It reaches and vibrates bodies, but at distance. It voyages through space in other ways, but it evokes haptic responses.
Sound isn’t solid, but it takes up space. This is expressed by Steven Connor within his concept of the vocalic body. When we sound, there is a resonant field of vibration that moves through matter, which behaves according to the laws of physics – it vibrates molecules. This vibratory field leaves us, but is of us, and it voyages through space. Other people hear it. Other people feel it.
I’ve said that sound is like touch. However, one key way that it is not like touch is that it can do this thing. It can leave our bodies and travel away from us. We don’t need to grip it. We don’t need to hold on. And once emanated, it is out of our control.
More than one emanation can co-exist within matter. Their vibrations interact with one another, waves colliding and travelling in similar or different directions, and the vocalic bodies that they represent are morphed, hybridized: they intersect and invent composite bodies.
We hear the resulting harmonies. Historically these have been policed into ‘consonances’ and ‘dissonances’, but we have the power to let the negativizing connotations of either of these words go and simply hear the results of the collisions. Voices sounding simultaneously create choreographies of gesture that can be jubilant, depressing, assertive, aggressive, delightful, morose… or many of these simultaneously and in rapid alternation.
The fields of human sound in which we bathe are a continually self-knitting web of sensation. They are full of gestures pregnant with intention, filled with improvisatory spontaneity, success, failure and experimentation. They are filled with a desire to act upon matter, and to reach and engage one another.
My Ukrainian-origin mother was ‘loud’, I guess, at least by Anglo-Saxon standards, and her voice was timbrally very rich. And my father was a radio announcer (he disliked being called a DJ immensely, even though he worked in commercial radio and worked on shows that spun discs – he preferred being associated with talking). His voice was also very rich, as well as extremely crafted. It could be pointed and severe: a weapon. He had professional command of its qualities. We were not a quiet family; none of us were vocal wallflowers. But were our soundings pleasure-filled? Certainly, we were allowed to make lots of sound in some circumstances. However, just being allowed to be loud – though it might sometimes be a pleasure – does not necessarily lead to a pleasure-filled dynamic. Weightlifting makes us stronger, but it doesn’t necessarily feel good.
The amount of sound and whether ‘lots’ of it, or heightenings of its qualities – lots of amplitude, or lots of other kinds of distinctness, let’s say things like pitch or emotional timbre – are key variable features of family life in our cultures. Sound takes us directly into the meatiest of interpersonal dynamics – the dynamics of space and gesture, the dynamics of who takes up space with their sound and when. Families are, of course, microcosms of this sonic dynamic, but any group within which we generate relationships and encounters is subject to this dynamic, too. Our very own bodies end up developing what Thomas Csordas might call a ‘somatic mode’ that embodies our experience of these dynamics.
Whether we start from psychodynamic, neuropsychiatric, or even habitus-based models, it’s clear that repressing the expression of bodily sound regulates breathing impulses and other metabolic processes in ways that might become, well, habits.
Let’s put this in other ways.
The classic, Freudian, psychodynamic model of neurosis – as disputed as it is, and with all of its colonial, sexist, homophobic, racist and even abuse-denying overtones – did at least one thing for our understanding of what repressed emotion does. Repressed emotion affects the body.
Today, a popular understanding of this kind of emotional repression from a biophysical perspective might be: the use of the conscious mind to hold back emotional flow, and along with it, the emotional qualities of certain associations, memories, or even the content of the memories themselves.
Repressing this thing we might call emotional flow represses the voice. The literal, physical voice. Now, this kind of repression of the voice can become what Freudians would call unconscious. To allow it out isn’t any longer a choice that can be made, because we’re so used to holding back that we don’t realize we’re doing it anymore.
Somatics have taught us, through the contested practices of the body psychotherapies descended from Wilhelm Reich’s work, or Bonnie Bainbridge Cohen’s Body-Mind Centering, or numerous other somatic practices – from certain styles of yoga through to Zen meditation and beyond – that emotional flow is at least partly dependent on how we breathe. And neuropsychology and physiology bear this out.
Whatever might ‘cause’ an emotion – and the roots of the causes of emotion are a source of debate – once it gets going, it isn’t just a thought process. Emotion is meaty and full of pumping hormones and breath pattern alterations and gestures and rushes of fluid. Chemicals get released. Chemicals get washed away. Heart rates speed up and slow down. Our breath rises and falls and its patterns change. Digestion patterns speed up or slow down or get interrupted. What happens in the body affects the body. What happens in the body affects the voice. Ever heard that kind of voice that seems hardened against the world? Or that media voice – the voice that is carefully shaped to invoke reason? Maybe these vocalisers can never let go of that sound: maybe it’s the only sound they can do, now. It’s just too habitual to let it change.
So, these patterns can become so habitual that we don’t notice them anymore. We might change our breathing in some way to modify our expressive states. Because the exact nature of the sound our voices make is exquisitely dependent on how we breathe, and on everything else we do with our bodies, it then changes as well. Our choices to not let impulses flow – and the breath is only one bodily impulse among many – get caught up in this web. What were once choices can become embedded, difficult, and stubborn. To go far beyond the psychoanalytic and neurophysiological models, we can end up embodying a culture of these choices, and invent together a cultural body that regulates vocal sound based on groups of people making similar choices or playing by similar rules of sonic exchange.
This can end up perpetuating itself within our very tissues, and it can be an incredibly subtle dynamic to identify and shift. The way we embody the complexities of how we structure our physical and psychological engagement with the world – the ways we breathe, look, move, gesture… the ensemble of these is how Bourdieu defined the habitus. Where these complexities start and end is perhaps an infinite loop, a continual cycle of turning and exchange and influence flowing from ourselves to our culture and back again. Our bodies are cultural, counter-cultural, infra-cultural, extra-cultural bodies: we react to culture; we interact with it; we take positions.
Sound – who gets to do it, and when and how – is negotiated, with others, but also, within our own bodies. The traces that others leave there, the things we might call sonic and vocal inhibitions, tensions, these held-back-nesses, eventually become ours to carry, live with, and/or dissolve. They are gifted to us by our culture…. by our environment… by our experience … and by our bodies themselves.
We negotiate sounding.
Pleasure is negotiated, too.
We do this to our children: we shut them up. Oh, of course, we also facilitate their sound, and some do this more than others. But even if we give them sonic liberty at home, someone will shut them up, somewhere. We all know and we all remember being silenced as children by somebody, or at least, made to raise our hands in a classroom to ensure one speaker at a time, chosen by the authority in question. Later, teenagers, more often girls than boys, are called mouthy. The mouth: implicitly loud, and if too active, implicitly offensive. The term has been used against feminists, every identity we might include within LGBTI+, African-Americans, and the list goes on.
The wet, open, loud, loud mouth, just ready to mouth off, just ready to make trouble with its irritating, nasty, and above all, bothersome noise – bothersome because it makes us have to react – to have to consider the existence, the needs, the demands of those we might otherwise ignore – that moist orifice can be a source of great pleasure.
And on the score of that poor mouthy mouth, let’s consider some other colloquial terms, like ‘sucker’. Sucking is bad, apparently. It expresses need. Thumb out of the mouth! Stop wanting intimacy, reassurance, warmth, contact, and above all stop wanting to satisfy your hard-wired, biological need to suck for comfort and food (my little child). And you there, you sexually active adult! You fucking cocksucker. You ass-licker. That gaping mouth should shut itself up: its gooey pleasures are disgusting. These pleasures involve direct skin-to-skin contact.
Perhaps there is a revolution to be had, in the simple facilitation of gape-mouthed drool.
The vocal tract – that long tunnel surrounded by tongue and palates and teeth and various bits of throat, with at its bottom, the resonant buzz of elastic membranes, through which air is squeezed – also grips the world with direct contact. It’s not just a resonating and sound-shaping cave.
I’m making some artworks for children and families right now, and I group them together under the project moniker “Your Vivacious Voice” [See SO! Amplifies post from 6/19/14 to learn more about the free Voice Bubbles App aspect of YB’s project—ed]. I’m collaborating with some scientists and clinician-scientists on this project. They all work with the voice – in psycholinguistics, in understanding infant language acquisition, in voice medicine, and even in laryngeal surgery. We interview these scientists, and use inspiration from our conversations as sources of metaphors for art-making.
One of these is the head Speech and Language Therapist at the Royal National Ear, Nose and Throat Hospital in London, Dr Ruth Epstein. She sees and/or oversees some of the most difficult cases of vocal problems in the whole of the UK. When we asked her what concerns she’d most like us to address in artworks for children and families, she responded along the lines of: please, find a way to get through to them that voice is contact, human contact. She has begun using communication skills, such as eye contact and turn-taking exercises, in addition to vocal skills, in families with children who have injured voices – because she realized at some point that in many of these families, the near exclusive modality of contact was yelling: yelling without contact – without relationship.
The contactless yell is the thrashing arm that somehow remains alone in a void. It’s a yell that might strike if it lands on other flesh, but somehow doesn’t grip, and can’t convert to a caress. It can’t hold… it only punches.
This reminds me of a rockish tune by Carole Pope and Rough Trade from the Canadiana of my childhood – the refrain went:
It hit me like, it hit me like, it hit me like a slap, oh-oh-oh, all touch…
All touch and all touch and no contact…..
Back to our children, and to us.
Bodily sound can be a pointed weapon. It can be violent, in that it can frighten, dominate, attack, evoke deep fear, and engage other mechanisms of terror and control and subjugation, and in that it can attempt to annihilate our ability to recognize the existence of others. We can drown out others’ sounds. We can drown out their gesture. We can drown their vocalic bodies in our own through amplitude and clashes of timbral spectra. We can shut them up.
Let us consider, here, the desire for amplification and how amplified sound represents an exaggeration of this power, a cybernetic enhancement of the ability to dominate with our emanating waves. We can drown out the social ability for whole groups to hear anyone but ourselves.
However, if, in our cultural environments, everyone is allowed to sound – if, indeed, we facilitate social environments in which everyone’s sound is welcome, then those who are subjected to vocal and sonic violence have an incredible counter-power to this power: they have the power to make sound too.
Although making sound back to violent sound, back to annihilating sound, is not always easy, possible or permitted, it is a power that can’t be easily erased. And we can almost always feel, if not cognitively hear, our own sound vibrate within our own skulls and through our own bones, no matter what is coming from the outside, no matter what waves of vocalic body are streaming toward us. Our sound waves continue to exist, even if transformed.
We can give voice to ourselves. We can change our habits. We can expand away from them.
It isn’t even necessary to fight back. It’s only necessary to vibrate.
And we can take it further.
We can actively encourage each other’s sound. We can actively encourage our children’s sound. We can actively encourage social sound. We can actively encourage a dance with others’ voices. We can facilitate, make space for, enjoy being touched by, the uniqueness of other voices. We can play with how our voices collide and create children with the vocalic bodies of others. After all, our composite vocal bodies are the products of our intensive exchange. We can jubilate in the massages we receive by making our own sound, by vibrating our own skulls, flesh, blood, lymph, interstitial fluid, and the air near us, and we can make it so that we can engage in passionate exchange with the vibrations of others.
This might be something like music. Or other kinds of art. Or it might be simple conversation. Or it might be cooing with a baby. Or it might be making comforting sounds while a toddler cries. Or it might be screaming with rage together.
What it always is, though, is focusing on, opening up to, enjoying the dynamics of the dance of individual, idiosyncratic, messy, fleshly, bodily, sonic emanations reacting with one another.
In the end, the policing of our sound is under our control. We can find ways to unpolice, and enjoy the unbridledness of our sound.
Our bodily sound is a means of engaging passionately with relationship and of glorying in its results.
Featured image: “Faces 529″ by Flickr user Greg Peverill-Conti, CC BY-NC-ND 2.0
Yvon Bonenfant is Reader in Performing Arts at the University of Winchester. He likes voices that do what voices don’t usually do, and he likes bodies that don’t do what bodies usually do. He makes art starting from these sounds and movements. These unusual, intermedia works have been produced in 10 countries in the last 10 years, and his writing published in journals such as Performance Research, Choreographic Practices, and Studies in Theatre and Performance. He currently holds a Large Arts Award from the Wellcome Trust and funding from Arts Council England to collaborate with speech scientists on the development of a series of participatory, extra-normal voice artworks for children and families; see www.yourvivaciousvoice.com. Despite his air of Lenin, he does frighteningly accurate vocal imitations of both Axl Rose and Jon Bon Jovi. www.yvonbonenfant.com.
REWIND! . . .If you liked this post, you may also dig:
This Is Your Body on the Velvet Underground– Jacob Smith