Archive | Disability Studies

Sounds of Science: The Mystique of Sonification


Welcome to the final installment of Hearing the UnHeard, Sounding Out!’s series on what we don’t hear and how this unheard world affects us. The series started out with my post on hearing, large and small, continued with a piece by China Blue on the sounds of catastrophic impacts and Milton Garcés’s piece on the infrasonic world of volcanoes. To cap it all off, we introduce “The Sounds of Science” by professor, cellist, and interactive media expert Margaret Schedel.

Dr. Schedel is an Associate Professor of Composition and Computer Music at Stony Brook University. Through her work, she explores the relatively new field of data sonification, generating new ways to perceive and interact with information through the use of sound. While everyone is familiar with the infographics, graphs, and images used to convey complex information, her work explores how we can expand our understanding of even complex scientific information by using our fastest and most emotionally compelling sense: hearing.

– Guest Editor Seth Horowitz

With the invention of digital sound, the number of scientific experiments using sound has skyrocketed in the 21st century, and as Sounding Out! readers know, sonification has started to enter the public consciousness as a new and refreshing alternative modality for exploring and understanding many kinds of datasets emerging from research into everything from deep space to the underground. We seem to be in a moment in which “science that sounds” has a special magic, a mystique that relies to some extent on misunderstandings in popular awareness about the processes and potentials of that alternative modality.

For one thing, using sound to understand scientific phenomena is not actually new. Diarist Samuel Pepys wrote about meeting scientist Robert Hooke in 1666 that “he is able to tell how many strokes a fly makes with her wings (those flies that hum in their flying) by the note that it answers to in musique during their flying.” Unfortunately, Hooke never published his findings, leading researchers to speculate on his methods. One popular theory is that he tied strings of varying lengths between a fly and an ear trumpet, recognizing that sympathetic resonance would cause the correct length of string to vibrate, thus allowing him to calculate the frequency. Even Galileo used sound, showing the constant acceleration of a ball due to gravity by using an inclined plane with thin moveable frets. By moving the placement of the frets until the clicks created an even tempo, he was able to come up with a mathematical equation to describe how time and distance relate when an object falls.
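Galileo’s trick reduces to simple arithmetic: under constant acceleration, distance grows with the square of elapsed time, so frets spaced at the square numbers are struck at an even tempo. A minimal sketch of that relationship (my notation, not Galileo’s):

```python
import math

# Distance along the incline under constant acceleration: d = 0.5 * a * t**2,
# so frets placed at square-number spacings are struck at equal time intervals.
a = 2.0  # acceleration along the incline (arbitrary units)

fret_positions = [0.5 * a * t**2 for t in range(1, 6)]   # [1, 4, 9, 16, 25]
click_times = [math.sqrt(2 * d / a) for d in fret_positions]
intervals = [later - earlier for earlier, later in zip(click_times, click_times[1:])]

print(fret_positions)  # spacing grows: 1, 4, 9, 16, 25
print(intervals)       # every interval is 1.0 -- an even tempo
```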

Illustration from Robert Hooke’s Micrographia (1665)

There have also been other scientific advances using sound in the more recent past. The stethoscope was invented in 1816 for auscultation, listening to the sounds of the body; it was later applied to machines, listening for the operation of technological gear. Underwater sonar was patented in 1913 and is still used to navigate and communicate using hydroacoustic phenomena. The Geiger counter was developed in 1928 using principles discovered in 1908; it is unclear exactly when the distinctive sound was added. These are all examples of auditory display (AD); sonification, the generation or manipulation of sound by using data, is a subset of AD. As the foreword to The Sonification Handbook states, “[Since 1992] Technologies that support AD have matured. AD has been integrated into significant (read “funded” and “respectable”) research initiatives. Some forward thinking universities and research centers have established ongoing AD programs. And the great need to involve the entire human perceptual system in understanding complex data, monitoring processes, and providing effective interfaces has persisted and increased” (Thomas Hermann, Andy Hunt, John G. Neuhoff, eds., The Sonification Handbook, iii).

Sonification clearly enables scientists, musicians, and the public to interact with data in a very different way, particularly compared to the more numerous techniques involving vision. Indeed, because hearing functions quite differently from vision, sonification offers an alternative kind of understanding of data (sometimes more accurate), which would not be possible using eyes alone. Hearing is multi-directional—our ears don’t have to be pointed at a sound source in order to sense it. Furthermore, the frequency response of our hearing is thousands of times more accurate than our vision: in order to reproduce a moving image, the sampling rate (called the frame rate) for film is 24 frames per second, while audio must be sampled at 44,100 samples per second to be reproduced accurately. In addition, aural perception works on simultaneous time scales—we can take in multiple streams of audio data at once at many different dynamics, while our pupils dilate and contract, limiting how much visual data we can absorb at any one time. Our ears are also amazing at detecting regular patterns over time in data; we hear these patterns as frequency, harmonic relationships, and timbre.

Image credit: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.

But hearing isn’t simple, either. In the current fascination with sonification, the fact that aesthetic decisions must be made in order to translate data into the auditory domain can be obscured. Headlines such as “Here’s What the Higgs Boson Sounds Like” are much sexier than headlines such as “Here is What One Possible Mapping of Some of the Data We Have Collected from a Scientific Measuring Instrument (which itself has inaccuracies) Into Sound.” To illustrate the complexity of these aesthetic decisions, which are always interior to the sonification process, I focus here on how my collaborators and I have been using sound to understand many kinds of scientific data.

My husband, Kevin Yager, a staff scientist at Brookhaven National Laboratory, works at the Center for Functional Nanomaterials using scattering data from x-rays to probe the structure of matter. One night I asked him how exactly the science of x-ray scattering works. He explained that x-rays “scatter” off of all the atoms/particles in the sample and the intensity is measured by a detector. He can then calculate the structure of the material using the Fast Fourier Transform (FFT) algorithm. He started to explain FFT to me, but I interrupted him because I use FFT all the time in computer music. The same algorithm he uses to determine the structure of matter, musicians use to separate frequency content from time. When I was researching this post, I found a site for computer music that actually discusses x-ray scattering as a precursor for FFT used in sonic applications.
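The overlap is easy to demonstrate in code. The sketch below (an illustration, not Kevin’s analysis pipeline) applies the same numpy FFT call to an “audio” signal and to a one-dimensional real-space density profile; only the interpretation of the axes changes:

```python
import numpy as np

n = 1024
x = np.arange(n)

# A "musical" signal: two tones; the FFT separates frequency content from time
audio_frame = np.sin(2 * np.pi * 5 * x / n) + 0.5 * np.sin(2 * np.pi * 12 * x / n)
audio_spectrum = np.abs(np.fft.rfft(audio_frame))

# A "material" signal: a periodic real-space density profile; scattering
# intensity is proportional to the squared magnitude of its Fourier transform
density_profile = 1 + np.cos(2 * np.pi * 8 * x / n)
scattering_like = np.abs(np.fft.rfft(density_profile)) ** 2

print(np.argsort(audio_spectrum)[-2:])     # bins 12 and 5: the two tones
print(np.argmax(scattering_like[1:]) + 1)  # bin 8: the structural period
```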

To date, most sonifications have used data that changes over time: a fly’s wings flapping, a heartbeat, a radiation signature. Except in special cases, Kevin’s data does not exist in time; it is a single snapshot. But because data from x-ray scattering is a Fourier transform of the real-space density distribution, we could use additive synthesis, with multiple simultaneous sine waves, to represent different spatial modes. Using this method, we swept through his data radially, like a clock hand, making timbre-based sonifications from the data by synthesizing sine waves with loudness based on the intensity of the scattering data and frequency based on the position.

We played a lot with the settings of the additive synthesis, including the length of the sound, the highest frequency, and even the number of frequency bins (going back to the clock metaphor: pretend the clock hand is a ruler; the number of frequency bins would be the number of demarcations on the ruler), arriving eventually at a set of optimized variables.
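To make the mapping concrete, here is a stripped-down reconstruction of this kind of radial sweep. It follows the description above but is not our actual code, and the frequency range is arbitrary: radius becomes pitch, scattering intensity becomes loudness, and n_bins is the number of demarcations on the “ruler” — the very parameter the tracks below vary:

```python
import numpy as np

def sonify_scattering(image, n_bins=50, duration=5.0, sr=44100,
                      f_lo=100.0, f_hi=4000.0):
    """Sweep a detector image radially, like a clock hand, summing one sine
    wave per frequency bin: intensity -> loudness, radius -> pitch."""
    cy, cx = np.array(image.shape) / 2.0
    t = np.linspace(0, duration, int(sr * duration), endpoint=False)
    angles = (t / duration) * 2 * np.pi            # clock-hand position over time
    freqs = np.linspace(f_lo, f_hi, n_bins)        # one frequency per radial bin
    radii = np.linspace(0, min(cy, cx) - 1, n_bins)

    out = np.zeros_like(t)
    for f, r in zip(freqs, radii):
        # sample the image along the circle of radius r as the hand sweeps
        ys = (cy + r * np.sin(angles)).astype(int)
        xs = (cx + r * np.cos(angles)).astype(int)
        amp = image[ys, xs]                        # intensity -> loudness
        out += amp * np.sin(2 * np.pi * f * t)
    return out / max(n_bins, 1)

# e.g. a fake detector image containing a bright scattering ring
img = np.random.rand(256, 256) * 0.05
yy, xx = np.mgrid[:256, :256]
img += np.exp(-((np.hypot(yy - 128, xx - 128) - 60) ** 2) / 20)
signal = sonify_scattering(img, n_bins=50)
```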

Here is one version of the track we created using 10 frequency bins:

[audio]

Here is one we created using 2000:

[audio]

And here is one we created using 50 frequency bins, which we settled on:

[audio]

On a software synthesizer, these settings would be like the default preset. In the future we hope to have an interactive graphical user interface where sliders control these variables, just as a musician tweaks the sound of a synth, so that scientists can bring out or mask aspects of the data.

To hear what that would be like, here are a few tracks that vary length:

[audio]

[audio]

[audio]

Finally, here is a track we created using different mappings of frequency and intensity:

[audio]

Having these sliders would reinforce to the scientists that we are not creating “the sound of a metallic alloy”; we are creating one sonic representation of the data from the metallic alloy.

It is interesting that such a representation can be vital to scientists. At first, my husband went along with this sonification project as more of a thought experiment than as something he thought would actually be useful in the lab, until he heard something distinct in one of those sounds, suggesting that a sample was misaligned. Once Kevin heard that glitched sound (you can hear it in the video above), he was convinced that sonification was a useful tool for his lab. He and his colleagues are dealing with measurements 1/25,000th the width of a human hair, aiming an X-ray through twenty pieces of equipment to get the beam focused just right. If any piece of equipment is out of kilter, the data can’t be collected. This is where our ears’ non-directionality is useful: the scientist can be working at the computer and, through ambient sound, know when a sample is misaligned.


It remains to be seen/heard whether the sonifications will be useful for actually understanding the material structures. We are currently running an experiment using Mechanical Turk to determine whether this kind of multi-modal display (using vision and audio) is actually helpful. Basically, we are training one group of people on just the images of the scattering data, training another group on the images plus the sonification, and testing how well each group does.

I’m also working with collaborators at Stony Brook University on sonification of data. In one experiment we are using ambisonic (3-dimensional) sound to create a sonic map of the brain to understand drug addiction. Standing in the middle of the ambisonic cube, we hope to find relationships between voxels (cubes of brain tissue, analogous to pixels). When neurons fire simultaneously in different areas of the brain, there is most likely a causal relationship, which can help scientists decode the brain activity of addiction. Computer vision researchers have been searching for these relationships unsuccessfully; we hope that our sonification will allow us to hear associations between distinct parts of the brain which are not easily recognized with sight. We are hoping to leverage the temporal pattern recognition of our auditory system, but we have been running into problems doing the sonification; each slice of data from the fMRI has about 300,000 data points. We have it working with 3,000 data points, but either our programming needs to get more efficient, or we need a much more powerful computer in order to work with all of the data.
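Part of the bottleneck is that naive additive synthesis loops over every data point in Python. A generic first optimization, sketched below on stand-in data (this is not our lab code), is to vectorize the oscillator bank in chunks so the heavy arithmetic happens inside numpy:

```python
import numpy as np

def additive_bank(amps, freqs, duration=1.0, sr=44100, chunk=512):
    """Sum one sine partial per data point, vectorized in chunks to bound
    memory -- the usual first step when scaling from 3,000 data points
    toward 300,000."""
    t = np.linspace(0, duration, int(sr * duration), endpoint=False)
    out = np.zeros_like(t)
    for i in range(0, len(amps), chunk):
        # (chunk, n_samples) phase matrix, summed down the partial axis
        phases = 2 * np.pi * np.outer(freqs[i:i + chunk], t)
        out += (amps[i:i + chunk, None] * np.sin(phases)).sum(axis=0)
    return out / len(amps)

voxels = np.random.rand(3000)               # stand-in for one fMRI slice
freqs = np.linspace(50, 8000, voxels.size)  # one partial per voxel
signal = additive_bank(voxels, freqs, duration=0.5)
```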

On another project we are hoping to sonify gait data using smartphones. I’m working with some of my music students and a professor of physical therapy, Lisa Muratori, who works on understanding the underlying mechanisms of mobility problems in Parkinson’s disease (PD). The physical therapy lab has a digital motion-capture system and a split-belt treadmill for asymmetric stepping—the patients are supported by a harness so they don’t fall. PD is a progressive nervous system disorder characterized by slow movement, rigidity, tremor, and postural instability. Because of degeneration of specific areas of the brain, individuals with PD have difficulty using internally driven cues to initiate and drive movement. However, many studies have demonstrated an almost normal movement pattern when persons with PD are provided external cues, including significant improvements in gait with rhythmic auditory cueing. So far, the research with PD and sound has been unidirectional: the patients listen to sound and try to match their gait to the external rhythms of the auditory cues. In our system we will use biofeedback to sonify data from sensors the patients will wear and feed error messages back to the patient through music. Eventually we hope that patients will be able to adjust their gait by listening to self-generated musical distortions on a smartphone.
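The feedback loop can be sketched in a few lines. Everything below is hypothetical: the heel-strike timestamps, the asymmetry measure, and the mapping to a detune amount are placeholders for whatever the final system will use, not the system itself:

```python
def gait_error(left_steps, right_steps):
    """Normalized asymmetry between mean left and right step intervals
    (0 = perfectly symmetric gait)."""
    def mean_interval(ts):
        return sum(b - a for a, b in zip(ts, ts[1:])) / max(len(ts) - 1, 1)
    l, r = mean_interval(left_steps), mean_interval(right_steps)
    return abs(l - r) / ((l + r) / 2)

def detune_cents(error, max_cents=100):
    """Map gait error to a musical distortion the patient can hear and correct."""
    return min(error, 1.0) * max_cents

# hypothetical heel-strike timestamps (seconds) from the phone's sensors
left = [0.0, 1.1, 2.2, 3.3]
right = [0.55, 1.6, 2.75, 3.8]
print(detune_cents(gait_error(left, right)))  # detune amount sent to the synth
```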

As sonification becomes more prevalent, it is important to understand that aesthetic decisions are inevitable and even essential in every kind of data representation. We are so accustomed to looking at visual representations of information—from maps to pie charts—that we may forget that these are also arbitrary transcodings. Even a photograph is not an unambiguous record of reality; the mechanics of the camera and artistic choices of the photographer control the representation. So too, in sonification, do we have considerable latitude. Rather than view these ambiguities as a nuisance, we should embrace them as a freedom that allows us to highlight salient features, or uncover previously invisible patterns.

__

Margaret Anne Schedel is a composer and cellist specializing in the creation and performance of ferociously interactive media. She holds a certificate in Deep Listening with Pauline Oliveros and has studied composition with Mara Helmuth, Cort Lippe and McGregor Boyle. She sits on the boards of 60×60 Dance, the BEAM Foundation, Devotion Gallery, the International Computer Music Association, and Organised Sound. She contributed a chapter to the Cambridge Companion to Electronic Music, and is a joint author of Electronic Music published by Cambridge University Press. She recently edited an issue of Organised Sound on sonification. Her research focuses on gesture in music, and the sustainability of technology in art. She ran SUNY’s first Coursera Massive Open Online Course (MOOC) in 2013. As an Associate Professor of Music at Stony Brook University, she serves as Co-Director of Computer Music and is a core faculty member of cDACT, the consortium for digital art, culture and technology.

Featured Image: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.

Research carried out at the Center for Functional Nanomaterials, Brookhaven National Laboratory, is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-AC02-98CH10886.

REWIND! If you liked this post, you might also like:

The Noises of Finance–Nicholas Knouf

Revising the Future of Music Technology–Aaron Trammell

A Brief History of Auto-Tune–Owen Marshall

Pleasure Beats: Using Sound for Experience Enhancement 

"Biophonic Garden" by Flickr user Rene Passet, CC BY-NC-ND 2.0

After a rockin’ (and seriously informative) series of podcasts from Leonard J. Paul, a Drrty South banger dropped by SO! Regular Regina Bradley, a screamtastic meditation from Yvon Bonenfant, a heaping plate of food sounds from Steph Ceraso, and crowd chants courtesy of Kariann Goldschmidt’s work on live events in Brazil, our summer Sound and Pleasure series comes to a stirring (and more intimate) conclusion. Tune into Justyna Stasiowska’s frequency below. And thanks for engaging the pleasure principle this summer!--JS, Editor-in-Chief

One of my greatest pleasures is lying in bed, eyes closed and headphones on. I attune to a single stimulus while being enveloped in sound. Using sensory deprivation techniques like blindfolding and isolating headphones is a simple recipe for relaxation, but the website Digital Drugs offers you more. A user can play its mp3 files and be surrounded by an acoustical downpour that swells and develops into gradient waves. The user feels as if caught in a hailstorm, surrounded by constant, gritty aural movement. Transfixed by the feeling of noise, the listener finds the outside indistinguishable from the inside.

Screenshot courtesy of the author

Sold by the i-Doser company, Digital Drugs use mp3 files to deliver binaural beats in order to “simulate a desired experience.” The user manual advises lying in a dark and silent room with headphones on while listening to the recording. Simply purchase the mp3 and fill the prescription by listening. Depending on the user’s needs, the experience can be preprogrammed with a specific scenario; this way users can condition themselves with Digital Drugs in order to feel a certain way. The user can control the experience by choosing the “student” or “confidence” dose, depending on whether you’d like your high to resemble a mild dose of marijuana or an intense dose of cocaine. The receiver is able to perceive every reaction of their body as a drug experience which they themselves produced. The “dosing” of these aural drugs is restricted by a medical warning, and “dose advisors” are available for consultation.

Screenshot courtesy of the author

Thus, the overall presentation of Digital Drugs resembles a crisscross of medical and narcotic clichés under the slogan “Binaural Brainwave doses for every imaginable mood.” While researching the phenomenon of Digital Drugs, I have tried not to dismiss it as another gimmick or new age meditation prop. Rather, I argue that the i-Doser company offers a simulation of a drug experience by using the discourse of psychoactive substances to describe sounds: the user becomes an actor taking part in a performance.

By tracing these strategies on a macro and micro scale, I show a body emerging from a new paradigm of health. I argue that we have become a psychosomatic creature called the inFORMational body: a body that is formed by information, which shapes the practices of health undertaken to feel good and which forms us. This body is networked, much like a fractal, and connects different agencies operating on both macro (society) and micro (individual) scales.

Macroscale Epidemic: The Power of Drug Representation

Heinrich Wilhelm Dove described binaural beats in 1839: a low-frequency pulsation perceived when two tones at slightly different frequencies are presented separately, through stereo headphones, to each of a subject’s ears. The difference between the tones must be relatively small, only up to 30 Hz, and the tones themselves must not exceed 1000 Hz. Subsequently, scientific authorities presented the phenomenon as a tool for stimulating the brain in therapies for neurological afflictions. Gerald Oster described the applications in 1973, and the Monroe Institute later continued this research, using binaural beats in meditation and “expanding consciousness” as a crucial part of its self-improvement programs.

i-Doser then molded this foundational research into a narrative presenting binaural beats as brain stimulation for a desired experience. Binaural beats can be understood simply as an acoustic phenomenon with applications in practices like meditation and medical therapy.
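The underlying recipe is easy to reproduce. Here is a minimal sketch (mine, not i-Doser’s) that writes a ten-second stereo file within Dove’s constraints — carriers below 1000 Hz, difference below 30 Hz, one tone per ear:

```python
import numpy as np
import wave

sr, duration = 44100, 10.0
t = np.arange(int(sr * duration)) / sr

left = np.sin(2 * np.pi * 200.0 * t)   # 200 Hz in the left ear
right = np.sin(2 * np.pi * 210.0 * t)  # 210 Hz in the right ear
# the 10 Hz "beat" arises in the brain, not in the air -- headphones required

stereo = np.stack([left, right], axis=1)
pcm = (stereo * 32767).astype(np.int16)  # 16-bit PCM

with wave.open("beat_10hz.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)
    f.setframerate(sr)
    f.writeframes(pcm.tobytes())
```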

i-Doser also weaves unverified claims about binaural beats into a narrative assembled from scattered information about the research; it connects these authorities with YouTube recordings of human reactions to Digital Drugs. Video testimonies of Digital Drugs users caused a considerable stir among both parents and teachers in American schools two years ago; one American school even banned mp3 players as a precautionary measure. In the YouTube videos one can see a person lying with headphones on. After a while we see an involuntary body movement that in some videos might resemble a seizure. Losing control over one’s body becomes the highlight of the footage, alongside a subjective account also present in the video. The body movements are framed as a drug experience both for the viewer, who is a vicarious witness, and for the participant, who has an active experience.

This type of footage as evidence was popularized as early as the 1960s when military footage showed reactions to psychoactive substances such as LSD.

In the same manner as the Digital Drugs video, the army footage highlights the process of losing control over one’s body, complete with subjective testimonies as evidence of the psychoactive substance’s power.

This kind of visualization is usually fueled by paranoia akin to Cold War fears, depicting daily attacks by an invisible enemy upon unaware subjects. Information from authoritative agencies about binaural beats created a reference base that fueled the concern framing the YouTube videos as evidence of a drug experience. It shows that the angst isn’t triggered by the technology, in this case Digital Drugs, but by the form in which the “invisible attack” is presented: through sound waves. The manner of framing is more important than the hypothetical action itself. Context, then, changes recognition.

Microscale Paradigm Shift: Health as Feeling 

On an individual level, did feeling better always mean being healthy? In Histoire des pratiques de santé: le sain et le malsain depuis le Moyen Age, Georges Vigarello, a continuator of the Foucauldian school of biopolitics, explains that well-being became a medicalized condition in the 20th century with growing attention to mental health. Being healthy was no longer only about the good condition of the body but became a state of mind; feeling was important as an overall recognition of oneself. From the biopolitical perspective, Vigarello points out, health became more than just the government’s concern for individual well-being; it was maintained by medical techniques and technologies.

In the case of Digital Drugs, the well-being of children was safely governed by parents, with media coverage prompting preventive measures in schools against the “sound drugs.” Similarly, the UAE called for a ban on “hypnotic music,” citing it as an illegal drug like cannabis or ecstasy. Using this perspective, I would add that feeling better then becomes a never-ending warfare; well-being becomes understood as a state (as in condition, and as in governed territory).

Well-being is also an obligation to society, carried out by specific practices. What does a healthy lifestyle actually mean? Its meaning includes self-governance: controlling yourself, keeping fit, discipline (embodying the rules). In order to do it you need guidance: authorities (health experts and trainers) and common knowledge (the “google it” modus operandi). All of these agencies create a strategy to make you feel good every day and perform at a high rate. Digital Drugs, then, become products that promise to boost your energy, increase your endurance, and extend your mental capabilities. High performance is redefined as a state that enables instant access to happiness, pleasure, and relaxation.

"Submerged" by Flickr user Rene Passet, CC BY-NC-ND 2.0

“Submerged” by Flickr user Rene Passet, CC BY-NC-ND 2.0

inFORMational Body 

Vigarello reflects that understanding health in terms of low/high performance—itself based on the logic of consumption—created the concept of limitless enhancement. Here, he refers to the information model, connecting past assumptions about health with a technique of self-governing. It is based on the senses and an awareness of oneself, using “intellectual” practices like relaxation and “probing oneself” (or knowing what vitamins you should take). The medical apparatus’s priority, moreover, shifted from keeping someone in good health to maintaining well-being. The subjective account became the crucial element of a diagnosis, supporting itself on information from different sources in order to imply the feeling of a limitless “better.” This strategy relies strongly on the use of technologies, attention to the sensual, and self-recognition—precisely the methodology behind Digital Drugs’ focus on enhancing well-being.

Still, this inFORMational body needs a regulatory system. How do we know that we really feel better? Apart from the media well-being campaign (and the amount of surveillance it involves), we are constantly asked about our health status in the common greeting phrase, whose unheimlich-ness only becomes apparent to non-Anglophone speakers. These checkpoint techniques become an everyday instrument of discipline and rely on an obligation to express oneself in social interactions.

So how do we feel? As for now, everything seems “OK.”

Featured image: “Biophonic Garden” by Flickr user Rene Passet, CC BY-NC-ND 2.0

Justyna Stasiowska is a PhD student in the Performance Studies Department at Jagiellonian University. She is preparing a dissertation under the working title “Noise: Performativity of Sound Perception,” in which she argues that frequencies don’t have a strictly programmed effect on the receiver and that the way of experiencing sounds is determined by the frames or modes of perception established by the situation and cognitive context. Justyna earned her M.A. in Drama and Theater Studies. Her thesis was devoted to the notion of liveness in the context of the strategies used by contemporary playwrights to manipulate the recipients’ cognitive apparatus using the DJ figure. You can find her on Twitter and academia.edu.

REWIND! If you liked this post, check out:

Papa Sangre and the Construction of Immersion in Audio Games–Enongo Lumumba-Kasongo

On Sound and Pleasure: Meditations on the Human Voice–Yvon Bonenfant

This is Your Body on the Velvet Underground–Jacob Smith

 

On Sound and Pleasure: Meditations on the Human Voice


After a rockin’ (and seriously informative) series of podcasts from Leonard J. Paul–a three-part “Inside the Game Sound Designer’s Studio”–and a post on sound and black women’s sexual freedom from SO! Regular Regina Bradley, our summer Sound and Pleasure series keeps doin’ it and doin’ it and doin’ it well, this week with a beautiful set of meditations from scholar, artist, performer, and voice activist Yvon Bonenfant. EVERYBODY SCREAM!!!--JS, Editor-in-Chief

1

What I have to say about sound and pleasure can mostly be summed up this way: everyone deserves to take profound pleasure in their body’s sound.

Not only this, everyone deserves to both engage passionately with social sound and negotiate the exchange of social sound on pleasurable terms.

Like other expressive systems, however, these inalienable sonic human rights are mostly ignored, curtailed, or otherwise ‘disciplined and punished,’ in the Foucauldian sense, by our social systems. So, we are mostly neurotic about, or otherwise hung up on, what kinds of sounds we make, and where and when. We fetishise sound, particularly virtuosically framed sound, because it is part of a series of sublimated impulses; or we repress it because we think we aren’t supposed to emit it; or we ignore it.

2

"DSC_0296" by Flickr user Anastasia CW, CC BY-NC-SA 2.0

“DSC_0296″ by Flickr user Anastasia CW, CC BY-NC-SA 2.0

In any given human relationship within which all parties can vocalize, the voice is an evident, key relational tool. It is full of gesture and meaning and text and sends rapid-fire, complex, layered, even self-contradictory or oxymoronic messages. It is a truly tangled web, and of course, for those who can use speech, transmits language.

However, I’d like to disentangle our sound from our language for a moment. Indeed, sound is not necessary in order to develop and transmit linguistically carried ideas, information and impulses. It has long been accepted that sign languages are fully developed languages, with intricate grammatical systems, vocabularies, and all of the other features of spoken languages.  It is thus not necessary to use sound as a carrier of language. Yet if we have a voice, we almost always use sound to carry our language. And we force deaf people to try to fake having a voice and to fake listening to voices through lip reading and gesturing.

The last twenty years have seen a real boom in speculation, and even scientific experiments, theorising why human bodily sound – the most evident aspect of which is our vocal sound – is so important to us. Musicology, biomusicology, evolutionary psychology, neuropsychology, and cultural studies of many kinds have tried to account for this. I have my own favorite reason, one I’ve tried to describe in a number of scholarly articles: sound is much like touch. Like, yet unalike. It reaches and vibrates bodies, but at a distance. It voyages through space in other ways, but it evokes haptic responses.

3

Sound isn’t solid, but it takes up space. This is expressed by Steven Connor within his concept of the vocalic body. When we sound, there is a resonant field of vibration that moves through matter and behaves according to the laws of physics – it vibrates molecules. This vibratory field leaves us, but is of us, and it voyages through space. Other people hear it. Other people feel it.

"GAELLE" by Flickr user Pauline Thomas, CC BY-NC 2.0

“GAELLE” by Flickr user Pauline Thomas, CC BY-NC 2.0

I’ve said that sound is like touch. However, one key way that it is not like touch is that it can do this thing. It can leave our bodies and travel away from us. We don’t need to grip it. We don’t need to hold on. And once emanated, it is out of our control.

More than one emanation can co-exist within matter. Their vibrations interact with one another, waves colliding and travelling in similar or different directions, and the vocalic bodies that they represent are morphed, hybridized: they intersect and invent composite bodies.

We hear the resulting harmonies. Historically policed into ‘consonances’ and ‘dissonances,’ these collisions can be heard anew if we let the negativizing connotations of either of those words go and simply listen to the results. Voices sounding simultaneously create choreographies of gesture that can be jubilant, depressing, assertive, aggressive, delightful, morose… or many of these simultaneously and in rapid alternation.

The fields of human sound in which we bathe are a continually self-knitting web of sensation. They are full of gestures pregnant with intention, filled with improvisatory spontaneity, success, failure and experimentation. They are filled with a desire to act upon matter, and to reach and engage one another.

4

My Ukrainian-origin mother was ‘loud’, I guess, at least by Anglo-Saxon standards, and her voice was timbrally very rich. And my father was a radio announcer (he disliked being called a DJ immensely, even though he worked in commercial radio and worked on shows that spun discs – he preferred being associated with talking). His voice was also very rich, as well as extremely crafted. It could be pointed and severe: a weapon. He had professional command of its qualities. We were not a quiet family; none of us were vocal wallflowers. But were our soundings pleasure-filled? Certainly, we were allowed to make lots of sound in some circumstances. However, just being allowed to be loud – though it might sometimes be a pleasure – does not necessarily lead to a pleasure-filled dynamic. Weightlifting makes us stronger, but it doesn’t necessarily feel good.

The amount of sound and whether ‘lots’ of it, or heightenings of its qualities – lots of amplitude, or lots of other kinds of distinctness, let’s say things like pitch or emotional timbre – are key variable features of family life in our cultures. Sound takes us directly into the meatiest of interpersonal dynamics – the dynamics of space and gesture, the dynamics of who takes up space with their sound and when. Families are, of course, microcosms of this sonic dynamic, but any group within which we generate relationships and encounters is subject to this dynamic, too. Our very own bodies end up developing what Thomas Csordas might call a ‘somatic mode’ that embodies our experience of these dynamics.

"Scream" by Flickr user madamepsychosis, CC BY-NC-ND 2.0

“Scream” by Flickr user madamepsychosis, CC BY-NC-ND 2.0

Whether we start from psychodynamic, neuropsychiatric, or even habitus-based  models, it’s clear that repressing the expression of bodily sound regulates breathing impulses and other metabolic processes in ways that might become, well, habits.

Let’s put this in other ways.

The classic, Freudian, psychodynamic model of neurosis – as disputed as it is, and with all of its colonial, sexist, homophobic, racist and even abuse-denying overtones – did at least one thing for our understanding of what repressed emotion does. Repressed emotion affects the body.

Today, a popular understanding of this kind of emotional repression from a biophysical perspective might be: the use of the conscious mind to hold back emotional flow, and along with it, the emotional qualities of certain associations,  memories, or even the content of the memories themselves.

Repressing this thing we might call emotional flow represses the voice. The literal, physical voice. Now, this kind of repression of the voice can become what Freudians would call unconscious. To allow it out is no longer a choice that can be made, because we’re so used to holding back that we don’t realize we’re doing it any more.

Somatics have taught us, through the contested practices of the body psychotherapies descended from Wilhelm Reich’s work, or Bonnie Bainbridge Cohen’s Body-Mind Centering, or numerous other somatic practices – from certain styles of yoga through to Zen meditation and beyond – that emotional flow is at least partly dependent on how we breathe. And neuropsychology and physiology bear this out.

"Screaming Out My Hell" by Flickr user L'Orso Sul Monociclo, CC BY 2.0

“Screaming Out My Hell” by Flickr user L’Orso Sul Monociclo, CC BY 2.0

Whatever might ‘cause’ an emotion – and the roots of the causes of emotion are a source of debate – once it gets going, it isn’t just a thought process. Emotion is meaty and full of pumping hormones and breath pattern alterations and gestures and rushes of fluid. Chemicals get released. Chemicals get washed away. Heart rates speed up and slow down. Our breath rises and falls and its patterns change. Digestion patterns speed up or slow down or get interrupted. What happens in the body affects the body. What happens in the body affects the voice. Ever heard that kind of voice that seems hardened against the world? Or that media voice – the voice that is carefully shaped to invoke reason? Maybe these vocalisers can never let go of that sound: maybe it’s the only sound they can do, now. It’s just too habitual to let it change.

So, these habits can become so habitual that we don’t notice them anymore. We might change our breathing in some way to modify our expressive states. Because the exact nature of the sound our voices make is exquisitely dependent on how we breathe, and on everything else we do with our bodies, it then changes as well. Our choices to not let impulses flow – and the breath is only one bodily impulse among many –  get caught up in this web. What were once choices can become embedded, difficult, and stubborn. To go far beyond the psychoanalytic and neurophysiological models, we can end up embodying a culture of these choices, and invent together a cultural body that regulates vocal sound based on groups of people making similar choices or playing by similar rules of sonic exchange.

This can end up perpetuating itself within our very tissues, and it can be an incredibly subtle dynamic to identify and shift. The way we embody the complexities of how we structure our physical and psychological engagement with the world – the ways we breathe, look, move, gesture… the ensemble of these is how Bourdieu defined the habitus. Where these complexities start and end is perhaps an infinite loop, a continual cycle of turning and exchange and influence flowing from ourselves to our culture and back again. Our bodies are cultural, counter-cultural, infra-cultural, extra-cultural bodies: we react to culture; we interact with it: we take positions.

Sound – who gets to do it, and when and how – is negotiated, with others, but also, within our own bodies. The traces that others leave there, the things we might call sonic and vocal inhibitions, tensions, these held-back-nesses, eventually become ours to carry, live with, and/or dissolve. They are gifted to us by our culture…. by our environment… by our experience … and by our bodies themselves.

We negotiate sounding.

Pleasure is negotiated, too.

5

"Quiet" by Flickr user Leo Reynolds, CC BY-NC-SA 2.0

“Quiet” by Flickr user Leo Reynolds, CC BY-NC-SA 2.0

We do this to our children: we shut them up. Oh, of course, we also facilitate their sound, and some do this more than others. But even if we give them sonic liberty at home, someone will shut them up, somewhere. We all know and we all remember being silenced as children by somebody, or at least, made to raise our hands in a classroom to ensure one speaker at a time, chosen by the authority in question. Later, teenagers, more often girls than boys, are called mouthy. The mouth: implicitly loud, and if too active, implicitly offensive. The term has been used against feminists, every identity we might include within LGBTI+, African-Americans, and the list goes on.

The wet, open, loud, loud mouth, just ready to mouth off, just ready to make trouble with its irritating, nasty, and above all, bothersome noise – bothersome because it makes us have to react – to have to consider the existence, the needs, the demands of those we might otherwise ignore – that moist orifice can be a source of great pleasure.

6

And on the score of that poor mouthy mouth, let’s consider some other colloquial terms, like ‘sucker’. Sucking is bad, apparently. It expresses need. Thumb out of the mouth! Stop wanting intimacy, reassurance, warmth, contact, and above all stop wanting to satisfy your hard-wired, biological need to suck for comfort and food (my little child). And you there, you sexually active adult! You fucking cocksucker. You ass-licker. That gaping mouth should shut itself up: its gooey pleasures are disgusting. These pleasures involve direct skin-to-skin contact.

Perhaps there is a revolution to be had, in the simple facilitation of gape-mouthed drool.

The vocal tract – that long tunnel surrounded by tongue and palates and teeth and various bits of throat, with at its bottom, the resonant buzz of elastic membranes, through which air is squeezed – also grips the world with direct contact. It’s not just a resonating and sound-shaping cave.

7

"Whistling Boy by Frank Duveneck" by Flickr user Mary Harrsch, CC BY-NC-SA 2.0

“Whistling Boy by Frank Duveneck” by Flickr user Mary Harrsch, CC BY-NC-SA 2.0

I’m making some artworks for children and families right now, and I group them together under the project moniker “Your Vivacious Voice” [See SO! Amplifies post from 6/19/14 to learn more about the free Voice Bubbles App aspect of YB’s project—ed]. I’m collaborating with some scientists and clinician-scientists on this project. They all work with the voice – in psycholinguistics, in understanding infant language acquisition, in voice medicine, and even in laryngeal surgery. We interview these scientists, and use inspiration from our conversations as sources of metaphors for art-making.

One of these is the head Speech and Language Therapist at the Royal National Ear, Nose and Throat Hospital in London, Dr Ruth Epstein. She sees and/or oversees some of the most difficult cases of vocal problems in the whole of the UK. When we asked her what concerns she’d most like us to address in artworks for children and families, she responded along the lines of: please, find a way to get through to them that voice is contact, human contact. She has begun using communication skills, such as eye contact and turn-taking exercises, in addition to vocal skills,  in families with children who have injured voices – because she realized at some point that in many of these families, the near exclusive modality of contact was yelling: yelling without contact – without relationship.

The contactless yell is the thrashing arm that somehow remains alone in a void. It’s a yell that might strike if it lands on other flesh, but somehow doesn’t grip, and can’t convert to a caress. It can’t hold… it only punches.

This reminds me of a rockish tune by Carole Pope and Rough Trade from the Canadiana of my childhood – the refrain went:

It hit me like, it hit me like, it hit me like a slap, oh-oh-oh, all touch…
All touch and all touch and no contact…..

8

Back to our children, and to us.

Bodily sound can be a pointed weapon. It can be violent, in that it can frighten, dominate, attack, evoke deep fear, and engage other mechanisms of terror and control and subjugation, and that it can attempt to annihilate our ability to recognize the existence of others. We can drown out others’ sounds. We can drown out their gesture. We can drown their vocalic bodies in our own through amplitude and clashes of timbral spectra. We can shut them up.

Let us consider, here, the desire for amplification and how amplified sound represents an exaggeration of this power, a cybernetic enhancement of the ability to dominate with our emanating waves. We can drown out the social ability for whole groups to hear anyone but ourselves.

However, if, in our cultural environments, everyone is allowed to sound – if, indeed, we facilitate social environments in which everyone’s sound is welcome, then those who are subjected to vocal and sonic violence have an incredible counter-power to this power: they have the power to make sound too.

Although making sound back to violent sound, back to annihilating sound, is not always easy, possible or permitted, it is a power that can’t be easily erased. And we can almost always feel, if not cognitively hear, our own sound vibrate within our own skulls and through our own bones, no matter what is coming from the outside, no matter what waves of vocalic body are streaming toward us. Our sound waves continue to exist, even if transformed.

"Mouthing Off" by Flickr user Demi-Brooke, CC BY 2.0

“Mouthing Off” by Flickr user Demi-Brooke, CC BY 2.0

We can give voice to ourselves. We can change our habits. We can expand away from them.

It isn’t even necessary to fight back. It’s only necessary to vibrate.

And we can take it further.

We can actively encourage each other’s sound. We can actively encourage our children’s sound. We can actively encourage social sound. We can actively encourage a dance with others’ voices. We can facilitate, make space for, enjoy being touched by, the uniqueness of other voices. We can play with how our voices collide and create children with the vocalic bodies of others. After all, our composite vocal bodies are the products of our intensive exchange. We can jubilate in the massages we receive by making our own sound, by vibrating our own skulls, flesh, blood, lymph, interstitial fluid, and the air near us, and we can make it possible to engage in passionate exchange with the vibrations of others.

This might be something like music. Or other kinds of art. Or it might be simple conversation. Or it might be cooing with a baby. Or it might be making comforting sounds while a toddler cries. Or it might be screaming with rage together.

What it always is, though, is focusing on, opening up to, enjoying the dynamics of the dance of individual, idiosyncratic, messy, fleshly, bodily, sonic emanations reacting with one another.

In the end, the policing of our sound is under our control. We can find ways to unpolice, and enjoy the unbridledness of our sound.

Our bodily sound is a means of engaging passionately with relationship and of glorying in its results.

Featured image: “Faces 529″ by Flickr user Greg Peverill-Conti, CC BY-NC-ND 2.0

Yvon Bonenfant is Reader in Performing Arts at the University of Winchester. He likes voices that do what voices don’t usually do, and he likes bodies that don’t do what bodies usually do. He makes art starting from these sounds and movements. These unusual, intermedia works have been produced in 10 countries in the last 10 years, and his writing published in journals such as Performance Research, Choreographic Practices, and Studies in Theatre and Performance. He currently holds a Large Arts Award from the Wellcome Trust and funding from Arts Council England to collaborate with speech scientists on the development of a series of participatory, extra-normal voice artworks for children and families; see www.yourvivaciousvoice.com. Despite his air of Lenin, he does frighteningly accurate vocal imitations of both Axl Rose and Jon Bon Jovi. www.yvonbonenfant.com.

REWIND! . . . If you liked this post, you may also dig:

Experiments in Aural Resistance: Nordic Role-Playing, Community, and Sound– Aaron Trammell

This Is Your Body on the Velvet Underground– Jacob Smith

Sound Designing Motherhood: Irene Lusztig & Maile Colbert Open The Motherhood Archives– Maile Colbert

Papa Sangre and the Construction of Immersion in Audio Games


Editor’s Note: Welcome to Sounding Out!’s fall forum titled “Sound and Play,” where we ask how sound studies, as a discipline, can help us think through several canonical perspectives on play. While Johan Huizinga once argued that play is the primeval foundation from which all culture has sprung, it is important to ask where sound fits into this construction of culture; does it too have the potential to liberate or re-entrench our social worlds? SO!’s new regular contributor Enongo Lumumba-Kasongo notes how audio games, like Papa Sangre, often use sound as a gimmick to engage players, and considers the politics of this feint. For whom are audio games immersive, and how does the experience serve to further marginalize certain people or disadvantaged groups?–AT

Immersion is a problem at the heart of sound studies. As Frances Dyson (2009) suggests in Sounding New Media, “Sound is the immersive medium par excellence. Three dimensional, interactive and synesthetic, perceived in the here and now of an embodied space, sound returns to the listener the very same qualities that media mediates…Sound surrounds” (4). Alternatively, in the context of games studies (a field that is increasingly engaged with sound studies), issues of sound and immersion have most recently been addressed in terms of instrumental potentialities, historical developments, and technical constraints. Some notable examples include Sander Huiberts’ (2010) M.A. thesis, “Captivating Sound: The Role of Audio Immersion for Computer Games,” in which he details technical and philosophical frames of immersion as they relate to the audio of a variety of computer games, and an article by Aaron Oldenburg (2013), “Sonic Mechanics: Audio as Gameplay,” in which he situates the immersive aspects of audio-gameplay within contemporaneous experimental art movements. This research provokes the question: how do those who develop these games construct the idea of immersion through game design, and what does this mean for users who challenge this construct? Specifically, I would like to challenge Dyson’s claim that sound really is “the immersive medium par excellence” by considering how the concept of immersion in audio-based gameplay can be tied to privileged notions of character and game development.

In order to investigate this problem, I decided to play an audio game and document my daily experiences on a WordPress blog. Based on its simulation of 3D audio, Papa Sangre was the first game that came to mind. I also selected the game because of its accessibility; unlike the audio game Deep Sea, which is celebrated for its immersive capacities but is only playable by request at The Museum of Art and Digital Entertainment, Papa Sangre is purchasable as an app for $2.99 and can be played on an iPhone, iPad, or iPod. Papa Sangre helps us to consider new possibilities for what is meant by virtual space, and it serves as a useful tool for pushing back against essentialisms of “immersion” when talking about sound and virtual space.

Papa Sangre comprises 25 levels, the completion of which leads the player incrementally closer to the palace of Papa Sangre, a man who has kidnapped a close friend of the protagonist. The game boasts real-time binaural audio, meaning that the game’s diegetic sounds (sounds that the character in the game world can “hear”) pan across the player’s headphones in relation to the movement of the game’s protagonist. The objective of each level is to locate and collect musical notes that are scattered through the game’s many topographies, while avoiding any number of enemies and obstacles, of course.

[video]
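A rough way to picture what real-time binaural audio does under the hood: pan each diegetic sound by the angle between the protagonist’s facing and the sound source. The sketch below is a crude approximation using only interaural level and time differences (no head-related filtering), and is in no way Papa Sangre’s actual engine:

```python
import numpy as np

def binaural_pan(mono, azimuth_deg, sr=44100, max_itd_s=0.00066):
    """Crude binaural placement via interaural level and time differences.
    azimuth_deg: 0 = straight ahead, +90 = hard right, -90 = hard left."""
    az = np.radians(azimuth_deg)
    # Interaural level difference: the near ear is louder, the far ear shadowed
    right_gain = np.sqrt(0.5 * (1 + np.sin(az)))
    left_gain = np.sqrt(0.5 * (1 - np.sin(az)))
    # Interaural time difference: the far ear hears the sound slightly later
    delay = int(round(abs(np.sin(az)) * max_itd_s * sr))  # in samples
    delayed = np.concatenate([np.zeros(delay), mono])[:len(mono)]
    left = left_gain * (delayed if az > 0 else mono)   # source right -> left ear late
    right = right_gain * (delayed if az < 0 else mono) # source left -> right ear late
    return np.stack([left, right], axis=1)

# e.g. a decaying noise burst (footstep-like) 60 degrees to the player's left
steps = np.random.randn(44100) * np.exp(-np.linspace(0, 8, 44100))
stereo = binaural_pan(steps, azimuth_deg=-60)
```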

A commercial success, Papa Sangre has been named “Game of the Week” by Apple, received a 9/10 rating from IGN, a top review from 148apps, and many positive reviews from fans. Gamezebo concludes an extremely positive review of Papa Sangre by calling it “a completely unique experience. It’s tense and horrifying and never lets you relax. By focusing on one aspect of the game so thoroughly, the developers have managed to create something that does one thing really, really well…Just make sure to play with the lights on.” This commercial attention has yielded academic feedback as well. In a paper entitled “Towards an analysis of Papa Sangre, an audio-only game for the iPhone/iPad,” Andrew Hugill (2012) celebrates games like Papa Sangre for providing “an excellent opportunity for the development of a new framework for electroacoustic music analysis.” Despite such attention–and perhaps because of it–I argue that Papa Sangre deserves a critical second listen.

Between February and April of 2012, I played Papa Sangre several times a day and detailed the auditory environments of the game in my blog posts. However, by the time I reached the final level, I still wasn’t sure how to answer my initial question. Had Papa Sangre really engendered a novel experience, or could it simply be thought of as a video game with no video? I noted in my final post:

I am realizing that what makes the audio gaming experience seem so different from the experience of playing video games is the perception that the virtual space, the game itself, only exists through me. The “space” filled by the levels and characters within the game only exists between my ears after it is projected through the headphones and then I extend this world through my limbs to my extremities, which feeds back into the game through the touch screen interface, moving in a loop like an electric current…Headphones are truly a necessity in order to beat the game, and in putting them on, the user becomes the engine through which the game comes to life…When I play video games, even the ones that utilize a first-person perspective, I feel like the game space exists outside of me, or rather ahead of me, and it is through the controller that I am able to project my limbs forward into the game world, which in turn structures how I orient my body. Video game spaces of course, do not exist outside of me, as I need my eyes and ears to interpret the light waves and sound waves that travel back from the screen, but I suppose what matters here is not what is actually happening, but how what is happening is perceived by the user. Audio games have the potential to engender completely different gaming experiences because they make the user feel like he or she is the platform through which the game-space is actualized.

Upon further reflection, however, I recognize that Papa Sangre creates an environment designed to be immersive only to certain kinds of users. A close reading of Papa Sangre reveals bias against both female and disabled players.

Take Papa Sangre’s problematic relationship with blindness. The protagonist is not a visually impaired individual operating in a horrifying new world, but rather a sighted individual who is thrust into a world that is horrifying by virtue of its darkness. The first level of the game is simply entitled “In the Dark.” When the female guide first appears to the protagonist in that same level, she states:

Here you are in the land of the dead, the realm ruled by Papa Sangre…In this underworld it is pitch dark. You cannot see a thing; you can’t even see me, a fluttery watery thing here to help you. But you can listen and move…You must learn how to see with your ears. You will need these powers to save the soul in peril and make your way to the light.

Note the conversation between 3:19 and 3:56.

The game envisions an audience who find blindness to be necessarily terrifying. By equating an inability to see with death and fear, developers are intensifying popular horror genre tropes that diminish the lived experiences of those with visual impairments and unquestioningly present blindness as a problem to overcome. Rather than challenging the relationship between blindness and vulnerability that horror-game developers fetishize, Papa Sangre misses the opportunity to present a visually impaired protagonist who is not crippled by his or her disability.


Disconcertingly, audio games have been tied to game accessibility efforts by developers and players alike for many years. In a 2008 interview Kenji Eno, founder of WARP (a company that specialized in audio games in the late 90s), claimed  his interactions with visually impaired gamers yielded a desire to produce audio games. Similarly forums like audiogames.net showcase users and developers interested in games that cater to gamers with impaired vision.

In terms of its actual game-play, Papa Sangre is navigable without visual cues. After playing the game for just two weeks, I was able to explore each level with my eyes closed. Still, the ease with which gamers can play the game without looking at the screen does not negate the tension caused by recycled depictions of disability that are in many ways built into the storyline’s foundation.

The game also fails to engage gender in any complexity. Although the main character’s appearance is never shown, the protagonist is aurally gendered male. Most notable are the deep grunting noises made when he falls to the ground. For me, this acted as a barrier to imagining a fully embodied virtual experience. Those deep grunts revealed many assumptions the designers must have made about the imagined, and perhaps intended, audience of the game. While lack of diversity is certainly an issue at the heart of all entertainment media, Papa Sangre’s oversight directly contradicts the message of the game, wherein the putative goal is to experience an environment that enhances one’s sense of self within the virtual space.

On October 31st, 2013, Somethin’ Else will release Papa Sangre II. A quick look at the trailer suggests that the developers have not changed the formula. The 46-second clip warns that the game is “powered by your fear” after noting, “This Halloween, you are dead.”

[video]

It appears that an inability to see is still deeply connected with notions of fear and death in the game’s sequel. This does not have to be the case. Why not design a game where impairment is not framed as a hindrance or source of fear? Why not build a game with the option to choose between different sounding voice actors and actresses? Despite its popularity, however, Papa Sangre is by no means representative of general trends across the spectrum of audio-based game design. Oldenburg (2013) points out that over the past decade many independent game developers have been designing experimental “blind games” that eschew themes and representations found in popular video games in favor of the abstract relationships between diegetic sound and in-game movement.

Whether or not they eventually consider the social politics of gaming, Papa Sangre’s developers already send a clear message to all gamers by hardwiring disability and gender into both versions of the game while promoting a limited image of “immersion.” Hopefully as game designers Somethin’ Else grow in popularity and prestige, future developers that use the “Papa Engine” will be more cognizant of the privilege and discrimination embedded in the sonic cues of its framework.  Until then, if you are not a sighted male gamer, you must prepare yourself to be immersed in constant aural cues that this experience, like so many others, was not designed with you in mind.

Enongo Lumumba-Kasongo is a PhD student in the Department of Science and Technology Studies at Cornell University. Since completing a senior thesis on digital music software, tacit knowledge, and gender under the guidance of Trevor Pinch, she has become interested in pursuing research in the emergent field of sound studies. She hopes to combine her passion for music with her academic interests in technological systems, bodies, politics and practices that construct and are constructed by sound. More specifically she would like to examine the politics surrounding low-income community studios, as well as the uses of sound in (or as) electronic games.  In her free time she produces hip hop beats and raps under the moniker Sammus (based on the video game character, Samus Aran, from the popular Metroid franchise).

REWIND! . . . If you liked this post, you may also dig:

Goalball: Sport, Silence, and Spectatorship– Melissa Helquist

Playing with Bits, Pieces, and Lightning Bolts: An Interview With Sound Artist Andrea Parkins– Maile Colbert

Video Gaming and the Sonic Feedback of Surviellance: Bastion and the Stanley Parable– Aaron Trammell
