Pleasure Beats: Using Sound for Experience Enhancement 

"Biophonic Garden" by Flickr user Rene Passet, CC BY-NC-ND 2.0

After a rockin' (and seriously informative) series of podcasts from Leonard J. Paul, a Drrty South banger dropped by SO! regular Regina Bradley, a screamtastic meditation from Yvon Bonenfant, a heaping plate of food sounds from Steph Ceraso, and crowd chants courtesy of Kariann Goldschmidt's work on live events in Brazil, our summer Sound and Pleasure series comes to a stirring (and more intimate) conclusion. Tune into Justyna Stasiowska's frequency below. And thanks for engaging the pleasure principle this summer! –JS, Editor-in-Chief

One of my greatest pleasures is lying in bed, eyes closed and headphones on. I attune to a single stimulus while being enveloped in sound. Using sensory deprivation techniques like blindfolding and isolating headphones is a simple recipe for relaxation, but the Digital Drugs website offers you more. A user can play their mp3 files and surround themselves with an acoustical downpour that builds and then develops into gradient waves. The user feels as if caught in a hailstorm, surrounded by this constant gritty aural movement. Transfixed by the feeling of noise, the outside seems indistinguishable from the inside.

Screenshot courtesy of the author

Sold by the I-Doser company, Digital Drugs use mp3 files to deliver binaural beats in order to "simulate a desired experience." The user manual advises lying in a dark and silent room with headphones on while listening to the recording. Simply purchase the mp3 and fill the prescription by listening. Depending on user needs, the experience can be preprogrammed with a specific scenario; in this way users can condition themselves with Digital Drugs in order to feel a certain way. The user controls the experience by choosing, say, the "student" or "confidence" dose, and by deciding whether they'd like their high mild, like marijuana, or intense, like cocaine. The receiver is able to perceive every reaction of their body as a drug experience that they themselves produced. The "dosing" of these aural drugs is restricted by a medical warning, and "dose advisors" are available for consultation.

Screenshot courtesy of the author

Thus, the overall presentation of Digital Drugs resembles a crisscross of medical and narcotic clichés under the slogan "Binaural Brainwave doses for every imaginable mood." While researching the phenomenon of Digital Drugs, I have tried not to dismiss them as another gimmick or a new-age meditation prop. Rather, I argue that the I-Doser company offers a simulation of a drug experience by using the discourse of psychoactive substances to describe sounds: the user becomes an actor taking part in a performance.

By tracing these strategies on macro and micro scales, I show a body emerging from a new paradigm of health. I argue that we have become a psychosomatic creature I call the inFORMational body: a body formed by information, which shapes the practices of health undertaken to feel good and which in turn forms us. This body is networked, much like a fractal, and connects different agencies operating at both macro (society) and micro (individual) scales.

Macroscale Epidemic: The Power of Drug Representation

Heinrich Wilhelm Dove described binaural beats in 1839: low-frequency pulsations perceived when two tones at slightly different frequencies are presented separately, through stereo headphones, to each of the subject's ears. The difference between the tones must be relatively small, up to about 30 Hz, and the tones themselves must not exceed roughly 1000 Hz. Subsequently, scientific authorities presented the phenomenon as a tool for stimulating the brain in therapy for neurological afflictions. Gerald Oster described the applications in 1973, and the Monroe Institute later continued this research in order to use binaural beats in meditation and "expanding consciousness" as a crucial part of its self-improvement programs.
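For readers who want to hear the phenomenon Dove described, here is a minimal sketch in Python: it writes a stereo WAV file with a 400 Hz tone in one ear and a 410 Hz tone in the other, producing a perceived 10 Hz beat. The frequencies, amplitude, duration, and filename are illustrative choices of mine, not parameters taken from I-Doser's products.

```python
# Minimal binaural-beat stimulus: two pure tones, slightly detuned,
# one per stereo channel. All values are illustrative, not I-Doser's.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100                  # samples per second
DURATION = 30.0                      # seconds
F_LEFT, F_RIGHT = 400.0, 410.0       # carrier tones; 10 Hz difference = beat rate

t = np.linspace(0.0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
left = 0.3 * np.sin(2 * np.pi * F_LEFT * t)     # left-ear tone
right = 0.3 * np.sin(2 * np.pi * F_RIGHT * t)   # right-ear tone

# Stack into an (n_samples, 2) array and write a 16-bit stereo WAV.
stereo = np.stack([left, right], axis=1)
wavfile.write("binaural_10hz.wav", SAMPLE_RATE, (stereo * 32767).astype(np.int16))
```

Played over headphones, so that each tone reaches only one ear, the 10 Hz pulsation is constructed inside the listener's auditory system; played from a single loudspeaker, the two tones simply mix into an ordinary acoustic beat.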

I-Doser then molded this foundational research into a narrative presenting binaural beats as brain stimulation for a desired experience. Binaural beats themselves can be understood simply as an acoustic phenomenon with applications in practices like meditation or medical therapy.

I-Doser also folds unverified claims about binaural beats into a narrative assembled from scattered research findings; it connects these authorities with YouTube recordings of human reactions to Digital Drugs. Video testimonies of Digital Drugs users caused a considerable stir among both parents and teachers in American schools two years ago; one American school even banned mp3 players as a precautionary measure. In a typical YouTube video one can see a person lying down with headphones on. After a while we see an involuntary body movement that in some videos might resemble a seizure. Losing control over one's body becomes the highlight of the footage, alongside the subjective account also present in the video. The body movements are framed as a drug experience both for the viewer, who is a vicarious witness, and the participant, who has an active experience.

This use of footage as evidence was popularized as early as the 1960s, when military films showed reactions to psychoactive substances such as LSD.

In the same manner as the Digital Drugs video, the army footage highlights the process of losing control over one’s body, complete with subjective testimonies as evidence of the psychoactive substance’s power.

This kind of visualization is usually fueled by paranoia, akin to Cold War fears, depicting daily attacks by an invisible enemy upon unaware subjects. Information about binaural beats issued by authoritative agencies created a reference base that fueled the concern framing the YouTube videos as evidence of a drug experience. It shows that the angst isn't triggered by the technology itself, in this case Digital Drugs, but by the form in which the "invisible attack" is presented: through sound waves. The manner of framing is more important than the hypothetical action itself; context changes recognition.

Microscale Paradigm Shift: Health as Feeling 

On an individual level, did feeling better always mean being healthy? In Histoire des pratiques de santé: le sain et le malsain depuis le Moyen Âge, Georges Vigarello, working in the Foucauldian tradition of biopolitics, explains that well-being became a medicalized condition in the 20th century with growing attention to mental health. Being healthy was no longer only about the good condition of the body but became a state of mind; feeling was important as an overall recognition of oneself. In the biopolitical perspective, Vigarello points out, health became more than just the government's concern for individual well-being; it was maintained by medical techniques and technologies.

In the case of Digital Drugs, the well-being of children was safely governed by parents and by media coverage that drove prevention efforts against the "sound drugs" in schools. Similarly, the UAE called for a ban on "hypnotic music," citing it as an illegal drug like cannabis or ecstasy. From this perspective, I would add that feeling better then becomes never-ending warfare; well-being becomes understood as a state (as in condition, and as in governed territory).

Well-being is also an obligation to society, carried out through specific practices. What does a healthy lifestyle actually mean? Its meaning includes self-governance: controlling yourself, keeping fit, discipline (embodying the rules). In order to do this you need guidance: authorities (health experts and trainers) and common knowledge (the "google it" modus operandi). All of these agencies create a strategy to make you feel good every day and perform at a high level. Digital Drugs, then, become products that promise to boost your energy, increase your endurance, and extend your mental capabilities. High performance is redefined as a state that enables instant access to happiness, pleasure, and relaxation.

"Submerged" by Flickr user Rene Passet, CC BY-NC-ND 2.0


inFORMational Body 

Vigarello reflects that understanding health in terms of low/high performance, itself based on the logic of consumption, created the concept of limitless enhancement. Here he refers to the informational model, connecting past assumptions about health with techniques of self-governing. It is based on the senses and on an awareness of oneself through "intellectual" practices like relaxation and "probing oneself" (or knowing what vitamins you should take). The medical apparatus's priority, moreover, shifted from keeping someone in good health to maintaining well-being. The subjective account became the crucial element of a diagnosis, supporting itself on information from different sources in order to produce the feeling of a limitless "better." This strategy relies strongly on the use of technologies, attention to the sensual, and self-recognition: precisely the methodology behind Digital Drugs' focus on enhancing well-being.

Still, this inFORMational body needs a regulatory system. How do we know that we really feel better? Apart from the media well-being campaign (and the amount of surveillance it involves), we are constantly asked about our health status in the common greeting phrase, though its unheimlich quality becomes apparent only to non-Anglophone speakers. These checkpoint techniques become an everyday instrument of discipline and rely on an obligation to express oneself in social interactions.

So how do we feel? As for now, everything seems “OK.”

Featured image: “Biophonic Garden” by Flickr user Rene Passet, CC BY-NC-ND 2.0

Justyna Stasiowska is a PhD student in the Performance Studies Department at Jagiellonian University. She is preparing a dissertation under the working title "Noise: Performativity of Sound Perception," in which she argues that frequencies don't have a strictly programmed effect on the receiver and that the way sounds are experienced is determined by frames or modes of perception established by the situation and cognitive context. Justyna earned her M.A. in Drama and Theater Studies. Her thesis was devoted to the notion of liveness in the context of the strategies used by contemporary playwrights to manipulate the recipients' cognitive apparatus using the DJ figure. You can find her on Twitter and academia.edu.

REWIND! If you liked this post, check out:

Papa Sangre and the Construction of Immersion in Audio Games–Enongo Lumumba-Kasongo

On Sound and Pleasure: Meditations on the Human Voice–Yvon Bonenfant

This is Your Body on the Velvet Underground–Jacob Smith

 

The Better to Hear You With, My Dear: Size and the Acoustic World


Today the SO! Thursday stream inaugurates a four-part series entitled Hearing the UnHeard, which promises to blow your mind by way of your ears. Our Guest Editor is Seth Horowitz, a neuroscientist at NeuroPop and author of The Universal Sense: How Hearing Shapes the Mind (Bloomsbury, 2012), whose insightful work brings us directly to the intersection of the sciences and the arts of sound.

That’s where he’ll be taking us in the coming weeks. Check out his general introduction just below, and his own contribution for the first piece in the series. — NV

Welcome to Hearing the UnHeard, a new series of articles on the world of sound beyond human hearing. We are embedded in a world of sound and vibration, but the limits of human hearing let us hear only a small piece of it. The quiet library screams with the ultrasonic pulsations of fluorescent lights and computer monitors. The soothing waves of a Hawaiian beach are drowned out by the thrumming infrasound of underground seismic activity near "dormant" volcanoes. Time, distance, and luck (and occasionally really good vibration isolation) separate us from the explosive sounds of world-changing impacts between celestial bodies. And vast amounts of information, ranging from the songs of auroras to the sounds of dying neurons, can be made accessible and understandable by translating them into human-perceivable sounds through data sonification.

Four articles will examine how this "unheard world" affects us. My first post below will explore how our environment and evolution have constrained what is audible, and what tools we use to bring the unheard into our perceptual realm. In a few weeks, sound artist China Blue will talk about her experiences recording the Vertical Gun, a NASA asteroid impact simulator which helps scientists understand the way in which big collisions have shaped our planet (and is very hard on audio gear). Next, Milton A. Garcés, founder and director of the Infrasound Laboratory at the University of Hawaii at Manoa, will talk about volcano infrasound, and how acoustic surveillance is used to warn about hazardous eruptions. And finally, Margaret A. Schedel, composer and Associate Professor of Music at Stony Brook University, will help readers explore the world of data sonification, letting us listen in and gain greater intellectual and emotional understanding of the world of information by converting it to sound.

– Guest Editor Seth Horowitz

Although light moves much faster than sound, hearing is your fastest sense, operating about 20 times faster than vision. Studies have shown that we think at the same “frame rate” as we see, about 1-4 events per second. But the real world moves much faster than this, and doesn’t always place things important for survival conveniently in front of your field of view. Think about the last time you were driving when suddenly you heard the blast of a horn from the previously unseen truck in your blind spot.

Hearing also occurs prior to thinking, with the ear itself pre-processing sound. Your inner ear responds to changes in pressure that directly move tiny hair cells organized by frequency; these send signals about which frequency was detected (and at what amplitude) toward your brainstem, where things like location, amplitude, and even how important the sound may be to you are processed, long before the signals reach the cortex where you can think about them. And since hearing sets the tone for all later perceptions, our world is shaped by what we hear (Horowitz, 2012).

But we can't hear everything. Rather, what we hear is constrained by our biology, our psychology, and our position in space and time. Sound is really about how the interaction between energy and matter fills space with vibrations. This makes size, of the sender, the listener, and the environment, one of the primary features that defines your acoustic world.

You've heard about how much better your dog's hearing is than yours. I'm sure you got a slight thrill when you thought you could actually hear the "ultrasonic" dog-training whistles that are supposed to be inaudible to humans (sorry, but every one I've tested puts out at least some energy in the upper range of human hearing, even if it does sound pretty thin). But it's not that dogs hear better. Actually, dogs and humans show about the same sensitivity to sound in terms of sound pressure, with humans' most sensitive region running from 1-4 kHz and dogs' from about 2-8 kHz. The difference is a question of range, and range is tied closely to size.

Most dogs, even big ones, are smaller than most humans, and their auditory systems are scaled similarly. A big dog is about 100 pounds, much smaller than most adult humans. And since body parts tend to scale in a coordinated fashion, one of the first places to search for a link between size and frequency is the tympanum or eardrum, the earliest structure that responds to pressure information. An average dog's eardrum is about 50 mm², whereas an average human's is about 60 mm². In addition, while a human's cochlea is a spiral of 2.5 turns that holds about 3500 inner hair cells, your dog's has 3.25 turns and about the same number of hair cells. In short: dogs probably have better high-frequency hearing because their eardrums are better tuned to shorter wavelength sounds and their sensory hair cells are spread out over a longer distance, giving them a wider range.

Interest in how hearing works in animals goes back centuries. Classical image of comparative ear anatomy from 1789 by Andreae Comparetti.

Then again, if hearing were just about the size of the ear components, you'd expect that yappy 5-pound Chihuahua to hear much higher frequencies than the lumbering 100-pound St. Bernard. Yet hearing sensitivity at the two ends of the dog spectrum doesn't vary by much. This is because there's a big difference between what the ear can mechanically detect and what the animal actually hears. Chihuahuas and St. Bernards are both breeds derived from a common wolf-like ancestor that probably didn't have as much variability as we've imposed on the domesticated dog, so their brains are still largely tuned to hear what a medium-to-large pseudo-wolf-like animal should hear (Heffner, 1983).

But hearing is more than just the detection of sound. It's also important to figure out where a sound is coming from. A sound's location is calculated in the superior olive, nuclei in the brainstem that compare the difference in time of arrival of low-frequency sounds at your two ears, and the difference in amplitude between your ears (because your head gets in the way, casting a sound "shadow" on the side furthest from the source) for higher-frequency sounds. This means that animals with very large heads, like elephants, will be able to figure out the location of longer-wavelength (lower-pitched) sounds, but will probably have trouble localizing high-pitched sounds because the shorter wavelengths will not even reach the other side of their heads at a useful level. On the other hand, smaller animals, which often have large external ears, are under greater selective pressure to localize higher-pitched sounds, but have heads too small to pick up the very low infrasonic sounds that elephants use.
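A rough sense of why head size matters for localization comes from estimating the largest interaural time difference (ITD) a given head can produce. The sketch below uses the simple spherical-head (Woodworth) approximation; the head radii are rough illustrative guesses on my part, not measurements from the studies cited here.

```python
# Back-of-the-envelope interaural time differences for heads of different
# sizes, using the spherical-head (Woodworth) approximation:
#     ITD(theta) = (r / c) * (theta + sin(theta))
# where r is head radius, c the speed of sound, and theta the source azimuth.
# The radii below are rough illustrative values, not measured data.
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def max_itd(head_radius_m: float) -> float:
    """Largest ITD, for a source directly to one side (azimuth 90 degrees)."""
    theta = math.pi / 2
    return (head_radius_m / SPEED_OF_SOUND) * (theta + math.sin(theta))

for name, radius in [("mouse", 0.006), ("human", 0.0875), ("elephant", 0.5)]:
    print(f"{name:9s} max ITD is about {max_itd(radius) * 1e6:6.0f} microseconds")
```

With these numbers, an elephant-sized head yields interaural delays of a few milliseconds, easy to exploit as a timing cue for low-frequency sounds, while a mouse-sized head yields only a few tens of microseconds, which helps explain why small animals lean on high-frequency level differences and large external ears instead.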

Audiograms (auditory sensitivity in air, measured in dB SPL) by frequency for animals of different sizes, showing the shift of maximum sensitivity to lower frequencies with increased size. Data replotted from: Sivian and White (1933), "On minimum audible sound fields," Journal of the Acoustical Society of America, 4: 288-321; ISO (1961); Heffner and Masterton (1980), "Hearing in glires: domestic rabbit, cotton rat, feral house mouse, and kangaroo rat," Journal of the Acoustical Society of America, 68: 1584-1599; Heffner and Heffner (1982), "Hearing in the elephant: Absolute sensitivity, frequency discrimination, and sound localization," Journal of Comparative and Physiological Psychology, 96: 926-944; Heffner (1983), "Hearing in large and small dogs: Absolute thresholds and size of the tympanic membrane," Behavioral Neuroscience, 97: 310-318; Jackson et al. (1999), "Free-field audiogram of the Japanese macaque (Macaca fuscata)," Journal of the Acoustical Society of America, 106: 3017-3023.

But you as a human are a fairly big mammal. If you look up "Body Size Species Richness Distribution," which shows the relative sizes of animals living in a given area, you'll find that humans are among the largest animals in North America (Brown and Nicoletto, 1991). And your hearing abilities scale well with other terrestrial mammals, so you can stop feeling bad about your dog hearing "better." But what if, by comic-book science or alternate evolution, you were much bigger or smaller? What would the world sound like? Imagine you were suddenly mouse-sized, scrambling along the floor of an office. While the usual chatter of humans would be almost completely inaudible, the world would be filled with a cacophony of ultrasonics. Fluorescent lights and computer monitors would scream in the 30-50 kHz range. Ultrasonic eddies would hiss loudly from air conditioning vents. Smartphones would not play music, but rather hum and squeal as their displays changed.

And if you were larger? For a human scaled up to elephantine dimensions, the sounds of the world would shift downward. While you could still hear (and possibly understand) human speech and music, the fine nuances from the upper frequency ranges would be lost, voices audible but mumbled and hard to localize. But you would gain the infrasonic world, the low rumbles of traffic noise and thrumming of heavy machinery taking on pitch, color and meaning. The seismic world of earthquakes and volcanoes would become part of your auditory tapestry. And you would hear greater distances as long wavelengths of low frequency sounds wrap around everything but the largest obstructions, letting you hear the foghorns miles distant as if they were bird calls nearby.

But these sounds are still in the realm of biological listeners, and the universe operates on scales far beyond that. The sounds from objects, large and small, have their own acoustic world, many beyond our ability to detect with the equipment evolution has provided. Weather phenomena, from gentle breezes to devastating tornadoes, blast throughout the infrasonic and ultrasonic ranges. Meteorites create infrasonic signatures through the upper atmosphere, trackable using a system devised to detect incoming ICBMs. Geophones, specialized low frequency microphones, pick up the sounds of extremely low frequency signals foretelling of volcanic eruptions and earthquakes. Beyond the earth, we translate electromagnetic frequencies into the audible range, letting us listen to the whistlers and hoppers that signal the flow of charged particles and lightning in the atmospheres of Earth and Jupiter, microwave signals of the remains of the Big Bang, and send listening devices on our spacecraft to let us hear the winds on Titan.

Here is a recording of whistlers recorded by the Van Allen Probes currently orbiting high in the upper atmosphere:

When the computer freezes or the phone battery dies, we complain about how much technology frustrates us and complicates our lives. But our audio technology is also a source of wonder, not only letting us talk to a friend around the world or listen to a podcast from astronauts orbiting the Earth, but letting us listen in on unheard worlds. Ultrasonic microphones let us eavesdrop on bat echolocation and mouse songs; geophones let us wonder at elephants using infrasonic rumbles to communicate over long distances and find water. And scientific translation tools let us shift the vibrations of the solar wind and aurora, or even the patterns of pure math, into human-scaled songs of the greater universe. We are no longer constrained (or protected) by the ears that evolution has given us. Our auditory world has expanded into an acoustic ecology that contains the entire universe, and the implications of that remain wonderfully unclear.

__

Exhibit: Home Office

This is a recording made with standard stereo microphones in my home office. Aside from the usual typing, mouse clicking, and computer sounds, there are a couple of 3D printers running and some music playing: largely an environment you don't pay much attention to while you're working in it, yet acoustically very rich if you do.


This sample was made by pitch shifting the frequencies of sonicoffice.wav down so that the ultrasonic content moves into the normal human range and cuts off at about 1-2 kHz, as if you were hearing with mouse ears. Sounds normally inaudible, like the squeal of the computer monitor cycling on and the high-pitched whine of the stepper motors in the 3D printer, suddenly become much louder, while the familiar sounds are mostly gone.
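For the curious, here is a minimal sketch of one way such a "mouse ears" version could be produced: pitch-shift a high-sample-rate recording down several octaves and then low-pass the result. The post doesn't specify how sonicoffice.wav was actually processed, so the library choices (librosa, scipy, soundfile), the four-octave shift, and the placeholder filename below are my assumptions rather than the author's pipeline.

```python
# Illustrative "mouse ears" transform: shift a (hypothetically ultrasonic-
# capable) recording down by four octaves, then low-pass it around 2 kHz.
# "office_ultrasonic.wav" is a placeholder; capturing ultrasonic content
# would require a high sample rate (e.g. 192 kHz or more).
import librosa
import soundfile as sf
from scipy.signal import butter, sosfilt

y, sr = librosa.load("office_ultrasonic.wav", sr=None)      # keep native rate

# Four octaves (48 semitones) down: content near 48 kHz lands around 3 kHz.
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=-48)

# Roll off everything above ~2 kHz, mimicking the cutoff described above.
sos = butter(8, 2000, btype="low", fs=sr, output="sos")
sf.write("office_mouse_ears.wav", sosfilt(sos, shifted), sr)
```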


This recording of the office was made with a Clarke Geophone, a seismic microphone used by geologists to pick up underground vibration. Its primary sensitivity is around 80 Hz, although its range extends from 0.1 Hz up to about 2 kHz. All you hear in this recording are very low-frequency sounds and impacts (footsteps, keyboard strikes, vibration from printers, some fan vibration) that you usually ignore, since your ears are not very well tuned to frequencies under 100 Hz.


Finally, this sample was made by pitch shifting the frequencies of infrasonicoffice.wav up, as if you had grown to elephantine proportions. Footsteps and computer fan noise (usually almost undetectable at 60 Hz) become loud and tonal, and all the normal pitch of the music and computer typing has disappeared, aside from the bass. (WARNING: the fan noise is really annoying.)


The point is: a space can sound radically different depending on the frequency ranges you hear. Different elements of the acoustic environment pop out depending on the type of recording instrument you use (ultrasonic microphones, regular microphones, or geophones) and on the size and sensitivity of your ears.

Spectrograms (plots of acoustic energy [color] over time [horizontal axis] by frequency band [vertical axis]) from a 90-second recording in the author's home office, covering the auditory range from the ultrasonic (>20 kHz, top) to the sonic (20 Hz-20 kHz, middle) to the low-frequency and infrasonic (<20 Hz, bottom).
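A figure along these lines can be sketched from a single high-sample-rate recording. The snippet below is one possible way to do it; "office_full_band.wav" is a placeholder standing in for a recording like the author's, not the actual file.

```python
# Sketch of a full-band spectrogram in the spirit of the figure above:
# one recording displayed from infrasonic to ultrasonic frequencies.
# "office_full_band.wav" is a placeholder; ultrasonic content requires
# a sample rate well above 40 kHz (e.g. 192 kHz).
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

sr, y = wavfile.read("office_full_band.wav")
if y.ndim > 1:
    y = y.mean(axis=1)                          # mix to mono

# Long analysis window for fine low-frequency resolution.
f, t, Sxx = spectrogram(y.astype(float), fs=sr, nperseg=1 << 15)
power_db = 10 * np.log10(Sxx + 1e-12)           # to dB, avoiding log(0)

fig, ax = plt.subplots(figsize=(9, 5))
mesh = ax.pcolormesh(t, f[1:], power_db[1:], shading="auto")  # drop DC bin for log axis
ax.set_yscale("log")
ax.axhline(20, color="white", ls="--", lw=0.8)       # infrasonic / sonic boundary
ax.axhline(20_000, color="white", ls="--", lw=0.8)   # sonic / ultrasonic boundary
ax.set_xlabel("Time (s)")
ax.set_ylabel("Frequency (Hz, log scale)")
fig.colorbar(mesh, ax=ax, label="Power (dB)")
plt.tight_layout()
plt.show()
```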

Featured image by Flickr User Jaime Wong.

Seth S. Horowitz, Ph.D. is a neuroscientist whose work in comparative and human hearing, balance and sleep research has been funded by the National Institutes of Health, National Science Foundation, and NASA. He has taught classes in animal behavior, neuroethology, brain development, the biology of hearing, and the musical mind. As chief neuroscientist at NeuroPop, Inc., he applies basic research to real world auditory applications and works extensively on educational outreach with The Engine Institute, a non-profit devoted to exploring the intersection between science and the arts. His book The Universal Sense: How Hearing Shapes the Mind was released by Bloomsbury in September 2012.


REWIND! If you liked this post, check out …

Reproducing Traces of War: Listening to Gas Shell Bombardment, 1918– Brian Hanrahan

Learning to Listen Beyond Our Ears– Owen Marshall

This is Your Body on the Velvet Underground– Jacob Smith

Unsettled Listening: Integrating Film and Place


Welcome to the third and final installment of Sculpting the Film Soundtrack, our series about sound in contemporary films. We've been focusing on how filmmakers are blurring the boundaries between music, speech, and sound effects – in effect, integrating distinct categories of soundtrack design.

In our first post, Benjamin Wright showed how celebrated composer Hans Zimmer thinks across traditional divisions of labour to integrate film sound design with music composition. Danijela Kulezic-Wilson followed up with an insightful piece on the integration of audio elements in Shane Carruth's Upstream Color, suggesting how scholars can apply principles of music, like tempo and rhythm, to their analyses of the interactions between a film's images and sounds. In this final entry, Randolph Jordan considers another dimension of integration: a film's sounds and the place where it was produced. In his provocative and insightful reading of the quasi-documentary East Hastings Pharmacy, Jordan, who is completing a postdoctoral fellowship at Simon Fraser University, elaborates on how the concept of "unsettled listening" can clue us into the relationship between a film and its origins of production. You'll be able to read more about "unsettled listening" in Jordan's forthcoming book, tentatively titled Reflective Audioviewing: An Acoustic Ecology of the Cinema, to be published by Oxford University Press.

I hope you’ve enjoyed taking in this series as much as I’ve enjoyed editing it with the help of the marvelous folks at SO!. Thanks for reading. — Guest Editor Katherine Spring

A mother and son of First Nations ancestry sit in the waiting area of a methadone clinic in Vancouver's Downtown Eastside, their attention directed toward an offscreen TV. A cartoon plays, featuring an instrumental version of "I've Been Working on the Railroad" that mingles with the operating sounds of the clinic and ambience from the street outside. The tune is punctuated by a metallic clinking sound at the beginning of each bar, calling to mind the sound of driving railway spikes that once echoed just down the street as the City of Vancouver was incorporated as the western terminus of the Canadian Pacific Railway (thus beginning the cycle of state-sanctioned erasure of indigenous title to the land). The familiar voice of Bugs Bunny chimes in: "Uh, what's all the hubbub, bub?"


Hubbub indeed. Let’s unpack it.

The scene appears one third of the way through East Hastings Pharmacy (Antoine Bourges, 2012), a quasi-documentary set entirely within this clinic, staging interactions between methadone patients (played by locals and informed by their real-life experiences) and the resident pharmacist (played by an actress). Vancouver's Downtown Eastside, dubbed Canada's "worst neighborhood" for its notorious concentration of transients and public drug use, is also home to the largest community of First Nations peoples within the city limits, a product of the long history of dispossession in the surrounding areas. When the film presents this indigenous pair listening to a Hollywood fabrication of the sounds that marked their loss of title to the city, it is a potent juxtaposition, especially given the American infiltration of Vancouver's mediascape since the 1970s. Long known as "Hollywood North," Vancouver is more famous as a stand-in for myriad other parts of the world than for representing itself, its regional specificity endlessly overwritten with narratives that hide the city and its indigenous presence from public awareness.

"Quidam +  Noise" graffiti in Downtown Vancouver,  April 6, 2013, by Flickr User Kevin Krebs


In her essay “Thoughts on Making Places: Hollywood North and the Indigenous City,” filmmaker Kamala Todd stresses how media can assist the process of re-inscribing local stories into Vancouver’s consciousness. East Hastings Pharmacy is one such example, lending some screen time to urban Natives in the 21st Century city. But Todd reminds us that audiences also have a responsibility “to learn the stories of the land” that have been actively erased in dominant media practices, and to bring this knowledge to our experience of the city in all its incarnations (9). Todd’s call resonates with a process that Nicholas Blomley calls “unsettling the city” in his book of the same name. Blomley reveals Vancouver as a site of continual contestation and mobility across generations and cultural groups, and calls for an “unsettled” approach that can account for the multiple overlapping patterns of use that are concealed by “settled” concepts of bounded property. With that in mind, I propose “unsettled listening” as a way of experiencing the city from these multiple positions simultaneously. Rick Altman taught us to hear any given sound event as a narrative by listening for the auditory markers of its propagation through physical space, and recording media, over time (15-31). Unsettled listening invites us to hear through these physical properties of mediatic space to the resonating stories revealed by the overlapping and contradictory histories and patterns of use to which these spaces are put, all too often unacknowledged in the wake of settler colonialism.

East Hastings Pharmacy provides a great opportunity to begin the practice of unsettled listening. The film’s status as an independent production amidst industrial shooting is marked by the intersection of studio-fabricated sound effects and direct sound recording, as in the example described above, and further complicated by the film’s own hybrid of fiction and documentary modes. That speaks to the complexity of overlapping filmmaking practices in Vancouver today, a situation embedded within the intersecting claims to land use and cultural propriety on the streets of Vancouver’s Downtown Eastside. To unsettle listening is to hear all these overlapping situations as forms of resonance that begin with the original context of the televised cartoon and accumulate as they spread through the interior of the clinic and outwards across the surrounding land. So let’s try this out.


The cartoon is Falling Hare (Robert Clampett, 1943), a good example of the noted history of cross-departmental integration at The Warner Bros.’ cartoon studios. The scene in question begins at 1:55, and here the metallic clinking sound is just as likely to have been produced by one of music orchestrator Carl Stalling’s percussionists as by sound effects editor Treg Brown. This integration can be heard in the way that the music’s unspoken reference to railway construction charges each clink with the connotation of hammer on spike. However, the image track in Falling Hare doesn’t depict railway construction, but rather a gremlin whacking the nose of a live bomb in an attempt to do away with enemy Bugs seated on top. James Lastra would say (by way of Christian Metz) that the clinking sound is “legible” as hammer on spike for the ease with which the sound can be recognized as emanating from this implied source (126). But this legibility is premised upon a lack of specificity that also allows the sound to become interchangeable with something else, as is the case in this cartoon.

Screen Capture from Falling Hare

East Hastings Pharmacy capitalizes on this interchangeability by re-inscribing the clinking sound’s railway connotations, first by stripping the original image and then by presenting this sound in the context of the dire social realities of Vancouver’s Downtown Eastside as the city’s sanctioned corral for the markers of urban poverty – and indigeneity – that officials don’t want to spill out across the neighborhood’s increasingly gentrified perimeter.

As one of a string of Warner Bros. cartoons put in the service of WWII propaganda, the Falling Hare soundtrack also resonates with wartime xenophobia and imperialist expansion, branches of the same pathos that leads to the effacing of indigenous culture from the consciousness of colonizing peoples. In Vancouver, this has taken the form of what Jean Barman calls “Erasing Indigenous Indigeneity,” the process of chasing the area’s original peoples off the land while importing aboriginal artifacts from elsewhere to maintain a Native chic deemed safe for immigrant consumption (as when the city paid “homage” to the vacated Squamish residents of downtown Vancouver’s Stanley Park by erecting Kwakiutl totem poles imported from 200km north on Vancouver Island) (26). This is an interchangeability of cultural heritage premised upon a lack of specificity, the same quality that allows “legible” sound effects to become synchretic with a variety of implied sources. And this process is not unlike the interchangeability of urban spaces when shooting Vancouver for Seattle, New York, or Frankfurt, emphasizing generic qualities of globalized urbanization while suppressing recognizable soundmarks from the mix (such as the persistent sound of float plane propellers that populate Vancouver harbour, the grinding and screeching of trains in the downtown railyard, or the regular horn blasts from the local ferry runs just north of the city).

The high-concept legibility of Warner Bros.' sound effects – used in Falling Hare to play on listeners' expectations to comic effect – is further unsettled by its presentation within the context of documentary sound conventions in East Hastings Pharmacy. Bourges' film commits to regional specificity in part through the use of location sound recording, which, as Jeffrey K. Ruoff identifies in "Conventions of Documentary Sound," is particularly valued as a marker of authenticity (27-29). While Bourges stages the action inside the clinic, the film features location recordings of the rich street life, audible and visible through the clinic's windows, that proceeds unaffected by the cameras and microphones. This situation is all the more potent when we account for the fact that, in this scene, the location-recorded cartoon soundtrack and ambient sound effects were added in post-production, and so represent a highly conscious attempt to channel the acoustic environment according to the conventions of "authentic" sound in documentary film.


Screen Capture, East Hastings Pharmacy

While the film uses location recording as a conscious stylistic choice to evoke documentary convention, it does so to engage meaningfully with the social situation in the Downtown Eastside, underlining Michel Chion’s point that “rendered” film sound – fabricated in studio to evoke the qualities of a particular space – is just as capable of engaging the world authentically (or inauthentically) as “real” sound captured on location (95-98). By presenting this Hollywood cartoon as an embedded element within the soundscape of the clinic, using a provocative mix of location sound and studio fabrication, East Hastings Pharmacy unsettles Hollywood’s usual practice of erasing local specificity, inviting us to think of runaway projects in the context of their foreign spaces of production and the local media practices that sit next to them.

Finally, this intersection of sonic styles points to the complex relationships that exist between the domains of independent and industrial production around Vancouver. In his book Hollywood North, Mike Gasher argues for thinking about filmmaking in British Columbia as a resource industry, pointing to how the provincial government has offered business incentives for foreign film production similar to those in place for activities like logging and fishing. Here we can consider how the local film industry might follow the same unsustainable patterns of extraction as other resource industries, all premised upon willful ignorance of indigenous uses of the land. Yet as David Spaner charts in Dreaming in the Rain, the ability to make independent films in Vancouver has become largely intertwined with the availability of industrial resources in town. Just as Hollywood didn’t erase the independent film, colonization didn’t erase indigenous presence.

East Hastings Pharmacy offers a powerful example of how we can practice unsettled listening on the staged sound of Falling Hare, devoid of local context and connected to the railway only by inference, to reveal a rich integration with regional specificity as the cartoon’s auditory resonances accumulate within its new spaces of propagation. In this way we can hear local media through its transnational network, including the First Nations, to understand the overlaps between seemingly contradictory modes of being within the city. And in so doing, we can also hear through the misrepresentation of the Downtown Eastside as “Canada’s worst neighborhood” to the strength of the community that has long characterized the area for anyone who scratches the surface, an important first step along the path to unsettling the city as a whole.

Featured Image: Still from East Hastings Pharmacy

Randolph Jordan wanted to be a rock star.  Academia seemed a responsible back-up option – until it became clear that landing a professor gig would be harder than topping the Billboard charts.  After completing his Ph.D. in the interdisciplinary Humanities program at Concordia University in 2010 he floated around Montreal classrooms on contract appointments before taking up a two-year postdoctoral research fellowship in the School of Communication at Simon Fraser University. There he has been investigating geographical specificity in Vancouver-based film and media by way of sound studies and critical geography, research that will inform the last chapter of his book Reflective Audioviewing: An Acoustic Ecology of the Cinema (now under contract at Oxford University Press).  If you can’t find him hammering away at his manuscript, or recording his three young children hammering away at their Mason & Risch, look for him under Vancouver’s Burrard Bridge where he spends his “spare time” gathering film and sound material for his multimedia project Bell Tower of False Creek. Or visit him online here: http://www.randolphjordan.com

REWIND! If you liked this post, you may also dig:

Fade to Black, Old Sport: How Hip Hop Amplifies Baz Luhrmann’s The Great Gatsby– Regina Bradley

Quiet on the Set?: The Artist and the Sound of a Silent Resurgence– April Miller

Play It Again (and Again), Sam: The Tape Recorder in Film (Part Two on Walter Murch)– Jennifer Stoever

 

 
