
Sounds of Science: The Mystique of Sonification


Hearing the Unheard II

Welcome to the final installment of Hearing the UnHeard, Sounding Out!'s series on what we don't hear and how this unheard world affects us. The series started out with my post on hearing, large and small, continued with a piece by China Blue on the sounds of catastrophic impacts and Milton Garcés's piece on the infrasonic world of volcanoes. To cap it all off, we introduce The Sounds of Science by professor, cellist, and interactive media expert Margaret Schedel.

Dr. Schedel is an Associate Professor of Composition and Computer Music at Stony Brook University. Through her work, she explores the relatively new field of Data Sonification, generating new ways to perceive and interact with information through the use of sound. While everyone is familiar with infographics, the graphs and images used to convey complex information, her work explores how we can expand our understanding of even complex scientific data by using our fastest and most emotionally compelling sense, hearing.

– Guest Editor Seth Horowitz

With the invention of digital sound, the number of scientific experiments using sound has skyrocketed in the 21st century, and as Sounding Out! readers know, sonification has started to enter the public consciousness as a new and refreshing alternative modality for exploring and understanding many kinds of datasets emerging from research into everything from deep space to the underground. We seem to be in a moment in which “science that sounds” has a special magic, a mystique that relies to some extent on misunderstandings in popular awareness about the processes and potentials of that alternative modality.

For one thing, using sound to understand scientific phenomena is not actually new. Diarist Samuel Pepys wrote about meeting scientist Robert Hooke in 1666 that "he is able to tell how many strokes a fly makes with her wings (those flies that hum in their flying) by the note that it answers to in musique during their flying." Unfortunately, Hooke never published his findings, leaving researchers to speculate about his methods. One popular theory is that he tied strings of varying lengths between a fly and an ear trumpet, recognizing that sympathetic resonance would cause the correct length of string to vibrate, thus allowing him to calculate the frequency. Even Galileo used sound, demonstrating the constant acceleration of a ball due to gravity with an inclined plane fitted with thin moveable frets. By moving the frets until the clicks fell into an even tempo, he was able to arrive at a mathematical equation describing how time and distance relate when an object falls.
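To spell out the reasoning (a standard textbook reconstruction, not notation from Galileo or from this post): if the clicks arrive at an even tempo, the frets must sit at distances proportional to the squares of the click times, which is exactly the constant-acceleration law.

```latex
% Distance fallen from rest under constant acceleration a, after time t:
\[ d(t) = \tfrac{1}{2}\, a\, t^{2} \]
% If the frets click at equal time intervals t_n = nT, they must sit at
\[ d_n = \tfrac{1}{2}\, a\, (nT)^{2} \propto n^{2}, \qquad d_n - d_{n-1} \propto 2n - 1, \]
% so the gaps between successive frets grow as the odd numbers 1 : 3 : 5 : 7 ...
```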

Illustration from Robert Hooke's Micrographia (1665)

There have also been other scientific advances using sound in the more recent past. The stethoscope was invented in 1816 for auscultation, listening to the sounds of the body. It was later applied to machines, listening for the operation of their gears. Underwater sonar was patented in 1913 and is still used to navigate and communicate using hydroacoustic phenomena. The Geiger counter was developed in 1928 using principles discovered in 1908; it is unclear exactly when the distinctive sound was added. These are all examples of auditory display (AD); sonification, the generation or manipulation of sound by using data, is a subset of AD. As the foreword to The Sonification Handbook states, "[Since 1992] Technologies that support AD have matured. AD has been integrated into significant (read "funded" and "respectable") research initiatives. Some forward thinking universities and research centers have established ongoing AD programs. And the great need to involve the entire human perceptual system in understanding complex data, monitoring processes, and providing effective interfaces has persisted and increased" (Thomas Hermann, Andy Hunt, and John G. Neuhoff, The Sonification Handbook, iii).

Sonification clearly enables scientists, musicians, and the public to interact with data in a very different way, particularly compared to the more numerous techniques involving vision. Indeed, because hearing functions quite differently than vision, sonification offers an alternative kind of understanding of data (sometimes more accurate), which would not be possible using eyes alone. Hearing is multi-directional—our ears don't have to be pointing at a sound source in order to sense it. Furthermore, the frequency response of our hearing is thousands of times more precise than our vision: to reproduce a moving image, film needs a sampling rate (frame rate) of only 24 frames per second, while audio has to be sampled 44,100 times per second to accurately reproduce sound (by the Nyquist theorem, a little more than twice the roughly 20 kHz upper limit of human hearing). In addition, aural perception works on simultaneous time scales—we can take in multiple streams of audio data at once at many different dynamics, while our pupils dilate and contract, limiting how much visual data we can absorb at a single time. Our ears are also amazing at detecting regular patterns over time in data; we hear these patterns as frequency, harmonic relationships, and timbre.

Image credit: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.

But hearing isn’t simple, either. In the current fascination with sonification, the fact that aesthetic decisions must be made in order to translate data into the auditory domain can be obscured. Headlines such as “Here’s What the Higgs Boson Sounds Like” are much sexier than headlines such as “Here is What One Possible Mapping of Some of the Data We Have Collected from a Scientific Measuring Instrument (which itself has inaccuracies) Into Sound.” To illustrate the complexity of these aesthetic decisions, which are always interior to the sonification process, I focus here on how my collaborators and I have been using sound to understand many kinds of scientific data.

My husband, Kevin Yager, a staff scientist at Brookhaven National Laboratory, works at the Center for Functional Nanomaterials using scattering data from x-rays to probe the structure of matter. One night I asked him how exactly the science of x-ray scattering works. He explained that x-rays "scatter" off of all the atoms/particles in the sample and the intensity is measured by a detector. He can then calculate the structure of the material using the Fast Fourier Transform (FFT) algorithm. He started to explain FFT to me, but I interrupted him because I use FFT all the time in computer music. The same algorithm he uses to determine the structure of matter, musicians use to separate frequency content from time. When I was researching this post, I found a site for computer music which actually discusses x-ray scattering as a precursor of the FFT's use in sonic applications.
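To make the shared-algorithm point concrete, here is a minimal sketch (mine, not from the post) of the musician's side of the FFT, using numpy; the scattering side applies the same transform to a real-space density profile instead of a waveform.

```python
import numpy as np

# One second of a toy "signal": two sine components, as a musician might hear them.
sr = 44100                                   # audio sample rate (Hz)
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# The same FFT that separates frequency content from time for a musician
# turns real-space density into spatial frequencies for a scattering scientist.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / sr)

# The two strongest components come straight back out of the transform.
peaks = np.argsort(np.abs(spectrum))[-2:]
print(sorted(freqs[peaks]))                  # -> [440.0, 880.0]
```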

To date, most sonifications have used data which changes over time – a fly's wings flapping, a heartbeat, a radiation signature. Except in special cases, Kevin's data does not exist in time – it is a single snapshot. But because data from x-ray scattering is a Fourier transform of the real-space density distribution, we could use additive synthesis, with multiple simultaneous sine waves, to represent different spatial modes. Using this method, we swept through his data radially, like a clock hand, making timbre-based sonifications from the data by synthesizing sine waves with loudness based on the intensity of the scattering data and frequency based on position.

We played a lot with the settings of the additive synthesis, including the length of the sound, the highest frequency, and even the number of frequency bins (going back to the clock metaphor: pretend the clock hand is a ruler; the number of frequency bins would be the number of demarcations on the ruler), eventually arriving at a set of optimized variables. A sketch of the approach follows.
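Here is one way such a radial sweep might be implemented. This is a hedged reconstruction from the description above, not Schedel and Yager's actual code; the toy scattering image, the log-spaced pitch mapping, and the function name sonify_radial are my own assumptions.

```python
import numpy as np

def sonify_radial(scattering, n_bins=50, duration=10.0, sr=44100,
                  f_lo=100.0, f_hi=8000.0):
    """Sweep a 'clock hand' around a 2-D scattering image, mapping
    radial position -> sine frequency and measured intensity -> loudness."""
    h, w = scattering.shape
    cy, cx = h // 2, w // 2
    radius = min(cy, cx) - 1
    # The "ruler demarcations": n_bins sample points along the clock hand,
    # each assigned its own sine frequency (log-spaced, like musical pitch).
    r = np.linspace(0, radius, n_bins)
    freqs = np.geomspace(f_lo, f_hi, n_bins)

    n = int(duration * sr)
    t = np.arange(n) / sr
    angle = 2 * np.pi * t / duration               # one full sweep of the hand
    audio = np.zeros(n)
    for k in range(n_bins):
        # Read the intensity under this demarcation as the hand rotates;
        # that intensity drives the loudness of the bin's sine wave.
        ys = np.clip((cy + r[k] * np.sin(angle)).astype(int), 0, h - 1)
        xs = np.clip((cx + r[k] * np.cos(angle)).astype(int), 0, w - 1)
        audio += scattering[ys, xs] * np.sin(2 * np.pi * freqs[k] * t)
    return audio / np.max(np.abs(audio))           # normalize to [-1, 1]

# Toy data: a ring-shaped pattern, a common feature of scattering images.
yy, xx = np.mgrid[-64:64, -64:64]
demo = np.exp(-((np.hypot(yy, xx) - 40.0) ** 2) / 20.0)
track = sonify_radial(demo, n_bins=50)
```

In this sketch, n_bins plays the role of the ruler demarcations described above: fewer bins give a sparser timbre, more bins a denser one, which is the variation you can hear across the tracks below.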

Here is one version of the track we created using 10 frequency bins:

[audio]

Here is one we created using 2000:

[audio]

And here is one we created using 50 frequency bins, which we settled on:

[audio]

On a software synthesizer, this set of variables would be like the default preset. In the future we hope to have an interactive graphical user interface where sliders control these variables, just as a musician tweaks the sound of a synth, so scientists can bring out or mask aspects of the data.

To hear what that would be like, here are a few tracks that vary length:

[audio]

[audio]

[audio]

Finally, here is a track we created using different mappings of frequency and intensity:

[audio]

Having these sliders would reinforce to the scientists that we are not creating “the sound of a metallic alloy,” we are creating one sonic representation of the data from the metallic alloy.

It is interesting that such a representation can be vital to scientists. At first, my husband went along with this sonification project as more of a thought experiment than something he thought would actually be useful in the lab, until he heard something distinct in one of the sounds, suggesting that a sample was misaligned. Once Kevin heard that glitched sound (you can hear it in the video above), he was convinced that sonification was a useful tool for his lab. He and his colleagues are dealing with measurements 1/25,000th the width of a human hair, aiming an x-ray through twenty pieces of equipment to get the beam focused just right. If any piece of equipment is out of kilter, the data can't be collected. This is where our ears' non-directionality is useful: the scientist can be working at the computer and, through ambient sound, know when a sample is misaligned.


It remains to be seen (or heard) whether the sonifications will be useful for actually understanding the material structures. We are currently running an experiment using Mechanical Turk to determine whether this kind of multi-modal display (using vision and audio) is actually helpful. Basically, we are training one group of people on just the images of the scattering data and testing how well they do, and training another group on the images plus the sonification and testing how well they do.

I'm also working with collaborators at Stony Brook University on sonification of data. In one experiment we are using ambisonic (3-dimensional) sound to create a sonic map of the brain to understand drug addiction. Standing in the middle of the ambisonic cube, we hope to find relationships between voxels, the cubes of brain tissue analogous to pixels. When neurons fire in areas of the brain simultaneously, there is most likely a causal relationship, which can help scientists decode the brain activity of addiction. Computer vision researchers have been searching for these relationships unsuccessfully; we hope that our sonification will allow us to hear associations between distinct parts of the brain which are not easily recognized with sight. We are hoping to leverage the temporal pattern recognition of our auditory system, but we have been running into problems doing the sonification: each slice of data from the fMRI has about 300,000 data points. We have it working with 3,000 data points, but either our programming needs to get more efficient, or we need a much more powerful computer in order to work with all of the data.
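For a rough sense of the scaling problem (my illustration, not the project's code; the log-spaced frequencies and the activation-to-loudness mapping are invented), a vectorized additive synth handles a few thousand voxel-driven sine waves in one matrix product, but the memory for a naive sine bank grows linearly with voxel count.

```python
import numpy as np

def sonify_voxels(activations, freqs, duration=1.0, sr=44100):
    """Additive synthesis for many voxels at once: one sine wave per voxel,
    loudness taken from its activation. A single matrix product replaces
    a slow per-voxel Python loop."""
    t = np.arange(int(duration * sr)) / sr
    bank = np.sin(2 * np.pi * np.outer(freqs, t))   # (n_voxels, n_samples)
    audio = activations @ bank                       # weighted sum of sines
    return audio / np.max(np.abs(audio))

# 3,000 voxels needs roughly 1 GB for the sine bank; 300,000 would need
# over 100 GB, so the full dataset demands chunked synthesis, streaming,
# or far more memory: the bottleneck described above.
rng = np.random.default_rng(0)
n_voxels = 3000
activations = rng.random(n_voxels)                   # stand-in fMRI values
freqs = np.geomspace(50, 12000, n_voxels)            # one pitch per voxel
audio = sonify_voxels(activations, freqs)
```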

On another project we are hoping to sonify gait data using smartphones. I'm working with some of my music students and a professor of Physical Therapy, Lisa Muratori, who works on understanding the underlying mechanisms of mobility problems in Parkinson's Disease (PD). The physical therapy lab has a digital motion-capture system and a split-belt treadmill for asymmetric stepping; the patients are supported by a harness so they don't fall. PD is a progressive nervous system disorder characterized by slow movement, rigidity, tremor, and postural instability. Because of degeneration of specific areas of the brain, individuals with PD have difficulty using internally driven cues to initiate and drive movement. However, many studies have demonstrated an almost normal movement pattern when persons with PD are provided external cues, including significant improvements in gait with rhythmic auditory cueing. So far the research with PD and sound has been unidirectional: the patients listen to sound and try to match their gait to the external rhythms of the auditory cues. In our system we will use biofeedback to sonify data from sensors the patients wear and feed error messages back to the patient through music. Eventually we hope that patients will be able to adjust their gait by listening to self-generated musical distortions on a smartphone.
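As a toy illustration of that biofeedback loop (entirely hypothetical; the mapping, the reference pitch, and the step times are mine, not from Muratori and Schedel's system), step-time asymmetry between the two legs could be mapped to pitch distortion, so a symmetric gait sounds in tune and an asymmetric one audibly drifts.

```python
import numpy as np

def gait_feedback_tone(left_step_s, right_step_s, base_hz=440.0,
                       sr=44100, dur=0.5):
    """Hypothetical mapping: step-time asymmetry detunes a reference tone.
    A symmetric gait returns a pure base_hz tone; limping bends the pitch."""
    asymmetry = (left_step_s - right_step_s) / (left_step_s + right_step_s)
    freq = base_hz * 2.0 ** asymmetry      # up to one octave of detuning
    t = np.arange(int(dur * sr)) / sr
    return np.sin(2 * np.pi * freq * t)

# Symmetric gait (0.60 s per step on each side) -> a steady 440 Hz, "in tune".
tone_ok = gait_feedback_tone(0.60, 0.60)
# Asymmetric gait -> an audibly sharp tone, cueing the patient to adjust.
tone_off = gait_feedback_tone(0.72, 0.55)
```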

As sonification becomes more prevalent, it is important to understand that aesthetic decisions are inevitable and even essential in every kind of data representation. We are so accustomed to looking at visual representations of information—from maps to pie charts—that we may forget that these are also arbitrary transcodings. Even a photograph is not an unambiguous record of reality; the mechanics of the camera and artistic choices of the photographer control the representation. So too, in sonification, do we have considerable latitude. Rather than view these ambiguities as a nuisance, we should embrace them as a freedom that allows us to highlight salient features, or uncover previously invisible patterns.

__

Margaret Anne Schedel is a composer and cellist specializing in the creation and performance of ferociously interactive media. She holds a certificate in Deep Listening with Pauline Oliveros and has studied composition with Mara Helmuth, Cort Lippe and McGregor Boyle. She sits on the boards of 60×60 Dance, the BEAM Foundation, Devotion Gallery, the International Computer Music Association, and Organised Sound. She contributed a chapter to the Cambridge Companion to Electronic Music, and is a joint author of Electronic Music published by Cambridge University Press. She recently edited an issue of Organised Sound on sonification. Her research focuses on gesture in music, and the sustainability of technology in art. She ran SUNY’s first Coursera Massive Open Online Course (MOOC) in 2013. As an Associate Professor of Music at Stony Brook University, she serves as Co-Director of Computer Music and is a core faculty member of cDACT, the consortium for digital art, culture and technology.

Featured Image: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.

Research carried out at the Center for Functional Nanomaterials, Brookhaven National Laboratory, is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-AC02-98CH10886.

REWIND! . . . If you liked this post, you might also like:

The Noises of Finance–Nicholas Knouf

Revising the Future of Music Technology–Aaron Trammell

A Brief History of Auto-Tune–Owen Marshall

Unsettled Listening: Integrating Film and Place


Sculpting the Film Soundtrack

Welcome to the third and final installment of Sculpting the Film Soundtrack, our series about sound in contemporary films. We've been focusing on how filmmakers are blurring the boundaries between music, speech, and sound effects – in effect, integrating distinct categories of soundtrack design.

In our first post, Benjamin Wright showed how celebrated composer Hans Zimmer thinks across traditional divisions of labour to integrate film sound design with music composition. Danijela Kulezic-Wilson followed up with an insightful piece on the integration of audio elements in Shane Carruth's Upstream Color, suggesting how scholars can apply principles of music, like tempo and rhythm, to their analyses of the interactions between a film's images and sounds. In this final entry, Randolph Jordan considers another dimension of integration: a film's sounds and the place where it was produced. In his provocative and insightful reading of the quasi-documentary East Hastings Pharmacy, Jordan, who is completing a post-doctoral fellowship at Simon Fraser University, elaborates on how the concept of "unsettled listening" can clue us into the relationship between a film and its origins of production. You'll be able to read more about "unsettled listening" in Jordan's forthcoming book, tentatively titled Reflective Audioviewing: An Acoustic Ecology of the Cinema, to be published by Oxford University Press.

I hope you’ve enjoyed taking in this series as much as I’ve enjoyed editing it with the help of the marvelous folks at SO!. Thanks for reading. — Guest Editor Katherine Spring

A mother and son of First Nations ancestry sit in the waiting area of a methadone clinic in Vancouver’s Downtown Eastside, their attention directed toward an offscreen TV. A cartoon plays, featuring an instrumental version of “I’ve Been Working on the Railroad” that mingles with the operating sounds of the clinic and ambience from the street outside. The tune is punctuated by a metal clinking sound at the beginning of each bar, calling to mind the sound of driving railway spikes that once echoed just down the street as the City of Vancouver was incorporated as the western terminus of the Canadian Pacific Railway (beginning thus the cycle of state-sanctioned erasure of indigenous title to the land). The familiar voice of Bugs Bunny chimes in: “Uh, what’s all the hubbub, bub?”

[video]

Hubbub indeed. Let’s unpack it.

The scene appears one third of the way through East Hastings Pharmacy (Antoine Bourges, 2012), a quasi-documentary set entirely within this clinic, staging interactions between methadone patients (played by locals and informed by their real-life experiences) and the resident pharmacist (played by an actress). Vancouver’s Downtown Eastside, dubbed Canada’s “worst neighborhood” for its notorious concentration of transients and public drug use, is also home to the largest community of First Nations peoples within the city limits, a product of the long history of dispossession in the surrounding areas. When the film presents this indigenous pair listening to a Hollywood fabrication of the sounds that marked their loss of title to the city it is a potent juxtaposition, especially given the American infiltration of Vancouver’s mediascape since the 1970s. Long known as “Hollywood North,” Vancouver is more famous as a stand-in for myriad other parts of the world than for representing itself, its regional specificity endlessly overwritten with narratives that hide the city and its indigenous presence from public awareness.

"Quidam +  Noise" graffiti in Downtown Vancouver,  April 6, 2013, by Flickr User Kevin Krebs

“Quidam +  Noise” graffiti in Downtown Vancouver,  April 6, 2013, by Flickr User Kevin Krebs

In her essay “Thoughts on Making Places: Hollywood North and the Indigenous City,” filmmaker Kamala Todd stresses how media can assist the process of re-inscribing local stories into Vancouver’s consciousness. East Hastings Pharmacy is one such example, lending some screen time to urban Natives in the 21st Century city. But Todd reminds us that audiences also have a responsibility “to learn the stories of the land” that have been actively erased in dominant media practices, and to bring this knowledge to our experience of the city in all its incarnations (9). Todd’s call resonates with a process that Nicholas Blomley calls “unsettling the city” in his book of the same name. Blomley reveals Vancouver as a site of continual contestation and mobility across generations and cultural groups, and calls for an “unsettled” approach that can account for the multiple overlapping patterns of use that are concealed by “settled” concepts of bounded property. With that in mind, I propose “unsettled listening” as a way of experiencing the city from these multiple positions simultaneously. Rick Altman taught us to hear any given sound event as a narrative by listening for the auditory markers of its propagation through physical space, and recording media, over time (15-31). Unsettled listening invites us to hear through these physical properties of mediatic space to the resonating stories revealed by the overlapping and contradictory histories and patterns of use to which these spaces are put, all too often unacknowledged in the wake of settler colonialism.

East Hastings Pharmacy provides a great opportunity to begin the practice of unsettled listening. The film’s status as an independent production amidst industrial shooting is marked by the intersection of studio-fabricated sound effects and direct sound recording, as in the example described above, and further complicated by the film’s own hybrid of fiction and documentary modes. That speaks to the complexity of overlapping filmmaking practices in Vancouver today, a situation embedded within the intersecting claims to land use and cultural propriety on the streets of Vancouver’s Downtown Eastside. To unsettle listening is to hear all these overlapping situations as forms of resonance that begin with the original context of the televised cartoon and accumulate as they spread through the interior of the clinic and outwards across the surrounding land. So let’s try this out.

[video]

The cartoon is Falling Hare (Robert Clampett, 1943), a good example of the noted history of cross-departmental integration at the Warner Bros. cartoon studio. The scene in question begins at 1:55, and here the metallic clinking sound is just as likely to have been produced by one of music orchestrator Carl Stalling's percussionists as by sound effects editor Treg Brown. This integration can be heard in the way that the music's unspoken reference to railway construction charges each clink with the connotation of hammer on spike. However, the image track in Falling Hare doesn't depict railway construction, but rather a gremlin whacking the nose of a live bomb in an attempt to do away with enemy Bugs seated on top. James Lastra would say (by way of Christian Metz) that the clinking sound is "legible" as hammer on spike for the ease with which the sound can be recognized as emanating from this implied source (126). But this legibility is premised upon a lack of specificity that also allows the sound to become interchangeable with something else, as is the case in this cartoon.

Screen Capture from Falling Hare

East Hastings Pharmacy capitalizes on this interchangeability by re-inscribing the clinking sound’s railway connotations, first by stripping the original image and then by presenting this sound in the context of the dire social realities of Vancouver’s Downtown Eastside as the city’s sanctioned corral for the markers of urban poverty – and indigeneity – that officials don’t want to spill out across the neighborhood’s increasingly gentrified perimeter.

As one of a string of Warner Bros. cartoons put in the service of WWII propaganda, the Falling Hare soundtrack also resonates with wartime xenophobia and imperialist expansion, branches of the same pathos that leads to the effacing of indigenous culture from the consciousness of colonizing peoples. In Vancouver, this has taken the form of what Jean Barman calls “Erasing Indigenous Indigeneity,” the process of chasing the area’s original peoples off the land while importing aboriginal artifacts from elsewhere to maintain a Native chic deemed safe for immigrant consumption (as when the city paid “homage” to the vacated Squamish residents of downtown Vancouver’s Stanley Park by erecting Kwakiutl totem poles imported from 200km north on Vancouver Island) (26). This is an interchangeability of cultural heritage premised upon a lack of specificity, the same quality that allows “legible” sound effects to become synchretic with a variety of implied sources. And this process is not unlike the interchangeability of urban spaces when shooting Vancouver for Seattle, New York, or Frankfurt, emphasizing generic qualities of globalized urbanization while suppressing recognizable soundmarks from the mix (such as the persistent sound of float plane propellers that populate Vancouver harbour, the grinding and screeching of trains in the downtown railyard, or the regular horn blasts from the local ferry runs just north of the city).

The high-concept legibility of Warner Bros.' sound effects – used in Falling Hare to play on listeners' expectations to comic effect – is further unsettled by its presentation within the context of documentary sound conventions in East Hastings Pharmacy. Bourges' film commits to regional specificity in part through the use of location sound recording, which, as Jeffrey K. Ruoff identifies in "Conventions of Documentary Sound," is particularly valued as a marker of authenticity (27-29). While Bourges stages the action inside the clinic, the film features location recordings of the rich street life audible and visible through the clinic's windows that proceeds unaffected by the cameras and microphones. This situation is all the more potent when we account for the fact that, in this scene, the location-recorded cartoon soundtrack and ambient sound effects were added in post-production, and so represent a highly conscious attempt to channel the acoustic environment according to the conventions of "authentic" sound in documentary film.


Screen Capture, East Hastings Pharmacy

While the film uses location recording as a conscious stylistic choice to evoke documentary convention, it does so to engage meaningfully with the social situation in the Downtown Eastside, underlining Michel Chion’s point that “rendered” film sound – fabricated in studio to evoke the qualities of a particular space – is just as capable of engaging the world authentically (or inauthentically) as “real” sound captured on location (95-98). By presenting this Hollywood cartoon as an embedded element within the soundscape of the clinic, using a provocative mix of location sound and studio fabrication, East Hastings Pharmacy unsettles Hollywood’s usual practice of erasing local specificity, inviting us to think of runaway projects in the context of their foreign spaces of production and the local media practices that sit next to them.

Finally, this intersection of sonic styles points to the complex relationships that exist between the domains of independent and industrial production around Vancouver. In his book Hollywood North, Mike Gasher argues for thinking about filmmaking in British Columbia as a resource industry, pointing to how the provincial government has offered business incentives for foreign film production similar to those in place for activities like logging and fishing. Here we can consider how the local film industry might follow the same unsustainable patterns of extraction as other resource industries, all premised upon willful ignorance of indigenous uses of the land. Yet as David Spaner charts in Dreaming in the Rain, the ability to make independent films in Vancouver has become largely intertwined with the availability of industrial resources in town. Just as Hollywood didn’t erase the independent film, colonization didn’t erase indigenous presence.

East Hastings Pharmacy offers a powerful example of how we can practice unsettled listening on the staged sound of Falling Hare, devoid of local context and connected to the railway only by inference, to reveal a rich integration with regional specificity as the cartoon’s auditory resonances accumulate within its new spaces of propagation. In this way we can hear local media through its transnational network, including the First Nations, to understand the overlaps between seemingly contradictory modes of being within the city. And in so doing, we can also hear through the misrepresentation of the Downtown Eastside as “Canada’s worst neighborhood” to the strength of the community that has long characterized the area for anyone who scratches the surface, an important first step along the path to unsettling the city as a whole.

Featured Image: Still from East Hastings Pharmacy

Randolph Jordan wanted to be a rock star.  Academia seemed a responsible back-up option – until it became clear that landing a professor gig would be harder than topping the Billboard charts.  After completing his Ph.D. in the interdisciplinary Humanities program at Concordia University in 2010 he floated around Montreal classrooms on contract appointments before taking up a two-year postdoctoral research fellowship in the School of Communication at Simon Fraser University. There he has been investigating geographical specificity in Vancouver-based film and media by way of sound studies and critical geography, research that will inform the last chapter of his book Reflective Audioviewing: An Acoustic Ecology of the Cinema (now under contract at Oxford University Press).  If you can’t find him hammering away at his manuscript, or recording his three young children hammering away at their Mason & Risch, look for him under Vancouver’s Burrard Bridge where he spends his “spare time” gathering film and sound material for his multimedia project Bell Tower of False Creek. Or visit him online here: http://www.randolphjordan.com

REWIND! . . . If you liked this post, you may also dig:

Fade to Black, Old Sport: How Hip Hop Amplifies Baz Luhrmann’s The Great Gatsby– Regina Bradley

Quiet on the Set?: The Artist and the Sound of a Silent Resurgence– April Miller

Play It Again (and Again), Sam: The Tape Recorder in Film (Part Two on Walter Murch)– Jennifer Stoever

 

 

Brasil Ao Vivo!: The Sonic Pleasures of Liveness in Brazilian Popular Culture


Sound and Pleasure

After a rockin' (and seriously informative) series of podcasts from Leonard J. Paul, a Drrty South banger dropped by SO! Regular Regina Bradley, a screamtastic meditation from Yvon Bonenfant, and a heaping plate of food sounds from Steph Ceraso, our summer Sound and Pleasure series gets even louder with Kariann Goldschmitt's work on live events in Brazil. Brasil Ao Vivo! –JS, Editor-in-Chief

 —

Brazilians pray, cheer and celebrate in public and often in close physical proximity to each other.  From the nearly 3 million people that flocked to Copacabana Beach to hear Pope Francis lead a mass in 2013 to the huge crowds that regularly turn out for concerts at Maracanã stadium, Brazilians earn their global reputation for large-scale public events. Of course there is Carnival in Rio de Janeiro and Salvador; the largest LGBT Pride Parade in the world held in São Paulo; and then there is football.

The relationship between large-scale public events and sound hit home as the country reacted to the national team's humiliating loss to Germany in the semi-final round of the 2014 FIFA World Cup. The world witnessed a different kind of public outpouring as the Brazilian public mourned. Within hours of the initial shock at the lopsided score, images of Brazilian football fans weeping and screaming in the stadium and on the street became a humorous meme, with music and sound playing a prominent role. By the next day, most Brazilian football observers were taking pleasure in the public spectacle of weeping fans. With the abundance of images featuring hysteria, videos mocking the intensity of the crying went viral with dramatic musical scores. One observer proclaimed: "essa capacidade de rir de nós mesmos é uma das melhores qualidades"; the capacity to laugh at ourselves is one of our best qualities. That Brazilians express all varieties of emotion and annual passages together in public for everyone to witness, even when they border on campy excess, allows everyone to feel the pleasures of community and the power of public performance.

"Abschlussfeier Maracana Fifa WM 2014" by Flickr user Marco Verch, CC BY 2.0

“Abschlussfeier Maracana Fifa WM 2014″ by Flickr user Marco Verch, CC BY 2.0

All of this led me to believe that such a public culture has an effect on the aesthetics of what performance studies scholar Philip Auslander calls “liveness” in recorded music and related viral media. Auslander argues that the appeal of liveness for television broadcasts, concerts, and other stage performances allows audiences to feel the immediacy of the moment even if the presence of mediation, such as screens and on-air censorship, is obvious. The international spectacle of Brazilians emoting en masse, then, has a direct relationship with Brazilian sonic aesthetics. Nowhere, I argue, is this more prominent than in the (sometimes viral) popularity of live recordings.

That immediacy Auslander speaks of spreads to many aspects of Brazilian popular culture, including the popularity of concert DVDs and albums, which are regularly listed among the most popular domestic recordings. In fact, concert records tend to be more popular than the studio albums that inspire the tour. These live albums often carry the designation Ao Vivo (live) or MTV Acústico (the equivalent of the Unplugged albums popular in the United States), and they are often recorded in such a way as to feature the interaction of the crowds. In place of the draw for authenticity (a value that permeates the MTV Unplugged recordings) is the love for community, and for experiencing big emotions together no matter how obviously they are mediated through cameras, microphones, and other technology. Through the example of the continued popularity of live albums in Brazil, there is an opening for a different theorization of sounding liveness; in place of celebrating canonic performances and virtuosity, the valorization of liveness in Brazil reinforces the importance of crowds and the so-called "popular classes" at the root of the politicized singer-songwriter genre MPB, or Música Popular Brasileira.

The pleasure and preference for live recordings also extends to social media. For meme chasers, a good example of this is Michel Teló's 2011 hit "Ai Se Eu Te Pego." The song and video were recorded ao vivo before a crowd dominated by young women. A close listen reveals that the sounds of Teló's female audience members are just as important as his voice, even if his voice is only slightly louder in the mix. There is barely a moment in the recording when the audience stops making itself heard; the engineering revels in their presence. This is especially obvious during the opening seconds of the track when Teló and his audience sing "Nossa, nossa / assim você me mata / Ai, se eu te pego / Ai, ai, se eu te pego" [Wow, wow / you kill me like that / Ah, if I could get you / ah, ah, if I could get you] in unison at nearly the same volume in the mix. When the accordion and electric bass (crucial instruments for the song's forró style) finally enter over the screaming audience, there is a noticeable break in the tension set up by the audience and Teló singing together. Their cries, like those in other live recordings, illustrate Teló's appeal to the crowd in that moment while also allowing other listeners to imagine themselves there.

Teló's song went viral (as of this writing, the official version has nearly 580 million views on YouTube and over 72 million plays on Spotify), with alternate video versions teaching the song's dance steps and others highlighting global football stars dancing and singing along to the song. At one point Neymar, the national team's biggest hope for World Cup victory, sang with Teló in front of a crowd. In general, Teló's live songs easily outpace his studio recordings in terms of virality, and, I would argue, a major part of the appeal of "Ai Se Eu Te Pego" is its provenance in a concert setting. It is just as important that the screaming throngs of women are audible as it is for those dance steps to be easy and recognizable. The liveness of the recording is so important, in fact, that the screaming audience appears as sampled snippets in the Pitbull remix. In its viral form, Teló's song united the popularity of live spectacle with Brazil's enthusiasm for other live events, merging concert goers with football fans.

The popularity of Teló's live song is not an isolated incident. Look, for example, at Brazil's all-time record sales figures. Two of the best sellers are live albums by artists who do not appear elsewhere on the list. Other albums that have sold more than 2 million copies in Brazil alone are by Roberto Carlos (Acústico MTV) and the teen pop/rock duo Sandy and Júnior (As Quatro Estações ao Vivo and Era Uma Vez… Ao Vivo). In 2011, five of the top ten albums in Brazil fit the ao vivo mode with little regard to genre: MPB stars Caetano Veloso and Maria Gadú are there alongside sertanejo artists Paula Fernandes and Luan Santana. In 2012, three of the top 20 best-sellers were live albums. Meanwhile, DVDs of concerts in Brazil continue to be strong sellers. Thus, the communal pleasure palpable on-screen translates to that experienced in the home.

"Eric Clapton - Unplugged" by Flickr user Ian Alexander Martin, CC BY-NC-ND 2.0

“Eric Clapton – Unplugged” by Flickr user Ian Alexander Martin, CC BY-NC-ND 2.0

Compare this with the status of live records in the United States in the last few years, where they have rarely seen any chart success. If anything, liveness continues in YouTube clips and Spotify Sessions but not in physical sales and downloads. This is probably because live albums by U.S.-based artists are embedded with different values, having to do with rock authenticity rather than communal pleasure. These performances demonstrate the chops of the musician and valorize the concerts (and tours) as events. The double live albums from the 1970s such as Frampton Comes Alive, Lynyrd Skynyrd's One More From The Road, and Kiss Alive! hold a prized place in the classic rock canon, often as much for extended guitar solos as for the screaming throngs of fans. In the late '80s and early '90s, live albums, especially MTV Unplugged, re-inscribed a love of liveness through acoustic instruments and songs that reached back into the roots of American popular music. Eric Clapton's Unplugged (1992) even topped the Billboard album charts and won 6 Grammy awards including Album of the Year, while other records such as Nirvana's MTV Unplugged in New York and U2's Rattle and Hum were multi-platinum hits. While there is the occasional top-40 live single, these songs are the exception in a genre that has moved liveness to YouTube rather than streaming and MP3 markets.

SO! contributor Osvaldo Oyola has noted there is a tension between the efforts recording engineers often go through to make studio recordings sound as immediate as possible, and those that call attention to the recording process. Live records replace the need to sound polished with the need to sound spontaneous, often reveling in mistakes and banter. That immediacy is something I enjoy when listening to live recordings and it has a parallel for many people who participate in the reception of major events in real time through social media.

In Brazil, audiences enjoy the immense power of participation in live events. As part of a larger work in progress, I'm particularly fascinated by how this power and pleasure is mediated through the sonic experience of recordings and viral social media. Whether they are sharing tears over an international football loss or singing along to "Ai Se Eu Te Pego," Brazilians extend Auslander's liveness by prolonging and replaying the immediacy of the crowds to experience that shared sonic moment, again and again.

Kariann Goldschmitt is a Visiting Lecturer in the Faculty of Music at University of Cambridge. Her scholarly work focuses on Brazilian music, modes of listening, and sonic branding in the global cultural industries. She has published in the Oxford Handbook of Mobile Music Studies, Popular Music and Society, American Music, Yearbook for Traditional Music, and Luso-Brazilian Review and contributes to the South American cultural magazine, Sounds and Colours.

Featured image: Adapted from “Gloria” by Flickr user Lourenço Fabrino, CC BY-NC-SA 2.0

REWIND! . . . If you liked this post, you may also dig:

Sound-politics in São Paulo, Brazil– Leonardo Cardoso

Calling Out To (Anti)Liveness: Recording and the Question of Presence–Osvaldo Oyola

Hello, Americans: Orson Welles, Latin America, and the Sounds of the “Good Neighbor“– Tom McEnaney

The Musical Flow of Shane Carruth’s Upstream Color


Sculpting the Film Soundtrack

Welcome back to Sculpting the Film Soundtrack, SO!‘s new series on changing notions about how sound works in recent film and in recent film theory, edited by Katherine Spring.

Two weeks ago, Benjamin Wright started things off with a fascinating study of Hans Zimmer, a highly influential composer whose film scoring borders on engineering — or whose engineering borders on music — in many major Hollywood releases. This week we turn to the opposite end of the spectrum to a seemingly smaller film, Shane Carruth’s Upstream Color (2013), which has made quite a few waves among sound studies scholars and fans of sound design, even earning a Special Jury Prize for sound at Sundance.

To unpack the many mysteries of the film and explore its place in the field of contemporary filmmaking, we are happy to welcome musicologist and film scholar Danijela Kulezic-Wilson of University College Cork. Listen to Upstream Color through her ears (it’s currently available to stream on Netflix) and perhaps you’ll get a sense of why you’ll have to listen to it two or three more times. At least.

–nv


When Shane Carruth’s film Upstream Color was released in 2013, critics described it in various ways—as a body horror film, a sci-fi thriller, a love story, and an art-house head-scratcher—but they all agreed that it was a film “not quite like any other”. And while the film’s cryptic imagery and non-linear editing account for most of the “what the hell?” reactions (see here for example), I argue that the reason for its distinctively hypnotic effect is Carruth’s musical approach to the film’s form: he organizes the images and sounds according to principles of music, including the use of repetition, rhythmic structuring, and antiphony.

The resulting musicality of Upstream Color may not be surprising given that Carruth composed most of the score, and also, as Jonathan Romney has noted in Sight & Sound, Carruth has said on many occasions that he was hoping “people would watch this film repeatedly, as they might listen to a favourite album” (52).  In this sense, Carruth (whose DIY toolkit also includes writing, directing, acting, producing, cinematography, and editing) joins the ranks of filmmakers such as Darren Aronofsky and Joe Wright who recognize that, despite our culture’s obsession with the cinematic and narrative aspects of “visual” media, music governs film’s deepest foundations.

 

Upstream Color is a story about a woman, Kris, who is kidnapped by a drug manufacturer (referred to in the credits as Thief) and contaminated with a worm that keeps her in a trance-like state during which the Thief strips her of all her savings. Kris is subsequently dewormed by a character known as the Sampler, who transfers the parasite into a pig that maintains a physical and/or metaphorical connection to Kris. Kris later meets and falls in love with Jeff who, we eventually discover, has been a victim of the same ordeal. Although the bizarreness of the plot has encouraged numerous interpretations, the film's unconventional audio-visual language suggests that its story of two people who share suppressed memories of the same traumatic experience shouldn't be taken at face value, but rather serves as a metaphor for existential anxiety resulting from being influenced by unknown forces.

Such an interpretation owes as much to the film’s disregard for the rules of classical storytelling as it does to a formally innovative soundtrack, one that uses musicality as an overarching organizing principle. The fact that Carruth wrote the score and script simultaneously (discussed in the video below) indicates the extent to which music was from the beginning considered an integral part of the film’s expressive language. More importantly, as the scenes discussed in this post suggest, the musical logic of the film is even more pervasive than the style, role, and placement of the actual score.

Whereas feature films traditionally assign a central role to speech, allowing music and sound effects supporting roles only, Upstream Color breaks down the conventional soundtrack hierarchy, often reversing the roles of each constitutive element. For example, hardly any information in the film that could be considered vital to understanding the story is communicated through speech. Instead, images, sound, music, and editing–for which Carruth shares the credit with fellow indie director David Lowery–are the principal elements that create the atmosphere, convey the sense of the protagonists' brokenness, and reveal the connection between the characters. At the same time, characters' conversations are either muted or their speech is blended with music in such a way that we're encouraged to focus on body language or mise-en-scène rather than trying to discern every spoken word. For example, Jeff and Kris's flirting with one another during their initial meetings (at roughly 0:44:20-0:47:00 of the film) is conveyed primarily through gestures, glances, and fragmentary editing rather than speech, which would be more typical for this sort of narrative situation.

Further undercutting the significance of speech across the film is how the film has been edited to resemble the flow of music. For example, non-linear jumps in the narrative are often arranged in such a way as to create syncopated audio-visual rhymes. This technique is particularly obvious in the montage sequence in which Kris and Jeff argue over the ownership of their memories, whose similarities suggest that they were implanted during the characters’ kidnappings. In this sequence, both the passing of time and the recurrence of the characters’ argument is conveyed through the repetition of images that become visual refrains: Kris and Jeff lying on a bed, watching birds flying above trees, touching each other. Some of these can be seen in the film’s official trailer:

The scene's images and sounds are fragmented into a non-linear assembly of pieces of the same conversation the characters had at different times and places, like the verses and the choruses of a song. Importantly, the assemblage is also patterned, with phrases like "we should go on a trip" and "where should we go?" heard in refrain. The first time we hear Jeff say "we should go on a trip" and suggest that they go "somewhere bright", his words are played in sync with the image of him and Kris lying on the bed. The following few shots, accompanied only by music, symbolize the "honeymoon" phase of their relationship: the couple kiss, hold hands, and walk with their hands around each other's waists. A shift in mood is marked by the repetition of the dialogue, with Jeff again saying "we should go on a trip" – only this time, the phrase plays asynchronously over a shot of Jeff and Kris pushing a table into the house that they have moved into together. Finally, the frustration that starts infiltrating the characters' increasingly heated arguments is alleviated by the repetition of the sentence "They could be starlings." As it is spoken three times by both characters in an antiphonic exchange, the phrase emphasizes the underlying strength of their connection and gives the scene a rhythmic balance. Across this sequence, the musical organization of audio-visual refrains prompts us to recognize the psychic connection between Kris and Jeff, and even to begin to guess the sinister reason for it.

While speech in Upstream Color is often stripped of its traditional role as a principal source of information, sound and music are given important narrative functions, illuminating hidden connections between the characters. In one of the most memorable scenes, the Sampler is revealed to be not only a pig farmer but also a field recordist and sound artist who symbolizes the hidden source of everything that affects Kris and Jeff from afar. As we hear the sounds of the Sampler’s outdoor recordings merge with and emulate the sounds made and heard by Kris and Jeff at home and at work, the soundtrack eloquently establishes the connection between all three characters while also giving us a look “behind the scenes” of Kris’s and Jeff’s lives and suggesting how they are influenced from a distance.


In one sense, by calling attention to the very act of recording sound, the scene exposes how films are constructed, offering a reflexive glimpse into usually hidden processes of production. The implied idea here–that the visible and audible are products of not-so-obvious processes of formation–refers not only to the medium of film but also to the complexity of the inner workings of someone’s mind. Thus the Sampler’s role, his actions, and his relationship to Kris, Jeff, and other infected victims can be interpreted as a metaphor for the subconscious programming – all the familial, social and cultural influences – that all of us are exposed to from an early age. The Sampler is portrayed symbolically as the Creator, a force whose actions affect the protagonists’ lives without them knowing it.  The fact that he is simultaneously represented as a sound artist establishes sound-making and musicality as the film’s primary creative principles.

Considering Carruth’s very deliberate departure from the conventions of even what David Bordwell calls “intensified” storytelling, it is fair to say that Upstream Color is a film that weakens the strong narrative role traditionally given to oral language. What is intensified here are the musical and sensuous qualities of the audio-visual material and a mode of perception that encourages absorption of the subtext (in other words, the metaphorical meaning of the film) as well as the text.

The musical organization of film form and soundtrack is no longer limited to independent projects such as Carruth’s Upstream Color. As I have shown elsewhere, musicality has become an extremely influential principle in contemporary cinema, acting as an inspiration and model for editing, camera movement, movement within a scene and sound design. Some of the most interesting results of a musical approach to film include Aronofsky’s “hip hop montage” in Pi (1998) and Requiem for a Dream (2000), Jim Jarmusch’s rhythmically structured film poems (The Limits of Control, 2009; Only Lovers Left Alive, 2013), the interchangeable use of musique concrète and environmental sound in Gus Van Sant’s Death Trilogy and films by Peter Strickland (Katalin Varga, 2009; Berberian Sound Studio, 2012); the choreographed mise-en-scène in Joe Wright’s Anna Karenina (2012); the musicalization of language in Harmony Korine’s Spring Breakers (2012); and the foregrounding of musical material over intelligible speech in Drake Doremus’s Breathe In (2013). Given the breadth of these examples, it’s no exaggeration to say that filmmakers’ growing affinity for a musical approach to film is changing the landscape of contemporary cinema.

Danijela Kulezic-Wilson teaches film music, film sound, and comparative arts at University College Cork. Her research interests include approaches to film that emphasize its inherent musical properties, the use of musique concrète and silence in film, the musicality of sound design, and musical aspects of the plays of Samuel Beckett. Danijela’s publications include essays on film rhythm, musical and film time, the musical use of silence in film, Darren Aronofsky’s Pi, P.T. Anderson’s Magnolia, Peter Strickland’s Katalin Varga, Gus Van Sant’s Death Trilogy, Prokofiev’s music for Eisenstein’s films, and Jim Jarmusch’s Dead Man. She has also worked as a music editor on documentaries, short films, and television.

All images taken from the film.

REWIND! . . . If you liked this post, you may also dig:

Sound Designing Motherhood: Irene Lusztig & Maile Colbert Open The Motherhood Archives– Maile Colbert

Play it Again (and Again), Sam: The Tape Recorder in Film (Part One on Noir)– Jennifer Stoever

Animal Renderings: The Library of Natural Sounds– Jonathan Skinner
