Since its inception at the World Soundscape Project in the 1970s, soundwalking has emerged as a critical method for sound studies research and artistic practice. Although “soundwalking” now describes a diversity of activities and purposes, critical discussions and reading lists still rarely represent or consider the experiences of people of color (POC). As Locatora Radio hosts Diosa and Mala have argued in their 2018 podcast about womxn of color and the sound of sexual harassment in their everyday lives and neighborhoods, sound in public space is weaponized to create “sonic landscapes of unwelcome” for POC.
While we often think of soundwalks as engines of knowledge production, we must also consider that they may simultaneously silence divergent worldviews and perspectives of space and place. In “Black Joy: African Diasporic Religious Expression in Popular Culture,” Vanessa Valdés explored alternate conceptions of space held by practitioners of Regla de Ocha, epistemologies rarely, if ever, addressed via soundwalks. “Within African diasporic religions . . . including Palo Monte, Vodou, Obeah, Macumba, Candomblé – there is respect for the seemingly inexplicable,” Valdés remarks, “there is room for the miraculous, for that which can be found outside the realms of what has been deemed reasonable by systems of European thought. There is room for faith.” Does current soundwalk praxis—whether as research method, public intervention, artistic medium, field recording subject, or pop culture phenomenon—impose dominant ideas about space and knowledge production as much as, if not more than, it offers access to alternatives? Are there alternate historiographies for soundwalking that predate the 1970s? Can soundwalks provide such openings, disruptions, and opportunities without a radical rethinking? What would a decolonial/decolonizing soundwalk praxis look and sound like?
Soundwalking While POC explores these questions through the work of Allie Martin, Amanda Gutierrez, and Paola Cossermelli Messina. Today, Allie Martin kicks off the series with a powerful reframing of the soundwalk as a black feminist methodology. —JS
In July 2018 I visited Oxford, Mississippi, for the first time to attend a workshop on conducting oral histories. While walking with a friend back to our accommodations on the University of Mississippi campus, we heard a voice calling to us from far away, up a hill somewhere. It was a catcalling voice—that much I definitely recognized—but I also felt sure that I heard the word “nigger.” My friend, who is also a black woman, heard the taunting sounds of the voice but not that word specifically. Herein lies one of the difficulties of black womanhood: I was unable to distinguish which of my two most prominent identity markers (blackness and womanhood) the speaker was using to harm me in that moment. I found it ironic that I came to Mississippi to learn best practices for listening to people’s stories, but could not hear my own story, could not say for sure what had happened to me.
In the time since that visit, I have come to embrace the speculative sonic ephemerality of black womanhood and utilize it on my soundwalks. Soundwalks are a popular method for understanding the everyday sonic life of a place. Reminiscent of Michel de Certeau’s “Walking in the City,” soundwalks offer the kind of embodied experience missing from other more static soundscape recordings. I argue here that soundwalks can operate as black feminist method, precisely because they allow me to center the complex, incomplete sonorities of black womanhood, and they are enough in their incompleteness. One of our foremost thinkers on black feminism, Patricia Hill Collins, has argued that black women’s knowledge is subjugated (1990). I understand this to mean that my knowledge is tainted somehow, too specialized or not specialized enough, and not considered fit for application by a broader audience. Soundwalks as method, though, rely on my own subjugated knowledge. What did I hear? Black feminism centers and humanizes black women, and I utilize soundwalks to humanize myself in a soundscape that would otherwise disregard my sonic perceptions in favor of white hearing as the default standard of sound.
I began soundwalking in Washington, DC as a part of my dissertation project, which explores the musical and sonic dimensions of gentrification in the city. Gentrification is often considered in visual terms, meaning that a neighborhood is considered gentrified because new coffeeshops, bike lanes, and dog parks make it “look” different from what was once there. I recognize these new additions as important markers of gentrification, but what do they sound like? And what do these sonic markers reveal about the sonorities of race?
I have taken up the sonic exploration of gentrification, drawing inspiration from Jennifer Stoever’s Sonic Color Line and Regina Bradley’s exploration of the criminalization of black sound. As SO! writer and ethnic studies scholar Marlén Rios Hernandez has noted in her work on racial and spatial shifts in early punk in 1970s Los Angeles, it is crucial to work on “delinking gentrification as exclusively spatial and analyzing it as also a sonic force of expulsion.” Having spent time researching the auralities of gentrification in DC, I understand it to be a process that silences poor and marginalized populations while amplifying the concerns of those privileged enough to have the ear of the DC Council and developers. Gentrification displaces musicians and music genres, while increasing tensions around music and noise in “public” space. More than these changes, though, gentrification changes the soundscape of the city.
My soundwalks focus on the Shaw neighborhood in the Northwest quadrant of DC, part of the fastest-gentrifying zip code in the country. Before the explosion of development, Shaw was a cultural hub of black DC, only blocks away from the U Street Corridor, formerly known as Black Broadway. From Pearl Bailey to James Brown, prominent black entertainers frequented the neighborhood because they were unable to perform or find accommodations in other areas of the city. As the neighborhood shifts and transforms, the soundscapes grow louder with new nightclubs and quieter due to increased reporting of noise violations. The neighborhood’s languages diversify, siren whoops increase, and new sounds appear, such as the beep of a dockless scooter. Shaw has seen a concomitant increase in property values, community gardens, and bars; a Whole Foods is set to open in the neighborhood by 2020.
[The recording here is of a soundscape at 7th street and Florida Avenue NW, a busy intersection at the north edge of Shaw. Recorded on a mid-September afternoon, you can hear go-go music (DC’s indigenous subgenre of funk), engines idling, and the whoop of a siren. In the past two months, this intersection has become a battleground for cultural erasure, as artists, activists, and councilmembers attempt to legitimize the go-go music that has been playing in the area since 1995.]
During the day, Shaw oscillates between a quiet neighborhood and a busy city space. Traffic, horns, and sirens are frequent, yet so are the sounds of children at recess and old men chatting outside on their stoops or outside of corner stores.
Conducting soundwalks as a black woman in this gentrifying neighborhood is a curious space to tarry in. I am in some ways an outsider as a non-resident, mindful of who and what I record at any given moment because part of what makes gentrification such a tense and terrifying process is the lack of control that residents (particularly renters) have regarding their futures, and often their presence too. I am also an insider, a black woman in this space where being a black woman is not (yet) anything out of the ordinary. In fact, as the months went on, more of my recordings feature me speaking to people on the street, some I had come to know and some still strangers to me.
One of my favorite interactions on a soundwalk came early on, in late February of 2018. I was running late for an interview, listening intently to what was going around me, when I walked past a black man, seemingly in his 30s, on a narrow sidewalk. The exchange went something like this:
Man: Whoa, whoa, why you running up on people?
Me: My bad, my bad!
Man: It’s okay. Hey sis, you know how to make grits?
Me: [laughing], Nah, I don’t know how to make grits.
Man: What about pancakes?
Me: Yeah, I can make some pancakes.
Man: Ayyyee, I’m tryna get some breakfast!!
Me: I don’t know about all that!
The exchange, not quite a catcall but not quite comfortable either, consistently faded in volume, because during the entire time we spoke, I continued to walk away from him. I was in a position of wanting to speak, because I know the politics of being an outsider in a gentrifying neighborhood and not greeting folks as you walk by. However, I also know the dangers of being a black woman walking alone, and so I negotiated a lighthearted exchange while making my way to my destination. My soundwalks, then, act as a sonic record of gentrifying space as well as my attempts to keep myself safe.
These moments also inform the contours of my dissertation project on hearing gentrification in DC. The larger project involves passive acoustic recording in the same neighborhood, a methodology that entails creating a large number of short soundscape recordings over a long period of time. Understanding both my soundwalks and passive acoustic recording as black feminist method allows for the consideration of multiple sonic perspectives of the neighborhood, rather than one record. When I once described passive acoustic recording to a colleague at a digital humanities workshop, they celebrated the idea that I would be able to “objectively” hear what was occurring in the neighborhood, instead of relying only on pieced-together accounts from community members.
However, just as black feminist thought amplifies my “tainted” knowledge, it also mutes the authoritative “objective” knowledge of a rooftop recorder. The sounds of the stationary recorder placed on a rooftop at 7th and Florida are as partial and positioned as the recordings of my footsteps as I move around the neighborhood. As I continue to walk, be it through the unfamiliarity of Mississippi or my hometown DC, I do so with the reassurance that what I hear is enough.
Featured Image: Shaw From Above, by author.
Allie Martin is a PhD Candidate at Indiana University in the Department of Folklore and Ethnomusicology. Her dissertation project explores the musical and sonic dimensions of gentrification in Washington, DC, using a combination of ethnographic fieldwork, archival research, and soundscape recordings. Originally from the Washington, DC, metropolitan area, she received her BAs in music performance and audio production from American University.
REWIND! . . .If you liked this post, you may also dig:
“I Am Thinking Of Your Voice”: Gender, Audio Compression, and a Sonic Cyberfeminist Theory of Oppression
I developed the text I recite in this post as the theoretical framework for an article I’m working on about audio compression. As I was working on the article, I wondered about the role of gender and race in the research on audio compression. Specifically, I was reminded of the central role Suzanne Vega’s “Tom’s Diner” played in the research that led to the mp3. Karlheinz Brandenburg used the song to test the compression method he was developing for mp3s because it sounded “warm.” Sure, the track is very intimate and Vega’s voice is soft and vulnerable. But to what extent is its “warmth” the effect of a man’s perception of Vega addressing him as either/both an intimate partner or caregiver? Is its so-called warmth dependent upon the extent to which Vega’s voice performs idealized white hetero femininity, a role from which patriarchy definitely expects warmth (intimacy, care work) but can’t be bothered to hear anything beyond or other than that from (white) women?
In other words, I’m wondering about what ways our compression practices are shaped by white supremacist, patriarchal listening ears. Before anyone even runs an audio signal through a compressor, how do patriarchal gender systems already themselves act as a kind of epistemological and sensory compression that separates out essential from inessential signal, such that we let women’s warm, caring voices through while also demanding they discipline themselves into compressing their anger and rage away?
The literature does address the role of sexism and ableism in the shaping of audio technologies, but this critique is most commonly framed in conventionally liberal terms that understand oppression as a matter of researcher bias that excludes and censors minority voices. For example, the literature addresses the way “cultural differences like gender, age, race, class, nationality, and language” are overlooked by researchers (Jonathan Sterne), offers cursory nods to the biases and preferences of white cis men scientists (Ryan Maguire), or claims that “the principles of efficiency and universality central to the history of signal processing also worked to censure atypical voices and minor modes of communication” (Mara Mills). Though such analyses are absolutely necessary components of sonic cyberfeminist practice, they are not sufficient.
We also need to consider the ways frequencies get parsed into the structural positions that masculinity and femininity occupy in Western patriarchal gender systems. Patriarchy doesn’t just influence researchers, their preferences, their choices, and their judgments. How is the break between essential and inessential signal mapped onto the gendered break between what Beauvoir calls “Absolute” and “Other,” masculine and feminine? Patriarchy is not just a relation among people; it is also a relation among sounds. I don’t think this is inconsistent with the positions I cited earlier in this paragraph; rather, I am pursuing the concerns that motivate those positions a bit more emphatically. And this is perhaps because our objects of analysis are slightly different: I’m a political philosopher interested in political structures that shape epistemologies and ontologies—such as the patriarchal gender system organized by masculine absolute/feminine other—whereas most of the scholars I cited earlier have a more STS- and media-studies-approach that is interested in material culture.
To address these questions, I made a short critical-karaoke-style sound piece: I recorded my voice reciting a condensed version of the framework I develop for a sonic cyberfeminist theory of oppression over the original, a cappella version of “Tom’s Diner” from Vega’s album Solitude Standing (which, for what it’s worth, I first owned on cassette, not digitally). If I were in philosopher mode, I would theorize the full implications of this aesthetic choice, but I’m offering this as a sound art piece, the material and sensory dimensions of which provide y’all the opportunity to think through those implications yourselves.
[Text from audio]
Perceptual coding and perceptual technics create breaks in the audio spectrum in the same way that neoliberalism and biopolitics create breaks in the spectrum of humanity. Perceptual coding refers to “those forms of audio coding that use a mathematical model of human hearing to actively remove sound in the audible part of the spectrum under the assumption that it will not be heard” (loc 547). Neoliberalism and biopolitics use a mathematical model of human life to actively remove people from eligibility for moral and political personhood on the assumption that they will not be missed. They each use the same basic set of techniques: a normalized model of hearing, the market, or life defines the parameters of what should be included and what should be disposed of, in order to maximize the accumulation of private property/personhood.
These parameters are not objective but grounded in what Jennifer Lynne Stoever calls a “listening ear”: “a socially constructed ideological system producing but also regulating cultural ideas about sound” (13). Perceptual coding uses white supremacist, capitalist presumptions about the limits of humanity to mark a break in what counts as sound and what counts as noise…such as presumptions about feminine voices like Suzanne Vega’s.
Perceptual coding subjects audio frequencies to the same techniques of government and management that neoliberalism and biopolitics subject people to. For this reason, it can serve as a specifically sonic cyberfeminist theory of oppression.
It shows us not just how oppression works under neoliberalism and biopolitics, but also its motivations and effects. The point is to increase the efficient accumulation of personhood as property by white supremacist capitalist patriarchal institutions. Privilege is the receipt of social investment and the ability to build on it by access to circulation. Oppression is the denial of this investment and access to circulation. For example, mass incarceration takes people of color out of circulation and subjects them to carceral logics…because this is the way such populations are most profitable for neoliberal and biopolitical white supremacist capitalist patriarchy.
Featured image: “Solo show: Order and Progress at Fabio Paris Art Gallery (Brescia, 15 January 2011)” by Flickr user Roͬͬ͠͠͡͠͠͠͠͠͠͠͠sͬͬ͠͠͠͠͠͠͠͠͠aͬͬ͠͠͠͠͠͠͠ Menkman, CC BY-NC 2.0
Robin James is Associate Professor of Philosophy at UNC Charlotte. She is the author of two books: Resilience & Melancholy: pop music, feminism, and neoliberalism, published by Zer0 Books last year, and The Conjectural Body: gender, race and the philosophy of music, published by Lexington Books in 2010. Her work on feminism, race, contemporary continental philosophy, pop music, and sound studies has appeared in The New Inquiry, Hypatia, differences, Contemporary Aesthetics, and the Journal of Popular Music Studies. She is also a digital sound artist and musician. She blogs at its-her-factory.com and is a regular contributor to Cyborgology.
REWIND! . . .If you liked this post, you may also dig:
Tape Hiss, Compression, and the Stubborn Materiality of Sonic Diaspora–Christopher Chien
On Whiteness and Sound Studies–Gustavus Stadler
In the radio dramatization of Return of the Jedi (1996), a Han Solo blinded by hibernation sickness can tell that bounty hunter Boba Fett is in the same room with him just by smelling him. Later this month, Solo: A Star Wars Story (part of the Anthology films and, as you might expect from the title, a prequel to Han Solo’s first appearance in Star Wars: A New Hope) may shed some light on how Han developed this particular skill.
Later in that dramatization, we have to presume Han is able to accurately shoot a blaster blind by hearing alone. Appropriately, then, sound is integral to Star Wars. For every iconic image in the franchise—from R2D2 to Chewbacca to Darth Vader to X-Wing and TIE-fighters to the Millennium Falcon and the light sabers—there is a correspondingly iconic sound. In musical terms, too, the franchise is exemplary. John Williams, Star Wars’ composer, won the most awards of his career for his Star Wars (1977) score, including an Oscar, a Golden Globe, a BAFTA, and three Grammys. Not to mention Star Wars’ equally iconic diegetic music, such as the Mos Eisley Cantina band (officially known as Figrin D’an and the Modal Nodes).
Without sound, there would be no Star Wars. How else could Charles Ross’ One Man Star Wars Trilogy function? In One Man Star Wars, Ross performs all the voices, music, and sound effects himself. He needs no quick costume changes; indeed, in his rapid-fire, verbatim treatment, it is sound (along with a few gestures) that he uses to distinguish between characters. His one-man show, in fact, echoes C-3PO’s performance of Star Wars to the Ewoks in Return of the Jedi, a story told in narration and sound effects far more than in any visuals. “Translate the words, tell the story,” says Luke in the radio dramatization of this scene. That is what sound does in Star Wars.
I believe that the general viewing public is aware, on a subconscious level, of Star Wars’ impressive sound achievements, even if this awareness is not always articulated as such. As Rick Altman noted in 1992 in his “four and a half film fallacies,” the ontological fallacy of film—while not unchallenged—began life with André Bazin’s “The Ontology of the Photographic Image” (1960), which argues that film cannot exist without the image. Challenging such an argument elevates not only silent film but also the discipline of film sound generally, so often regarded as an afterthought. “In virtually all film schools,” Randy Thom wrote in 1999, “sound is taught as if it were simply a tedious and mystifying series of technical operations, a necessary evil on the way to doing the fun stuff.”
Reviewing Star Wars on its original release, film critic Pauline Kael claimed, in what Gianluca Sergi terms a “harmful generalization,” that its defining characteristic was its “loudness.” Loud sound, however, does not necessarily equal good sound in the movies, a distinction audiences themselves can sometimes blur. “High fidelity recordings of gunshots and explosions, and well fabricated alien creature vocalizations” do not alone equal good sound design, as Thom has argued. On the contrary, Star Wars’ achievements, Sergi posited, married technological invention with an overall sound concept and refined, if not defined, the work of sound technicians and sound-conscious directors.
The reason Star Wars is so successful aurally is that its creator, George Lucas, was invested in sound holistically and cohesively, a commitment that has carried through nearly every iteration of the franchise, and that his original sound designer, Ben Burtt, understood there was an art as well as a science to highly original, aurally “sticky” sounds. Ontologically, then, Star Wars is a sound-based story, as reflected in the existence of the radio dramatizations (more on them later). This article traces the historical development of sound not only in the Star Wars films (four decades of them!) but also in other associated media, such as television and video games, and examines aspects of Star Wars’ holistic sound design in detail.
A long time ago, in a galaxy far, far away . . .
As Chris Taylor points out, George Lucas “loved cool sounds and sweeping music and the babble of dialogue more than he cared for dialogue itself.” In 1974, Lucas was working on Radioland Murders, a screwball comedy thriller set in the fictional 1930s radio station WKGL. Radio, indeed, had already made a strong impression on Lucas: legendary “border blaster” DJ Wolfman Jack played an integral part in Lucas’ film American Graffiti (1973). As Marcus Hearn picks up the story, Lucas soon realized that Radioland Murders was going nowhere (the film would eventually be made in 1994). Lucas then turned his sound-conscious sensibilities in a different direction, to “The Star Wars,” a project upon which he had been ruminating since his film school days at the University of Southern California. Retaining creative control, and a holistic interest in a defined soundworld, were two aspects Lucas insisted upon during the development of the project that would become Star Wars. Lucas had worked with his USC contemporary, sound designer and recordist Walter Murch, on THX 1138 (1971) and American Graffiti, and Murch would go on to provide legendary sound work for The Conversation (1974), The Godfather Part II (1974), and Apocalypse Now (1979). Murch was unavailable for the new project, so Lucas asked producer Gary Kurtz to visit USC to evaluate emerging talent.
Pursuing a master’s degree in Film Production at USC was Ben Burtt, whose BA was in physics. In Burtt, Lucas found a truly innovative approach to film sound, one that became the genesis of Star Wars’ sonic invention, providing, in Sergi’s words, “audiences with a new array of aural pleasures.” Sound is embodied in the narrative of Star Wars. Not only was Burtt innovative in his meticulous attention to “found sounds” (whereas sound composition for science fiction films had previously relied on electronic sounds), he applied that meticulousness in character terms. Burtt said that Lucas and Kurtz “just gave me a Nagra recorder and I worked out of my apartment near USC for a year, just going out and collecting sound that might be useful.”
Inherent in this was Burtt’s relationship with sound, in the way he was able to construct a sound of an imaginary object from a visual reference, such as the light saber, described in Lucas’ script and also in concept illustrations by Ralph McQuarrie. “I could kind of hear the sound in my head of the lightsabers even though it was just a painting of a lightsaber,” he said. “I could really just sort of hear the sound maybe somewhere in my subconscious I had seen a lightsaber before.” Burtt also shared with Lucas a sonic memory of sound from the Golden Age of Radio: “I said, `All my life I’ve wanted to see, let alone work on, a film like this.’ I loved Flash Gordon and other serials, and westerns. I immediately saw the potential of what they wanted to do.”
But sir, nobody worries about upsetting a droid
Burtt has described the story of A New Hope as being told from the point of view of the droids (the robots). While Lucas was inspired by Kurosawa’s The Hidden Fortress (1958) to create the droid characters R2-D2 (“Artoo”) and C-3PO (“Threepio”), the robots are patently non-human characters. Yet it was essential to imbue them with personalities. There have been cinematic robots since Metropolis’s Maria, but Burtt uniquely used sound to convey not only these two robots’ personalities, but many others as well. As Jeanne Cavelos argues, “Hearing plays a critical role in the functioning of both Threepio and Artoo. They must understand the orders of their human owners.” Previous robots had less personality in their voices; for example, Douglas Rain, the voice of HAL in 2001: A Space Odyssey, spoke each word crisply, with pauses. Threepio is a communications expert with a human-like voice, provided by British actor (and BBC Radio Drama Repertory Company graduate) Anthony Daniels. According to Hearn, Burtt felt Daniels should use his own voice, but Lucas was unsure, wanting an American used-car-salesman voice. Burtt prevailed, creating in Threepio, vocally, “a highly strung, rather neurotic character,” in Daniels’ words, “so I decided to speak in a higher register, at the top of the lungs.” (Indeed, in the Diné translation of Star Wars [see below], Threepio was voiced by a woman, Geri Hongeva-Camarillo, something the audience seemed to find hilarious.)
Artoo was altogether a more challenging proposition. As Cavelos puts it, “Artoo, even without the ability to speak English, manages to convey a clear personality himself, and to express a range of emotions.” Artoo’s non-speech sounds still convey emotional content. We know when Artoo is frightened;
when he is curious and friendly;
and when he is being insulting.
As Burtt has recalled of finding Artoo’s voice: “we started making little vocal sounds between each other to get a feeling for it. And it dawned on us that the sounds we were making were not actually so bad. Out of that discussion came the idea that the sounds a baby makes as it learns to walk would be a direction to go; a baby doesn’t form any words, but it can communicate with sounds.”
The approach to Artoo’s aural communications became emblematic of all of the sounds made by machines in Star Wars, creating a non-verbal language, as Kris Jacobs calls it, the “exclusive province” of the Star Wars universe.
Powers of observation lie with the mind, Luke, not the eyes
According to Gianluca Sergi, the film soundtrack is composed of sound effects, music, dialogue, and silence, all of which work together with great precision in Star Wars, to a highly memorable degree. Hayden Christensen, who played Anakin Skywalker in Attack of the Clones (2002) and Revenge of the Sith (2005), noted that when filming light saber battles with Ewan McGregor (Obi-Wan Kenobi), he could not resist vocally making the sound effects associated with these weapons.
This is a good illustration of how iconic the sound effects of Star Wars have become. As Burtt noted above, he was stimulated by visuals to create the sound effects of the light sabers, though he was also inspired by the motor on a projector in the Department of Cinema at USC. As Todd Longwell pointed out in Variety, the projector hum was combined with a microphone passed in front of an old TV to create the sound. (It’s worth noting that the sounds of weapons were some of the first sound effects created in aural media, as was the case with Wallenstein, the first drama on German radio, in 1924, which featured clanging swords.)
If Burtt gave personality to robots through their aural communications, he created an innovative sound palette for far more than the light sabers in Star Wars. In modifying and layering found sounds to create sounds corresponding to every aspect of the film world—from laser blasts (the sound of a hammer on an antenna tower guy wire) to the Imperial Walkers from Empire Strikes Back (modifying the sound of a machinist’s punch press combined with the sounds of bicycle chains being dropped on concrete)—he worked as meticulously as a (visual) designer to establish cohesion and impact.
Sergi argues that the sound effects in Star Wars can give subtle clues about the objects with which they are associated. The sound of Imperial TIE fighters, which “roar” as they hurtle through space, was made from elephant bellows, and the deep and rumbling sound made by the Death Star is achieved through active use of sub-frequencies. Meanwhile, “the rebel X-wing and Y-wing fighters attacking the Death Star, though small, emit a wider range of frequencies, ranging from the high to the low (piloted as they are by men of different ages and experience).” One could argue that even here, Burtt has matched personality to machine. The varied sounds of the Millennium Falcon (jumping into hyperspace, hyperdrive malfunction), created by Burtt by processing sounds made by existing airplanes (along with some groaning water pipes and a dentist’s drill), give it, in the words of Sergi, a much more “grown-up” sound than Luke’s X-Wing fighter or Princess Leia’s ship, the Tantive IV. Given that, like its pilot Han Solo, the Falcon is weathered and experienced, and Luke and Leia are comparatively young and ingenuous, this sonic shorthand makes sense.
Millions of voices
Michel Chion argues that film has tended to be verbocentric, that is, that film soundtracks are produced around the assumption that dialogue, and indeed the sense of the dialogue rather than the sound, should be paramount and most easily heard by viewers. Star Wars contradicts this convention in many ways, beginning with the way it uses non-English communication forms, not only the droid languages discussed above but also its plethora of languages for various denizens of the galaxy. For example, Cavelos points out that Wookiees “have rather inexpressive faces yet reveal emotion through voice and body language.”
While the 1978 Star Wars Holiday Special may have many sins laid at its door, among them must surely be that the only Wookiee who actually sounds like a Wookiee is Chewbacca. His putative family sound more like tauntauns. Such a small detail can be quite jarring in a universe as sonically invested as Star Wars.
While many of the lines in Star Wars are eminently quotable, the vocal performances have perhaps received less attention than they deserve. As Starr A. Marcello notes, vocal performance can be extremely powerful, capitalizing on the “unique timbre and materiality that belong to a particular voice.” For example, while Lucas originally wanted Japanese actor Toshiro Mifune to play Obi-Wan, Alec Guinness’ patrician Standard Neutral English accent clearly became an important part of the character: when Scottish actor Ewan McGregor was cast to play the younger version of Obi-Wan, he began voice lessons to reproduce Guinness’ voice. Ian McDiarmid (also Scottish), primarily a Shakespearean stage actor, was cast as arch-enemy the Emperor in Return of the Jedi, presumably on the quality of his vocal performance, and as such has portrayed the character in everything from Revenge of the Sith to Angry Birds Star Wars II.
Sergi argues that Harrison Ford as Han Solo performs in a lower pitch but an unstable meter, a characterization explored in the radio dramatizations of A New Hope, Empire Strikes Back, and Return of the Jedi, when Perry King stands in for Ford. By contrast, Mark Hamill voices Luke in two of the radio dramatizations, refining and intensifying his film performances. Sergi argues that Hamill’s voice emphasizes youth: staccato, interrupting/interrupted, high pitch.
I would add warmth of tone to this list, perhaps illustrated nowhere better than in Hamill’s performance in episode 1 – “A Wind to Shake the Stars” of the radio dramatization, which depicts much of Luke’s story that never made it onscreen, from Luke’s interaction with his friends in Beggar’s Canyon to a zany remark to a droid (“I know you don’t know, you maniac!”). It will come as no surprise to the listeners of the radio dramatization that Hamill would find acclaim in voice work (receiving multiple nominations and awards). In the cinematic version, Hamill’s performance is perhaps most gripping during the climactic scene in Empire Strikes Back when Darth Vader reveals the truth of Luke’s parentage.
According to Hamill, “what he was hearing from Vader that day were the words, ‘You don’t know the truth: Obi-Wan killed your father.’ Vader’s real dialogue would be recorded in postproduction under conditions easier to control.” More on that (and Vader) shortly.
It has been noted that Carrie Fisher (who was only nineteen when A New Hope was filmed) uses an accent that wavers between Standard North American and Standard Neutral English. Fisher has explained this as her emulating experienced British star of stage and screen Peter Cushing (playing Grand Moff Tarkin).
However, the accents of Star Wars have remained a contentious if little commented upon topic, with most (if not all) Imperial staff from A New Hope onwards speaking Standard Neutral English (see the exception, stormtroopers, further on). In production terms, naturally, this has a simple explanation. In story terms, however, fans have advanced theories tying accent to the galaxy’s geography, with an allegorical impetus drawn from the American Revolution. George Lucas, after all, is American, so the heroic Rebels echo the American colonists who threw off British rule in the 18th century, aided in part by their geographical remove from centers of Imperial rule like London. Therefore, goes this argument, in Star Wars, worlds like Coruscant are peopled by those speaking Standard Neutral English, while those in the Outer Rim (the majority of our heroes) speak varieties of Standard North American. Star Wars thus both advances and reinforces the stereotype that the Brits are evil.
It is perhaps appropriate, then, that James Earl Jones’ performance as Darth Vader has been noted for sounding more British than American, though Sergi emphasizes musicality rather than accent, the vocal quality over verbocentricity:
The end product is a fascinating mixture of two opposite aspects: an extremely captivating, operatic quality (especially the melodic meter with which he delivers the lines) and an evil and cold means of destruction (achieved mainly through echoing and distancing the voice).
It is worth noting that Lucas originally wanted Orson Welles, perhaps the most famous radio voice of all time, to portray Vader, yet feared that Welles would be too recognizable. That a voice other than that of the actor playing Vader’s body needed to emanate from behind the mask was evident from British bodybuilder David Prowse’s “thick West Country brogue.” The effect is parodied in the substitution of a Cockney accent from Snatch (2000) for Jones’ majestic tones.
A Newsweek review of Jones in the 1967 play The Great White Hope argued that Jones had honed his craft through “Fourteen years of good hard acting work, including more Shakespeare than most British actors attempt.” Sergi has characterized Jones’ voice as the most famous in Hollywood, in part because in addition to his prolific theatre back catalogue, Jones took bit parts and voiced commercials—“commercials can be very exciting,” he noted. The two competing forces combined to create a memorable performance, though as others have noted, Jones is the African-American voice to the white actors who portrayed Anakin Skywalker (Sebastian Shaw and Hayden Christensen), one British, one Canadian.
Brock Peters, also African American and known for his deep voice, played Vader in the radio dramatizations. Jennifer Stoever notes that in America, the sonic color line “historically contoured, identified, and marked mismatches between ‘sounding white’ and ‘looking black’” (231), whereas the Vader performances “sound black” and “look white.” Andrew Howe in his chapter “Star Wars in Black and White” notes the “tension between black outer visage and white interior identity [. . .] Blackness is thus constructed as a mask of evil that can be both acquired and discarded.”
Like many of the most important aspects of Star Wars, Vader’s sonic presence is multi-layered, consisting in part of Jones’ voice manipulated by Burtt, as well as the sonic indicator of his presence, his mechanized breathing:
The concept for the sound of Darth Vader came about from the first film, and the script described him as some kind of a strange dark being who is in some kind of life support system. That he was breathing strange, that maybe you heard the sounds of mechanics or motors, he might be part robot, he might be part human, we really didn’t know. [. . .] He was almost like some robot in some sense and he made so much noise that we had to sort of cut back on that concept.
On radio, a character cannot be said to exist unless we hear from him or her; whether listening to the radio dramatizations or watching Star Wars with our eyes closed, we can always sense the presence of Vader by the sound of his breathing. As Kevin L. Ferguson asks, “Is it accidental, then, that cinematic villains, troubling in their behaviour, are also often troubled in their breathing?” As Kris Jacobs notes, “Darth Vader’s mechanized breathing can’t be written down”—it exists purely in a sonic state.
Your eyes can deceive you; don’t trust them
Music is the final element of Sergi’s list of what makes up the soundtrack, and John Williams’ enduring musical score is the most obvious of Star Wars’ sonic elements. Unlike “classical era” Hollywood film composers like Max Steiner or Erich Korngold who, according to Kathryn Kalinak, “entered the studio ranks with a fair amount of prestige and its attendant power,” Williams entered as a contract musician working with “the then giants of the film industry,” moving into a “late-romantic idiom” that has come to characterize his work. This coincided with what Lucas envisioned for Star Wars, influenced as it was by 1930s radio serial culture.
Williams’ emotionally-pitched music has many elements that Kalinak argues link him with the classical score model: unity; the use of music in the creation of mood and character; the privileging of music in moments of spectacle; and the way music and dialogue are carefully mixed. This effect is exemplified in the opening of A New Hope, the “Main Title” or, as Dr Lehman has it (see below), “Main/Luke A.” As Sergi notes, “the musical score does not simply fade out to allow the effects in; it is, rather literally, blasted away by an explosion (the only sound clearly indicated in the screenplay).”
As Kalinak points out, it was common in the era of Steiner and Korngold to score music for roughly three-quarters of a film, whereas by the 1970s, it was more likely to be one-quarter. “Empire runs 127 minutes, and Williams initially marked 117 minutes of it for musical accompaniment”; while he used three themes from A New Hope, “the vast majority of music in The Empire Strikes Back was scored specifically for the film.”
Perhaps Williams’ most effective technique is the use of leitmotifs, derived from the work of Richard Wagner, and more complex than a simple repetition of themes. Within leitmotifs, we hear the blending of denotative and connotative associations: as Matthew Bribitzer-Stull notes, they are “not just a musical labelling of people and things” but also, as Thomas S. Grey puts it, “a matter of musical memory, of recalling things dimly remembered and seeing what sense we can make of them in a new context.” Bribitzer-Stull also notes the complexity of Williams’ leitmotif use: tonal music accompanies both protagonists and antagonists, resisting the then-cliché of reserving atonal music for antagonists. In Williams’ score, atonal music instead accompanies exotic landscapes and fight or action scenes. As Jonathan Broxton explains,
That’s how it works. It’s how the films maintain musical consistency, it’s how characters’ musical identities are established, and it offers the composer an opportunity to create interesting contrapuntal variations on existing ideas, when they are placed in new situations, or face off against new opponents.
Within the leitmotifs, Williams provides variations and disruptions, such as the harmonic corruption when “the melody remains largely the same, but its harmonization becomes dissonant.” One of the most haunting ways in which Williams alters and reworks his leitmotifs is what Bribitzer-Stull calls “change of texture.”
Frank Lehman of Harvard has examined Williams’ leitmotifs in detail, cataloguing them based on a variety of meticulous criteria. He has noted, for example, that some leitmotifs are used often, like “Rebel Fanfare,” which has appeared in Revenge of the Sith, A New Hope, The Empire Strikes Back, The Force Awakens, The Last Jedi, and Rogue One. Lehman particularly admires Williams’ skill and restraint, though, in reserving particular leitmotifs for very special occasions: “Luke & Leia,” for example, is first heard in Return of the Jedi (both film and radio dramatization) and not again until The Last Jedi.
While Williams’ use of leitmotifs is successful and evocative, not all of Star Wars’ music consists of leitmotifs, as Lehman points out; single, memorable pieces of music not heard elsewhere are still startlingly effective.
In the upcoming Solo, John Williams will contribute a new leitmotif for Han Solo, while all other material will be written and adapted by John Powell. Williams has said in an interview that “I don’t make a particular distinction between ‘high art’ and ‘low art.’ Music is there for everybody. It’s a river we can all put our cups into, and drink it, and be sustained by it.” The sounds of Star Wars have sustained it—and us—and perfectly illustrate George Lucas’ investment in the equal power of sound to vision in the cinematic experience. I, for one, am looking forward to what new sonic gems may be unleashed as the saga continues.
In the first week of June, Leslie McMurtry will return with Episode II, focusing on shifts in sound in the newer films and multi-media forms of Star Wars, including radio and cartoons–and, if we are lucky, her take on Solo!
Leslie McMurtry has a PhD in English (radio drama) and an MA in Creative and Media Writing from Swansea University. Her work on audio drama has been published in The Journal of Popular Culture, The Journal of American Studies in Turkey, and Rádio-Leituras. Her radio drama The Mesmerist was produced by Camino Real Productions in 2010, and she writes about audio drama at It’s Great to Be a Radio Maniac.
REWIND!…If you liked this post, you may also dig:
Speaking American–Leslie McMurtry
Out of Sync: Gendered Location Sound Work in Bollywood–Priya Jaikumar