
Episode II: The Greatest Sound in the Galaxy: Sound and Star Wars

In this galaxy, two weeks ago, Leslie McMurtry published Episode I, a discussion of sound in the Star Wars films. Binge read it here! In today’s post, she listens to the farther reaches of the Star Wars galaxy–its multi-media forms including radio and cartoons–as well as the newest installment, Solo!

Yeah, I speak it a little.

For the first time in the onscreen history of Star Wars, a human speaks Wookiee and needs subtitles to do so.  There is more significance to this moment in Solo (2018) than might first seem apparent.  To understand why, we need to think back to the Ewoks, the small furry creatures from Return of the Jedi.  They have polarized fans, and their language feeds into potential ethical sonic/linguistic dilemmas in Star Wars.  As Ben Burtt explains,

With a new language, the most important goal is to create emotional clarity. People spend all of their lives learning to identify voices. You become an expert at that, and it is somewhat impossible to electronically process the human characteristic and retain the necessary emotion. To fool the audience into believing this is a real character as the basis of the sound, although you may sprinkle other things in there. It varies from character to character.

The language of the Ewoks, however, was “rendered almost entirely from Tibetan.”   As Stephen Davis argues, Tibetan and other non-European languages used in Star Wars “were sometimes distorted” and “not used to convey meaningful content.”  This, says Davis, seemingly suggests “that these languages were never meant to be intelligible to moviegoers; rather, they were used to create social distance between strange characters and the anticipated audience.”

The process was actually reversed when Star Wars was translated into Diné (Navajo) for its premiere in 2013 at Window Rock, Arizona, in front of an audience of hundreds.  In Star Wars, a plethora of languages have been spoken by a variety of species, but it has been rare for human characters to speak them.  The potential distancing at work becomes much less pronounced during this moment in Solo.

The first part of this article mainly focused on the “original trilogy.”  This section turns to Star Wars in the digital era and in other media.  The saga had been envisioned since 1978 as a cycle of nine films, and A New Hope, The Empire Strikes Back, and Return of the Jedi were given “Special Edition” makeovers in 1997 as Star Wars entered the digital era.  These re-issues remained committed to the soundworld of the originals: Ben Burtt left “virtually untouched” such key elements as the sounds of Darth Vader, Artoo, Threepio, and the TIE fighters, while the Special Edition required the creation of Huttese dialogue for Jabba the Hutt and sound effects for his movement in A New Hope.

Kinda handy to have a storyteller who makes his own sound effects

As a child, Ben Burtt loved listening to his grandfather’s radio, tuning between stations to hear the sounds in between, the beeps, whistles, and static.  “There’s something about that I find opens my mind,” he said.  While the radio links to George Lucas and Star Wars have already been comprehensively explored, what about Star Wars on the radio?

Since at least 1925, when the BBC began its long-standing series of adaptations, The Classic Serial, adaptations from media like books and stage plays have been a mainstay of radio content.  Though film-to-radio adaptation was once commonplace, the practice has become much rarer.  In 1981, with organised public radio like NPR (National Public Radio) still in its infancy in the US, drama played a much smaller part on American airwaves than it did on the UK’s public service broadcaster, the BBC—indeed, drama was more likely to be found on nostalgic commercial throwbacks like The CBS Radio Mystery Theater.  Nevertheless, when approached by Richard Toscan of USC, John Houseman, and Frank Mankiewicz, Lucasfilm quickly sold the adaptation rights to NPR for $1, including, crucially, use of music and special effects.  The BBC also agreed to co-produce.

Why did George Lucas sell the radio rights to Star Wars for $1 a pop?  Clearly the involvement of his alma mater USC was a factor; nevertheless, as previously argued, Lucas was invested in radio culture, not just of the 1930s serial type that was mirrored in action-adventure-science fiction films of that era, but also the free-wheeling intimacy of radio hosts such as Wolfman Jack and Bob “The Emperor” Hudson, a Burbank DJ and the subject of one of Lucas’ films.  The dramatization’s length (six-and-a-half hours) de-compressed A New Hope’s story, “meaning that the characters could be treated in more depth and the story told in more detail,” as noted by Frederica Kushner, creating character-developing moments in transmedia long before the digital age (including completely new sequences for Luke and Leia in episodes 1 and 2 of the A New Hope dramatization and a scene in which Luke constructs a new light saber as a prologue to Return of the Jedi).  NPR’s listening audience doubled during the broadcast of the first adaptation in 1981.

In the era of classic radio serials, rural listeners often used film-to-radio adaptations as a way of keeping up with movie culture; as Malcolm Usrey of the Texas Panhandle recalled, “[o]nly a serious emergency kept us from hearing The Lux Radio Theater.”  In 1981, there was no way for viewers who wanted to re-watch Star Wars to do so, as it was long gone from cinemas.  The radio adaptations would have offered the next best thing.  A more “fill-in” approach was beginning to manifest by 1996, when Return of the Jedi was finally adapted (in a much compressed form), by which point the original trilogy was available in its entirety on VHS.

The radio adaptations, nevertheless, remain a fascinating meditation on Star Wars’ transmedia and its use of sound.  The response to sound in Star Wars functions perhaps similarly to Dermot Rattigan’s “macro-micro scale” in radio, the intimacy created when broadcasters address an audience of millions but seem to speak individually to YOU.

Sir, my audio sensors no longer . . .

As celebrated movie critic Gene Siskel wrote in his review of Return of the Jedi, “I can’t think of another recent picture whose sound I enjoyed so much. [. . .] it’s almost flawless.  [ . . .] Three is not enough.”  Indeed, three was not enough, and in 1999, Burtt became the sound editor on The Phantom Menace, the most expensive independent film in history, the first of a new trilogy.  The Phantom Menace made full use (perhaps, some would suggest, over-use) of digital animation technologies and brought voices in the shape of Brian Blessed and Andy Secombe to alien creatures.

Williams proceeded to weave his leitmotifs retrospectively into this trilogy while also introducing new themes, for example “Across the Stars,” “a love theme that swells with the fervent romance shared by Anakin and Padmé, and which subsequently plays over the end credits” (and which is only heard in Attack of the Clones and Revenge of the Sith).  To return to Bribitzer-Stull’s catalogue of Williams’ use of leitmotifs, thematic irony is prominent in Revenge of the Sith when Padmé says she is pregnant:

Bribitzer-Stull presents this as “a clear case of romantic irony, since the audience knows what horrible fate lies in store for the two characters, though the characters themselves do not.”  Another new composition was “Duel of the Fates” with three iterations of its leitmotif heard throughout The Phantom Menace, Attack of the Clones, and Revenge of the Sith. 

Echoes of “Duel of the Fates” are heard in Solo, and those who have seen the film will understand why.

Civilized words can be our greatest weapon!

Long before Disney acquired Lucasfilm, Star Wars’ transmedia success was profound.  On television, Star Wars lived on in the 1980s in the animated series Ewoks (1985-86) and Droids (1985).  Droids followed the further adventures of R2-D2 and C-3PO, set before A New Hope.  With voice talent lent (again) by Anthony Daniels as Threepio and Artoo “as himself,” the short-lived animated series featured an opening theme tune by The Police’s Stewart Copeland.  The final episode, an hour-long special, “The Great Heep,” was based on a screenplay by Ben Burtt.

The music, composed and performed by Patricia Cullen (who scored Ewoks and The Care Bears), flirts with “fantasy”/“medieval” music as well as imitating John Williams’ late-Romantic idiom.  The story features humanoid characters speaking non-English languages and creatures that sound like tauntauns.  Artoo and Threepio also interact with other droids in scenes that lay the foundation for Burtt’s later robot project, WALL-E (2008).

As Daniels remarked, “That was my favorite episode.  Ben has a particular affection for me as C-3PO and has a natural empathy toward R2-D2.” (Ewoks and Droids were followed on the small screen by Clone Wars (Cartoon Network, 2008-15) and Rebels (Disney Channel, 2014-18), whose sound worlds may be investigated in future installments.)

Elsewhere, sound was particularly important to Star Wars in video game format.  According to Felan Parker, since the release of Star Wars:  The Empire Strikes Back for Atari, video games have had prominence within the Star Wars storyworld; as Jason Scott puts it, “Star Wars has been repurposed for each new technology, frequently as a flagship title to help sell hardware.”  In the games, the Force becomes visualized and sonified.  In Star Wars:  Return of the Jedi (1994), it has, in Parker’s words, a “tinkling sound,” while in The Force Unleashed (2008), it sounds like “gushing wind.”  As Christopher Coleman puts it, the Star Wars video games were perhaps more adventurous than the films and other media in breaking away from the John Williams score and innovating musically.

Coleman argues that the zenith of this innovation was The Force Unleashed (for Xbox 360, PlayStation, Nintendo Wii, or DS), set between Revenge of the Sith and A New Hope and centered on Darth Vader’s secret apprentice, Starkiller.  This gave players the opportunity to be “visually stunned” but also musically impressed, with a reactive musical environment that bridged the “significant stylistic gap between the two Star Wars trilogies.”

In 2012, Disney acquired Lucasfilm for $4 billion, starting a new trilogy cycle.  Furthermore, Disney would begin making the Anthology films, “churning out” a new film “every two or three years indefinitely, providing the anchor for brand extensions worldwide,” of which Solo is the second (with Rogue One, 2016, being the first). “The cultural box-office explosion” from Black Panther has reportedly carried into Solo, with “fascinating” African-American actor Donald Glover as Lando Calrissian inheriting the cape from Billy Dee Williams.

However, Star Wars has never been able to escape, in Andrew Howe’s words, “the gravitational pull of contemporary racial politics”; the original trilogy has suffered from a notable absence of human racial minorities.  Howe argues, for example, that Lucas withholds from the Tusken Raiders any forms of humanizing speech, in turn suggesting that human desert races like the Bedouins share the Tusken Raiders’ brutishness. “Perhaps,” Howe posits, “Lucas is suggesting that it is only in areas of lax governmental control that racial minorities can exist unmodified by race-based expectations.”

More infamous, perhaps, is Ahmed Best’s performance in The Phantom Menace as Jar-Jar Binks.  Patricia Williams wrote scathingly of Jar-Jar’s “mush of West African, Caribbean, and African-American linguistic styles” in The Nation. The perception is that “Jar Jar was depicted in broad, stereotypical terms as the lazy Jamaican,” if not more pejoratively.  Silas Carson played Nute Gunray (The Phantom Menace, Attack of the Clones) as a Transylvanian, though audiences interpreted the Neimoidians as having East Asian accents which, combined with the qualities of sadism, power, and cowardice, caused some concern over stereotyped portrayals.   However, as Howe points out, the main villains in the prequels are largely coded as white.

Howe argues that Lucas (and Lucasfilm, and by extension, Disney) was made exceedingly aware of Star Wars’ potentially unsatisfying track record as regards race and ethnicity, which he believes has been addressed, with varying degrees of success, in the prequels.  Perhaps more successful was the casting of Mexican actor Diego Luna, with his pronounced accent, as the heroic rebel Cassian Andor in Rogue One.  As Samantha Schmidt puts it, “There was no particular reason Cassian was Mexican, or why he shouldn’t be. He just was. [ . . .] It was a rare example of a time when a Latino actor has been cast in a blockbuster film not simply as a token Latino character but as a leading role with no obvious ties to Latino culture,” though arguably the same had occurred 14 years earlier with the casting of Jimmy Smits as Bail Organa (Smits is half Puerto Rican).

Star Wars in the digital era also revisited the divide between Standard North American and Standard Neutral English, particularly in the characters of Rey and Finn.  Rey, the hero whose journey across The Force Awakens, The Last Jedi, and Episode IX has mirrored Luke Skywalker’s, is played by British actor Daisy Ridley, who has maintained Standard Neutral English throughout her performance.  Finn, former stormtrooper FN-2187 and Rey’s friend and potential love interest, is also played by a British actor, John Boyega, who has swapped his accent for a Standard North American one.  Boyega’s claims that director Rian Johnson felt Boyega’s accent just didn’t work are ironic, considering that Lucas originally preferred an American accent for Threepio (voiced by Anthony Daniels in Standard Neutral English).  American-ness and Britishness were clearly major components of the latest grouping of Star Wars films:

Among other things, this resulted in rampant speculation about Rey’s parentage, her “Core Worlds”/Coruscanti accent making it clear that she wasn’t the long-lost daughter of Han Solo and Princess Leia.  By contrast, Kylo Ren, Han and Leia’s actual son (and Rey’s antagonist/potential love interest), played by Adam Driver, does not emulate his code-swapping grandfather Anakin Skywalker/Darth Vader and speaks Standard North American (though with the same bass baritone as James Earl Jones).

Finn’s accent and role feed into a long-standing argument regarding the stormtroopers, who spoke in the original trilogy with Standard North American accents (likely because these characters were dubbed by Americans, like Bill Wookey), whereas, as previously discussed, the management structure of the Empire emulated its namesake Emperor and spoke Standard Neutral English.  In the prequels, Jango Fett, the prototype for the stormtrooper clone army, spoke with a New Zealand accent, predicated on the Maori ancestry of actor Temuera Morrison, a convention carried through in Clone Wars, which is set between Attack of the Clones and Revenge of the Sith.

To return to Ridley and Boyega: while both were born in London, the “suitability” of their respective accents says a lot about the endurance of class distinctions in British culture.  Ridley’s suitably “noble-sounding” accent, closely matching Standard Neutral English, differs greatly from Boyega’s Peckham accent.  You can take the stormtrooper out of the Empire, but you can’t take the American-ness out of the stormtrooper, it seems.

Ridley, with a background in music performance, is a mezzo-soprano, and her speaking voice is somewhat low in pitch, in contrast to the stereotypical feminine voice, higher-pitched and lilting.  This vocal quality has not garnered the attention that Carrie Fisher’s voice has in her last role, in The Last Jedi, in which she was pilloried by some elements of fandom:

her voice is kerazy! It has that “I’ve been through some serious drugs and alcohol” tone, which, unless she can really play it down, would be pretty distracting for a “Queen”. It doesn’t sound like an easy voice to get away from….throaty, broken and borderline insane.

As Ros Jennings and Eva Krainitzki note, ageism is part of contemporary society.  They further argue that screen media usually establish a binary between “ageing as decline” and “successful ageing.”  If an older woman on screen does not conform to the model of the “graceful” ager or the “sexy oldie,” she is effectively erased and made invisible; this is clearly not the case with the visible and powerful General Leia Organa of The Force Awakens and The Last Jedi.  Nor has Fisher’s voice been erased.  The comments made by (usually male) critics regarding Fisher’s “kerazy” voice, as evidenced above, clearly situate her in, as Melanie Williams puts it, “the middle of both misogyny and gerontophobia.”  On the dichotomy between “ageing well” and “letting oneself go,” Fisher’s voice is perceived as the latter and therefore not evidence of a “queenly” persona.

To break through this binary, it might be more instructive to look at the example of Vanessa Redgrave, who, like her British compatriots Dame Helen Mirren and Dame Judi Dench, exemplifies (in Williams’ words) female “post-middle-age life [being] as equally dynamic and fulfilling as the years before.”  Redgrave’s distinctive “husky” voice (“powerful” despite her life-long smoking) more closely resembles Fisher’s, also deepened in register (though whether Fisher’s self-admitted addictions to cocaine and prescription drugs have had any bearing on the timbre, intensity, and pitch of her voice is, arguably, of less relevance).  While, as Williams would argue, Mirren and Dench in their public personas have attempted to “transcend age by ignoring it,” Fisher as Leia in The Last Jedi seems to adhere more to a model Jennings and Krainitzki have applied to Redgrave in Call the Midwife (BBC, 2012-), where her voice is of paramount importance—a holistic understanding of female ageing, neither hypersexualized nor invisible.

Fisher’s death before the release of The Last Jedi necessitated a good deal of work cutting together Leia’s dialogue in order to finish the film.  Star Wars is no stranger to such digital wizardry, having already digitally inserted Fisher’s face onto actress Ingvild Deila in 2016 for a scene in Rogue One and having resurrected Peter Cushing in the same film.  Viewers were seemingly so struck with the visual spectacle of Cushing, portrayed on set by another actor, Guy Henry, with FACS (facial action coding system) superimposed, that there was little comment on the non-Cushing vocal performance—though a number of fans felt they could tell the difference.  Manuel Nogueira argued,

The only thing that put me off a little was the voice – the way the words were pronounced was perfect, but the tone was not. Peter Cushing had a beautifull [sic] unique voice and I suppose it’s difficult for someone to imitate it exactly.

While commentators argued about the ethics of these uncanny resurrections, the voices for these hybrid creations seemed to fly under the radar.  Fisher’s voice was original, having been edited together from her dialogue in previous Star Wars films.

I’m such a happy Chewbacca!!

John Williams has continued to be involved in scoring the newest Star Wars films, to greater and lesser degrees.  Composer Michael Giacchino had only four weeks to complete the score to Rogue One as he was brought in at the last minute.  Giacchino, as the first person to compose for a Star Wars film other than John Williams, faced the difficulty of fitting his musical style within the existing Williams leitmotif structure while contributing something new.  Broxton notes that within Giacchino’s score are allusions to the Battle of Hoth music from The Empire Strikes Back throughout the sequence “AT-ACT Assault” in Rogue One, including the use of xylophones and pianos, while the rhythm from the “Rebel Blockade Runner” sequence of A New Hope is heard in one of the new themes Giacchino composed, “Hope”:

Williams himself was back for The Force Awakens (2015), about which he noted, “It would be like writing an opera, and then writing six more based on the same kind of material and the same story . . . over the course of 40 years.”  As in previous movies in the series, the ratio of music to screen time in The Force Awakens is high, though the score makes relatively little reference to previous leitmotifs (only seven minutes’ worth).  The Last Jedi works differently, introducing, as Broxton points out, only two significant new leitmotifs.  Nevertheless, Broxton argues, “As a result, The Last Jedi manages to be warmly nostalgic, emotionally powerful, and daring and thrilling, all at the same time, and often in the same cue.”  For example, “Battle of the Heroes” returns in The Last Jedi, though it was last heard in Revenge of the Sith.

Although Solo makes sparing use of Williams’ leitmotifs (for example, “Rebel Fanfare” in an exhilarating sequence), John Powell’s score seemingly owes more to the shadings of his mentor Hans Zimmer or of Howard Shore.  Bribitzer-Stull considered Shore’s leitmotif structure for The Lord of the Rings films the most complex in film history.  Solo also gives us the first (to my knowledge) onscreen diegetic use of one of Williams’ themes: the Imperial March, used as part of a recruitment video on Corellia, in which a voice in Standard Neutral English tells potential applicants to join up and see the universe.

However, quite a different diegetic Williams music moment had already been heard, in Ep. 1 – “A Wind to Shake the Stars” of the 1981 Star Wars radio dramatization.  Curiously enough, it was also part of a recruitment video, though, this time, the music was “Main/Luke A.”  The first non-music and non-narration sounds heard in the radio adaptation are, in fact, Luke humming along to this Imperial Space Academy theme tune, which he is playing repeatedly in the techdome before being interrupted by his frenemies Windy and Deke.  (For an idea of what it might have looked like, and indeed, a notion of how integral the score of Star Wars is to the story—and how odd it feels when it’s absent—take a look at one of the deleted scenes from A New Hope:

Beyond musical motifs, sound design in the newest films builds heavily on previously established conventions.  Solo, the first Star Wars film not to feature Artoo and Threepio, gives us our first female-voiced droid, L-3 (voiced by Phoebe Waller-Bridge).  L-3 speaks with a Standard Neutral English accent, which would lend credence to our “Core Worlds”/Coruscanti accent hypothesis.  In other respects, however, the film muddies the waters considerably regarding consistency of accents.  Soldier-of-fortune Val (Thandie Newton) speaks Standard Neutral English, though she could easily have been raised in the Core Worlds and fallen from grace.  But characters like Qi’ra (Emilia Clarke), who grew up a street orphan on Corellia, also speak Standard Neutral English, which hardly fits the hypothesis that it denotes the Core Worlds linguistic training that the Empire (and the First Order) value.  Surely Beckett’s (Woody Harrelson) disguise among the Imperials should have fooled no one, given he is the only officer there with an American accent.  And some characters don’t really seem to know what accent to put on, such as Paul Bettany’s Dryden Vos, who appears to be speaking Standard North American with some difficulty.

***

In writing this article, I have realized the emotional impact of sound in Star Wars not just generally, but upon myself.  The most intimate sonic moment for me is the Force/Obi-Wan/All-Purpose leitmotif, also known by the visual scene in which it first appears (or, in Bribitzer-Stull’s terms, the “prototypic statement”): “Binary Sunset” in A New Hope.  (If you watch the films in chronological rather than release order, however, you will not fail to recognize it from The Phantom Menace onwards.)  At this moment, according to Bribitzer-Stull, “we have no idea of what this musical signifier actually signifies, but we know it means something important.”  “Rey A/Primary” (from The Force Awakens) is linked via chords to “Binary Sunset.”  I would argue the frequent use of “Binary Sunset” titillates the film-goer in scenes like the one in The Last Jedi in which—apparently astrally linked across space by Supreme Leader Snoke—Rey and Kylo touch hands.  By invoking “Binary Sunset,” does such a moment suggest that the two will bring long-desired balance to the Force, given this leitmotif’s frequent and long-standing association with the Force?  Or does it have another meaning?  There is another musical echo when Rey and Kylo work together rather than against each other in that movie—a short reference to “Duel of the Fates” from The Phantom Menace:

However, Broxton best describes the emotional power of “Binary Sunset” in The Last Jedi by linking it cyclically with the title of the film:

as Luke watches Ahch-To’s twin suns rise in his final moments before he becomes one with the Force, [this] is a heartbreaking mirror of the legendary ‘Binary Sunset’ scene from 40 years ago, and allows us to reflect on the life of that young farm boy from Tatooine, dreaming of a life of adventure.

Interviewed in 2018, Ben Burtt noted that, “despite the digital age, I still emphasise field recording real, physical objects.”  As has been previously argued, Burtt’s commitment to creative sound design which is still rooted in the experience of the physical world helped locate the fantastical elements of Star Wars.  Coupled with George Lucas’ keen sound awareness and vision (or sonic vision), Star Wars’ sound has come to be an integral part of its ontology, whether in audiovisual media or its countless other incarnations (ahead of the release of Solo, children could clamour for a Nerf Blaster, Lightsaber, and Millennium Falcon Playset, complete with appropriate sound effects).  No longer is it necessary to create one’s own sound effects during play, and one can roleplay as Chewbacca just as easily as Han Solo.  According to Alexis C. Madrigal of The Atlantic, “Humans being humans, once Chewbacca’s voice had been manufactured by Burtt, people began to imitate it with their own vocal chords.”  And while “Chewbacca Mask Lady” (Candace Payne) seems to revel as much in her appearance as Chewie as in the Wookiee sounds her mask makes, it’s surely through the sound of her exuberant, irrepressible laughter that we enjoy the YouTube video that has currently received more than six million views.

May the Force (and all its accompanying sounds) be with you.


 Leslie McMurtry has a PhD in English (radio drama) and an MA in Creative and Media Writing from Swansea University.  Her work on audio drama has been published in The Journal of Popular Culture, The Journal of American Studies in Turkey, and Rádio-Leituras.  Her radio drama The Mesmerist was produced by Camino Real Productions in 2010, and she writes about audio drama at It’s Great to Be a Radio Maniac.

REWIND!…If you liked this post, you may also dig:

The Magical Post-Horn: A Trip to the BBC Archive Centre in Perivale–Leslie McMurtry

Speaking American–Leslie McMurtry

Out of Sync: Gendered Location Sound Work in Bollywood–Priya Jaikumar  

Episode I: The Greatest Sound in the Galaxy: Sound and Star Wars

Ever tried listening to a Star Wars movie without the sound? –IGN, 1999
Sound is 50 percent of the motion-picture experience. –George Lucas

In the radio dramatization of Return of the Jedi (1996), a hibernation sickness-blinded Han Solo can tell bounty hunter Boba Fett is in the same room with him just by smelling him.  Later this month, Solo:  A Star Wars Story (part of the Anthology films, and as you might expect from the title, a prequel to Han Solo’s first appearance in Star Wars:  A New Hope) may be able to shed some light on how Han developed this particular skill.

Later in that dramatization, we have to presume Han is able to accurately shoot a blaster blind by hearing alone.  Appropriately, then, sound is integral to Star Wars.  For every iconic image in the franchise—from R2-D2 to Chewbacca to Darth Vader to X-Wing and TIE fighters to the Millennium Falcon and the light sabers—there is a correspondingly iconic sound.  In musical terms, too, the franchise is exemplary. John Williams, Star Wars’ composer, won the most awards of his career for his Star Wars (1977) score, including an Oscar, a Golden Globe, a BAFTA, and three Grammys.  Not to mention Star Wars’ equally iconic diegetic music, such as the Mos Eisley Cantina band (officially known as Figrin D’an and the Modal Nodes).

Without sound, there would be no Star Wars.  How else could Charles Ross’ One Man Star Wars Trilogy function?  In One Man Star Wars, Ross performs all the voices, music, and sound effects himself.  He needs no quick costume changes; indeed, in his rapid-fire, verbatim treatment, it is sound (along with a few gestures) that he uses to distinguish between characters.  His one-man show, in fact, echoes C-3PO’s performance of Star Wars to the Ewoks in Return of the Jedi, a story told in narration and sound effects far more than in any visuals.  “Translate the words, tell the story,” says Luke in the radio dramatization of this scene.  That is what sound does in Star Wars. 

I believe that the general viewing public is aware on a subconscious level of Star Wars’ impressive sound achievements, even if this is not always articulated as such.  As Rick Altman noted in 1992 in his four and a half film fallacies, the ontological fallacy of film—while not unchallenged—began life with André Bazin’s “The Ontology of the Photographic Image” (1960), which argues that film cannot exist without image.  Challenging such an argument elevates not only silent film but also the discipline of film sound generally, so often regarded as an afterthought.  “In virtually all film schools,” Randy Thom wrote in 1999, “sound is taught as if it were simply a tedious and mystifying series of technical operations, a necessary evil on the way to doing the fun stuff.”

Film critic Pauline Kael, writing about Star Wars on its original release, claimed in what Gianluca Sergi terms a “harmful generalization” that its defining characteristic was its “loudness.”  Loud sound does not necessarily equal good sound in the movies, a distinction audiences themselves can sometimes blur.  “High fidelity recordings of gunshots and explosions, and well fabricated alien creature vocalizations” alone do not equal good sound design, as Thom has argued.  On the contrary, Star Wars’ achievements, Sergi posited, married technological invention with an overall sound concept and refined if not defined the work of sound technicians and sound-conscious directors.

Star Wars is so successful aurally because its creator, George Lucas, was invested in sound holistically and cohesively, a commitment that has carried through nearly every iteration of the franchise, and because his original sound designer, Ben Burtt, understood there was an art as well as a science to highly original, aurally “sticky” sounds.  Ontologically, then, Star Wars is a sound-based story, as reflected in the existence of the radio dramatizations (more on them later).  This article traces the historical development of sound not only in the Star Wars films (four decades of them!) but also in other associated media, such as television and video games, and examines aspects of Star Wars’ holistic sound design in detail.

A long time ago, in a galaxy far, far away . . .

As Chris Taylor points out, George Lucas “loved cool sounds and sweeping music and the babble of dialogue more than he cared for dialogue itself.”  In 1974, Lucas was working on Radioland Murders, a screwball comedy thriller set in the fictional 1930s radio station WKGL.  Radio, indeed, had already made a strong impression on Lucas: legendary “border blaster” DJ Wolfman Jack played an integral part in Lucas’ film American Graffiti (1973).  As Marcus Hearn picks up the story, Lucas soon realized that Radioland Murders was going nowhere (the film would eventually be made in 1994).  Lucas then turned his sound-conscious sensibilities in a different direction, toward “The Star Wars,” a project upon which he had been ruminating since his film school days at the University of Southern California.  Retaining creative control and a holistic interest in a defined soundworld were two things Lucas insisted upon during the development of the project that would become Star Wars.  Lucas had worked with his contemporary at USC, sound designer and recordist Walter Murch, on THX 1138 (1971) and American Graffiti, and Murch would go on to provide legendary sound work for The Conversation (1974), The Godfather Part II (1974), and Apocalypse Now (1979).  Murch was unavailable for the new project, so Lucas asked producer Gary Kurtz to visit USC to evaluate emerging talent.

Pursuing a Master’s degree in Film Production at USC was Ben Burtt, whose BA was in physics.  In Burtt, Lucas found a truly innovative approach to film sound, which became the genesis of Star Wars’ sonic invention, providing, in Sergi’s words, “audiences with a new array of aural pleasures.”  Sound is embodied in the narrative of Star Wars.  Not only was Burtt innovative in his meticulous attention to “found sounds” (whereas sound composition for science fiction films had previously relied on electronic sounds), he applied his meticulousness in character terms.  Burtt said that Lucas and Kurtz “just gave me a Nagra recorder and I worked out of my apartment near USC for a year, just going out and collecting sound that might be useful.”

Ben Burtt plays the twang of steel guy wires, which formed the basis of the many blaster sounds (re-creating the moment with Miki Hermann for a documentary). Image by Flickr User: Tom Simpson (CC BY-NC-ND 2.0)

Inherent in this was Burtt’s relationship with sound, in the way he was able to construct the sound of an imaginary object from a visual reference, such as the light saber, described in Lucas’ script and also in concept illustrations by Ralph McQuarrie.  “I could kind of hear the sound in my head of the lightsabers even though it was just a painting of a lightsaber,” he said.  “I could really just sort of hear the sound; maybe somewhere in my subconscious I had seen a lightsaber before.”  Burtt also shared with Lucas a sonic memory of sound from the Golden Age of Radio:  “I said, ‘All my life I’ve wanted to see, let alone work on, a film like this.’ I loved Flash Gordon and other serials, and westerns. I immediately saw the potential of what they wanted to do.”

But sir, nobody worries about upsetting a droid

Burtt has described the story of A New Hope as being told from the point of view of the droids (the robots).  While Lucas was inspired by Kurosawa’s The Hidden Fortress (1958) to create the droids R2-D2 (“Artoo”) and C-3PO (“Threepio”), the robots are patently non-human characters.  Yet it was essential to imbue them with personalities.  There have been cinematic robots since Maria in Metropolis (1927), but Burtt uniquely used sound to convey not only these two robots’ personalities, but many others as well.  As Jeanne Cavelos argues, “Hearing plays a critical role in the functioning of both Threepio and Artoo.  They must understand the orders of their human owners.”  Previous robots had less personality in their voices; for example, Douglas Rain, the voice of HAL in 2001:  A Space Odyssey, spoke each word crisply, with pauses. Threepio is a communications expert, with a human-like voice, provided by British actor (and BBC Radio Drama Repertory Company graduate) Anthony Daniels.  According to Hearn, Burtt felt Daniels should use his own voice, but Lucas was unsure, wanting an American used car salesman voice.  Burtt prevailed, and Threepio became, vocally, “a highly strung, rather neurotic character,” in Daniels’ words, “so I decided to speak in a higher register, at the top of the lungs.”  (Indeed, in the Diné translation of Star Wars [see below], Threepio was voiced by a woman, Geri Hongeva-Camarillo, something that the audience seemed to find hilarious.)

Artoo was altogether a more challenging proposition.  As Cavelos puts it, “Artoo, even without the ability to speak English, manages to convey a clear personality himself, and to express a range of emotions.”  Artoo’s non-speech sounds still convey emotional content.  We know when Artoo is frightened;

when he is curious and friendly;

and when he is being insulting.

(And although subtitled scenes of Artoo are amusing, they are not in the least necessary.)  Artoo’s language was composed and performed by Burtt, derived from the communication of babies:

we started making little vocal sounds between each other to get a feeling for it.  And it dawned on us that the sounds we were making were not actually so bad.  Out of that discussion came the idea that the sounds a baby makes as it learns to walk would be a direction to go; a baby doesn’t form any words, but it can communicate with sounds.

The approach to Artoo’s aural communications became emblematic of all of the sounds made by machines in Star Wars, creating a non-verbal language, as Kris Jacobs calls it, the “exclusive province” of the Star Wars universe.

Powers of observation lie with the mind, Luke, not the eyes

According to Gianluca Sergi, the film soundtrack is composed of sound effects, music, dialogue, and silence, all of which work together with great precision in Star Wars, to a highly memorable degree.  Hayden Christensen, who played Anakin Skywalker in Attack of the Clones (2002) and Revenge of the Sith (2005), noted that when filming light saber battles with Ewan McGregor (Obi-Wan Kenobi), he could not resist vocally making the sound effects associated with these weapons.

This is a good illustration of how iconic the sound effects of Star Wars have become.  As noted above, Burtt was stimulated by visuals to create the sound effects of the light sabers, though he was also inspired by the motor on a projector in the Department of Cinema at USC.  As Todd Longwell pointed out in Variety, the projector hum was combined with the hum picked up by a microphone passed in front of an old TV to create the sound.  (It’s worth noting that the sounds of weapons were some of the first sound effects created in aural media, as with Wallenstein, the first drama on German radio, in 1924, which featured clanging swords.)

If Burtt gave personality to robots through their aural communications, he created an innovative sound palette for far more than the light sabers in Star Wars.  In modifying and layering found sounds to create sounds corresponding to every aspect of the film world—from laser blasts (the sound of a hammer on an antenna tower guy wire) to the Imperial Walkers from Empire Strikes Back (modifying the sound of a machinist’s punch press combined with the sounds of bicycle chains being dropped on concrete)—he worked as meticulously as a (visual) designer to establish cohesion and impact.

Sergi argues that the sound effects in Star Wars can give subtle clues about the objects with which they are associated.  The sound of Imperial TIE fighters, which “roar” as they hurtle through space, was made from elephant bellows, and the deep and rumbling sound made by the Death Star is achieved through active use of sub-frequencies.  Meanwhile, “the rebel X-wing and Y-wing fighters attacking the Death Star, though small, emit a wider range of frequencies, ranging from the high to the low (piloted as they are by men of different ages and experience).”  One could argue that even here, Burtt has matched personality to machine.  The varied sounds of the Millennium Falcon (jumping into hyperspace, hyperdrive malfunction), created by Burtt by processing sounds made by existing airplanes (along with some groaning water pipes and a dentist’s drill), give it, in the words of Sergi, a much more “grown-up” sound than Luke’s X-Wing fighter or Princess Leia’s ship, the Tantive IV.  Given that, like its pilot Han Solo, the Falcon is weathered and experienced, and Luke and Leia are comparatively young and ingenuous, this sonic shorthand makes sense.

Millions of voices

Michel Chion argues that film has tended to be verbocentric, that is, that film soundtracks are produced around the assumption that dialogue, and indeed the sense of the dialogue rather than the sound, should be paramount and most easily heard by viewers.  Star Wars contradicts this convention in many ways, beginning with the way it uses non-English communication forms, not only the droid languages discussed above but also its plethora of languages for various denizens of the galaxy.  For example, Cavelos points out that Wookiees “have rather inexpressive faces yet reveal emotion through voice and body language.”

While the 1978 Star Wars Holiday Special may have many sins laid at its door, among them must surely be that the only Wookiee who actually sounds like a Wookiee is Chewbacca.  His putative family sound more like tauntauns.  Such a small detail can be quite jarring in a universe as sonically invested as Star Wars. 

While many of the lines in Star Wars are eminently quotable, the vocal performances have perhaps received less attention than they deserve.  As Starr A. Marcello notes, vocal performance can be extremely powerful, capitalizing on the “unique timbre and materiality that belong to a particular voice.”  For example, while Lucas originally wanted Japanese actor Toshiro Mifune to play Obi-Wan, Alec Guinness’ patrician Standard Neutral English accent clearly became an important part of the character; when (Scottish) actor Ewan McGregor was cast to play the younger version of Obi-Wan, he began voice lessons to reproduce Guinness’ voice. Ian McDiarmid (also Scottish), primarily a Shakespearean stage actor, was cast as arch-enemy the Emperor in Return of the Jedi, presumably on the quality of his vocal performance, and as such has portrayed the character in everything from Revenge of the Sith to Angry Birds Star Wars II.

Sergi argues that Harrison Ford as Han Solo performs in a lower pitch but an unstable meter, a characterization explored in the radio dramatizations of A New Hope, Empire Strikes Back, and Return of the Jedi, when Perry King stands in for Ford.  By contrast, Mark Hamill voices Luke in two of the radio dramatizations, refining and intensifying his film performances.  Sergi argues that Hamill’s voice emphasizes youth:  staccato, interrupting/interrupted, high pitch.

And affectionately parodied here:

I would add warmth of tone to this list, perhaps illustrated nowhere better than in Hamill’s performance in episode 1 – “A Wind to Shake the Stars” of the radio dramatization, which depicts much of Luke’s story that never made it onscreen, from Luke’s interaction with his friends in Beggar’s Canyon to a zany remark to a droid (“I know you don’t know, you maniac!”). It will come as no surprise to the listeners of the radio dramatization that Hamill would find acclaim in voice work (receiving multiple nominations and awards).  In the cinematic version, Hamill’s performance is perhaps most gripping during the climactic scene in Empire Strikes Back when Darth Vader tells him:

According to Hamill, “what he was hearing from Vader that day were the words, ‘You don’t know the truth:  Obi-Wan killed your father.’  Vader’s real dialogue would be recorded in postproduction under conditions easier to control.”  More on that (and Vader) shortly.

It has been noted that Carrie Fisher (who was only nineteen when A New Hope was filmed) uses an accent that wavers between Standard North American and Standard Neutral English.  Fisher has explained this as her emulating experienced British star of stage and screen Peter Cushing (playing Grand Moff Tarkin).

However, the accents of Star Wars have remained a contentious if little-commented-upon topic, with most (if not all) Imperial staff from A New Hope onwards speaking Standard Neutral English (see the exception, stormtroopers, further on).  In production terms, naturally, this has a simple explanation.  In story terms, however, fans have advanced theories regarding the galactic center of the universe, with an allegorical impetus in the form of the American Revolution.  George Lucas, after all, is an American, so the heroic Rebels here echo American colonists throwing off British rule in the 18th century, inspired in part by their geographical remove from centers of Imperial rule like London.  Therefore, goes this argument, in Star Wars, worlds like Coruscant are peopled by those speaking Standard Neutral English, while those in the Outer Rim (the majority of our heroes) speak varieties of Standard North American.  Star Wars thus both advances and reinforces the stereotype that the Brits are evil.

It is perhaps appropriate, then, that James Earl Jones’ performance as Darth Vader has been noted for sounding more British than American, though Sergi emphasizes musicality rather than accent, the vocal quality over verbocentricity:

The end product is a fascinating mixture of two opposite aspects:  an extremely captivating, operatic quality (especially the melodic meter with which he delivers the lines) and an evil and cold means of destruction (achieved mainly through echoing and distancing the voice).

It is worth noting that Lucas originally wanted Orson Welles, perhaps the most famous radio voice of all time, to portray Vader, yet feared that Welles would be too recognizable.  That a different voice needed to emanate from behind Vader’s mask than that of the actor playing his body was evident from British bodybuilder David Prowse’s “thick West Country brogue.”  The effect is parodied in the substitution of a Cockney accent from Snatch (2000) for Jones’ majestic tones:

A Newsweek review of Jones in the 1967 play The Great White Hope argued that Jones had honed his craft through “Fourteen years of good hard acting work, including more Shakespeare than most British actors attempt.”  Sergi has characterized Jones’ voice as the most famous in Hollywood, in part because, in addition to his prolific theatre back catalogue, Jones took bit parts and voiced commercials—“commercials can be very exciting,” he noted.  The two competing forces combined to create a memorable performance, though as others have noted, Jones is the African-American voice to the white actors who physically portrayed Anakin Skywalker/Darth Vader (David Prowse and Hayden Christensen), one British, one Canadian.

Brock Peters, also African American and known for his deep voice, played Vader in the radio dramatizations.  Jennifer Stoever notes that in America, the sonic color line “historically contoured, identified, and marked mismatches between ‘sounding white’ and ‘looking black’” (231) whereas the Vader performances “sound black” and “look white.” Andrew Howe in his chapter “Star Wars in Black and White” notes the “tension between black outer visage and white interior identity [. . .] Blackness is thus constructed as a mask of evil that can be both acquired and discarded.”

Like many of the most important aspects of Star Wars, Vader’s sonic presence is multi-layered, consisting in part of Jones’ voice manipulated by Burtt, as well as the sonic indicator of his presence:  his mechanized breathing:

The concept for the sound of Darth Vader came about from the first film, and the script described him as some kind of a strange dark being who is in some kind of life support system.  That he was breathing strange, that maybe you heard the sounds of mechanics or motors, he might be part robot, he might be part human, we really didn’t know.  [ . . .] He was almost like some robot in some sense and he made so much noise that we had to sort of cut back on that concept.

On radio, a character cannot be said to exist unless we hear from him or her; whether listening to the radio dramatizations or watching Star Wars with our eyes closed, we can always sense the presence of Vader by the sound of his breathing.  As Kevin L. Ferguson points out, “Is it accidental, then, that cinematic villains, troubling in their behaviour, are also often troubled in their breathing?”  As Kris Jacobs notes, “Darth Vader’s mechanized breathing can’t be written down”—it exists purely in a sonic state.

Your eyes can deceive you; don’t trust them

Music is the final element of Sergi’s list of what makes up the soundtrack, and John Williams’ enduring musical score is the most obvious of Star Wars’ sonic elements. Unlike “classical era” Hollywood film composers like Max Steiner or Erich Korngold, who, according to Kathryn Kalinak, “entered the studio ranks with a fair amount of prestige and its attendant power,” Williams entered as a contract musician working with “the then giants of the film industry,” moving into a “late-romantic idiom” that has come to characterize his work.  This coincided with what Lucas envisioned for Star Wars, influenced as it was by 1930s radio serial culture.

Williams’ emotionally pitched music has many elements that Kalinak argues link him with the classical score model:  unity; the use of music in the creation of mood and character; the privileging of music in moments of spectacle; and the way music and dialogue are carefully mixed. This effect is exemplified in the opening of A New Hope, the “Main Title” or, as Dr Lehman has it (see below), “Main/Luke A.”  As Sergi notes, “the musical score does not simply fade out to allow the effects in; it is, rather literally, blasted away by an explosion (the only sound clearly indicated in the screenplay).”

As Kalinak points out, it was common in the era of Steiner and Korngold to score music for roughly three-quarters of a film, whereas by the 1970s, it was more likely to be one-quarter.  “Empire runs 127 minutes, and Williams initially marked 117 minutes of it for musical accompaniment”; while he used three themes from A New Hope, “the vast majority of music in The Empire Strikes Back was scored specifically for the film.”

Perhaps Williams’ most effective technique is the use of leitmotifs, derived from the work of Richard Wagner and more complex than a simple repetition of themes.  Within leitmotifs, we hear the blending of denotative and connotative associations: as Matthew Bribitzer-Stull notes, “not just a musical labelling of people and things” but also, as Thomas S. Grey puts it, “a matter of musical memory, of recalling things dimly remembered and seeing what sense we can make of them in a new context.”  Bribitzer-Stull also notes the complexity of Williams’ leitmotif use, given that tonal music is used for both protagonists and antagonists, resisting the then-cliché of reserving atonal music for antagonists.  In Williams’ score, atonal music instead accompanies exotic landscapes and fight or action scenes.  As Jonathan Broxton explains,

That’s how it works. It’s how the films maintain musical consistency, it’s how characters’ musical identities are established, and it offers the composer an opportunity to create interesting contrapuntal variations on existing ideas, when they are placed in new situations, or face off against new opponents.

Within the leitmotifs, Williams provides variations and disruptions, such as the harmonic corruption when “the melody remains largely the same, but its harmonization becomes dissonant.” One of the most haunting ways in which Williams alters and reworks his leitmotifs is what Bribitzer-Stull calls “change of texture.”

Frank Lehman of Harvard has examined Williams’ leitmotifs in detail, cataloguing them based on a variety of meticulous criteria.  He has noted, for example, that some leitmotifs are used often, like “Rebel Fanfare” which has been used in Revenge of the Sith, A New Hope, The Empire Strikes Back, The Force Awakens, The Last Jedi, and Rogue One.  Lehman particularly admires Williams’ skill and restraint, though, in reserving particular leitmotifs for very special occasions.  For example, “Luke & Leia,” first heard in Return of the Jedi (both film and radio dramatization) and not again until The Last Jedi:

While Williams’ use of leitmotifs is successful and evocative, not all of Star Wars’ music consists of leitmotifs, as Lehman points out; single, memorable pieces of music not heard elsewhere are still startlingly effective.

In the upcoming Solo, John Williams will contribute a new leitmotif for Han Solo, while all other material will be written and adapted by John Powell.  Williams has said in interview that “I don’t make a particular distinction between ‘high art’ and ‘low art.’  Music is there for everybody.  It’s a river we can all put our cups into, and drink it, and be sustained by it.”  The sounds of Star Wars have sustained it—and us—and perfectly illustrate George Lucas’ investment in the equal power of sound to vision in the cinematic experience.  I, for one, am looking forward to what new sonic gems may be unleashed as the saga continues.

In the first week of June, Leslie McMurtry will return with Episode II, focusing on shifts in sound in the newer films and multi-media forms of Star Wars, including radio and cartoons–and, if we are lucky, her take on Solo!


 Leslie McMurtry has a PhD in English (radio drama) and an MA in Creative and Media Writing from Swansea University.  Her work on audio drama has been published in The Journal of Popular Culture, The Journal of American Studies in Turkey, and Rádio-Leituras.  Her radio drama The Mesmerist was produced by Camino Real Productions in 2010, and she writes about audio drama at It’s Great to Be a Radio Maniac.

REWIND!…If you liked this post, you may also dig:

The Magical Post-Horn: A Trip to the BBC Archive Centre in Perivale–Leslie McMurtry

Speaking American–Leslie McMurtry

Out of Sync: Gendered Location Sound Work in Bollywood–Priya Jaikumar  

Beyond the Grave: The “Dies Irae” in Video Game Music

For those familiar with modern media, there are a number of short musical phrases that immediately trigger a particular emotional response. Think, for example, of the two-note theme that denotes the shark in Jaws, and see if you become just a little more tense or nervous. So too with the stabbing shriek of the violins from Psycho, or even the whirling four-note theme from The Twilight Zone. In each of these cases, the musical theme is short, memorable, and unalterably linked to one specific feeling: fear.

The first few notes of the “Dies Irae” chant, perhaps as recognizable as any of the other themes I mentioned already, are often used to provoke that same emotion.

Often, but not always. The “Dies Irae” has been associated with death since its creation in the thirteenth century, due to its use in the Requiem Mass for the dead until the Second Vatican Council (1962–65). Its text describes the Last Judgment, when all humanity will be sent to heaven or hell. But from the Renaissance to today, the “Dies Irae” has also come to symbolize everything from the medieval church and Catholic ritual to the sinister, superstitious, or supernatural, even violence and battle—and any combination of the above.

Because of its unique history not only within its original liturgical context but also within later musical genres, this chant has become largely divorced from its original purposes, at least in modern popular imagination. Instead, it now holds a multiplicity of meanings; composers manipulate these meanings by utilizing this chant in a new setting, and thus in turn continue to reinforce those meanings within modern media. Since its use within the Mass, concert music, and films has already been well documented, this blog post explores its presence in an as yet unexamined medium: video games.

Image by Willem Vrelant (Flemish, died 1481, active 1454–1481), illuminator.  Details of artist on Google Art Project [Public domain], via Wikimedia Commons.

Chant—monophonic music of the Western Christian tradition—is the largest surviving body of music from the medieval period. Although chant was not written down until the ninth century, it has been continuously sung for over two thousand years. Before the Reformation, chant permeated the musical landscape of Western Europe. But as John Haines points out, chant’s meanings changed in the sixteenth century; to Protestants, chant was a sign of superstitious, even sinister, ritual, whereas to Catholics it was a flawed but holy tradition (112). Chant became ever more confined to the Catholic liturgy; although composers continued to use chant in new compositions, by the late nineteenth century the only chant guaranteed to be recognized by a secular audience was the “Dies Irae.”

Beginning in the late eighteenth century, the text was set in Requiems for the secular stage by composers such as Mozart, Verdi, and Britten. But due to both its evocative text and its memorable melody (often just the first sixteen, eight, or even four notes), the “Dies Irae” chant soon was incorporated into secular instrumental works, where it signified the past, the supernatural, the oppressive, the demonic, and death. No work is more responsible for this than Hector Berlioz’s Symphonie Fantastique, where the chant symbolizes the composer’s own death and the depravity of the demons and witches who dance at his funeral.

The history of this chant, together with its use in film, has been explored by scholars such as Linda Schubert and John Haines. Because the “Dies Irae” was already a well-known symbol of the aforementioned characteristics, and because early silent film musicians borrowed musical ideas from previously composed works, the chant segued quickly into early film, where its symbolic possibilities were reinforced. Thus, even in newly composed soundtracks, composers utilized this chant as an aural shortcut to a host of emotional and psychological reactions, especially (as James Deaville and others discuss) within horror films. It appears in scenes depicting inner anguish, fear, the occult, evil, and imminent death in films from It’s a Wonderful Life, The Seventh Seal, and The Shining to Disney’s The Lion King and Star Wars, in musicals like Sweeney Todd, and in literary works such as Gaston Leroux’s The Phantom of the Opera, but it also symbolizes power and even heroism, such as in this Nike shoe commercial.

The “Dies Irae” appears analogously in video game soundtracks, where it communicates the same symbolic meanings that it does in film scores and concert music. Its recognizability also lends itself to parody, as it did in Monty Python and the Holy Grail. Yet, unlike in film music, the evolution of its use in game music speaks also to the evolution of game music technology.

In the earlier years of video games, the technology could not produce continuous soundtracks. The first appeared in Space Invaders (1978), although it consisted only of four descending notes looped indefinitely. Additionally, while voice synthesis was used in game soundtracks as early as 1982, the reproduction of sung voices remained limited even into the 1990s. William Gibbons describes how early systems had a limited number of channels (40); as a result, Baroque-style counterpoint worked well texturally, and reproducing music by earlier composers such as Bach was not only permissible under copyright but also demonstrated the capabilities of a given system (201–204). Earlier games therefore made little use of monophonic chant, although several (such as Fatal Fury) did use Mozart’s setting of the “Dies Irae.”
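To make the channel constraint concrete, here is a minimal illustrative sketch, not code drawn from any actual game or sound chip: a short Python script (assuming numpy is installed) that renders the chant’s opening four pitches, F–E–F–D, as a single square-wave voice, roughly the kind of monophonic channel an early system could supply.

import numpy as np
import wave

SAMPLE_RATE = 22050
PITCHES_HZ = [349.23, 329.63, 349.23, 293.66]  # F4, E4, F4, D4: the chant's incipit
NOTE_SECONDS = 0.5

def square_wave(freq, seconds, rate=SAMPLE_RATE, amplitude=0.3):
    # One note on a single "channel": a plain square wave, the timbre
    # most readily produced by early programmable sound generators.
    t = np.arange(int(rate * seconds)) / rate
    return amplitude * np.sign(np.sin(2 * np.pi * freq * t))

samples = np.concatenate([square_wave(f, NOTE_SECONDS) for f in PITCHES_HZ])
pcm = (samples * 32767).astype(np.int16)

with wave.open("dies_irae_motif.wav", "wb") as wav_file:
    wav_file.setnchannels(1)           # monophonic, like a single chip channel
    wav_file.setsampwidth(2)           # 16-bit PCM
    wav_file.setframerate(SAMPLE_RATE)
    wav_file.writeframes(pcm.tobytes())

Playing the resulting file gives a bare, buzzy statement of the motif; every additional layer described in the examples below (a percussive element, chordal voices, arpeggios) would require further channels, which is precisely the resource early systems lacked.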

The “Dies Irae” chant is first used in game music in the late 1980s and early 1990s, by which point most systems had five or more channels, allowing for improved timbres and sound synthesis. The opening theme song to F-19 Stealth Fighter (1988–92, DOS/PC/Amiga/Atari) subtly references the first phrase of the chant. Composer Ken Lagace sets the first eight pitches evenly in the lower voice before moving them to a higher, rhythmicized register. The chant is accompanied by a consistent percussive element and several higher, chordal voices, which splinter off into fast arpeggios before restating the opening. There is as yet no action, nor is the plot either spiritual or supernatural, so the chant here actually works in a somewhat anomalous way. It heightens the player’s tension through its aural connotations of fear and death, thus setting the stage for the battles still to come in the game itself.

Indiana Jones and the Fate of Atlantis (1992, PC) is another early instance of the “Dies Irae,” which appears at the end, when Indiana and his companion Sophia confront the malevolent Doctor. The chant again increases tension but also indicates the presence of evil. Musically, the first two phrases of the chant appear in long, low tones, accompanied by several high, sustained, dissonant pitches. New voices enter, reminiscent of the opening phrase, before the chant returns in full in all registers. The system’s capability for thicker textures allowed the composers to stack the monophonic “Dies Irae” against itself, further emphasizing the threat of imminent danger in this final encounter.

The last of the early case studies is Zombies Ate My Neighbors (1993, SNES/Genesis). These systems featured multiple channels capable of emulating a variety of acoustical settings. The game is a parody of 1950s horror films; the protagonists race through standard horror settings such as malls and castles to rescue their neighbors from demonic babies, vampires, zombies, and other stock creatures. The soundtrack also mimics the musical tropes in such films: chant itself, especially the “Dies Irae,” but also timbres such as tremolo, stingers, extreme ranges, and dissonance. The track “Curse of the Tongue,” which plays upon encountering the final boss, Dr. Tongue’s Giant Head, emulates a Gothic pipe organ. The low organ drone sustains underneath the first sixteen notes of the chant, which sound in a shrieking, vibrato-heavy register. The voices then move in parallel fifths as in medieval polyphony. The “Dies Irae” here brings to mind an entire film genre while also overtly characterizing the final battle against the otherworldly, sinister, evil Head. In this case, the chant works literally to signify the current battle and threat of death, but also parodically to indicate the absurdity of the situation.

The development of video game audio technology allowed first for voice emulation, then voice reproduction. Vocal samples were used as early as the 1980s, but were often confined to theme songs. Yet even after voices were reproduced within soundtracks, it is the “Dies Irae” melody alone that is most frequently sampled, strikingly paralleling its earlier use in film and concert music. When the “Dies Irae” text is used, it is set to newly composed music or borrowed from the Mozart or Verdi Requiems. Moreover, as in earlier media, all that is needed as an aural mnemonic is the first phrase, even just the first four notes, of the chant melody.

For example, two games released for PC in 1999—Heroes of Might & Magic III and Gabriel Knight 3: Blood of the Sacred, Blood of the Damned—both use just the first portion of the “Dies Irae.” In “Burying the Manuscript” from Gabriel Knight, pizzicato violins first allude to the first four or five notes of the chant (1:25); the full first phrase is then presented in parallel motion in the brass. The remainder of this theme alludes to the first few notes, making the “Dies Irae” a constant presence here and underscoring the secrecy, even the occult nature, of the manuscript in this scene.

Heroes III uses even less melodic material. In the Necropolis, composer Paul Romero uses the first four notes of the “Dies Irae” to underpin the entire theme. The bass plays those four notes in a low register before seguing into newly composed material, but the contour of that phrase returns throughout the theme. The full chant phrases do not appear until the very end. The chant hints constantly at the overwhelming metaphor of death in this area, as well as at the presence of supernatural creatures such as vampires, zombies, and wraiths.

Unusual for many reasons, then, is the last case study: the game Dante’s Inferno (2010, PS3/Xbox360). It is the sole example here to use voices, but the text appears to be newly composed. As John Haines noted, the presence of Latin or pseudo-Latin is in and of itself a trope of the diabolical or demonic, which adds further nuance to this scene (129). The familiar melody is presented by a choir of mixed voices, accompanied by a roar of low brass, ambient noise, and a descant voice singing on open vowels, all signifiers of horror or the medieval. Moreover, the “Dies Irae” is not reserved for a final battle, as in previous examples, nor does it characterize supernatural creatures. Rather, it is the first theme heard in the game, reinforcing not only the medieval setting and the constant presence of death but also the ultimate trajectory of Dante, and the gamer, into Hell.

While the “Dies Irae” has been well studied as an aural signifier within film and concert music, its use in video games has, before now, been largely ignored. As in earlier musical genres, this chant brings to games a host of culturally accepted, musically mediated meanings that allow composers to immediately flesh out a character or scene. In so doing, game composers acknowledge that sound is not just sound, but rather it is (to borrow a phrase from Elizabeth Randell Upton) “a complex interaction of experiences and expectations on the part of the audience.” These experiences are continuously shaped by new compositions, scores, and soundtracks, which in turn continuously shape the audience’s expectations for future works.

As such, game soundtracks, along with other kinds of media, continue to transform the “Dies Irae” out of its original context and into an ever-growing set of pop culture symbols. The chant now signifies everything from the medieval to the present day, from judgment, battle, and death to demons, witches, and the occult. Within games in particular, though, it acts as a “memento mori,” a reminder of the mortality that game characters, and thus game players, seek to avoid through play. In this role, it may instill fear in a player, but also suspicion, alertness, tension, even excitement, spurring the player to react in whichever manner suits the individual game.

The iconic status of the opening phrases of the “Dies Irae” chant marks it as a particularly useful polyvalent symbol for composers. Yet the utilization of this well-known trope is not without its problems. As I discuss in a forthcoming article, this chant, and indeed all plainchant, originates in a particular sacred, liturgical tradition. When a chant such as “Dies Irae” is used as a signifier of a general sense of spirituality, or of the medieval, or even of horror, then by default those characteristics are reified, if subtly, as Christian. Moreover, linking a chant such as the “Dies Irae” to the supernatural or the occult serves to perpetuate early modern stereotypes of Catholicism as nothing more than superstitious magic; see, for example, the purported origins of the phrase “hocus pocus.” Such anachronistic uses further obfuscate chant’s continuous role within Catholic (and other) liturgy; it is both a historic and a very modern practice.

Given that the “Dies Irae” is certainly not the only musical means to the aforementioned symbolic ends, perhaps these concerns are not pressing. Still, as Anita Sarkeesian points out, we can enjoy modern media while simultaneously critiquing facets that are problematic. There is no clear-cut way, at this point, to overturn hundreds of years of accumulated symbolic meaning for a musical icon such as the “Dies Irae,” but it behooves us as participants in auditory culture to become better aware of the multiple, and occasionally challenging, meanings within what we hear.

[Other games that also use the “Dies Irae” chant include Gauntlet Legends (1999, N64/PS/Dreamcast), Final Fantasy IX (2000, PS), EverQuest II (2004, MMORPG), Heroes of Might and Magic V (2006, PC), Sam & Max: Season 2 (2007–8, Wii/PC/PS3/Xbox 360), Ace Combat: Assault Horizon (2011, PS3/Xbox 360), and Diablo 3 (2012–4, PC/PS3/Xbox/PS4). My thanks go to VGMdb and Overclocked Remix for bringing several of these games to my attention, and to Ryan Thompson and Dana Plank for comments.]

Featured Image: A mashup of the first lines of the Dies Irae and the Zombies Ate My Neighbors title screen. Remixed for purposes of critique.

Dr. Karen Cook specializes in medieval and Renaissance music theory, history, and performance. She is currently working on a monograph on the development of rhythmic notation in the fourteenth and early fifteenth centuries. She also maintains active research interests in popular and contemporary music, especially with regard to music and identity in television, film, and video games. She frequently links her areas of specialization together through a focus on medievalism, writ broadly, in contemporary culture. As such, some of her recent and forthcoming publications include articles on fourteenth-century theoretical treatises, biographies of lesser-known late medieval theorists, and the use of plainchant in video games, a book chapter on medievalist music in Harry Potter video games, and a forthcoming co-authored Oxford Bibliography on medievalism and music.

REWIND!…If you liked this post, you may also dig:

SO! Amplifies: Mega Ran and Sammus, The Rappers With Arm Cannons Tour–Enongo Lumumba-Kasongo

Video Gaming and the Sonic Feedback of Surveillance: Bastion and the Stanley Parable–Aaron Trammell

Playing with the Past in the Imagined Middle Ages: Music and Soundscape in Video Game–James Cook

The Eldritch Voice: H. P. Lovecraft’s Weird Phonography

Welcome to the last installment of Sonic Shadows, Sounding Out!’s limited series pursuing the question of what it means to have a voice. In the most recent post, Dominic Pettman encountered several traumatized birds who acoustically and uncannily mirror the human, a feedback loop that composes what he called “the creaturely voice.” This week, James Steintrager investigates the strange meaning of a “metallic” voice in the stories of H.P. Lovecraft, showing how early sound recording technology exposed an alien potential lingering within the human voice. This alien voice – between human and machine – was fodder for techniques of defamiliarizing the world of the reader.
 
I’ll leave James to tell us more. Thanks for reading!

— Guest Editor Julie Beth Napolin

A decade after finding itself downsized to a dwarf planet, Pluto has managed to spark wonder in the summer of 2015 as pictures of its remarkable surface features and those of its moon are delivered to us by NASA’s New Horizons space probe. As scientists begin to tentatively name these features, they have drawn from speculative fiction for what they see on the moon Charon, giving craters names including Spock, Sulu, Uhura, and—mixing franchises—Skywalker. From Doctor Who there will be a Tardis Chasma and a Gallifrey Macula. The names of Pluto’s own features reach back a bit further: there will also be a Cthulhu Regio, named after the unspeakable interstellar monster-cum-god invented by H. P. Lovecraft.

We can imagine that Lovecraft would have been thrilled, since back when Pluto was first discovered in early 1930 and was the evocative edge of the solar system, he had turned the planet into the putative home of secretive alien visitors to Earth in his short story “The Whisperer in Darkness.” First published in the pulp magazine Weird Tales in 1931, “The Whisperer in Darkness” features various media of communication—telegraphs, telephones, photographs, and newspapers—as well as the possibilities of their manipulation and misconstruing. The phonograph, however, plays the starring role in this tale about gathering and interpreting the eerie and otherworldly—the eldritch, in a word—signs of possible alien presence in backwoods Vermont.

In the story, Akeley, a farmer with a degree of erudition and curiosity, captures something strange in a recording. This something, when played back by the protagonist Wilmarth, a folklorist at Lovecraft’s fictional Miskatonic University, goes like this:

Iä! Shub-Niggurath! The Black Goat of the Woods with a Thousand Young! (219)

The sinister resonance of a racial epithet in what appears to be a foreign or truly alien tongue notwithstanding, this story features none of the more obvious and problematic invocations of race and ethnicity—the primitive rituals in the swamps of Louisiana of “The Call of Cthulhu” or the anti-Catholic immigrant panic of “The Horror at Red Hook”—for which Lovecraft has achieved a degree of infamy. Moreover, the understandable concern with Lovecraft’s social Darwinism and bad biology in some ways tends to miss how for the author—and for us as well—power and otherness are bound up with technology.

The transcription of these exclamations, recorded on a “blasphemous waxen cylinder,” is prefaced with an emphatic remark about their sonic character: “A BUZZING IMITATION OF HUMAN SPEECH” (219-220). The captured voice is further described as an “accursed buzzing which had no likeness to humanity despite the human words which it uttered in good English grammar and a scholarly accent” (218). It is glossed yet again as a “fiendish buzzing… like the drone of some loathsome, gigantic insect ponderously shaped into the articulate speech of an alien species” (220). If such a creature tried to utter our tongue and to do so in our manner—both of which would be alien to it—surely we might expect an indication of the difference in vocal apparatuses: a revelatory buzzing. Lovecraft’s story figures this “eldritch sound” as it is transduced through the corporeal: as the timbral indication of something off when the human voice is embodied in a fundamentally different sort of being. It is the sound that happens when a fungoid creature from Yuggoth—the supposedly native term for Pluto—speaks our tongue with its insectile mouthparts.

Yet, reading historically, we might understand this transduction as the sound of technical mediation itself: the brazen buzz of phonography, overlaying and, in a sense, inhabiting the human voice.

For listeners to early phonographic recordings, metallic sounds—inevitable given the materials used for styluses, tone arms, diaphragms, amplifying horns—were simply part of the experience. Far from capturing “the unimaginable real” or registering “acoustic events as such,” as media theorist Friedrich Kittler once put the case about Edison’s invention, which debuted in 1877, phonography was not only technically incapable of recording anything like an ambient soundscape but also drew attention to the very noise of itself (23).

For the first several decades of the medium’s existence, patent registers and admen’s pitches show that clean capture and reproduction were elusive rather than given. An account in the Literary Digest Advertiser of Valdemar Poulsen’s Telegraphone explains the problem:

The talking-machine records sound by the action of a steel point upon some yielding substance like wax, and reproduces it by practically reversing the operation. The making of the record itself is accompanied by a necessary but disagreeably mechanical noise—that dominating drone—that ‘b-r-r-r-r’ that is never in the human voice, and always in its mechanical imitations. One hears metallic sounds from a brazen throat—uncanny and inhuman. The brittle cylinder drops on the floor, breaks—and the neighbors rejoice!

The Telegraphone, which recorded sounds “upon imperishable steel through the intangible but potent force of electromagnetism” such that no “foreign or mechanical noise is heard or is possible,” of course promised to make the neighbors happy not by breaking the cylinder but rather by taking the inhuman ‘b-r-r-r-r’ out of the phonographically reproduced voice. Nonetheless, capturing sound on steel, the Telegraphone was still a metal machine and unlikely to overcome the buzz entirely.

In his account of “weird stories” and why the genre suited him best, Lovecraft explained that one of his “strongest and most persistent wishes” was “to achieve, momentarily, the illusion of some strange suspension or violation of the galling limitations of time, space, and natural law which forever imprison us and frustrate our curiosity about the infinite cosmic spaces beyond the radius of our sight and analysis.” In “The Whisperer in Darkness,” Lovecraft put to work a technology that was rapidly becoming commonplace to introduce a buzz into the fabric of the everyday. This is the eldritch effect of Lovecraft’s evocation of phonography. While we might wonder whether a photograph has been tampered with, who really sent a telegram, or with whom we are actually speaking over a telephone line—all examples from Lovecraft’s tale—the central, repeated conundrums for the scholar Wilmarth remain not only whose voice is captured on the recorded cylinder but also why it sounds that way.

The phonograph transforms the human voice, engineers a cosmic transduction, suggesting that within our quotidian reality something strange might lurk. This juxtaposition and interplay of the increasingly ordinary and the eldritch is also glimpsed in an account of Charles Parsons’s invention, the Auxetophone, which used a column of pressurized air rather than the usual metallic diaphragm. Here is an account of the voice of the Auxetophone from the “Matters Musical” column of The Bystander Magazine from 1905: “Long ago reconciled to the weird workings of the phonograph, we had come to regard as inevitable the metallic nature of its inhuman voice.” The new invention might well upset our listening habits, for Mr. Parsons’s invention “bids fair to modify, if not entirely to remove,” the phonograph’s “somewhat unpleasant timbre.”

What the phonograph does as a medium is to make weird. And what making weird means is that instead of merely reproducing the human voice—let alone rendering acoustic events as such—it transforms the voice into its own: an uncanny approximation, one that fails to simulate perfectly with regard to timbre in particular. Phonography reveals that the materials of reproduction are not vocal cords, breath, labial and dental friction—not flesh and spirit, but vibrating metal.

Although we can only speculate in this regard, I would suggest that “The Whisperer in Darkness” was weirder for readers for whom phonographs still spoke with metallic timbre. The rasping whisper of the needle on cylinder created what the Russian Formalist Viktor Shklovsky was formulating at almost exactly the same time as the function of the literary tout court: defamiliarization or, better, estrangement. Nonetheless, leading the reader to infer an alien presence behind this voice was equally necessary for the effect. After all, if we are to take the Auxetophone as our example—an apparatus announced in 1905, a quarter of a century before Lovecraft composed his tale, and that joined a marketplace burgeoning with metallic-voice reducing cabinets, styluses, dampers, and other devices—phonographic listeners had long since become habituated to the inhumanity of the medium. That inhumanity had to be recalled and reactivated in the context of Lovecraft’s story.

To understand fully the nature of this reactivation, moreover, we need to know precisely what Lovecraft’s evocative phonograph was. When Akeley takes his phonograph into the woods, he adds that he brought “a dictaphone attachment and a wax blank” (209). Further, to play back the recording, Wilmarth must borrow the “commercial machine” from the college administration (217). The device most consistent with Lovecraft’s descriptions and terms is not a record player, as we might imagine, but Columbia Gramophone Company’s Dictaphone. By the time of the story’s setting, Edison phonographs had long since switched to more durable celluloid cylinders (Blue Amberol Records; 1912–1929) in an effort to stave off competition from flat records. Only Dictaphones, aimed at businessmen rather than leisure listeners, still used wax cylinders, since recordings could be scraped off and the cylinder reused. The vinyl Dictabelt, which eventually replaced them, would not arrive until 1947.

Meanwhile, precisely when the events depicted in “The Whisperer in Darkness” are supposed to have taken place, phonography was experiencing a revolutionary transformation: electronic sound technologies developed by radio engineers were hybridizing the acoustic machines, and electro-acoustic phonographs were in fact becoming less metallic in tone. Yet circa 1930, as the buzz slipped toward silence, phonography was still the best means of figuring the sonic uncanny valley. It was a sort of return of the technologically repressed: a reminder of the original eeriness of sound reproduction—recalled from childhood or perhaps parental folklore—at the very moment that new technologies promised to hide such inhumanity from sensory perception. Crucially, in Lovecraft’s tale, estrangement is not merely a literary effect. Rather, the eldritch is what happens when the printed word at a given moment of technological history calls up and calls upon other media of communication, phonography not the least.

I have remarked on the apparent absence of race as a concern in “The Whisperer in Darkness,” but something along the lines of class is subtly but insistently at work in the tale. The academic Wilmarth and his erudite interlocutor Akeley are set in contrast with the benighted, uncomprehending agrarians of rural Vermont. Both men also display a horrified fascination with the alien technology that will allow human brains to be fitted into hearing, seeing, and speaking machines for transportation to Yuggoth. These machines are compared to phonographs: cylinders for storing brains much like those for storing the human voice. In this regard, the fungoid creatures resemble not so much bourgeois users or consumers of technology as scientists and engineers. Moreover, they do so just as a discourse of technocracy—rule by a technologically savvy elite—was being articulated in the United States. Here we might see the discovery of Pluto as a pretext for exploring anxieties closer to home: how new technologies were redistributing power, how their improvement—the fading of the telltale buzz—was making it more difficult to determine where humanity stopped and technology began, and whether acquiescence in these changes was laudable or resistance feasible. As usual with Lovecraft, these topics are handled with disconcerting ambivalence.

James A. Steintrager is a professor of English, Comparative Literature, and European Languages and Studies at the University of California, Irvine. He writes on a variety of topics, including libertinism, world cinema, and auditory cultures. His translation of and introduction to Michel Chion’s Sound: An Acoulogical Treatise will be published by Duke University Press in fall of 2015.

Featured image: Taken from “Global Mosaic of Pluto in True Color” in NASA’s New Horizons Image Gallery, public domain. All other images courtesy of the author.

REWIND! . . .If you liked this post, you may also dig:

Sound and Sanity: Rallying Against “The Voice” — Mark Brantner

DIANE… The Personal Voice Recorder in Twin Peaks — Tom McEnaney

Reproducing Traces of War: Listening to Gas Shell Bombardment, 1918 — Brían Hanrahan
