Editor’s Note: Even though this is officially Osvaldo Oyola‘s final post as an SO! regular–his brilliant dissertation on Latino/a identity and collection cultures is calling–I refuse to say goodbye, perpetually leaving the door open for future encores. He has been a bold and steadfast contributor–peep his extensive back catalogue here–and we cannot thank him enough for bringing such a whipsmart presence to Sounding Out! over the years. Best of luck, OOO, our lighters are up for you!–J. Stoever-Ackerman, Editor-in-Chief
As several of my previous Sounding Out! blog posts reveal, I am intrigued by the way popular music seeks to establish its authenticity to the listener. It seems that recorded popular music seeks out ways to overcome its lack of presence as compared to a live performance, where a unified and spontaneous sense of immediacy seems to automatically bestow the aura of the “authentic”—a uniqueness that, ironically, live reproducibility engenders. Throughout my time as a Sounding Out! regular, I have explored how authenticity may be conferred through artists affecting an accent as a form of musical style, through comparisons to other “less authentic” forms of music via a call to nostalgia, or even by highlighting artificiality through the use of auto-tune.
One of the ways that artists and producers get past a potential lack of authenticity when recording is through call-outs to “liveness.” I am not referring to concert recordings (though there are ways that they can be used), but elements like counting off at the beginning of songs or introducing some change or movement in a song. There is no practical need to count off “One, two, three, four!” at the beginning of a recording of a song if it is being pieced together through multiple tracks and overdubs. These days a “click track” or adjustment post-recording can keep all the players in time even if not necessarily playing at once; even if a song is being recorded as a kind of studio jam, the count off could be edited out. It is an artifact of the creation, not a sign of creation itself. Instead, the counting can become an accepted and notable part of the song, like Sam the Sham and the Pharaohs performing “Wooly Bully,” giving it an orientation to time—the sense that all these musicians were present together and playing their instruments at once and needed this unique introduction to keep them all in tempo.
Similarly, sometimes artists call out to other musicians, giving instructions when no instructions are needed, assuming that most popular music is recorded in multiple takes using multiple tracks. In Parade‘s “Mountains,” Prince commands the Revolution, “guitars and drums on the one!” when clearly they had rehearsed while putting together the song, and presumably knew when the drum and guitar breakdown was coming up. Prince, furthermore, joins artists as varied as the Grateful Dead and the Beastie Boys in mixing concert recordings with studio overdubs to capture a “live” sound on songs like “It’s Gonna Be a Beautiful Night” and “Alligator.” Even something as ubiquitous as guitar feedback is a transformation of an artifact of live performance into a sound available for use in recording—something that was purposefully avoided until John Lennon’s happy accident when in the studio to cut “I Feel Fine.” Until then, playing with feedback was a way to demonstrate performance skills through onstage vamping.
These varied calls to liveness provide a sense of authenticity to music made via the recording studio, denoting what I understand as the spontaneous sociability of music. Count-offs and studio shout-outs provide a sense of unified presence to a performance, especially if the performance has actually been constructed piecemeal and over time. This is something of a remnant of an old-fashioned notion that recorded music is measured in quality against live performance. It’s an idea that hung around both implicitly and explicitly long after bands started experimenting in the studio with effects that ranged from the difficult to the impossible to replicate on stage, and was reinforced through recordings by performers who purposefully referenced their lauded live performances.
For example, James Brown’s “Get Up (I Feel Like Being a) Sex Machine” is built on this conceit. The entire song is a conversation, a call and response between James Brown and his band, the J.B.’s. From the opening line, Brown introduces the song as a moment in time in which he is compelled to do his thing, but he demands both encouragement and cooperation from the band in order to achieve it. When Brown asks Bobby Byrd, “Bobby! Should I take ‘em to the bridge?” we as listeners are invited to play along with the idea that it has suddenly come into his head to have the band play the bridge—as it might’ve happened (and thus been practiced) countless times in his legendary live shows. It suggests a form of spontaneity that the reality of recording would otherwise drain from the song. Sure, according to RJ Smith’s The One: The Life and Music of James Brown (2012), “Get Up” was recorded in only two takes–already fairly amazing–but the very nature of the song makes it sound like it was recorded in one, even if it had to be broken up into two sides of a 7-inch. That reality doesn’t matter—what matters when listening is the feeling that we, as listeners, are being allowed to partake in the capturing of what seems like one unique, and continuous, moment.
The question then arises: What about recorded music that does the opposite, that makes a point of highlighting its artificial construct—the impossibility of its spontaneous performance? While there are examples that date back at least to the 1960s, does this shift highlight a difference in aesthetic concerns by the pop music audience? If calls to “liveness” suggest a spontaneous sociability to music, what do the meta references to their songcraft suggest about what is important to music now?
The classic example is Ringo Starr’s bellow, “I GOT BLISTERS ON MY FINGERS!” at the end of the Beatles’ “Helter Skelter,” an exclamation made after umpteen takes of the song recorded on the same day, but there are more contemporary and even more obvious examples. Near the end of Outkast’s “Prototype” (at 4:21), Andre 3000 can be heard talking to his sound engineer John Frye about the ad libs, “Hey, hey John! Are we recording our ad libs? Really? Were we recording just then? Let me hear that, that first one. . .” There is an interesting tension here between the spontaneity of an “ad lib” and listening back to pick the best one or further develop one when re-recording, and Andre in his role as producer decided to keep it in as part of the final product. The recording itself becomes part of the subject of the song as a kind of coda. The banter is actually a brilliant parallel to the content of the song, which undermines the typical “we’ll be together forever” love song trope for one that highlights the reality of serial monogamy common in American culture and the lessons each relationship potentially provides us for the next. Rather than pretend that a romantic relationship is a unique and eternal thing, the song admits the work and changes involved, just as it admits that the seemingly special spontaneity of a song is developed through a process.
Of course, hip hop as a genre, with its frequent use of sampling, tends to make its recording process very evident. While it is possible to play samples “live” using a digital sampler or isolating sections on vinyl via the DJ as band member, the use of pre-recorded fragments means that rap music relies on the vocal dynamics of rap to carry the sense of spontaneity. Yet, in 1993’s “Higher Level,” KRS-One opens with a description of the time and place of the recording—“5 o’clock in the morning” at “D&D Studios,” establishing forever when and where and thus how the recording is happening. Five o’clock in the morning places the creation of the song within a context of working and rocking all through the night to get the album completed. The song may or may not have actually been recorded last, but its placement at the end of Return of the Boom Bap gives it a sense of a last ditch effort to complete the collection of songs. The fact that “5 o’clock in the morning” is likely also among the cheapest available studio times potentially highlights budgetary concerns in the recording itself. This is a rare thing to include in a recording, though the Brand New Heavies cap off the dissolution of their 1994 track “Fake” into pseudo-jazz-messing-around with one of their members chiding, “a thousand dollar a day studio!” This is a different kind of call to authenticity, as a budgetary concern is implicit to a “realness” defined by being non-commercial.
One of my all-time favorite examples is a few years older than “Higher Level”—“Nervous” by Boogie Down Productions: “written, produced and directed by Blastmaster KRS-One,” which includes an attempt to explain how a song is put together on the “48-track board.” Instead of calling instructions to a band, KRS points out that DJ Doc is doing the mixing and instructs him to “break it down, Doc!” just before a beat breakdown (listen at around 1:40). He explains, “Now, here’s what we do on the 48-track board / We look around for the best possible break / And once we find it, we just BREAK,” and then the pre-recorded beat seems to obey his command, breaking down to just the bass drum and a sampled electric piano from Rhythm Heritage’s “The Sky’s the Limit.” Later, he says, “We find track seven, and break it down!” and the music shifts to just the bass guitar and some tinny synth high-hats.
So how does highlighting the recording circumstances, or just bringing attention to the fact that the song being listened to is a multiple-step process of recording and post-production, benefit the song itself? Is it like I mentioned in my 2011 “Defense of Auto-Tune” post, that this kind of attention re-establishes authenticity by making its constructed nature transparent? I’d say yes, in part, but I also think that–through its violation of the expectation of seamlessness–the stray track or reference to recording within a song is a nod to a different kind of skillfulness. Exhortations such as “Take it to the Bridge” give an ironic nod to the extemporaneous to call attention to the diligent workmanship and dedication demanded by studio songcraft. Traditionally, live audiences may appreciate a flawless or nearly flawless performance and understand a masterful recovery from (and/or incorporation of) error as the signs of a good show, but these moments that call attention to the recording studio situation claim there is something to appreciate in the fact that Ringo Starr endured 18 takes of “Helter Skelter” until he had painful blisters, or that KRS-One and DJ Doc worked out the proper way to “feel around” the mixing board to make a grooving collage of sounds as disparate as the theme from “Rat Patrol” and WAR’s “Galaxy.”
KRS may have once admonished other MCs to “make sure live you is a dope rhyme-sayer,” but clearly he believes liveness—whether implicitly or explicitly—is not the only measure of musical ability. Rather, the highlighting of labor in the construction of a recording becomes its own kind of (anti-)vamping and demonstration of skill, and a demonstration of a different kind of sociability in making music, one that these conversational snippets and references to other people in the studio make clear. This kind of attention to group labor is especially important as various recording technologies become increasingly available to the wider public and allow for an isolated pursuit of recording music. Just as calls to liveness in recording engage the listener in ways that suggest participation as a live audience, calls to anti-liveness also engage the listener, but by bringing them across time and space into the studio to witness a different form of great performance.
Osvaldo Oyola is a regular contributor to Sounding Out! and a PhD Candidate in English at Binghamton University working on his dissertation, “Collecting Identity: Popular Culture and Narratives of Afro-Latin Self in Transnational America.” He also regularly posts brief(ish) thoughts on music and comics on his blog, The Middle Spaces.
REWIND! . . .If you liked this post, you may also dig:
Experiments in Agent-based Sonic Composition–Andreas Pape
Welcome back to our continuing series on Orson Welles and his career in radio, prompted by the upcoming 75th anniversary of his 1938 Invasion from Mars episode and the Mercury Theater series that produced it. To help us hear Welles’s rich radio plays in new and more complicated ways, our series brings recent sound studies thought to bear on the puzzle of Mercury‘s audiocraft.
From Mercury to Mars is a joint venture with the Antenna media blog at the University of Wisconsin, and will continue into the new year. If you missed them, check out the first installment on SO! (Tom McEnaney on Welles and Latin America) and the second on Antenna (Nora Patterson on “War of the Worlds” as residual radio).
This week, Sounding Out! sinks its teeth into Orson Welles’s “Dracula,” the first in the Mercury series, and perhaps the play that solicits more “close listening” than any other—back in 1938, Variety yawned at Welles’s attempt at “Art with a capital A” and dismissed his “Dracula” as “a confused and confusing jumble of frequently inaudible and unintelligible voices and a welter of sound effects.” Here’s the full play, listen for yourself:
It’s a good thing that our guide is University of South Carolina Associate Professor and SO! newcomer Debra Rae Cohen. Cohen is a former rock critic, an editor of the essential text on radio modernism, and has also recently written a fascinating essay on the BBC publication The Listener, among other distinguished critical works on modernism. Below you’ll find the most detailed close reading of Welles’s “Dracula” (and of Welles as himself a kind of Dracula) ever done.
Didn’t even know Welles ever played Count Dracula? That’s just the first of many surprises you’ll discover thanks to Debra Rae’s keen listening.
So (to borrow a phrase), enter freely and of your own will, dear reader, and leave something of the happiness you bring. - nv
It’s one of the best-known anecdotes of the Mercury Theater: Orson Welles bursts into the apartment where producer John Houseman is holed up cut-and-pasting a script for Treasure Island, the planned debut production, and announces, only a week before airing, that Dracula will take its place. At a time when Lilith’s blood-drenched handmaidens on the current season of True Blood serve as an analogue for our own cultural oversaturation with vampires, it’s worth recalling why, in 1938, this substitution might have been more than merely the indulgence of Welles’s penchant for what Paul Heyer calls “gnomic unpredictability” (The Medium and the Magician, 52).
In fact, 1938 was a good year for vampire ballyhoo; Tod Browning’s 1931 Dracula film had been rereleased only a month before to a new flurry of Bela Lugosi press. Welles’s last-minute switch was a savvy one, allowing him to capitalize on the publicity generated by the continuing popularity of the film (and the popular Hamilton Deane and John Balderston stage adaptation from which it largely drew), while publicly disdaining its vulgarity in favor of what he seemed peculiarly to consider the high-culture status of Stoker’s original novel. Here he is defending the book:
But more importantly, Welles’s production reclaimed and exploited the novel’s own media-consciousness, a feature occluded in the play and film versions, and one to which the adaptation into radio adds, as it were, additional bite. Dracula introduced several of the radio innovations we’ve come to associate with the Mercury Theater (and The War of the Worlds in particular)—first-person retrospective narration, temporal coding, the strategic use of media reflexivity—but Stoker’s novel may have made such innovations both alluring and inevitable.
Stoker’s Dracula is made up of a patchwork of documents—shorthand diaries, transcribed dictation cylinders, newspaper clippings—that do not simply serve as a legitimizing frame, as in Frankenstein. Instead, they are deeply self-referential, obsessively chronicling the very processes of inscription and translation between media by which the novel is built. Confronted with the terrible threat of Dracula free to prey on London’s “teeming millions,” Mina Harker vows thus: “There may be a solemn duty, and if it come we must not shrink from it. …I shall get my typewriter this very hour and begin transcribing.” Processes of ordering information serve, as critics since Friedrich Kittler have noted (see for example here, here, and especially here), as the way to combat the symbolic threat of vampirism that, as Jennifer Wicke argues, stands in for “the uncanny procedures of modern life,” and a threat that may have already colonized intimate spaces of the text itself (“Vampiric Typewriting,” 473).
That threat, in the novel, sounds oddly like . . . radio. Seeping intangibly through the cracks of door frames, invading domestic spaces, riding through the ether “as elemental dust,” materializing abruptly in intimate settings, communicating across land and sea while rendering his receiver passively malleable, Stoker’s Dracula is terrifying by virtue of his insidious ubiquity, a kind of broadcast technology avant la lettre.
In adapting Dracula for radio, then, Welles could play on the deep division in the novel between the ordered forces of inscription and the Count’s occult, uncanny transmissive force in order to exploit the anxieties connected with the medium itself. Even the double role Welles plays in the production—both Dracula and the doctor Arthur Seward—functions in this regard as more than bravura.
Seward’s primary role in the drama as compère, or advocate, threads together Dracula’s multiple documentary “narrations” through what became the familiar Mercury device of retrospect-turned-enactment. As Seward, Welles performs an argumentative and editorial function that’s nowhere in Stoker’s novel, where the various documents make up a file that is explicitly uncommunicated, because unbelievable, for a case no longer necessary to make. Shuffling the various documents that make up the “case,” Seward stands outside of definite place, but also outside of time, animating “the extraordinary events of the year 1891” by directly addressing an audience of a medium that does not yet exist. Here is part of Seward’s address:
Seward is our first “First Person Singular,” and yet his persona is unsettlingly thin. Though his voice at the outset is strong and urgent, it feels bland compared with the dense goulash of “Transylvanian” effects that competes for our attention through the first ten minutes of the production—hoofbeats, thunder, wolf howls, whinnies, the sound of a coach seemingly about to clatter to bits, the singsong of prayers muttered, perhaps, in some exotic foreign tongue. The “documents” on which Seward’s claim to the trust of the audience rides are overwhelmed by the sound that saturates them. Here is the scene:
It’s not until nearly 20 minutes into the production that Seward reveals his own connection with the story—as the lover of Lucy Westenra—and from this moment forward Welles allows Seward’s authority in the “present” to be eroded by his bland inefficacy in the scenes of the “past.” By Act II, he has ceded authority by telegraph to Dr. Van Helsing (Martin Gabel, in a brilliantly crafted performance):
Without the didactic authority of Van Helsing and with small claim on audience sympathy, Seward becomes, through the second half of the production, a strangely insecure advocate, whose claim on authentic first person experience often disrupts, rather than augments, his role as presenter.
The listener does not consistently “follow” Seward either narratively or sonically—indeed, he is often displaced to the sonic periphery by Dr. Van Helsing. In the final confrontation with Dracula, Seward is explicitly shooed to the outer margins of the soundscape to pray.
Here the technical exigencies of Welles’s double role support a subtext that his unmistakable voice has already suggested: that Seward is here the “other” to Dracula (as, later, his Kurtz would be to his Marlow), waning as he waxes. As Lucy is weakened through Dracula’s occult ministrations, so too is Seward sapped of vitality, his romantic passages voiced as strangely bloodless, while Dracula’s wring from Lucy an orgasmic sonic response. Penetrating the intimate chamber Seward ineffectively desires to protect, Dracula replaces him as the production’s central sonic presence—who even when silent, possesses the sonic space.
Contrast Seward’s feeble voice during his night-time vigil here,
to Dracula’s seductive visit here,
Welles needed to distinguish his Dracula from Lugosi’s, employing, rather than an accent, a kind of sonorous unplaced otherness. But his performance shares the ponderous spacing of syllables that, in Lugosi’s case, derived from phonetic memorization of his English script; in other words, Welles is “recognizable” as Dracula without “playing” him. As an analogue to Lugosi’s glacial movement, Dracula’s voice is here surrounded by depths of silence in an otherwise effect-busy soundscape.
From the beginning, Dracula is also sonically on top of the listener, uncomfortably intimate, as in this scene of a close shave:
And although Dracula’s voice is not heard for a full thirteen minutes after Lucy’s death, it nevertheless seems to inhabit all available silences, until he quietly seeps through the door frame of Mina Harker’s bedroom:
The closely-miked phrase “blood of my blood” is reprised throughout the second half of the production—it is repeated seven times, by both Dracula and Mina (Agnes Moorhead), though it occurs only once in the novel—underscoring the ineffable aurality of Dracula’s “transmission.” The line doesn’t present as meaning, but as a tidal echo, the pulse of a carrier wave. While it signals an action unrepresentable to the ear—Dracula’s literal bite or its resonances of memory and desire—it also functions as a “signal” in the sense that Verma describes, as a repetitive element that compels listenership like an incantation (Theater of the Mind, 106). This is the power against which the “documents” are marshaled, the power of “pure” radio—ironically the very power that allows them to be shared. And the hypnotic thrum of radio rips them to shreds.
Indeed, the closing minutes of the drama present the vampire hunters, the novel’s forces of inscription, as an array of anxious noises marshaled against this lurking silence. The frenzied pacing of the final chase back to Transylvania—an element of Stoker’s novel that both plays and film sacrificed—gathers momentum through ever-shorter “diary entries” delivered, breathlessly, over the sound effects of transport:
Welles exploits the familiarity of his audience with a mechanism that Kathleen Battles calls a “radio dragnet”; the forces of order deploy the ubiquity of radio itself to shore up social cohesion, enlisting the audience within their ranks (Calling all Cars, 149). But here that very process is, simultaneously, unsettled and undermined by the identification of Dracula himself with invisible transmission. As Van Helsing repeatedly hypnotizes Mina to tap in on her communion with Dracula—radio, in a sense, deploying radio—the listener is aware of being both eavesdropper and the sharer of rapport, a position that implicates her in Mina’s enthrallment. Here is part of the sequence:
This identification intensifies in the climactic sequence, completely original to Welles’s adaptation, in which Dracula, at bay before his enemies, weakened by sunlight, calls upon the elements of his undead network:
This tour-de-force moment for Welles is also the point when radio shatters the documentary frame and undermines its logic. Though Mina hears Dracula, the others do not, and as Van Helsing’s “testimony” attests, even she does not remember it. This communication can’t, then, be part of Seward’s “evidence.” Rather, it is the radio listener—Dracula’s real prey—who has received Dracula’s transmission, who has heard across time and space what no one else present can hear: “You must speak for me, you must speak with my heart.”
Although Mina refuses this rapport by staking Dracula at the last possible second—or does she refuse it? Is this not perhaps the Count’s secret wish?—the effect of the uncanny communion persists beyond Seward’s summation, beyond Van Helsing’s subsequent account of Dracula’s end. It renders almost unnecessary Welles’s famous playful post-credits epilogue, in which he abruptly adopts Dracula’s tones to tell us that, “There are wolves. There are vampires”:
But with the hypnotic reach of radio at your disposal, who needs them?
Debra Rae Cohen is an Associate Professor of English at the University of South Carolina. She spent several years as a rock & roll critic before returning to academe. Her current scholarship, including her co-edited volume Broadcasting Modernism (University Press of Florida, 2009, paperback 2013) focuses on the relations between radio and modernist print cultures; she’s now working on a book entitled “Sonic Citizenship: Intermedial Poetics and the BBC.”
REWIND! . . .If you liked this post, you may also dig:
“Radio’s ‘Oblong Blur’: Notes on the Corwinesque“– Neil Verma
Since the drug war began, the lives of approximately 95,000 people have been claimed, with an estimated total of 26,000 disappearances (“Continued Humanitarian Crisis at the Border” June 2013 Report). At the University of Texas, Pan American – an institution located 25 miles north of Reynosa, Mexico (a Mexican city that has been hard hit by drug violence in recent years) – I teach students who have been dramatically impacted by drug war violence. Many have close relatives affected or hear horrific stories of those who have been kidnapped by the cartels; many fear traveling to Mexico to visit loved ones as a result and, in some cases, they report having relatives involved in the drug cartel business. Dinorah Guerra, psychotherapist and head of the Red Cross in Reynosa, describes the devastating psychological and physical toll: “There is a huge risk for people’s self esteem. They cannot speak about what they have seen or what they have heard. [They] lose [themselves] and lose [their] identity” (qtd. in Penhaul 2010).
I name the space of the drug war and its resulting terror in the U.S.-Mexico border the “soundscape of narco silence.” This soundscape includes death and intimidation, from the brutal killings of news reporters by cartel members to the decapitation of citizen-activists who use online media to alert communities of narco checkpoints. It also consists of those powerful acts that call attention to silence as a tactic of terror. The Movement for Peace with Justice and Dignity, for instance, brought together tens of thousands of people in Mexico to speak out against drug violence through silent marches. Cultural productions, such as narcocorridos, or contemporary drug ballads that document the cross-border drug economy, also become part of the soundscape. The narcocorridos function as a powerful critical response to silence and fear because they enable those in Mexican society, as Jorge Castañeda, author of El Narco: La Guerra Fallida, explains, “to come to terms with the world around them, and drug violence is a big part of that world. The songs are born out of a traditional Mexican cynicism: This is our reality, we’ve gotten used to it” (qtd. in Josh Kun 2010).
In this blog post, I focus on the role of U.S. Latina/o theater produced in the South Texas border region as it responds to the soundscape of narco silence. Building on David W. Samuels, Louis Meintjes, Ana Maria Ochoa, and Thomas Porcello’s definition of “soundscapes” in “Soundscapes: Towards a Sounded Anthropology” as “the material spaces of performance and ceremony that are used or constructed for the purpose of propagating sound” (330), I suggest that soundscapes of silence in theater function as material spaces of performance that focus the public’s attention on silence – with the intent of intervening in acts that propagate silence and fear. Soundscapes of narco silence are characterized not only by violence and terror, but by cultural productions that function as forms of critical resistance – those works that focus the public’s attention on the economy of silence and fear that fuels the drug war, and in the process, enable communities to cope with narco violence.
To closely listen to the soundscape of narco silence, I engage with the play script and production of Tanya Saracho’s play El Nogalar at South Texas College Theatre (STC) under the direction of Joel Jason Rodriguez in McAllen, Texas in June 2013. The play was first produced at the Goodman Theater in Chicago in April 2011, with a West Coast premiere at the Fountain Theater in Los Angeles in January 2012. I critically analyze the STC Theatre production’s incorporation of a multi-genre soundtrack that included narcocorridos, rancheras, and nortec (norteño + techno). I argue that this soundtrack focused audiences’ attention not only on the devastating effects of silence, but also the function of silence as a form of capital for those most excluded in society. I also offer a brief critical listening of the script’s rendering of silence through character dialogue and stage directions.
El Nogalar tells the story of an upper-class Mexican family comprised of three generations of women (Maité, Valeria, and Anita) whose land and home in the fictionalized estate of Los Nogales in Nuevo Leon, Mexico, and its adjacent nogalar (pecan orchard), are under threat by the maña (drug cartels) moving into the region. The play focuses on the women’s responses to the drug war economy as a result of their different relationships to home (both their estate and the space of Mexico). It also centers the experiences of Dunia, their maid, and López, a former field worker who now works for the maña.
Cecilia Ballí, in her article “Calderón’s War: The Gruesome Legacy of Mexico’s Antidrug Campaign,” explains the particular circumstances of marginalized men in this society: “The worst casualties of this ‘civil war’ were the estimated 7 million young men to whom society had closed all doors, leaving them the options of joining a drug gang or of enlisting in the military, both of which assured imprisonment or death” (January 2012, 48). With the characters López and Dunia, the play asks audiences then to listen to the impact of the drug war on the most vulnerable populations in Mexico and the US-Mexico border region.
The play script conveys how “narco silence” can be used by those who either seek to preserve traditional class hierarchies (the story of the matriarch Maité) or to survive and profit in the new drug economy (the story of López). “Narco silence,” a term coined by reporter John Gibler in his book To Die in Mexico, refers to “not the mere absence of talking, but rather the practice of not saying anything. You may talk as much as you like, as long as you avoid the facts. Newspaper headlines announce the daily death toll, but the articles will not tell you anything about who the dead were, who might have killed them or why. No detailed descriptions based on witness testimony. No investigation” (2011, 23). In an early exchange between López and Dunia, López defends “narco silence” as a strategy of survival:
DUNIA: Why are you the only one they leave alone, Memo?
DUNIA: All the men your age. Killed. Why Memo? (Beat).
LÓPEZ: Because I know when to keep my mouth shut which is not something I can say for you….
DUNIA: So that’s all it takes to be best of friends with the Maña? That doesn’t seem so hard to do? (American Theatre Magazine July/August 2011, 74).
Later in the play, Dunia, heeding López’s advice, offers a powerful observation of how “narco silence” enables her community to cope with death: “We all just walk around like we’re a movie on mute. You can see people’s mouths moving but all you hear is the static (my italics)” (American Theatre Magazine July/August 2011, 73-74).
The STC Theatre production enhanced the script’s soundscape of narco silence through its sound design, with a soundtrack that included rancheras, narcocorridos, norteño and nortec. This music connected audiences to the world of Los Nogales and captured each character’s process of coping with narco violence. For example, Maité’s soundtrack consists of several rancheras, such as Lola Beltrán singing “Los Laureles” and Chavela Vargas’s rendition of “Que te vaya Bonito.” Beltrán’s “Los Laureles” – a canción ranchera featuring her powerful female vocals and mariachi orchestra instrumentation – invites audiences to hear Maité’s nostalgia and desire for an idealized Mexican society and her wish to preserve traditional class hierarchies.
Vargas’s rendition of “Que te vaya Bonito” captures Maité’s pain and suffering as she loses her home to the cartels. In Vargas’s version of this song about love and abandonment, audiences hear her choking and sobbing voice, accompanied by a single guitar.
Vargas’s voice conveys what Lorena Alvarado argues is “the body’s dilemma between the hysteria of sobbing and the intelligibility of words, between resignation and retribution” (2010, 4). Her singing also conveys, as Alvarado further describes, “un nudo en la garganta,” a common expression in Spanish that describes “the knot in the throat, when one cannot speak because words will not come out, but the desperate, or quiet, breath of tears” (Alvarado 2010, 5).
To sonically register the drug cartel economy and lifestyle underlying the “new” Los Nogales, the soundtrack also included narcocorridos. The first sounds we hear in the play are from the narcocorrido “El Carril Número Tres” – which includes two acoustic guitars and an electric bass – by Los Cuates de Sinaloa.
“El Carril Número Tres” – tells the story of a secret “lane number three” that allows a Mexican drug lord to freely go back and forth between the US and Mexico because he makes a deal with the CIA and DEA. With this focus on the US government’s involvement in the drug trade, the song centers how silences north of the US-Mexico border have perpetuated drug violence.
The music also included nortec, with songs by the Mexican Institute of Sound, particularly the track “Mexico,” which is a critique of the Mexican government’s complicity with the narcos.
With “Mexico,” audiences hear a fusion of norteño, electronic, and hip-hop with lyrics that use the symbols of Mexican national identity and culture to focus the public’s attention on violence and terror. With the lyrics “green like weed, white like cocaine, red your blood” (referencing the Mexican flag) and “at the sound of the roar of the cannon” (alluding to the national anthem), the song powerfully invokes the visual and sonic landscape of violence that terrorizes Mexican residents. With this charged critique of government corruption, “Mexico” momentarily interrupts the soundscape of narco silence rendered in the play script and the rest of the soundtrack.
Ultimately, the production’s combination of rancheras, narcocorridos, and nortec captured the class tensions in Mexican society and emphasized the play’s critique of class structures that have enabled drug war violence to persist. With this range of music, the director explains he wanted to “maneuver between the [various] aspects of [the story]: the nostalgia, the corridos, the narcocorridos, and also this fusion of saying ‘we want something more,’ and so that was the whole aspect of it; the blending of the old, the new, and what the present is” (Interview with author July 2013).
The production also deliberately incorporated the sound of silence, particularly in the final scene. By the end, López buys the Los Nogales estate, thereby increasing his class status and social power. Saracho’s stage directions in this final moment indicate “an interpretive sound of trees falling. Now don’t go cueing chainsaws because it’s not literal. Just make me feel trees are falling. Along with the upper class” (87). The play’s reference to the staging of “an interpretive sound of trees falling” brings to mind the philosophical question: “If a tree falls in a forest and no one is around to hear it, does it make a sound?” We might then interpret the final sounds of El Nogalar as inviting audiences to listen attentively to the soundscape of narco silence, implicating audiences as social actors in the politics of the drug war that continue to devastate Mexican society.
Featured Image: Journalists Protest against rising violence during march in Mexico, Courtesy of the Knight Foundation
Marci R. McMahon received her Ph.D. from the University of Southern California with affiliations in the Department of American Studies and Ethnicity. She is an assistant professor at the University of Texas, Pan American, where she teaches Chicana/o literature and cultural studies, gender studies, and theater and performance in the Departments of English and Mexican American Studies. She is the author of Domestic Negotiations: Gender, Nation, and Self-Fashioning in US Mexicana and Chicana Literature and Art published by Rutgers University Press’ Series Latinidad: Transnational Cultures in the United States (May 2013). Her essays on Chicana literature and cultural studies have been published in Aztlán: A Journal of Chicano Studies; Chicana/Latina Studies: The Journal of MALCS; and Frontiers: A Journal of Women’s Studies. Her second book project, Sounding Latina/o Studies: Staging Listening in US Latina/o Theater explores how contemporary Latina/o drama uses vocal bodies and sound to engage audiences with recurring debates about nationhood, immigration, and gender.
REWIND! . . .If you liked this post, you may also dig:
Quebec’s #casseroles: on participation, percussion and protest–Jonathan Sterne
Deejaying her Listening: Learning through Life Stories of Human Rights Violations– Emmanuelle Sonntag and Bronwen Low
This is part two of a three part series on live electronic music. To review part one, click here.
In the first part of this series, “Toward a Practical Language for Live Electronic Performance,” I established the language of variability as an index for objectively measuring the quality of musical performances. From this we were able to rethink traditional musical instruments as musical objects with variables that could be used to evaluate performer proficiency, both as a universal for the instrument (proficiency on the trumpet) and within genre and style constraints (proficiency as a Shrill C trumpet player). I propose that this language can be used to describe the performance of electronic music, making it easier to parallel with traditional western forms. Having proven useful with traditional instruments, we’ll now see if this language can be used to describe the musical objects of electronic performance.
We’ll start with a DJ set. While not necessarily an instrument, a performed DJ set is a live musical object composed of a number of variables. First would be the hardware. A vinyl DJ rig consists of at least two turntables, a mixer and a selection of vinyl. A CDJ rig uses two CD decks and a mixer. Serato and other vinyl controller software require only one turntable, a mixer and a laptop. Laptop mixing can be done with or without a controller. One could also do a cassette mix, reel to reel mix, or other hardware format mixing. Critical to all of these setups is a means of combining audio from separate sources into one uniform mix. Some of the other variables involved include selection, transitions and effects.
Because DJ sets are expected to be filled with pre-recorded sounds, the selection of sounds available is as broad as all of the sounds ever recorded. Specific styles of DJ sets, like an ambient DJ set, limit the selection down to a subset of recorded music. The choice of hardware can limit that even more. An all-vinyl DJ set of ambient music presents more of a challenge, in terms of selection, than a laptop set in the same style, because there are fewer ambient records pressed to vinyl than are available in a digital format.
Connected to selections are transitions, which could be said to define a DJ. When thinking of transitions there are two component factors: the playlist and the actual movement from one song to another. The playlist is obviously directly tied to the selection; however, even if you select the most popular songs for the style, unless they are put into a logical order, the transitions between them could make the set horrible.
One of the transitional keys to keeping a mix flowing is beat matching. In a turntable DJ set the beat matching degree of difficulty is high because all of the tempos have to be matched manually by adjusting the speed of the two selections on the spinning turntables. When the tempos are synchronized, transitioning from one to the other is accomplished via a simple crossfade. With digital hardware such as the laptop, Serato and even CDJ setups, there is commonly a way to automatically match beats between selections. This makes the degree of difficulty to beat match in these formats much lower.
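The arithmetic behind that manual adjustment is simple even if the physical skill is not. As a rough sketch (a hypothetical helper, not any DJ software’s actual API), the speed change a turntablist dials in on the pitch fader can be computed as a percentage:

```python
def pitch_adjustment(source_bpm: float, target_bpm: float) -> float:
    """Return the percentage speed change needed so that a track
    at source_bpm plays back at target_bpm, as on a turntable's
    pitch fader (which typically ranges about +/-8%)."""
    return (target_bpm / source_bpm - 1.0) * 100.0

# Matching a 120 BPM record to a 126 BPM record:
print(round(pitch_adjustment(120.0, 126.0), 1))  # 5.0, i.e. +5% speed
```

Digital sync features perform exactly this calculation (plus beat-grid alignment) automatically, which is why the degree of difficulty drops so sharply.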
Effects, another variable, rely on what’s available through the hardware medium. With the turntable DJ set, the mixer is the primary source of effects, and those until recent years have been limited to disc manipulation (e.g. scratching), crossfader, and EQ effects. Many of the non-vinyl setups and even some of the vinyl setups now include a variety of digital effects like delay, reverb, sampler, granular effects and more.
With these variables so defined it becomes easier to objectively analyze the expressed variability of a live DJ set. But, while the variables themselves are objective, the value placed on them and even how they are evaluated are not. The language only provides the common ground for analysis and discussion. So the next time you’re at an event and the person next to you says, “this DJ is a hack!” you can say, “well they’ve got a pretty diverse selection with rather seamless transitions, maybe you just don’t like the music,” to which they’ll reply, “yeah, I don’t think I like this music,” which is decent progress in the scheme of things. If we really want to talk about live electronic performance, however, we will need to move beyond the DJ set to exemplify how this variable language can work to accurately describe the other musical objects which appear at a live electronic performance.
Take for example another electronic instrument: the keyboard. The keyboard itself is a challenging instrument to define; in fact I could argue that the keyboard is not actually an instrument but a musical object. It is a component part of a group of instruments commonly referred to as keyboards, but the keyboard itself is not the instrument. It is, rather, one of the earliest examples of controllerism.
On a piano, fingers typically press keys on the keyboard, which trigger the hammers to hit the strings and produce sound. The range of the instrument spans seven octaves from A0 to C8, and can theoretically have 88-voice polyphony, though in practice that polyphony is limited to the player’s ten fingers. It can play a wide range of dynamics and includes pedals which can be used to modify the sustain of pitches. With a pipe organ, the keyboard controls woodwind instruments with completely different timbre, range, and dynamics; the polyphony increases and the foot pedals can perform radically different functions. The differences from the piano grow even more once we enter the realm where the term “keyboard,” as instrument, is most commonly used: the synthesizer keyboard.
The first glaring difference is that, even if you have an encyclopedia of knowledge about keyboard synthesizers, when you see a performer with one on stage you simply cannot know by seeing what sounds it will produce. Pressing the key on a synthesizer keyboard can produce an infinite number of sounds, which can change not just from song to song, but from second to second and key to key. A performer’s left thumb can produce an entirely different sound than their left index finger. Using a keyboard equipped with a sequencer, the performer’s fingers may not press any keys at all but can still be active in the performance.
When the keyboard synthesizer was first introduced, it was being used by traditional piano players in standard band configurations, like a piano or organ, with timbres being limited to one during a song and the performance aspect being limited to fingers pressing keys. Some keyboardists however used the instrument more as a bank for sound effects and textures. They may have been playing the same keys, but one wouldn’t necessarily expect to hear a I IV V chord progression. Rather than listening for the physical dexterity of the player’s fingers, the key to listening to a keyboard in this context was evaluating the sounds produced first and then how they were played to fit into the surrounding musical context.
Could one of these performers be seen as more competent than the other? Possibly. The first performer could be said to be one of the most amazing keyboard players in the piano player sense, but where they aren’t really maximizing the variability potential of the instrument, it could be said they fall short as a keyboard synthesizer performer. The second performer on the other hand may not even know what a I IV V chord progression is and thus be considered incompetent on the keyboard in the piano player sense, but the ways in which they exploit the variable possibilities shows their mastery of the keyboard synthesizer as an instrument.
While generally speaking there isn’t a set of variables which define the keyboard synthesizer as an instrument, if we think of the keyboard synthesizer as a group of musical instruments, each of the individual types of keyboard synthesizers come with their own set of fixed variables which can be defined. Many of these variables are consistent across the various keyboards but not always in a standard arrangement.
As such, while the umbrella term “keyboard” persists, it is perhaps more practical to define the instruments and their players individually. There are Juno 60 players, ARP Odyssey players, MiniMoog players, Crumar Spirit players and more. Naturally an individual player can be well versed in more than one of these instruments and thus be thought of as a keyboardist, but their ability as a keyboardist would have to be properly contextualized per instrument in their keyboard repertoire. Using the MiniMoog as an example we can show how its variability as an instrument defines it and plays into how a performance on the instrument can be perceived.
The first variable worth considering when evaluating the MiniMoog is that it is a monophonic instrument. This is radically different from the piano; despite one’s ability to use ten fingers (or another extremity), only one note will sound at a time. The keyboard section of the instrument is only three and a half octaves long, though the range is itself variable. On the left-hand side there is a pitch wheel and a modulation wheel. The pitch wheel can vary the pitch of the currently playing note, while the modulation wheel can alter the actual sound design.
As a monophonic instrument, one does not need to have both hands on the keyboard, as only one note will ever sound at a time. This frees the hands to modify the sound being triggered by the keyboard exemplified via the pitch and modulation wheels, but also available are all of the exposed controls for the sound design. This means that in performance every aspect of the sound design and the triggering can be variable. Of course these changes are limited to what one can do with their hands, but the MiniMoog also features a common function in analog synths, a Control Voltage input. This means that an external source can control either the aspects of the sound design and/or the triggering for the instrument.
Despite this obvious difference from the piano, playing the MiniMoog does not have to be any less of a physical act. A player using their right hand to play the keyboard while modulating the sound with their left, plays with a different level of dexterity than the piano player. The right and left hand are performing different motions; while the right hand uses fingers to press keys as the arm moves it up and down the keyboard, the left hand can be adjusting the pitch or modulation wheels with a pushing action or alternately adjusting the knobs with a turning action. Like patting your head and rubbing your belly, controlling a well-timed filter sweep while simultaneously playing a melody is nowhere near as easy as it sounds.
At the same time playing the MiniMoog doesn’t have to be very physical at all. A sequencer could be responsible for all of the note triggering leaving both hands free to modulate the sound. Similarly the performer may not touch the MiniMoog at all, instead playing the sequencer itself as an intermediary between them and the sound of the instrument. In this case the MiniMoog is not being used as a keyboard, yet it retains its instrument status as all of the sounds are being generated from it, with the sequencer being used as the controller. Despite not having any physical contact with the instrument itself, the performer can still play it.
Taking it one step further: if a performer were to only touch a sequencer at the start of the performance to press play and never touch the instrument, could they still be said to be playing the MiniMoog live? There is little doubt that the MiniMoog is indeed still performing, because it does not have the mechanism to play by itself but requires agency to elicit a sonic response. In this example that agency comes from the sequencer, but that does not eliminate the performer. The sequencer itself has to be programmed in order to provide the instrument with the proper control voltages, and the instrument itself has to be set up sonically with a designed sound receptive to the sequencer’s control. If the performer is not physically manipulating either device, however, they are not performing live; the machines are.
From this we can establish the first dichotomy of electronic performance; the layers of variability in an electronic performance can be isolated into two specific categories: physical variability and sonic variability. While these two aspects are also present in traditional instrument performance, they are generally inseparable there without additional devices. The vibrato of an acoustic guitar is only accomplished by physically modulating the strings to produce the effect. With an electronic instrument, however, vibrato can be performed by an LFO controlling the amplitude. That LFO can be controlled physically, but there does not have to be a physical motion (such as a knob turn) associated with it in order for it to be a live modulation or performance. The benefit of it running without physical aid is that it frees up the body to be able to control other things, increasing the variability of the performance.
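The point can be made concrete in a few lines of code. Here is a minimal sketch (pure NumPy, not any synthesizer’s actual implementation) of an LFO modulating a carrier tone’s amplitude: once the rate and depth are set, the modulation runs continuously with no physical gesture at all, which is exactly the decoupling of sonic from physical variability described above:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def lfo_amplitude_mod(freq: float, lfo_rate: float,
                      depth: float, seconds: float) -> np.ndarray:
    """Generate a sine tone whose amplitude is swept by a
    low-frequency oscillator. depth=0.5 means the gain dips
    to half its peak once per LFO cycle."""
    t = np.linspace(0.0, seconds, int(SR * seconds), endpoint=False)
    carrier = np.sin(2 * np.pi * freq * t)
    # LFO sweeps the gain between (1 - depth) and 1, with no
    # per-sample physical input required from the performer
    lfo = 1.0 - depth * 0.5 * (1.0 + np.sin(2 * np.pi * lfo_rate * t))
    return carrier * lfo

tone = lfo_amplitude_mod(440.0, 5.0, 0.5, 2.0)  # A440, 5 Hz LFO, 50% depth
```

A performer could leave this modulation running while their hands work a filter cutoff or the keyboard itself, multiplying the simultaneous variables in play.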
In a situation where all of the aspects of the performance are being controlled by electronic functions, the agency in performance shifts from the artist performing live, to the artists establishing the parameters by which the machines perform live. Is the artist calling this a live performance a hack? Absolutely not, but it’s important that the context of the performance is understood for it to be evaluated. Like evaluating the monophonic MiniMoog performer based on the criteria of the polyphonic pianist, evaluating a machine performance based on physical criteria is unfair.
In the evaluation of a machine performance, just as with a physical one, variability still plays an important role. At the most basic level the machine has to actually be performing, and this is best measured by the potential variability of the sound. This gets tricky with digital instruments, as, barring outside influences, it is completely possible to repeat the exact same performance in the digital domain, so that there is no variation between iterations. But even such cases, with a digital sequencer controlling a digital instrument and no physical interaction, are still machine performances; they just exhibit very little variability. The performance aspect of the machine only disappears when the possibility for variability is completely removed, at which point the machine is no longer a performance instrument but a playback device, as is the case with a CD player playing a backing track. The CD player, if not being manipulated physically or by an external control, is not a performance instrument, as all of the sound contained within it can only be heard as one fixed recorded performance, not live. It is only when these fixed performances are manipulated either physically (i.e., a DJ set) or by other means that they go from fixed performances to potentially live ones.
From all of this we arrive at four basic distinctions for live electronic performances:
• The electro/mechanical manipulation of fixed sonic performances
• The physical manipulation of electronic instruments
• The mechanized manipulation of electronic instruments
• A hybrid of physical and mechanized manipulation of electronic instruments
These help set up the context for evaluating electronic performances, as before we can determine the quality of a performance we must first be able to distinguish what type of performance we are observing. So far we’ve only dealt with a monophonic instrument, but even with its limitations we can see how the potential variability is quite high. As we get into the laptop as a performance instrument, that variability increases exponentially.
This is part two of a three part series. In the next part we will begin to exemplify the laptop as performance instrument, using this language to show the breadth of variability available in electronic performance and perhaps show that indeed, where that variability continues to be explored, there is merit to the potential of live electronic music as an extension of jazz.
Native Frequencies at the Trocadero 2013, Featured Image Courtesy of Raymond Angelo (C)
Primus Luta is a husband and father of three. He is a writer, technologist and an artist exploring the intersection of technology and art, and their philosophical implications. He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions Podcast series with his show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.
REWIND! . . .If you liked this post, you may also dig:
Experiments in Agent-based Sonic Composition–Andreas Pape
Sound as Art as Anti-environment–Steven Hammer