Sounding Out! Podcast #63: The Sonic Landscapes of Unwelcome: Women of Color, Sonic Harassment, and Public Space
CLICK HERE TO DOWNLOAD: The Sonic Landscapes of Unwelcome: Women of Color, Sonic Harassment, and Public Space
SUBSCRIBE TO THE SERIES VIA ITUNES
ADD OUR PODCASTS TO YOUR STITCHER FAVORITES PLAYLIST
This podcast focuses on the sonic landscapes of unwelcome which women and femmes of color step into when we walk down the street, take the bus, and navigate public and professional spaces. Women of color must navigate harassing, violent, and sexually abusive language and noise in public space. While walking to the market or bus, a man or many might yell at us, blow us an unwanted kiss, comment on our bodies, describe explicit sexual acts, or call us “bitch.” The way that women and femmes do or do not respond to such unwelcome language can result in retaliation and escalated violence. As a form of harm reduction, women often wear headphones and listen to music while in public for the specific purpose of cancelling out the hostile sonic landscape into which we are walking. The way that women and femmes make use of technology and music as tools of survival in hostile sonic landscapes is a form of femme tech as well as femme defense. What sort of psychological and emotional effect does constant and repeated exposure to abusive noise have on the minds and bodies of women of color?
Locatora Radio is a Radiophonic Novela hosted by Mala Muñoz and Diosa Femme, two self-identified locxs. Also known as “Las Mamis of Myth & Bullshit”, Las Locatoras make space for the exploration and celebration of the experiences, brilliance, creativity, and legacies of femmes and womxn of color. Each Capitulo of Locatora Radio is made with love and brujeria, a moment in time made by brown girls, for brown girls. Listen as Las Locatoras keep brown girl hour and discuss the layers and levels of femmeness and race, mental health, trauma, gender experience, sexuality, and oppression.
Featured image of Mala and Diosa is used with permission of the authors.
REWIND! . . .If you liked this post, you may also dig:
Chicana Soundscapes: Introduction — Michelle Habell-Pallán
If La Llorona Was a Punk Rocker: Detonguing The Off-Key Caos and Screams of Alice Bag– Marlen Ríos-Hernández
Editors’ note: As an interdisciplinary field, sound studies is unique in its scope—under its purview we find the science of acoustics, cultural representation through the auditory, and, to perhaps mis-paraphrase Donna Haraway, emergent ontologies. Not only are we able to see how sound impacts the physical world, but how that impact plays out in bodies and cultural tropes. Most importantly, we are able to imagine new ways of describing, adapting, and revising the aural into aspirant, liberatory ontologies. The essays in this series all aim to push what we know a bit, to question our own knowledges and see where we might be headed. In this series, co-edited by Airek Beauchamp and Jennifer Stoever you will find new takes on sound and embodiment, cultural expression, and what it means to hear. –AB
In November 2016, my colleague Imani Wadud and I were invited by professor Sherrie Tucker to judge a battle of the bands at the Lawrence Public Library in Kansas. The battle revolved around manipulation of one specific musical technology: the Adaptive Use Musical Instruments (AUMI). Developed by Pauline Oliveros in collaboration with Leaf Miller and released in 2007, the AUMI is camera-based software that enables various forms of instrumentation. It was first created in work with (and through the labor of) children with physical disabilities in the Abilities First School (Poughkeepsie, New York) and designed with the intention of researching its potential as a model for social change.
Our local AUMI initiative KU-AUMI InterArts forms part of the international research network known as the AUMI Consortium. KU-AUMI InterArts has been tasked by the Consortium to focus specifically on interdisciplinary arts and improvisation, which led to the organization’s commitment to community-building “across abilities through creativity.” As KU-AUMI InterArts member and KU professor Nicole Hodges Persley expressed in conversation:
KU-AUMI InterArts seeks to decentralize hierarchies of ability by facilitating events that reveal the limitations of able-bodiedness as a concept altogether. An approach that does not challenge the able-bodied/disabled binary could dangerously contribute to the infantilizing and marginalization of certain bodies over others. Therefore, we must remain invested in understanding that there are scales of mobility that transcend our binary renditions of embodiment and we must continue to question how it is that we account for equality across abilities in our Lawrence community.
Local and international attempts to interpret the AUMI as a technology for the development of radical, improvisational methods are by no means a departure from its creators’ motivations. In line with KU-AUMI InterArts and the AUMI Consortium, my work here is that of naming how communal, mixed-ability interactions in Lawrence have come to disrupt the otherwise ableist communication methods that dominate musical production and performance.
The AUMI is designed to be accessed by those with profound physical disabilities. The AUMI software works using a visual tracking system, represented on-screen with a tiny red dot that begins at the very center. Performers can move the dot’s placement to determine which part of their body and its movement the AUMI should translate into sound. As one moves, so does the dot, and, in effect, the selected sound is produced through the performer’s movement.
Could this curious technology help build radical new coalitions between researchers and disabled populations? Mara Mills’s research examines how the history of communication technology in the United States has advanced through experimentation with disabled populations that are often positioned as an exemplary pretext for funding, are then unable to access the final product, and are sometimes even erased entirely from the history of a product’s development in the name of universal communication and capitalist accumulation. Therefore, the AUMI’s usage beyond the disabled populations first involved in its invention always stands on dubious historical, political, and philosophical ground. Yet, there is no doubt that the AUMI’s challenge to ableist musical production and performance has unexpectedly affected and reshaped communication for performers of different abilities in the Lawrence jam sessions, which speaks to its impressive coalitional potential. Institutional (especially academic) research invested in the AUMI’s potential then ought to, as its perpetual point of departure, loop back its energies in the service of disabled populations marginalized by ableist musical production and communication.
Facilitators of the library jam sessions, including myself, deliberately avoid exoticizing the AUMI and separating its initial developers and users from its present incarnations. To market the AUMI primarily as a peculiar or fringe musical experience would unnecessarily “Other” both the technology and its users. Instead, we have emphasized the communal practices that, for us, have made the AUMI work as a radically accessible, inclusionary, and democratic social technology. We are mainly invested in how the AUMI invites us to reframe the improvisational aspects of human communication upon a technology that always disorients and reorients what is being shared, how it is being shared, and the relationships between everyone performing. Disorientations reorient when it comes to our Lawrence AUMI community, because a tradition is being co-created around the transformative potential of the AUMI’s response-rate latency and its sporadic visual mode of recognition.
In his work on the AUMI, KU alumnus and sound studies scholar Pete Williams explains how the wide range of mobility typically encouraged in what he calls “standard practice” across theatre, music, and dance is challenged by the AUMI’s tendency to inspire “smaller” movements from performers. While he sees in this affective/physical shift the opportunity for able-bodied performers to encounter “…an embodied understanding of the experience of someone with limited mobility,” my work here focuses less on the software’s potential for able-bodied performers to empathize with “limited” mobility and more on the atypical forms of social interaction and communication the AUMI seems to evoke in mixed-ability settings. An attempt to frame this technology as a disability simulator not only marks a troubling departure from its original, intended use by children with severe physical disabilities, but also constitutes a prioritization of able-bodied curiosity that contradicts what I’ve witnessed during mixed-ability AUMI jam sessions in Lawrence.
Sure, some able-bodied performers may come to describe such an experience of simulated “limited” mobility as meaningful, but how we integrate this dynamic into our analyses of the AUMI matters, through and through. What I aim to imply in my read of this technology is that there is no “limited” mobility to experientially empathize with in the first place. If we hold the AUMI’s early history close, then the AUMI is, first and foremost, designed to facilitate musical access for performers with severe physical disabilities. Its structural schematic and even its response-rate latency and sporadic visual mode of recognition ought to be treated as enabling functions rather than limiting ones. From this position, nothing about the AUMI exists for the recreation of disability for able-bodied performers. It is only from this specific position that the collectively disorienting/reorienting modes of communication enabled by the AUMI among mixed-ability groups may be read as resisting the violent history of labor exploitation, erasure, and appropriation Mills warns us about: that is, when AUMI initiatives, no matter how benevolently universal in their reach, act fundamentally as a strategy for the efficacious and responsible unsettling of ableist binaries.
The way the AUMI latches on to unexpected parts of a performer’s body and the “discrepancies” of its body-to-sound response rate are at the core of what sets this technology apart from many other instruments, but it is not the mechanical features alone that accomplish this. Sure, we can find similar dynamics in electronics of all sorts that are “failing,” in one way or another, to respond with accuracies intended during regular use, or we can emulate similar latencies within most recording software available today. But what I contend sets the AUMI apart goes beyond its clever camera-based visual tracking system and the sheer presence of said “incoherencies” in visual recognition and response rate.
What makes the AUMI a unique improvisational instrument is the tradition currently being co-created around its mechanisms in the Lawrence area, and the way these practices disrupt the borders between able-bodied and disabled musical production, participation, and communication. The most important component of our Lawrence-area AUMI culture is how facilitators engage the instrument’s “discrepancies” as regular functions of the technology and as mechanical dynamics worthy of celebration. At every AUMI library jam session I have participated in, not once have I heard Tucker or other facilitators make announcements about a future “fix” for these functions. Rather, I have witnessed an embrace of these features as intentionally integrated aspects of the AUMI. It comes as no surprise, then, that a “Battle of the Bands” event was organized as a way of leaning even further into what makes the AUMI more than a radically accessible musical instrument––that is, its relationship to orientation.
Perhaps it was the competitive framing of the event––we offered small prizes to every participating band––or the diversity among that day’s participants, or even the numerous times some of the performers had previously used this technology, but our event saw a deliberate and collaborative improvisational method unfold in preparation for the performances. An ensemble mentality began to congeal even before performers entered the studio space, when Tucker first encouraged performers to choose their own fellow band members and come up with a working band name. The two newly-formed bands––Jayhawk Band and The Human Pianos––took turns laying down collaboratively premeditated improvisations with composition (and perhaps even prizes) in mind. iPad AUMIs were installed in a circle on stands, with studio monitor headphones available for each performer.
Jayhawk Band’s eponymous improvisation “Jayhawks,” which brings together stylized steel drums, synthesizers, an 80’s-sounding floor tom, and a plucked woodblock sound, exemplifies this collaborative sensory ethos, unique in the seemingly discontinuous melding of its various sections and the play between its mercurial tessellations and amalgamations:
In “Jayhawks,” the floor tom riffs are set along a rhythmic trajectory defiant of any recognizable time signature, and the player switches suddenly to a wood block/plucking instrument mid-song (00:49). The composition’s lower-pitched instrument, sounding a bit like an electronic bass clarinet, opens the piece and, starting at 00:11, repeats a melodically ascending progression also uninhibited by the temporal strictures of time signature. In fact, all the melodic layers in “Jayhawks” demonstrate a kind of temporally “unhinged” ensemble dynamic present in most of the library jam sessions that I’ve witnessed. Yet unexpected moves and elements ultimately cohere for jam session performers, such as Jayhawk Band’s members, because certain general directions were agreed upon prior to hitting “record,” whether this entails sound bank selections or compositional structure. All that is to say that collective formalities are certainly at play here, despite the song’s fluid temporal/melodic nuances suggesting otherwise.
Five months after the battle of the bands, The Human Pianos and Jayhawk Band reunited at the library for a jam session. This time, performers were given the opportunity to prepare their individual iPad setup prior to entering the studio space. These customized setup selections were then transferred to the iPads inside the studio, where the new supergroup recorded their notoriously polyrhythmic, interspecies, sax-riddled composition “Animal Parade”:
As heard throughout the fascinating and unexpected moments of “Animal Parade,” the AUMI’s sensitivity can be adjusted for even the most minimal physical exertion, and its sound bank spans orchestral instruments, animal sounds, and synthesizers, as well as various percussive instruments, dynamic adjustments, and even prefabricated loops. Yet, no matter how familiar a traditionally trained (and often able-bodied) musician may be with their sound selection, the concepts of rhythmic precision and musical proficiency––as they are understood within dominant understandings of time and consistency––are thoroughly scrambled by the visual tracking system’s sporadic mode of recognition and its inherent latency. As described above, it is structurally guaranteed that the AUMI’s red dot will not remain in its original place during a performance, but will instead latch onto unexpected parts of the body.
Simultaneously, the dot-to-movement response rate is not immediate. My own involvement with “the unexpected” in communal musical production and performance moulds my interpretation of what is socially (and politically) at work in both “Jayhawks” and “Animal Parade.” While participating in AUMI jam sessions I could not help but reminisce on similar experiences with the collective management of orientations/disorientations that, while depending on quite different technological structures, produced similar effects regarding performer communication.
Being a researcher steeped in the L.A. area Salsa, Latin Jazz, and Black Gospel scenes meant that I was immediately drawn to the AUMI’s most disorienting-yet-reorienting qualities. In Timba, the form of contemporary Afrocuban music that I most closely studied back in Los Angeles, disorientations and reorientations are the most prized structural moments in any composition. For example, the 1997 performance of “No Me Mires a los Ojos” (“Don’t Look at Me in the Eyes”) by Issac Delgado’s ensemble—featuring now-legendary performances by Ivan “Melon” Lewis (keyboard), Alain Pérez (bass), and Andrés Cuayo (timbales)—sonically reveals the tradition’s call to disorient and reorient performers and dancers alike through collaborative improvisations:
Video Filmed by Michael Croy.
“No Me Mires a los Ojos” is riddled with moments of improvisational coalition formed rather immediately and then resolved in a return to the song’s basic structure. For listeners disciplined by Western musical training, the piece may seem to traverse several time signatures, even though it is written entirely in 4/4 time. Timba accomplishes an intense, percussively demanding, melodically multifaceted set of improvisations that happen all at once, with the end goal of making people dance, nodding at the principal tradition it draws its elements from: Afrocuban Rumba. Every performer who is not a horn player or a vocalist articulates patterns specific to their instrument, played in the form of basic rhythms expected at certain sections. These patterns and their variations evolved from similar Rumba drum and bell formats, and the improvisational contributions each musician is expected to integrate into their basic pattern also come from Rumba’s long-standing tradition of formalized improvisation. The formal and the improvisational function as a single communicative practice in Timba. Performers recall format from their embodied knowledge of Rumba and other pertinent influences while disrupting, animating, and transforming pre-written compositions with constant layers of improvisation.
What ultimately interests me the most about the formal registers within the improvisational tradition that is Timba is that these seem to function, on at least one level, as premeditated terms for communal engagement. This kind of communication enables a social set of interactions that, like Jazz, grants every performer the opportunity to improvise at will, insofar as the terms of engagement are seriously considered. As with the AUMI library jam sessions, Timba’s disorientations, too, seem to reorient. What is different, though, is how the AUMI’s sound bank acts in tandem with a performer’s own embodied musical knowledge as an extension of the archive available for improvisation. In Timba, the sound bank and knowledge of form are both entirely embodied, with synthesizers being the only exception.
Timba ensembles and their interpretations of traditional and non-Cuban forms, like the AUMI and its sound bank, use reliable and predictable knowledge bases to break with dominant notions of time and its coherence, only to wrangle performers back to whatever terms of communal engagement were previously decided upon. In this sense, I read the AUMI not as a solitary instrument but as a partial orchestration of sorts, with functions that enable not only an accessible musical experience but also social arrangements that rely deeply on a more responsible management of the unexpected. While the Timba ensemble is required to collaboratively instantiate the potential for disorientations, the AUMI provides an effective and generative incorporation of said potential as a default mechanism of instrumentation itself.
As the AUMI continues on its early trajectory as a free, downloadable software designed to be accessed by performers of mixed abilities, it behooves us to listen deeply to the lessons learned by orchestral traditions older than our own. Timba does not come without its own problems of social inequity––it is often a “boys’ club,” for one––but there is much to learn about how the traditions built around its instruments have managed to centralize the value of unexpected, multilayered, and even complexly simultaneous patterns of communication. There is also something to be said about the necessity of studying the improvisational communication patterns of musical traditions that have not yet been institutionalized or misappropriated within “first world” societies. Timba teaches us that the conga alone will not speak without the support of a community that celebrates difference, the nuances of its organization, and the call to return to difference. It teaches us, in other words, to see the constant need for difference and its reorganization as a singular practice.
The work begun with the AUMI’s earliest users in Poughkeepsie, New York, and the work involving mixed-ability ensembles in Lawrence, Kansas, today are connected through the AUMI Consortium’s commitment to a kind of research aimed at listening closely and deeply to the AUMI’s improvisational potential, interdisciplinarily and undisciplinarily, across various sites. A tech innovation alone will not sustain the work of disrupting the longstanding, rooted forms of ableism ever-present in dominant musical production, performance, and communication, but mixed-ability performer coalitions organized around a radical interrogation of coherence and expectation may have a fighting chance. I hope the technology team never succeeds at working out all of the “discrepancies,” as these are helping us to build traditions that frame the AUMI’s mechanical propensity towards disorientation as the raw core of its democratic potential.
Featured Image: by Ray Mizumura-Pence at The Commons, Spooner Hall, KU, at rehearsals for “(Un)Rolling the Boulder: Improvising New Communities” performance in October 2013.
Caleb Lázaro Moreno is a doctoral student in the Department of American Studies at the University of Kansas. He was born in Trujillo, Peru and grew up in the Los Angeles area. Lázaro Moreno is currently writing about methodological designs for “the unexpected,” contributing thought and praxis that redistributes agency, narrative development, and social relations within academic research. He is also a multi-instrumentalist, composer, and producer; check out his SoundCloud.
REWIND! . . .If you liked this post, you may also dig:
Introduction to Sound, Ability, and Emergence Forum –Airek Beauchamp
Experiments in Agent-based Sonic Composition — Andreas Duus Pape
SO! Amplifies. . .a highly-curated, rolling mini-post series by which we editors hip you to cultural makers and organizations doing work we really really dig. You’re welcome!
Currently on the faculty of the California Institute of the Arts’ Sharon Disney Lund School of Dance, where she serves as associate technical director, Allison Smartt worked for several years in Hampshire’s dance program as intern-turned-program assistant. A sound engineer, designer, producer, and educator for theater and dance, she has created designs seen and heard at La MaMa, The Yard, Arts In Odd Places Festival, Barrington Stage Company, the Five College Consortium, and other venues.
She is also the owner of Smartt Productions, a production company that develops and tours innovative performances about social justice. Its repertory includes the nationally acclaimed solo-show about reproductive rights, MOM BABY GOD, and the empowering, new hip-hop theatre performance, Mixed-Race Mixtape. Her productions have toured 17 U.S. cities and counting.
Ariel Taub is currently interning at Sounding Out!, where she assists with layout, scopes out talent, and, in the process, uncovers articles that relate to or reflect work being done in the field of Sound Studies. She is a junior pursuing a degree in English and Sociology at Binghamton University.
Recently turned on to several of the projects Allison Smartt has been involved in, I became especially fascinated with MOM BABY GOD 3.0, for which Smartt was sound designer and producer. The crew of MOM BABY GOD 3.0 sets the stage for what to expect in a performance with the following introduction:
Take a cupcake, put on a name tag, and prepare to be thrown into the world of the Christian Right, where sexual purity workshops and anti-abortion rallies are sandwiched between karaoke sing-alongs, Christian EDM raves and pro-life slumber parties. An immersive dark comedy about American girl culture in the right-wing, written and performed by Madeline Burrows.
It’s 2018 and the anti-abortion movement has a new sense of urgency. Teens 4 Life is video-blogging live from the Students for Life of America Conference, and right-wing teenagers are vying for popularity while preparing for political battle. Our tour guide is fourteen-year-old Destinee Grace Ramsey, ascending to prominence as the new It-Girl of the Christian Right while struggling to contain her crush on John Paul, a flirtatious Christian boy with blossoming Youtube stardom and a purity ring.
MOM BABY GOD toured nationally to sold-out houses from 2013-2015 and was the subject of a national right-wing smear campaign. In a newly expanded and updated version premiering at Forum Theatre and Single Carrot Theatre in March 2017, MOM BABY GOD takes us inside the right-wing’s youth training ground at a more urgent time than ever.
I reached out to Smartt about these endeavors with some sound-specific questions. What follows is our April 2017 email exchange [edited for length].
Ariel Taub (AT): What do you think of the voices Madeline Burrows [the writer and solo actor of MOM BABY GOD] uses in the piece? How important is the role of sound in creating the characters?
Allison Smartt (AS): I want to accurately represent Burrows’s use of voice in the show. For those who haven’t seen it, she’s not an impersonator or impressionist conjuring up voices solely for comedy’s sake. Since she is a woman portraying a wide range of ages and genders on stage, voice is one tool in a toolbox she uses to indicate a character shift. Madeline has a great sense of people’s natural speaking rhythms and an ability to incorporate bits of others’ unique vocal elements into the characters she portrays. Physicality is another tool. Sound cues are yet another…lighting, costume, staging, and so on.
I do think there’s something subversive about a queer woman voicing ideology and portraying people that inherently aim to repress her existence/identity/reproductive rights.
Many times, when actors are learning accents they have a cue line that helps them jump into that accent. Something that they can’t help but say in a southern, or Irish, or Canadian accent. In MOM BABY GOD, I think of my sound design in a similar way. The “I’m a Pro-Life Teen” theme is the most obvious example. It’s short and sweet, with a homemade flair and most importantly: it’s catchy. The audience learns to immediately associate that riff with Destinee (the host of “I’m a Pro-Life Teen”), so much so that I stop playing the full theme almost immediately, yet it still commands the laugh and upbeat response from the audience.
AT: Does [the impersonation and transformation of people on the opposite side of a controversial issues into] characters [mark them as] inherently mockable? (I asked Smartt about this specifically because of the reaction the show elicited from some people in the Pro-Life group.)
AS: Definitely not. I think the context and intention of the show really humanizes the people and movement that Madeline portrays. The show isn’t cruel or demeaning towards the people or movement – if anything, our audience has a lot of fun. But it is essential that Madeline portray the type of leaders in the movement (in any movement, really) in a realistic, yet theatrical way. It’s a difficult needle to thread, and I think she does it really well. A preacher has a certain cadence – it’s mesmerizing, it’s uplifting. A certain type of teen girl is bubbly, dynamic. How does a gruff (some may say manly), galvanizing leader speak? It’s important the audience feel the unique draw of each character – and their voices are a large part of that draw.
AT: What sounds [and sound production] were used to help carry the performance [of MOM BABY GOD]? What role does sound have in making plays [and any performance] cohesive?
AS: Sound designing for theatre is a mix of many elements, from pre-show music, sound effects and original music to reinforcement, writing cues, and sound system design. For a lot of projects, I’m also my own sound engineer so I also implement the system designs and make sure everything functions and sounds tip top.
Each design process is a little different. If it’s a new work in development, like MOM BABY GOD and Mixed-Race Mixtape, I am involved in a different way than if I’m designing for a completed work (and designing for dance is a whole other thing). There are constants, however. I’m always asking myself, “Are my ideas supporting the work and its intentions?” I always try to be cognizant of self-indulgence. I may make something really, really cool but that ultimately, after hearing it in context and conversations with the other artistic team members, is obviously doing too much more than supporting the work. A music journalism professor I had used to say, “You have to shoot that puppy.” Meaning, cut the cue you really love for the benefit of the overall piece.
I like to set myself limitations to work within when starting a design. I find that narrowing my focus to, say…music performed only on harmonica, or sound effects generated only from modes of transportation, helps get my creative juices flowing (Sidenote: why is that a phrase? It gives me the creeps)[. . .] I may relinquish these limitations later, after they’ve helped me launch into creating a sonic character that feels complex, interesting, and fun.
AT: The show is described as being composed of “karaoke sing-alongs, Christian EDM raves and pro-life slumber parties.” Each of these has its own distinct associations. How do “sing-alongs” and “raves,” and our connotations with those things, add to the piece?
AS: Since sound is subjective, the associations that you make with karaoke sing-alongs are probably slightly different from what I associate with karaoke sing-alongs. You may think karaoke sing-along = a group of drunk BFFs belting Mariah Carey after a long day of work. I may think karaoke sing-along = middle-aged men and women shoulder to shoulder in a dive bar singing “Friends In Low Places” while clinking their glasses of whiskey and draft beer. The similarity in those two scenarios is people singing along to something, but the character and feeling of each image is very different. You bring that context with you as you read the description of the show, and given the show’s challenging themes, this is a real draw for people usually resistant to solo and/or political theatre. The way the description is written and what it highlights intentionally invites the audience to feel invited, excited, and maybe strangely upbeat about going to see a show about reproductive rights.
As a sound designer and theatre artist, one of my favorite moments is when the audience collectively readjusts their idea of a karaoke sing-along to the experience we create for them in the show. I feel everyone silently say, “Oh, this is not what I expected, but I love it,” or “This is exactly what I imagined!” or “I am so uncomfortable but I’m going with it.” I think the marketing of the show does a great job creating excited curiosity, and the show itself harnesses that and morphs it into confused excitement and surprise (reviewers articulate this phenomenon much better than I could).
AT: In this video the intentionally black screen feels like deep space. What sounds [and techniques] are being used? Are we on a train, a space ship, in a Church? What can you [tell us] about this piece?
AS: There are so many different elements in this cue…it’s one of my favorites. This cue is the lead-in to, and background for, Destinee’s first experience with sexual pleasure. Not to give too much away: She falls asleep and has a sex dream about Justin Bieber. I compiled a bunch of sounds that are anticipatory: a rocket launch, a train pulling into a station, a remixed/slowed-down version of a Bieber track. These lead into sounds that feel more harsh: alarm clocks, crumpling paper…I also wanted to translate the feeling of being woken up abruptly from a really pleasant dream…like you were being ripped out of heaven or something. It was important to reassociate, for Destinee and the audience, sounds that had previously brought joy with this very confusing and painful moment, so it ends with heartbeats and church bells.
I shoved the entire arc of the show into this one sound cue. And Madeline and Kathleen let me and I love them for that.
AT: What do individuals bring of themselves when they listen to music? How is music a way of entering conversations otherwise avoided?
AS: The answer to this question is deeper than I can articulate but I’ll try.
Talking about bias, race, class, even (in MOM BABY GOD) introducing a pro-life video blog: broaching these topics is made easier and more interesting through music. Why? I think it’s because you are giving the listener multiple threads from which to sew their own tapestry…their own understanding of the thing. The changing emotions in a score, multiplicity of lyrical meaning, tempo, stage presence, on and on. If you were to just present a lecture on any one of those topics, the message would feel too stark, too heavy to be absorbed (especially by people who don’t already agree with the lecture or are approaching that idea for the first time). Put them to music and suddenly you open up people’s hearts.
As a sound designer, I have to be conscious of what people bring to their listening experience, but I can’t let this rule my every decision. The most obvious example is when faced with the request to use popular music. Take maybe one of the most overused classics of the 20th century, “Hallelujah” by Leonard Cohen. If you felt an urge just now to stop reading this interview because you really love that song and how dare I naysay “Hallelujah” – my point has been made. Songs can evoke strong reactions. If you heard “Hallelujah” for the first time while seeing the Northern Lights (which would arguably be pretty epic), then you associate that memory and those emotions with that song. When designers use popular music in their work, this is a reality they have to think hard about.
It’s similar with sound effects. For Mixed-Race Mixtape, Fig wanted to start the show with the sound of a cassette tape being loaded into a deck and played. While I understood why he wanted that sound cue, I had to disagree. Our target demographic is of an age where they may have never seen or used a cassette tape before – and using this sound effect wouldn’t elicit the nostalgic reaction he was hoping for.
Regarding how deeply the show moves people, I give all the credit to Fig’s lyrics and the entire cast’s performance, as well as the construction of the songs by the musicians and composers. Credit also goes to Jorrell, our director, who has focused the intention of all these elements so they coalesce very effectively. The cast puts a lot of emotion and energy into their performances, and when people are genuine and earnest on stage, audiences can sense that and are deeply engaged.
I do a lot of work in the dance world and have come to understand how essential music and movement are to the human experience. We’ve always made music and moved our bodies and there is something deeply grounding and joining about collective listening and movement – even if it’s just tapping your fingers and toes.
AT: How did you and the other artists involved come up with the name/idea for Mixed-Race Mixtape? How did the Mixed-Race Mixtape come about?
AS: Mixed-Race Mixtape is the brainchild of writer/performer Andrew “Fig” Figueroa. I’ll let him tell the story.
A mixtape is a collection of music from various artists and genres on one tape, CD, or playlist. In Hip-Hop, a mixtape is a rapper’s first attempt to show the world their skills and who they are, more often than not performing original lyrics over sampled/borrowed instrumentals that complement their style and vision. The show is about “mixed” identity and, I mean, I’m a rapper, so thank God “Mixed-Race” rhymed with “Mixtape.”
The show grew from my desire to tell my story and help myself make sense of growing up in a confusing, ambiguous, and colorful culture. I began writing a series of raps and monologues about my family, community, and youth, and slowly it formed into something cohesive.
AT: I love the quote, “the conversation about race in America is one sided and missing discussions of how class and race are connected and how multiple identities can exist in one person,” how does Mixed-Race Mixtape fill in these gaps?
AS: Mixed-Race Mixtape is an alternative narrative that is complex, personal, and authentic. In America, our ideas about race largely oscillate between White and Black. MRMT is alternative because it tells the story of someone who sits in the grey area of Americans’ concept of race and dispels the racist subtext that middle class America belongs to White people. Because these grey areas are illuminated, I believe a wide variety of people are able to find connections with the story.
AT: In this video people discuss the connection they [felt to the music and performance] even if they weren’t expecting to. What do you think is responsible for sound connecting and moving people from different backgrounds? Why do the assumptions about the event exist that they do – that audiences wouldn’t connect to the Hip Hop, or that there would be “good vibes”?
AS: Some people do feel uncertain that they’d be able to connect with the show because it’s a “hip-hop” show. When they see it though, it’s obvious that it extends beyond the bounds of what they imagine a hip-hop show to be. And while I’ve never had someone say they were disappointed or unmoved by the show, I have had people say they couldn’t understand the words. And a lot of times they want to blame that on the reinforcement.
I’d argue that the people who don’t understand the lyrics of MRMT are often the same ones who were trepidatious to begin with, because I think hip-hop is not a genre they have practice listening to. I had to practice really actively listening to rap to train my brain to process words, word play, metaphor, etc. as fast as rap can transmit them. Fig, an experienced hip-hop listener and artist amazes me with how fast he can understand lyrics on the first listen. I’m still learning. And the fact is, it’s not a one and done thing. You have to listen to rap more than once to get all the nuances the artists wrote in. And this extends to hip-hop music, sans lyrics. I miss so many really clever, artful remixes, samples, and references on the first listen. This is one of the reasons we released an EP of some of the songs from the show (and are in the process of recording a full album).
The theatre experience obviously provides a tremendously moving experience for the audience, but there’s more to be extracted from the music and lyrics than can be transmitted in one live performance.
AT: What future plans do you have for projects? You mentioned utilizing sounds from protests? How is sound important in protest? What stands out to you about what you recorded?
AS: I have only the vaguest idea of a future project. I participate in a lot of rallies and marches for causes across the spectrum of human rights. At a really basic level, it feels really good to get together with like-minded people and shout your frustrations, hopes, and fears into the world for others to hear. I’m interested in translating this catharsis for people who are wary of protests/hate them/don’t understand them. So I’ve started with my iPhone. I record clever chants I’ve never heard, or try to capture the inevitable moment in a large crowd when the front changes the chant and it works its way to the back.
I record marching through different spaces…how does it sound when we’re in a tunnel versus in a park or inside a building? I’m not sure where these recordings will lead me, but I felt it was important to take them.
Mariah Carey’s New Year’s Eve 2016 didn’t go so well. The pop diva graced a stage in the middle of Times Square as the clock ticked down to 2017 on Dick Clark’s Rockin’ New Year’s Eve, hosted by Ryan Seacrest. After Carey’s melismatic rendition of “Auld Lang Syne,” the instrumental for “Emotions” kicked in and Carey, instead of singing, informed viewers that she couldn’t hear anything. What followed was five minutes of heartburn. Carey strutted across the stage, hitting all her marks along with her dancers but barely singing. She took a stab at a phrase here and there, mostly on pitch, though it was hard to be sure. And she narrated the whole thing, clearly perturbed to be hung out to dry on such a cold night with millions watching. I imagine if we asked Carey about her producer after the show, we’d get an “I don’t know her.”
These things happen. Ashlee Simpson’s singing career, such as it was, screeched to a halt in 2004 on the stage of Saturday Night Live when the wrong backing track cued. Even Queen Bey herself had to deal with lip syncing outrage after using a backing track at former President Barack Obama’s second inauguration. So the reaction to Carey, replete with schadenfreude and metaphorical pearl-clutching, was unsurprising, if also entirely inane. (The New York Times suggested that Carey forgot the lyrics to “Emotions,” an occurrence that would be slightly more outlandish than if she forgot how to breathe, considering it’s one of her most popular tracks). But yeah, this happens: singers—especially singers in the cold—use backing tracks. I’m not filming a “leave Mariah alone!!” video, but there’s really nothing salacious in this performance. The reason I’m circling around Mariah Carey’s frosty New Year’s Eve performance is because it highlights an idea I’m thinking about—what I’m calling the “produced voice” —as well as some of the details that are a subset of that idea; namely, all voices are produced.
I mean “produced” in a couple of ways. One is the Judith Butler way: voices, like gender (and, importantly, in tandem with gender), are performed and constructed. What does my natural voice sound like? I dunno. AO Roberts underlines this in a 2015 Sounding Out! post: “we’ll never really know how we sound,” but we’ll know that social constructions of gender helped shape that sound. Race, too. And class. Cultural norms make physical impacts on us, perhaps in the particular curve of our spines as we learn to show raced or gendered deference or dominance, perhaps in the texture of our hands as we perform classed labor, or perhaps in the stress we apply to our vocal cords as we learn to sound in appropriately gendered frequency ranges or at appropriately raced volumes. That cultural norms literally shape our bodies is an important assumption that informs my approach to the “produced voice.” In this sense, the passive construction of my statement “all voices are produced” matters; we may play an active role in vibrating our vocal cords, but there are social and cultural forces that we don’t control acting on the sounds from those vocal cords at the same moment.
Another way I mean that all voices are produced is that all recorded singing voices are shaped by studio production. This can take a few different forms, ranging from obvious to subtle. In the Migos song “T-Shirt,” Quavo’s voice is run through pitch-correction software so that the last word of each line of his verse (i.e., the rhyming words: “five,” “five,” “eyes,” “alive”) takes on an obvious robotic quality colloquially known as the AutoTune effect. Quavo (and T-Pain and Kanye and Future and all the other rappers and crooners who have employed this effect over the years) isn’t trying to hide the production of his voice; it’s a behind-the-glass technique, but that glass is transparent. Less obvious is the way a voice like Adele’s is processed. Because Adele’s entire persona is built around the natural power of her voice, any studio production applied to it—like, say, the cavernous reverb and delay on “Hello”—must land in a sweet spot that enhances the perceived naturalness of her voice.
Vocal production can also hinge on how other instruments in a mix are processed. Take Remy Ma’s recent diss of Nicki Minaj, “ShETHER.” “ShETHER”’s instrumental, which is a re-performance of Nas’s “Ether,” draws attention to the lower end of Remy’s voice. “Ether” and “ShETHER” are pitched in identical keys and Nas’s vocals fall in the same range as Remy’s. But the synth that bangs out the looping chord progression in “ShETHER” is slightly brighter than the one on “Ether,” with a metallic, digital high end the original lacks. At the same time, the bass that marks the downbeat of each measure is quieter in “ShETHER” than it is in “Ether.” The overall effect, with less instrumental occupying “ShETHER”’s low frequency range and more digital overtones hanging in the high frequency range, causes Remy Ma’s voice to seem lower, manlier, than Nas’s voice because of the space cleared for her vocals in the mix. The perceived depth of Remy’s produced voice toys with the hypermasculine nature of hip hop beefs, and queers perhaps the most famous diss track in the genre. While engineers apply production effects directly to the vocal tracks of Quavo and Adele to make them sound like a robot or a power diva, the Remy Ma example demonstrates how gender play can be produced through a voice by processing what happens around the vocals.
Let’s return to Times Square last New Year’s Eve to consider the produced voice in a hybrid live/recorded setting. Carey’s first and third songs (“Auld Lang Syne” and “We Belong Together”) were entirely back-tracked—meaning the audience could hear a recorded Mariah Carey even if the Mariah Carey moving around on our screen wasn’t producing any (sung) vocals. The second, “Emotions,” had only some background vocals and the ridiculously high notes that young Mariah Carey was known for. So, had the show gone to plan, the audience would’ve heard on-stage Mariah Carey singing along with pre-recorded studio Mariah Carey on the first and third songs, while on-stage Mariah Carey would’ve sung the second song entirely, only passing the mic to a much younger studio version of herself when she needed to hit some notes that her body can’t always, well, produce anymore. And had the show gone to plan, most members of the audience wouldn’t have known the difference between on-stage and pre-recorded Mariah Carey. It would’ve been a seamless production. Since nothing really went to plan (unless, you know, you’re into some level of conspiracy theory that involves self-sabotage for the purpose of trending on Twitter for a while), we were all privy to a component of vocal production—the backing track that aids a live singer—that is often meant to go undetected.
The produced-ness of Mariah Carey’s voice is compelling precisely because of her tremendous singing talent, and this is where we circle back around to Butler. If I were to start in a different place–if I were, in fact, to write something like, “Y’all, you’ll never believe this, but Britney Spears’s singing voice is the result of a good deal of studio intervention”–well, we wouldn’t be dealing with many blown minds from that one, would we? Spears’s career isn’t built around vocal prowess, and she often explores robotic effects that, as with Quavo and other rappers, make the technological intervention on her voice easy to hear. But Mariah Carey belongs to a class of singers—along with Adele, Christina Aguilera, Beyoncé, Ariana Grande—who are perceived to have naturally impressive voices, voices that aren’t produced so much as just sung. The Butler comparison would be to a person who seems to fit quite naturally into a gender category, the constructed nature of that gender performance passing nearly undetected. By focusing on Mariah Carey, I want to highlight that even the most impressive sung voices are produced, and that means that we can not only ask questions about the social and cultural impact that gender, race, class, ability, sexuality, and other norms may have on those voices, but also about how any sung voice (from Mariah Carey’s to Quavo’s) is collaboratively produced—by singer, technician, producer, listener—in relation to those same norms.
Being able to ask those questions can get us to some pretty intriguing details. At the end of the third song, “We Belong Together,” she commented “It just don’t get any better” before abandoning the giant white feathers that were framing her onstage. After an awkward pause (during which I imagine Chris Tucker’s “Don’t cut to me!” face), the unflappable Ryan Seacrest noted, “No matter what Mariah does, the crowd absolutely loves it. You can’t go wrong with Ms. Carey, and those hits, those songs, everybody knows.” Everybody knows. We didn’t need to hear Mariah Carey sing “Emotions” that night because we could fill it all in–everybody knows that song. Wayne Marshall has written about listeners’ ability to fill in the low frequencies of songs even when we’re listening on lousy systems—like earbuds or cell phone speakers—that can’t really carry it to our ears. In the moment of technological failure, whether because a listener’s speakers are terrible or a performer’s monitors are, listeners become performers. We heard what was supposed to be there, and we supplied the missing content.
Sound is intimate, a meeting of bodies vibrating in time with one another. Yvon Bonenfant, citing Steven Connor’s idea of the “vocalic body,” notes this physicality of sound as a “vibratory field” that leaves a vocalizer and “voyages through space. Other people hear it. Other people feel it.” But in the case of “Emotions” on New Year’s Eve, I heard a voice that wasn’t there. It was Mariah Carey’s, her vocalic body sympathetically vibrated into being. The question that catches me here is this: what happens in these moments when a listener takes over as performer? In my case, I played the role of Mariah Carey for a moment. I was on my couch, surrounded by my family, but I felt a little colder, like I was maybe wearing a swimsuit in the middle of Times Square in December, and my heart rate ticked up a bit, like maybe I was kinda panicked about something going wrong, and I heard Mariah Carey’s voice—not, crucially, my voice singing Mariah Carey’s lyrics—singing in my head. I could feel my vocal cords compressing and stretching along with Carey’s voice in my head, as if her voice were coming from my body. Which, in fact, it was—just not my throat—as this was a collaborative and intimate production, my body saying, “Hey, Mariah, I got this,” and performing “Emotions” when her body wasn’t.
By stressing the collaborative nature of the produced voice, I don’t intend to arrive at some “I am Mariah” moment that I could poignantly underline by changing my profile picture on Facebook. Rather, I’m thinking of the ways someone else’s voice could lodge itself in other bodies, turning listeners into collaborators too. The produced voice, ultimately, is a way to theorize unlikely combinations of voices and bodies.
Featured image: By all-systems-go at Flickr, CC BY-SA 2.0, via Wikimedia Commons
REWIND! . . .If you liked this post, you may also dig:
Gendered Sonic Violence, from the Waiting Room to the Locker Room-Rebecca Lentjes