I first realized there was a problem with my voice on the first day of tenth grade English class. The teacher, Mrs. C, had a formidable reputation of strictness and high standards. She had us sit in alphabetical order row after row, and then insisted on calling roll aloud while she sat at her desk. Each name emerged as both a command and a threat in her firm voice.
“Here,” I mumbled quietly. I was an Honor Roll student with consistently good grades, all A’s and one B on each report card, yet I was shy and soft-spoken in classes. This was an excellent way to keep teachers amiable while going largely unnoticed. The softness of my voice made me less visible and less recognizable.
Mrs. C repeated my name. Caught off guard, I repeated “here” a little more loudly. She rose to her feet to get a better look at me. I knew what she saw: a petite girl with long ash blonde hair, big brown eyes, and overalls embroidered with white daisies on the bib. When her gaze finally met mine, Mrs. C frowned at me and cleared her throat loudly. I curled into my desk, hoping to disappear.
“Miss Barfield, did you hear me call your name twice? In this class, when I call roll, you respond.” I gave a quick nod, but Mrs. C wasn’t finished: “We use our strong voices in here, not our girly, breathy ones.” My cheeks flushed red while Mrs. C droned on about confidence and classroom expectations.
“Do you understand me?”
I stammered a “yes.” Mrs. C turned her attention back to the roll call. Her harsh words rang in my ears. I sank low in my chair, humiliated and angry. I couldn’t help that I sounded girly: I was, in fact, a girl. This was the way my voice sounded. It was not an attempt to sound like the dumb blonde she appeared to think I was.
That day I decided that I would never speak up in her class. Forget the Honor Roll. If the sound of my voice was such a problem, then my mouth would remain firmly shut in this class and all of my others. I would never speak up again.
My vow to stop speaking lived a short life. I enjoyed Mrs. C’s serious fixation on diagramming sentences and her attempts to show sophomores that literature offered ideas and worlds we didn’t quite know. At first, I spoke up with hesitation and fear of the inevitable dismissal, but I continued to speak. Becoming louder became my method to seem confident, even when I felt anything but.
Throughout high school, my voice emerged again and again as a problem. Despite the increased volume, my voice still sounded tremulous, squeaky, hesitant, and shrill to my own ears. Other girls had these steady, warm voices that encouraged others to listen to them. Some had higher voices that were melodic and lovely. I craved a lower, more resonant voice, but I was stuck with what I had. In drama club, our director scolded me with increasing frustration about my tendency to end my lines in the form of a question. My nerves materialized as upspeak. The more he yelled at me, the more pronounced the habit became. He eventually gave up, disgusted by my inability to control my vocal patterns.

It wasn’t just the theater director who commented on my voice; fellow students expressed shock and occasionally dismay that the soft-spoken blonde had smart things to say if you stopped to listen to her. Teenage girls were supposed to sound confident (but not too confident), loud enough to be audible (but not too loud), warm (never cold), and smart (but not smarter than the boys), all while cultural norms suggested that the voices of teenage girls were also annoying. Teenage girls were supposed to be seen, but when they spoke, they had to master the right combination in order to be heard. I could never master it.
Meanwhile, at a big state university in my native Florida, I learned quickly that a Southern accent marks you as a dumb redneck from some rural town that no one has heard of. Students in my classes asked me to say particular words and then giggled at my pronunciations. “You sound like a Southern belle,” one student noted. This was not really a compliment. According to my peers, Southern belles didn’t have a place in the classroom. Southern belles didn’t easily match up with “college student.” As a working-class girl from a trailer park, I learned that I surely didn’t sound like a college student should. I worked desperately to rid myself of any hint of twang. I dropped y’all and reckon.
I listened carefully to how other students talked. I mimicked their speech patterns by being more abrupt and deadpan, slowly killing my drawl. When I finally removed all traces of my hometown from my voice, my friends both from home and from college explained that now I sounded like an extra from Clueless. My voice was all Valley girl. I was smarter, they noted with humor, than I sounded and looked. My voice now alternated between high-pitched and fried. Occasionally, it would squeak or crack. I thought I sounded too feminine and too much like an airhead, even when I actively tried not to. I began to hate the sound of my voice.
My voice betrayed me because it refused to sound like I thought I needed it to. It refused to sound like anyone but me.
When I started teaching and receiving student evaluations, my voice became the target for students to express their displeasure with the course and me. According to students, my voice was too high and grating. Screechy, even: one student said my voice was at a frequency that only bats could hear. In every set of evaluations, a handful of students declared that I sounded annoying. This experience, however, was not something I alone faced. Women professors and lecturers routinely face gender bias in teaching evaluations. According to the interactive chart Gender Language in Teaching Evaluations, female professors are more likely to be called “annoying” than their male counterparts in all 25 disciplines evaluated. The sound of my voice was only part of the problem, but I couldn’t help but wonder whether how I sounded was an obstacle to what I was teaching them.
Once again, I tried to fix my problematic voice. I lowered it. I listened to NPR hosts in my search for a smooth, accentless, and educated sound, and I attempted to create a sound more like them. I practiced pronouncing words like they did. I modulated my volume. I paid careful attention to the length of my vowels. I avoided my natural drawl. None of my attempts seemed to last. Some days, I dreaded lecturing in my courses. I had to speak, but I didn’t want to. I wondered if my students listened, but I wondered more about what they heard.
The sound of your voice is a distinct trait of each human being, created by your lungs, the length of your vocal cords, and your larynx. Your lungs provide the air pressure to vibrate your vocal cords. The muscles of your larynx adjust both the length and the tension of the cords to shape pitch and tone. Your voice is how you sound to others, beyond the resonances that you hear when you speak. It is dependent on both the length and the thickness of the vocal cords. Biology determines your pitch and tone. Your pitch is a result of the rate at which your vocal cords vibrate: the faster the rate, the higher your voice. Women tend to have shorter cords than men, which makes our voices higher.
Emotion also alters pitch. Fright, excitement, and nervousness all make your voice sound higher. Nerves would make a teenage girl have an even higher voice than she normally would. Her anxious adult self would too. Her voice would seem tinny because her larynx clenched her vocal cords tight. Perhaps this is the only sound she can make. Perhaps she is trying to communicate with bats because they at least would attempt to listen.
Biology, the body, gives us the voices we have. Biology doesn’t care if we like the ways in which we sound. Biology might not care, but culture is the real asshole. Culture marks a voice as weak, grating, shrill, or hard to listen to.
My attempts to change my voice were always destined to fail. I fought against my body and lost. I couldn’t have won even if I tried harder. My vocal cords determined that my voice would be high, and so it is. The culture around me, however, taught me to hate myself for it. Voice and body seem to invite aspersions on intelligence or credentials. It’s the routineness of it all that wears on me. I expect the reactions now.
I wonder if I’m drawn to the quietness of writing because I don’t have to hear myself speak. I crave the silence while simultaneously bristling at it. Why is my voice a problem that I must resolve to placate others? How can I get others to hear me and not the stereotypes that have chased me for years?
My silence has become fruitful. The words I don’t say appear on the page of an essay, a post, or an article. I type them up. I read aloud what I first refused to say. I wince as I hear my voice reciting my words. I listen carefully to the cadence and tone. This separation of words and voice is why writing appeals to me. I can say what I want to say without the sound of my voice causing things to go awry.
People can read what I write, yet they can’t dismiss my voice by its sound. Instead, they read what I have to say. They imagine my voice; my actual sound can’t bother them. But, they aren’t really hearing me. They just have my words on the page. They don’t know how I wrap the sound around them. They don’t hear me.
Rebecca Solnit, in “Men Explain Things to Me,” writes, “Credibility is a basic survival tool.” Solnit continues that to be credible is to be audible. We must be heard for our credibility to be realized. This right to speak is crucial to Solnit. Too many women have been silenced. Too many men refuse to listen. To speak is essential “to survival, to dignity, and to liberty.”
I agree with her. I underline her words. I say them aloud. The more I engage with her argument, the more I worry. What about our right to be heard? When women speak, do people listen? Women can speak and speak and speak and never be heard, our words dismissed because of gender and sound. Being able to speak is not enough; we need to be heard.
We get caught up in the power of speaking, but we forget that there’s power in listening too. Listening is political. It is an act of compassion and empathy. When we listen, we make space for other people, their stories, their voices. We grant them room to be. We let them inhabit our world, and for a moment, we inhabit theirs. Yes, we need to be able to speak, but the world also needs to be ready to listen to us.
We need to be listened to. Will you hear me? Will you hear us? Will you grant us room to be?
When I think of times I’ve been silenced and of the times I haven’t been heard, I feel the sharp pain of exclusion, of realizing that my personhood didn’t matter because of how I sounded. I remember the burning anger because no one would listen. I think of the way that silence and the policing of how I sound made me feel small, unimportant, or disposable. As a teenager, a college student, and a grown woman, I wanted to be heard, but couldn’t figure out exactly how to make that happen. I blamed my voice for a problem that wasn’t its fault. My voice wasn’t the problem at all; the problem was the failure of others to listen.
While writing this essay on my voice, I almost lost mine, not once but twice. I caught a cold and then the flu. My throat ached, and I found it difficult to swallow. A stuffy nose gave my voice a muted quality, but then, it sounded lower and huskier. I could hear the congestion disrupting the timbre of my words. My voice blipped in and out as if I were a radio finding and losing its signal. It hurt to speak, so I was quiet.
“You sound awful,” my husband said in passing. He was right. My voice sounded unfamiliar and monstrous. I tested out this version of my voice. It was rougher and almost masculine. I couldn’t decide if this was the stronger, more authoritative voice I had wanted all along or some crude mockery of what I can never really have. I couldn’t sing along with my favorite songs because my voice broke at the higher register. I wheezed out words. I croaked my way through conversations. “Are you sick?” my daughter asked. “You don’t sound like you.”
Her passing comment stuck with me. You don’t sound like you. Suddenly, I missed the sound of my voice. I disliked this alien version of it. I craved that problematic voice that I’ve tried to change over the years. I wanted my voice to return.
After twenty years, I decided to acknowledge the sound of me, even if others don’t. I want to be heard, and I’m done trying to make anyone listen.
Featured image: “Speak” by Flickr user Ash Zing, CC BY-NC-ND 2.0
Kelly Baker is a freelance writer with a religious studies PhD who covers religion, higher education, gender, labor, motherhood, and popular culture. She’s also an essayist, historian, and reporter. You can find her writing at the Chronicle of Higher Education’s Vitae project, Women in Higher Education, Killing the Buddha, and Sacred Matters. She’s also written for The Atlantic, Bearings, The Rumpus, The Manifest-Station, Religion Dispatches, Christian Century’s Then & Now, Washington Post, and Brain, Child. She’s on Twitter at @kelly_j_baker and at her website.
REWIND!…If you liked this post, you may also dig:
On Sound and Pleasure: Meditations on the Human Voice–Yvon Bonenfant
I’m fortunate to have quite a few friends with eclectic musical tastes, who continually expose me to some of the best, albeit often obscure, sources for inspiration. They arrive as random selections sent with a simple “you’d appreciate this” note attached. Good friends that they are, they rarely miss the mark. Most intriguing is when a cluster of things from different people carries a similar theme, converging into a need on my part for some sort of musical action.
A few years back I received a huge dump of gigabytes of audio and video. Within it were recordings of concerts and performances this friend and I had been discussing; I consumed those quickly in an effort to keep that conversation going. Tucked amidst that dump, however, was a copy of the movie Liquid Sky. I asked the friend about it because the description of the plot–“heroin-pushing aliens invade 80’s New York”–led me to believe it wasn’t really my thing (not a big fan of needles). Although my friend insisted I’d enjoy it, it took me several months, if not a whole year, before I finally pressed play.
Even though Liquid Sky was not my favorite movie by any measure, it was immediately apparent to my ears why my friend insisted I check it out. The film’s score was performed completely on a Fairlight CMI, capturing the synthesized undercurrent of the early 80’s New York music scene, more popularly seen in the cult classic Downtown 81, starring Jean Michel Basquiat. While the performances in that movie are perhaps closer to my tastes, none of them compare to one scene from Liquid Sky that I fell in love with, instantly:
The song grabbed me so much, I quickly churned out a cover version.

Primus Luta – “Me & My Rhythm Box (V1)”
While it felt good to make, there remained something less than satisfying about it. The cover had captured my sound, but at a moment of transition. More specifically, the means by which I was trying to achieve my sound at the time had shifted from a DAW-in-the-box aesthetic to a live performance feel, one that I had already begun writing about here on Sounding Out! in 2013. Interestingly, the inspiration to cover the song pushed me back to my in-the-box comfort zone.
It was good, but I knew I could do more.
As I said, these inspirations tend to group around a theme. Prior to receiving the Liquid Sky dump, I had received an email out of the blue from Hank Shocklee, producer and member of the Bomb Squad. I’ve been a longtime fan, and we had the opportunity to meet a few years prior. Since then he’s played a bit of a mentoring role for me. In the email he asked if I wanted to join an experimental electronic jazz project he was pulling together as the drummer.
I was taken aback. Hank Shocklee asking me to be his drummer. Honestly, I was shook.
Not that I didn’t know why he might think to ask me, but immediately I started to question whether I was good enough. Rather than dwell on those feelings, though, I started stepping up my game. While the project itself never came to fruition, Shocklee’s email led me to build my drmcrshr set of digital instruments.
A year or so later, I ran into Shocklee again when he was in Philadelphia for King Britt’s Afrofuturism event with mutual friend and artist HPrizm. By this time I had already recorded the “Me and My Rhythm Box” cover. Serendipitously, HPrizm ended up dropping a sample from it in the midst of his set that night. A month or so later, HPrizm and I met up in the studio with longtime collaborator Takuma Kanaiwa to record a live set on which I played my drmcrshr instruments.

Primus Luta x HPrizm x Takuma Kanaiwa – “Excerpt”
Not too long after, I received an email from NYC-based electronic musician Elucid, saying he was digging for samples on this awesome soundtrack. . .Liquid Sky.
The final convergence point had been hanging over my head for a while. Having finished the first part of my “Toward a Practical Language” series on live performance, I knew I wanted the next part to focus on electronic instruments, but wasn’t yet sure how to approach it. I had an inkling about a practicum on the actual design and development of an electronic instrument, but I didn’t yet have a project in mind.
As all of these things, people, and sounds came together–Liquid Sky, Shocklee, HPrizm, Elucid–it became clear that I needed to build a rhythm box.
What stands out in Paula Sheppard’s performance from Liquid Sky is the visual itself. She stands in the warehouse performance space surrounded by 80’s scenesters, posing with one hand in the air and the mic in the other, while strapped to her side is her rhythm box, the Roland CR-78, wires dangling from it to connect to the venue’s sound system. She hits play to start the beat, launching into the ode to the rhythm machine.
Contextually, it’s far more performance art than music performance. There isn’t much evidence from the clip that the CR-78 is any more than a prop, as the synthesizer lines indicate the use of a backing track. The commentary in the lyrics, however, homes in on an intent to present the rhythm box as the perfect musical companion, reminiscent of comments Raymond Scott often made about his desire to make a machine to replace musicians.
My rhythm box is sweet
Never forgets a beat
It does its rule
Do you want to know why?
It is pre-programmed
Rhythm machines such as the CR-78 were originally designed as accompaniment machines, specifically for organ players. They came pre-programmed with a number of traditional rhythm patterns–the standards being rock, swing, waltz, and samba–though the CR-78 had many more variations. Such machines were not designed to be instruments themselves; rather, musicians would play other instruments along with them.
In 1978 when the CR-78 was introduced, rhythm machines were becoming quite sophisticated. The CR-78 included automatic fills that could be set to play at set intervals, providing natural breaks for songs. As with a few other machines, selecting multiple rhythms could combine patterns into new rhythms. The CR-78 also had mute buttons and a small mixer, which allowed slight customization of patterns, but what truly set the CR-78 apart was the fact that users could program their own patterns and even save them.
By the time it appeared in Liquid Sky, the CR-78 had already been succeeded by other CR lines, culminating in the CR-8000. Roland also had the TR series, including the TR-808 and the TR-909, the latter released the year after Liquid Sky premiered in 1982.
In 1980, however, Roger Linn’s LM-1 premiered. What distinguished the LM-1 from other drum machines was that it used drum samples–rather than analog sounds–giving it more “real”-sounding drum rhythms (for the time). The LM-1 and its successor, the LinnDrum, both had individual drum triggers for their sounds, which could be programmed into user sequences or played live. These features in particular marked the shift from rhythm machines to drum machines.
In the post-MIDI decades since, we’ve come to think less and less about rhythm machines. With the rise of in-the-box virtual instruments, the idea of drum programming limitations (such as those found on most rhythm machines) seems absurd or arcane to modern tastes. People love the sounds of these older machines, evidenced by the tons of analog drum samples and virtual and hardware clones/remakes on the market, but they want the level of control modern technologies have accustomed them to.

Controlling the Roland CR-5000 from an Akai MPC-1000 using a custom-built converter
The general assumption is that rhythm machines aren’t traditionally playable and, given how outdated their rhythms tend to seem, that they lack a modern sensibility. My challenge thus became clear: I set out to build a rhythm machine that would challenge this notion while retaining the spirit of the traditional rhythm box.
Challenges and Limitations
At the outset, I wanted to base my rhythm machine on analog circuitry. I had previously built a number of digital drum machines–both sample- and synthesis-based–for my Heads collection. Working in the analog arena allowed me to approach the design of my instrument in a way that respected the limitations my rhythm machine predecessors worked with and around.
By this time I had spent a couple of years mentoring with Jeff Blenkinsopp at The Analog Lab in New York, a place devoted to helping people from all over the world gain “further understanding the inner workings of their musical equipment.” I had already designed a rather complex analog signal processor, so I felt comfortable in the format. However, I hadn’t truly honed my skills around instrument design. In many ways, I wanted this project to be the testing ground for my own ability to create instruments, but prior experience taught me that going into such a complex project without the proper skills would be self-defeating. Even more, my true goal centered more on functionality than on details like circuit board designs for individual sounds.
To avoid those rabbit holes (at least temporarily; I’ve since gone full circuit design on my analog sound projects), I chose to use DIY designs from the modular synth community as the basis for my rhythm box. That said, I limited myself to designs that featured analog sound sources, and I only allowed myself to use designs that were available as PCB only. I would source all my own parts, solder all of my boards, and configure them into the rhythm machine of my dreams.
The wonderful thing about the modular synth community is that there is a lot of stuff out there. The difficult thing about the modular synth community is that there’s a lot of stuff out there. If you’ve got enough rack space, you can pretty much put together a modular that will perform whatever functionality you want. How modules patch together fundamentally defines your instrument, making module selection the most essential process. I was aiming to build a more semi-modular configuration, forgoing the patch cables, but that didn’t make my selection any easier. I wanted three sound sources (nominally: kick, snare, and hi-hat), a sequencer, and some sort of filter, all of which would flow into a simple monophonic mixer design of my own.
For the sounds, I chose a simple kick module from Barton and the Jupiter Storm unit from Hex Inverter. The sound of the kick module was rooted enough in the classic analog sound while offering enough modulation points to make it mutable. The triple square wave design of the Jupiter Storm really excited me, as it had the range to pull off hi-hat and snare sounds in addition to other percussive and drone sounds; plus, it featured two outputs, giving me all three of my voices from two PCB sets.
Filters are often considered the heart of a modular setup, as the way they shape the sound tends to define its character. In choosing one for my rhythm machine, the main thing I wanted was control over multiple frequency bands. Because there would be three different sound sources, I needed to be able to tailor the filter for a wide spectrum of sounds. As such, I chose the AM2140 Resonant Filter.
The AM2140 PCB layout, based on the classic E-mu filter
I had no plans to include triggers for the sounds on my rhythm machine, so the sequencer was going to be the heart of the performance: it would be responsible for any and all triggering of sounds. Needing to control three sounds simultaneously without any stored memory was quite a tall order, but fortunately I found the perfect solution in the amazing Turing Machine modules. With its expansion board, the Turing Machine can put out four different patterns based on its main pattern creator, which can generate fully random patterns or patterns that mutate as they progress.
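For readers unfamiliar with the module, the Turing Machine’s core idea is a looping shift register whose recycled bit is sometimes inverted. The hardware is, of course, an actual circuit; the short Python sketch below is only my own software analogy of that behavior, with a hypothetical function name and a probability knob standing in for the module’s front-panel control:

```python
import random

def turing_machine_step(register, flip_prob):
    """Advance a Turing Machine-style looping shift register by one clock.

    The last bit rotates around to the front, inverted with probability
    flip_prob: at 0.0 the pattern locks and repeats forever, at 1.0 every
    recycled bit flips, and values in between let the pattern slowly mutate.
    """
    bit = register[-1]
    if random.random() < flip_prob:
        bit ^= 1  # invert the recycled bit
    return [bit] + register[:-1]

# A locked loop (flip_prob=0.0) simply rotates the same pattern.
pattern = [1, 0, 1, 1, 0, 0, 0, 1]
print(turing_machine_step(pattern, 0.0))  # [1, 1, 0, 1, 1, 0, 0, 0]
```

Turning the probability up partway is what gives the module its characteristic evolving-but-familiar sequences, which is exactly the quality I wanted driving the drum voices.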
I spent a couple of weeks after getting all the PCBs, parts, and hardware together, wiring and rewiring connections until I got comfortable with how all of these parts were interacting with each other. I was fortunate to happen upon a vintage White Instruments box, which formerly housed an attenuation meter, that was perfect for my machine. After testing with cardboard, I laid out my own faceplates and put everything in the box. As soon as I plugged it in and started playing, I knew I had succeeded.

Early test of the RIDM before it went in the box
I call it the RIDM Box (Rhythmically Intelligent Drum Machine Box). I’ve been playing it now for over two years, to the point where today I would say it is my primary instrument. Almost immediately afterward, I built a companion piece called the Snare Bender, which works both as a standalone instrument and as a controller for the RIDM Box. That one I built from scratch, hand-wired with no layouts.
My current live rig with the RIDM Box and the Snare Bender (on the right)
While this is by no means a standard approach to modern electronic instrument design (if a standard approach even exists), what I learned through the process is really the value of looking back. With so much of modern technology being future-forward in its approach, the assumption is that we’re at better starting positions for innovation than our predecessors. While we have so many more resources at our disposal, I think the limitations of the past were often more conducive to truly innovative approaches. Exploring those limitations with modern eyes opened a doorway for me, the result of which is an instrument like no other, past or present.
I will probably continue playing these two instruments together for a while, but ultimately I’m leaning toward a new original design that takes the learnings from these projects and fully fleshes out the performing instrument aspect of analog design. In the meantime, my process would not be complete if I did not return to the original inspiration. So I’ll leave you with the RIDM Box version of “Me & My Rhythm Box”—available on my library sessions release for the instrument.
Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications.
REWIND!…If you liked this post, you may also dig:
Heads: Reblurring The Lines–Primus Luta
Editor’s Note: Today we start off a series, apropos for World Listening Day 2016, on digital humanities and listening. As I mentioned in my Call for Abstracts in March, this forum considers the role of “listening” in the digital humanities (DH, for short). We at Sounding Out! are stoked to hear about (and listen to) all the new projects out there that archive sound, but we wonder whether the digital humanities engage enough with the notion of listening. After all, what’s a sound without someone to listen to it? The posts this month consider: how have particular digital studies, projects, apps, and online archives addressed, challenged, expanded, played with, sharpened, questioned, and/or shifted “listening”? What happens to digital humanities when we use “listening” as a keyword rather than (or alongside) “sound”?
We will be hosting the work of DH scholars who are doing exactly that: prompting readers to consider what it means to listen in the context of DH projects. Fabiola Hanna will be reflecting upon what DH means when it talks about participatory practices. Emmanuelle Sonntag, who has written for SO! before, will be addressing listening from the starting point of the documentary Chosen (Custody of the Eyes). Today, however, we start things off with a collaborative piece from the Vibrant Lives team on the ethics of listening to 20th century sterilization victims’ records.
Don’t just stand there. Take a seat and listen. –Liana M. Silva, Managing Editor
In the 1920s a young woman was admitted by her mother to a mental institution in California. The local doctor recommended her for sterilization with the following notes:
has been reported to have interest in sexual encounters
Mother is pregnant and cannot care for her (thinks she may be able to post-sterilization).
This brief note is representative of the stories of the roughly 20,000 people who were sterilized in California institutions of mental health. The soundscape of these institutions is largely lost to the past. We cannot recover the sounds of treatment spaces, family visits, recreation, and everyday life of those in the care of the state of California who were considered feeble, insane, or otherwise out of control.
Like the conversations about illness and reproduction presumably had in those halls, the sounds of salpingectomies (removal of fallopian tubes), vasectomies (severing the vas deferens), and, later, tubal ligations are lost to us. In the absence of human rights violations, this is perhaps as it should be; we cannot collect the minutiae of everyday life. But in situations where reproductive and disability rights have been limited, where we can see race and gendered bias, we may well have need of telling such stories.
Reparative justice best practices dictate that survivors should be able to tell their own stories on their own terms. How can we listen to such stories when the majority of our survivors have died and we have little to nothing in their own words?
While conversations between patients, parents, and doctors might be lost to us in terms of playback, they have embodied traces in the nearly 20,000 people sterilized in California between 1919 and the 1950s under eugenic sterilization laws. The 19,995 sterilization recommendations and notes, brought together under the project Eugenic Rubicon: California’s Sterilization Stories, cannot currently be made publicly available due to U.S. patient privacy laws. Important documentary films like No Más Bebés, which tells the story of Mexican-American women sterilized without consent at Los Angeles County – USC Medical Center in the 1960s and 1970s, have made it possible for us to hear accounts of such reproductive injustice first hand. But for the thousands of people sterilized between 1909 and the repeal of eugenics laws in 1979, we must find other ways to listen and to hear.
Given the privacy restrictions on working with this dataset and our concern to care for the people who are represented therein, we (the Vibrant Lives team) felt it was important to find alternative methods that could do more than de-identified, quantified graphs. We know all too well that we can’t recover the past “as it was.” Nevertheless, we are working to bring the emotional and intellectual power of sound and critical listening to a largely unheard history of sterilization of Latinx people. Specifically, our project prompts listeners to consider how listening fits into reparative justice for the victims of sterilization.
Listening Toward the Law
That eugenics laws and their surgical enactments played out in racialized and gendered ways is not surprising but bears repeating. For example, according to work by Alexandra Minna Stern, Nicole Novak, Natalie Lira, and Kate O’Connor, patients with Spanish or Hispanic surnames were three times as likely to be sterilized as their non-Hispanic counterparts. Those lost sounds have traces in California’s Latinx communities, both in terms of the community structures themselves, but also in terms of soundscapes that never were because of sterilization. This acoustic ecosystem in which the politics of race, gender, nation, and mental health converged in dramatic fashion is recorded only in the bodies and medical records of the patients and the 21st century communities shaped by the children, born and unborn, of these patients.
Not only are we limited to working with the textual, institutionally generated remnants of the past, we are also constrained by 21st century health and personal data privacy laws. Our archive is a set of medical records, and as such this collection contains sensitive patient data that must be de-identified and used in accordance with contemporary HIPAA (Health Insurance Portability and Accountability Act) regulations and IRB protocols.
This means that we cannot reveal names, dates, and other identifying information regarding those who were sterilized in the first half of the 20th century. We are unable to tell individual stories of sterilization lest the individual be identified. Traditionally, historians have used fictional composites to tell such stories and our collaborator Alexandra Minna Stern used this method in her 2015 second edition of Eugenic Nation.
The HIPAA guidelines and their impact on how we tell the history of medicine raise important legal questions about how we might balance a public right to know about practices (we’d call them abuses) within state-run facilities with the need to protect patients’ rights to privacy regarding their own reproductive and mental health. In some cases, it seems as though the privacy guidelines protect the state more than they protect any individual patient. In fact, we have seen a remarkable lack of concern for these records in their discovery and transmission. The records themselves were largely abandoned when Stern discovered the microfilm reels in the 2000s. They were lost again after she made a copy and returned them to the state. The originals are lost as far as we know.
Listening Toward the Past
Vibrant Lives is working not with sounds found, but with archival records found and then sonified (transformed into sound) as a way of listening toward those rooms, conversations, and procedures. In brief, this sonification entails the following steps:
- Selecting a subset of the large data set (we can’t currently process the whole)
- Selecting between two and four axes of information, such as gender, race, age at sterilization recommendation, consent, or nationality
- Mapping the informational values into numerical space – sonification requires the creation of a dataset whose values are scaled between -1 and 1 (the range digital audio samples occupy)
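The scaling step above can be sketched in a few lines. This is a minimal illustration, not the project's actual pipeline: the function name and the sample ages are hypothetical, and the real dataset has far more variables.

```python
# Hypothetical sketch of the normalization step: linearly rescale one
# column of the dataset into the [-1, 1] range that audio samples use.
# The function name and example values are illustrative only.

def normalize(values):
    """Rescale a list of numbers so the minimum maps to -1 and the maximum to 1."""
    lo, hi = min(values), max(values)
    if hi == lo:  # guard against division by zero for constant columns
        return [0.0 for _ in values]
    return [2 * (v - lo) / (hi - lo) - 1 for v in values]

# Example: ages at sterilization recommendation (invented values)
ages = [14, 17, 22, 31, 45]
print(normalize(ages))  # youngest maps to -1.0, oldest to 1.0
```

Each normalized column can then be assigned to a sonic parameter (pitch, volume, pan) in a tool like Sonification Sandbox.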
This work has been done to date using two tools: Sonification Sandbox, an open source tool developed at the Georgia Tech Sonification Lab, and GarageBand, a proprietary music-making tool that comes with Macintosh computers. We use Sonification Sandbox to create the score first and then turn to GarageBand because it offers a greater range of instrumentation. The sonification process is still very experimental and exploratory. Team member Jacqueline Wernimont, trained as a historian of literature and technology, does all of our sonifications. While she has extensive experience with digital humanities methodologies, sonification is a new effort for us.
We have begun producing short sample tracks that allow us to enact the kind of listening toward that we’re advocating for. In the track below, we have data from the age, gender, and consent axes for the period 1940-1949. Additionally, this sample draws only from what we’ve described as “Spanish surname” patients, the vast majority of whom were American-born of Mexican descent, although they also include some other Latinx national communities.
Latinx Eugenics Sample Track
As you listen, each note represents one Spanish-surnamed person recommended for sterilization. The children, both boys and girls under 18, who were sterilized without consent are the highest notes, and the adult men who were sterilized with consent are the lowest.
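The pitch mapping described above can be sketched as a simple lookup: one note per record, with non-consenting children at the top of the range and consenting adult men at the bottom. The category keys, MIDI note numbers, and the intermediate value are all assumptions for illustration, not the project's actual mapping.

```python
# Illustrative sketch of the described pitch mapping. MIDI numbers and
# category labels are hypothetical; only the ordering (children without
# consent highest, consenting adult men lowest) comes from the essay.

PITCH_MAP = {
    ("child", "any", False): 84,   # highest: minors sterilized without consent
    ("adult", "female", True): 60,
    ("adult", "male", True): 48,   # lowest: adult men sterilized with consent
}

def note_for(record):
    """Return a MIDI pitch for one sterilization record (age_group, gender, consent)."""
    age_group, gender, consent = record
    key = (age_group, "any" if age_group == "child" else gender, consent)
    return PITCH_MAP.get(key, 60)  # default to middle C for unmapped cases

print(note_for(("child", "female", False)))  # → 84
```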
Listening Toward as Ethical and Communal
Listening is always about an ethical relationship, and it is particularly fraught when the effort to listen and to encourage others to listen entails hearing about a person’s most intimate health information and experiences. This is particularly true when those experiences may include the trauma of unwanted surgery.
While we might think of patient privacy as a form of care, in this instance we find ourselves wondering who these regulations actually serve. According to the updated 2013 HIPAA guidelines, personal health records are no longer considered sensitive information 50 years after death (it was previously 100 years). Preliminary estimates by our team indicate that as many as 1,000 survivors might be alive in 2016. However, while the vast majority of the people discussed in the records are no longer alive, family and friends may well be.
We respect family members’ and friends’ need for privacy when it comes to the health records of their loved ones. At the same time, an essential component of most restorative justice programs, like those undertaken for North Carolina eugenic sterilizations, is an articulation of the violations, which HIPAA blocks in many ways (North Carolina’s cases were revealed by investigative journalists, who are not subject to the HIPAA and IRB regulations that we must adhere to as academics). As a consequence, those who might most benefit from reparations – sterilized individuals and their immediate families, including children – are likely to die before the privacy laws enable us to draw attention to the individuals impacted by the racialized and gendered discrimination evident in the records.
The sonification of these records and the companion participatory performances that we facilitate allow us to intervene and share these important stories before all of the survivors and family members have passed away. We have the opportunity to drive justice-oriented processes forward while there is still time.
Consent/Non-consent Sample Track (entire population)
Vibrant Lives focuses not just on the stories but also on the people who listen to the audio. We spend time watching how our audiences participate in listening toward the history of eugenic sterilization in California. Below are images of recent presentations of this work in which we’ve incorporated both haptic (touch-based) and sonic performance.
Part of what we see here is the attentive posture of our participants – leaning in to feel a history of sterilization. The haptics are shared through a thin, red metal wire that participants have to touch lightly in order not to dampen the signal for others. For us, this is an effort to bring care for the experiences of others into the performance. The history of eugenics has impacted communities, and we are creating communal aural and tactile experiences as a way to disrupt the notion that academic work and knowledge are solitary endeavors.
The performance captured above is also an exercise in patience, and as such expresses a willingness on the part of the participants to sit with a disturbing history. The sample that people are listening to and feeling here is 100 seconds long, with each note/vibration corresponding to one person who was sterilized. In most performances the participants stay for the duration of the piece, but there have been instances where people have touched a haptic piece and then walked quickly away. We can’t know why some have chosen to walk away.
Some of those who have stayed have shared with us that they felt responsible to feel and hear each person. It’s an abstraction, to be sure, but we are intrigued by the power of listening and feeling to encourage people to not simply look and walk away. As one participant at a Michigan performance noted, the “tingling (from the haptics) lingers, it’s spooky.” Another participant at the same performance indicated that she felt “more implicated” having engaged with a multi-media experience than with a visual like a graph or chart. When asked why, she responded “I’ve felt it and will continue to remember that, but still will likely do nothing in response.”
In creating performances where participants have to care for one another and care enough about the people represented in the data to stay through a durational piece, we are working to redress the extraordinary lack of care that the records represent, both in terms of testifying to the violence done to men’s and women’s bodies and in terms of the State of California’s lack of regard for this history.
Sounds Felt, Sounds Touched
Our work is an ongoing experiment. We’ve moved from haptics along a wire to haptic spheres that vibrate with the sonification. The image above is from one of these events this spring. We’ve retained the communal effect while transforming the embodied structure of the event. Participants now gather around, encircling the object as they listen toward a history of reproductive injustices. People still tend to lean in – to have heads lowered in a posture of intense focus. The sphere itself demands that someone cradle it, and it also requires that people touch lightly once again so as not to dampen the experience for others.
We plan to expand our durational events in our next iteration, known as “Safe Harbor,” in which we hope to explore how best to care for those people sterilized by the state by caring for their data. In this instance we are thinking of sounds (and more) that we’ll make together with impacted communities. For this work we are particularly interested in engaging audience members in the hosting and care of the eugenics data and, by extension, the survivors.
As a way of enacting a site-specific response to both historical and contemporary human and reproductive rights violations that have occurred in the state, we plan to stage this durational event in California. We’ll begin by inviting audiences to help build and shape an empty warehouse space with us, transforming the empty space into a place of care where we can listen toward these histories. The audience will be invited to converse about the research and reflect upon conversations through making, creating, and ultimately building up our safe harbor.
We plan to listen to and co-create with impacted communities through collective making of the space. As a result, Safe Harbor will enact a cooperative improvisational process shaping socially responsive dialogue – performing, hearing, listening, documenting, and rebuilding notions of care in real time. What we hope to discover here are shared sounds of resistance, repair, and healing. Sounds that might let us listen toward the past, while also creating more just futures.
Featured image: “Water under 12.5 Hz vibration” by Jordi Torrents, CC BY-SA 4.0, via Wikimedia Commons
Vibrant Lives is a collaborative team that makes, stages, and performs as part of interactive multimedia installations. Jessica Rajko and Eileen Standley are both professors in the Dance area of the School of Film, Theater, and Dance at Arizona State University (ASU). Jacqueline Wernimont’s home department at ASU is English and she’s a digital humanities and digital archives specialist. Wernimont and Rajko are also multimedia artists/faculty working in Arts, Media, and Engineering.
The data derives from a larger project, known as Eugenic Rubicon: California’s Sterilization Stories, a multidisciplinary collaboration among Arizona State University, University of Illinois Urbana-Champaign, and University of Michigan. This larger collaboration includes historical demography and epidemiology, public health, history of medicine, digital storytelling, data visualization, and the construction of interactive digital platforms. This team is quite large, with our center of gravity residing at the University of Michigan where historian of science Alexandra Minna Stern directs the Eugenic Rubicon lab. Stern discovered the microfilms of more than 20,000 eugenic sterilization patient records in 2013. Stern and her team have created a dataset with this unique set of patient records that includes 212 discrete variables culled from over 30,000 individual documents. This resource is the first of its kind, encompassing almost one-third of the total sterilizations performed in 32 states in the U.S. in the 20th century.
This beat ‘bout to get murdered
Thought this was Future when I heard it
Desiigner sounds kinda like Future. Probably you’ve noticed? Everyone else has. While some reactions are a register of genuine surprise that “Panda” isn’t a Future song (cf. Uncle Murda epigraph), many are a combination of reflexive skepticism about Desiigner’s authenticity (He’s never even been to Atlanta!!)–or even the authenticity of New York as a hip hop city–alongside a sort of schadenfreude over his ability to notch a higher-charting song than Future has ever managed (“Panda” hit #1 for two weeks in May 2016). This latter observation is certainly true: Southern trap god Future has cracked the Billboard Hot 100 top 10 just once, as a featured artist on Lil Wayne’s “Love Me,” and his other appearances in the top 30 are similarly collaborations. (My discussion of trap focuses here on the hip hop wing of trap. The related but not identical EDM genre also called “trap” lies outside the scope of this particular analysis.) But pointing to the chart “failure” of Future’s singles is also entirely disingenuous, as all four of his official album releases have landed in the Billboard 200 top 10, including a #1 for 2015’s DS2 and 2016’s EVOL. In other words, Future isn’t exactly struggling to be relevant, which is why the nearly reflexive journalistic pairing of “Desiigner sounds like Future” and “Desiigner’s song is more successful than any Future song” gets my critical side-eye popping. The reception of Desiigner as a fake-but-more-successful Future strikes me as a dig at trap music as an easily replicable and therefore unserious genre. Here, I’m listening closely to the ways Desiigner’s vocals sound like Future as an entry point to trap’s political work: a sonic aesthetics of dis-organized polity, of sonic blackness in a post-racial society that I call trap irony.
Sounds Like Future
Though I’ve found several instances of writers comparing Desiigner to Future, that comparison usually includes little detailed support about the Future-istic elements of Desiigner’s sound. There are a number of sonic cues in “Panda” that could lead listeners to mistake the singer for Future, but I’m going to focus on the most obvious similarity: Desiigner’s recorded vocals share timbral and affective similarities with some of Future’s recorded vocals. When critics say Desiigner sounds like Future, the vocals are likely their main point of reference, so I’ve identified five points of sonic similarity between Desiigner and Future.
- Desiigner’s voice on “Panda” is detuned, resonating slightly off pitch with the instrumental, a technique so common in Future songs that I could link to any number of examples. Here are four, all released in the last two years, as a representative sample: “Stick Talk,” “Where Ya At (feat. Drake),” “March Madness,” and “Codeine Crazy.”
- Second, Desiigner delivers his vocals with a flat affect, conveying little emotion through inflection. Listen to the sections in the video above where he repeats the word “panda” [0:33-39, 1:38-46, 2:44-52, 3:51-58]. These repetitions precede each verse and then punctuate the end of the song. Rhythmically they signal what should be a turn-up— a run of at least a measure’s worth of eighth notes just before the full beat drops. But Desiigner’s recitation is emotionless, each instance of the word sounding just like the last. Throughout the rest of the song, if a listener didn’t understand the words, it would be hard to guess what Desiigner is rapping about based on any emotive signals. Love? Aggression? Loss? The vocal performance is reportorial, dispassionate. Future adopts a similar technique in up-tempo songs. His repetition of the words “jumpman” (1:08-10) and “noble” (1:28-30) in “Jumpman” and the word “wicked” (0:13-24) in “Wicked” provide parallels to Desiigner’s recitation of “panda.” And in “Ain’t No Time,” Future delivers lines about his clothes and money as casually as he predicts his enemies ending up outlined in chalk (0:13-26); just as in “Panda,” a listener who didn’t catch the lyrics to “Ain’t No Time” wouldn’t be able to attach any particular emotional content to the song.
- Speaking of not catching lyrics, Desiigner and Future are both notorious mushmouths: enunciation is optional. A number of online videos and fluff posts revolve around the fact that it’s hard to make out what Desiigner or Future is saying.
- Both Desiigner’s and Future’s performed voices seem to sit low in their registers, produced by opening the backs of their throats and elongating their vocal cords. For context, both artists seem to speak in the same register their recorded vocals fall in, and each is also likely to perform their vocals a little higher in a live setting.
- The bulk of “Panda”’s verses are in “Migos flow.” Named for the ATL trap trio who popularized it in their song, “Versace,” Migos flow is a triplet figure that rises from low to high, 3-1-2 (where 1 is the downbeat). The first twenty seconds of the “Versace” link above is a constant string of Migos flow. It’s pervasive throughout “Panda,” but 0:49-52 stacks two Migos flow lines back-to-back. Future’s verse on Drake’s “Digital Dash” (0:18-2:00) is a good example of an extended Migos flow.
In other words, Desiigner does sound like Future in some significant ways. But that’s not all he sounds like. Detuned vocals aren’t just a Future thing. Adam Krims theorizes this as part of the “hip hop sublime,” and it’s especially common among Southern rappers (for example, Young Jeezy sounded like Future before Future even did) (73-74). Many trap artists rap in a way that confounds efforts to understand what they’re saying; Young Thug, for instance, employs a vocal style distinct from Future and Desiigner but is equally difficult to understand. And the Migos flow, as partially demonstrated in this video, is not Future’s (or Migos’s) proprietary style. It’s been adopted by several (especially Southern) rappers, most recently in conjunction with trap. The elements I describe in the previous paragraph point to some specific ways Desiigner sounds like Future, which in turn points to ways that Desiigner sounds, more broadly, like trap.
The “Panda” beat, which comes from UK producer Menace, bears this out. Southern trap, as can be heard by surveying the songs linked above, features instrumentals with deep, tuned kick drums, usually dry 808 snares, high and bright synth lines, and punctuation from low brass and strings (0:40-1:33 in “Panda,” for the latter). This low/high frequency spread, with the mid-range mostly open, characterizes a good deal of trap music; the freed mid-range leaves more room for the bass to be amplified to soul-rattling levels without crowding out the rest of the instrumental. Also, one of the most iconic sonic elements of trap is the rattling hihat, cruising through subdivisions of the beat at inhuman rates (for instance, Metro Boomin’s hats at 0:16 in the aforementioned “Digital Dash” rattle but good when the full beat drops). Here’s the thing about “Panda,” though: those hats don’t rattle. Instead, they enter oh-so-quietly at 1:06 and bang out a steady eighth note pattern punctuated with a crash cymbal on every fourth beat until the end of the verse.
Sounds Like Trap
The missing hihats are an important piece of “Panda”’s sonic puzzle, and point to some broader observations about trap aesthetics as politics, what I’m calling trap irony. Trap music moves through society in ways it shouldn’t. The image of the trap is a house with only one way in and out, yet trap aesthetics produce a music that seems to constantly find a secret exit, a path not offered, a way around established norms. Materially, the bulk of trap music circulates through and out of Atlanta on mixtapes, beyond the purview of major record labels and, in part because it isn’t controlled by labels, at an astonishing rate—for instance, from January 2015-February 2016, Future released four mixtapes and two official albums. Moreover, trap reverberates as sonic blackness in a society whose mainstream has been explicitly peddling a post-racial ideology for nearly a decade. Trap aesthetics become trap politics.
Sonic blackness, as Nina Sun Eidsheim defines it and as Regina Bradley has expanded it, is the interplay of vocal timbre and current norms about what constitutes blackness; it’s a moving target that nonetheless shapes and is shaped by a society’s notions of race and racialization (Eidsheim, 663-64). In the case of trap, I argue that its sonic blackness is apparent in the context of post-racial ideology. Post-race politics depends on the notion that racism has ended and that race doesn’t matter anymore. In this framework, as Jared Sexton argues in Amalgamation Schemes, multiracialism, the blending of many races together until distinct racial backgrounds are purportedly indecipherable, becomes the ideal. The problem Sexton finds with multiracialism as a discourse is that it doesn’t account for the historical racial hierarchies that institutionalize whiteness as ideal; rather, multiracialism “is a tendency to neutralize the political antagonism set loose by the critical affirmation of blackness” (65).
Trap irony describes the way trap picks up recognizable markers of hip hop blackness (urban spaces, violence, drugs, sexual voracity, conspicuous consumption) so that its existence becomes an affirmation of blackness in a post-racial milieu. In fact, ironies abound in trap. Kemi Adeyemi has written about the use of lean, the codeine-based concoction of choice for many Dirty Southern rappers, as “generat[ing] productively intoxicated states that counter the violent realities of a particularly black everyday life” (first emphasis mine). LH Stallings has argued for the hip hop strip club — trap’s home away from home — to be understood as an always already queer space despite its surface heteronormativity. I’ve elsewhere used Stallings’s “black ratchet imagination” to think about party politics in the south, the way a group like Rae Sremmurd use party music as a refusal to produce and re-produce for the benefit of whiteness. The flat affect of rappers like Desiigner and Future is a similar shirking of emotional labor; where an artist like Kendrick Lamar brings fire and brimstone, Future shows up with dispassionate Autotune warble. Intoxicated but productive, heteronormative but queer, partying but political, affected but flat: in each case, we can hear trap irony navigating the complex assemblages of blackness in a purportedly post-racial society.
The last piece of the “Panda” puzzle is another trap irony, the sonification of a dis-organized polity, a bloc that doesn’t voice its interests as one. Listening to “Panda,” it’s hard to notice that the rattling hihat, integral to so much ATL trap, is missing. That’s because Desiigner vocalizes it himself. Throughout the track, he adds a handful of background vocals that trigger at seemingly random points. Unlike the flat affect of his flow, Desiigner’s vocal ad-libs are full of energy, as if he’s egging himself on. One of these vocals is “brrrrrrrrrrrrrrrah,” a tongue roll of varying lengths that replaces the missing hihat rattle. Listen back to the other trap songs I’ve linked in this essay, or check out nearly any track from trap artists like Young Thug, Rae Sremmurd, or Kevin Gates, and you’ll hear the pervasiveness of the hyped trap background vocals.
Trap background vocals, like the aesthetics, politics, and economy of trap itself, are a messy business. Desiigner’s background vocals on “Panda” move in meter and sometimes lock into a sequence, but he triggers enough different ones at unexpected moments that a listener can’t know exactly what sound to expect next nor when it will occur. Desiigner sounds like Future, which is to say he sounds like trap, which is to say he sounds like blackness, and his background vocals, which he turns up loud, are emblematic of the aesthetics and politics of trap. Trap irony means that a genre that renders blackness audible in 2016 does so not through a multiracial neutralization of the critical affirmation of blackness, but by setting loose a disparate set of recognizably black voices sounding from all directions, rattling across the soundscape, routing themselves through any path that doesn’t lead to the designated entry/exit point of the trap.
Justin D Burton is Assistant Professor of Music at Rider University, and a regular writer at Sounding Out!. His research revolves around critical race and gender theory in hip hop and pop, and his current book project is called Posthuman Pop. He is co-editor with Ali Colleen Neff of the Journal of Popular Music Studies 27:4, “Sounding Global Southernness,” and with Jason Lee Oakes of the Oxford Handbook of Hip Hop Music Studies (2017). You can catch him at justindburton.com and on Twitter @justindburton. His favorite rapper is Right Said Fred.