Editor’s Note: Here’s installment #2 of Sounding Out!‘s blog forum on gender and voice! Last week we hosted Christine Ehrick‘s selections from her forthcoming book; she introduced us to the idea of the gendered soundscape, which she uses in her analysis of women’s radio speech from the 1930s to the 1950s. In the next few weeks we’ll have SO! regular writer Regina Bradley with a look at how music is gendered in Shonda Rhimes’ hit show Scandal, A.O. Roberts on synthesized voices and gender, Art Blake with his reflections on how his experience shifting his voice from feminine to masculine as a transgender man intersects with his work on John Cage, and lastly Robin James with an analysis of how ideas of what women should sound like have roots in Greek philosophy.
As I planned for SO!’s February forum, I wondered about my own connection to the topic: how is the loudness of a voice gendered? Does it matter who we call “loud”? As a Latina, I’m familiar with the stereotypes of the loud Latina, and as a Puerto Rican I faced them at every gathering. So for this week I decided to reflect upon my experiences in a personal essay. Lean in, close your eyes, and don’t let the voices startle you.–Liana M. Silva, Managing Editor
I was 22 years old when someone called me deaf. I was finishing my bachelor’s degree at the University of Puerto Rico, Río Piedras campus. After four years of living in San Juan, I still hadn’t gotten used to the class and race microaggressions I encountered regularly because I was a brown girl who grew up in the country and was going to school in the urban capital, el área metropolitana. These microaggressions were usually assumptions about who I was based on how I talked: I used one word for pots, another word for nickels, and I couldn’t keep my voice down–all indications, according to my “urban” friends, that I grew up in the country. But being called “deaf” was a new one.
My boyfriend at the time had no cellphone, and his mother would call me regularly to see if he was on his way home from a gig or to ask him to run an errand. She and I were not close, but we were cordial. I always felt we didn’t click on some level. This particular weekend day, she had called to ask if he had left San Juan already to come visit her, and I told her I had just seen him that morning before he left. Somehow she and I went from small talk into a conversation.
In my head, I thought I was making headway with her and that this was a huge step forward in our relationship. We talked about his gig the night before, about how my family was doing, things like that. Then she asked me if my family had a medical history of people losing their hearing. “No? I don’t think so. Why do you ask?” I said in Spanish.
“Because you talk so loud, and so do your father and your sister. Your mom isn’t loud.”
That was over 10 years ago, but the comment still stings. I am certain that wasn’t the only time someone called me “loud” or pointed out the tone of my voice, but it’s the one time that still rings in my ears when I think about the intersection of gender and sound. It wasn’t just that I spoke at a high volume, it was that I was a woman who spoke at a high volume. I was the girlfriend who was loud.
Of course we’re not born loud- or soft-speakers – we learn to use the volume level that prevails in our culture, and then turn it up or lower it depending on our subculture and peer group.
-Anne Karpf, The Human Voice
What does “loud” mean, anyway? Denotations fade into connotations. As I write this, I struggle to think of how to describe loud in a way that doesn’t feel negative. Because every time I think of “loud” its negative connotations float up to the surface. Just take this Merriam-Webster online dictionary entry for “loud.” Aside from the reference to volume, “loud” also means sounds that are offensive, obtrusive—annoying.
To be fair, I’ve always been self-conscious of my voice, and not in the way most people hate the sound of their voice. I always felt my voice was not girly enough. I always felt as a teenager and a young adult not “pretty” enough, not thin enough, not “feminine” enough, so my insecurities also extended to my voice.
Growing up, I heard people tell me time and time again to keep my voice down, that I was talking too loud, that people next door could hear me, et cetera. Grandparents, cousins, parents, friends: I got it from every corner. Shush. But I don’t recall anybody saying that about the boys/men I hung out with. Add to that the comments I got about my appearance: “you’re too fat,” “your hair is too frizzy,” “you’re ugly.” I associated being loud with being unattractive. Just another flaw.
It’s no coincidence then that describing a woman as loud is almost never said as a compliment. Although a man can be loud—he might even be expected to have a deep, booming, commanding voice, as the above video describes—when a woman is described as loud, it’s almost never in a good light. Karpf mentions in The Human Voice: The Story of a Remarkable Talent that “Loudness certainly seems to be judged differently depending on the sex of the speaker. Talking loudly is considered an act of aggression in women, but in men as no more than they’re entitled to.” In other words, society deems men to be allowed to be loud, and by extension loudness comes off as a masculine feature. So loudness, something that at its base means high volume, ends up being constructed as more than just decibels. Women who are “loud” become noisy, rude, unapologetic, unbridled.
Mija, pero qué duro tú hablas. (But girl, you talk so hard.)
In Puerto Rico, the word for “loud” was alto (high) but also duro (hard). I knew early on that when someone told me that I spoke duro they didn’t mean it in a kind way. The voice was described as hard, harsh, shards of glass. It hurt to be called loud. It hurt to be called hard. Especially when you understand that society accepts only certain ways of being a woman: soft, delicate, fragile, dainty. It was never meant as a compliment to have someone call your voice “hard.”
If I was listening to my mother and my aunts or cousins speaking, and then chimed in, I would get the “shhhh,” or if they wanted to be discreet they would make a gesture with their hands to indicate that I should bring my voice down. I learned early on that a lower voice was more appealing than the loud voice hiding in my voice box.
I am Puerto Rican, and even though I was born in New York City, I was raised in a small town on the western side of Puerto Rico. I was well aware, even at a young age, of the stereotypes and digs about my being born in New York. My cousins would tell me I was stuck up, that I thought I was better than other people because I had cable, that I only listened to music in English (I guess that was a bad thing to them). When I moved to San Juan, though, I was no longer a displaced Nuyorican. Peers, friends, and new acquaintances would not classify me as a Nuyorican but, because I was living in San Juan at the time, would categorize me as an islander, de la isla, which basically meant I was not from el área metro. I was, in short, a country bumpkin to them.
The loudness of my voice was not just a marker of where I came from (the country, with all of the classism that the phrase entails) but for me became conflated with gender. I knew that even when I wasn’t living in the city, I had been called loud. It’s just that when my peers asked me to lower my voice or to not speak so “duro” it was also because they thought of me as jíbara, country.
Sometimes I would get carried away when I was telling a joke among my female roommates, or I’d be excited to share some news, and eventually someone would tell me to tone it down. Baja la voz. As I reflect upon my college years living with roommates in a crowded apartment in a crowded city, I remember that we often got together and laughed, talked over each other, shouted across the apartment. But I would get carried away and then someone would say something about it. Mira que nos van a mandar a callar. Someone’s gonna tell us to shut up.
It was in college, however, that I learned to modulate my voice. I am physically capable of whispering, but when I spoke English in a classroom setting (I was an English major at a school whose language of instruction was Spanish) I felt even louder. So I made the effort to tone down my voice, literally. I equated English with career, and by extension with my professional persona.
Ultimately, English would be the language I spoke (and still speak) in academic circles; with the language came also the tone and the volume. Men seemed more often to initiate conversations in my classes, sometimes even in the ones where they were a minority. Meanwhile, driven graduate student that I was, I wanted to step in but not stand out because of my voice. I didn’t want to give them (or the professor, for that matter) a chance to discount me because I was a loud Puerto Rican woman at an American school. Eventually I learned how to switch back and forth. So did my fellow female classmates.
I remember as a teacher modulating my voice so I would be less loud and less abrasive in a college classroom. I wanted to assert my authority. If some women resort to vocal fry in order to be taken seriously, as this 2014 article in The Atlantic (online) suggests, I resorted to modulating my voice. That was my way of passing: passing for the creative elite, passing for feminine, passing for authoritative. I tried to assert my credibility as a burgeoning scholar and professor by tweaking my voice. I laughed a little softer, I spoke a little slower, I sounded a little lower. I teetered between trying to sound feminine and trying to downplay my femininity through my voice.
Was I trying to sound more like the stereotype of a woman so I could be more credible in the classroom? Was this my own version of respectability politics? “Don’t be so loud and they’ll listen to you”?
“White supremacy grants white people the ability to be understood as expressing a dynamic range; whites can legitimately shout because we hear them/ourselves as mainly normalized. At the same time, white supremacy paints black people as always-already too loud.”
–Robin James
The negative rhetoric about women and loudness is also connected to respectability politics. Take for example the stereotype of the angry black woman (which is in the vicinity of the loud Latina). If women must be delicate and feminine, being loud would be unattractive, unseemly. Loud also means “not being silent,” in other words, speaking when not spoken to. Robin James touches upon “loudness” in contemporary music, and how the turn toward less loud tracks also has to do with racialized ideas about who can speak and who can be loud–in other words, what counts as noise and what counts as harmonious sound. She cites Goldie Taylor’s piece in The Daily Beast about how, regardless of how angry she felt about the racial injustices in the United States, she would never be able to scream and shout without consequences. Loudness is something racialized people cannot afford.
The stereotype of the angry woman points to how the notions of who is loud, and what tone of voice counts as loud, are constructed. Although there are studies suggesting that the sound of one’s voice signals to others that one is in a position of authority, or that one’s voice can make or break one’s career, no study has shown that the biology of the body producing the voice determines what one can or cannot do. In other words, the connection between voice and ability, or voice and social class, is constructed—in our heads.
Assertive, aggressive, leader: these descriptions benefit men, for the most part. Aggressiveness is seen as a masculine trait, and along with that a loud tone of voice is also seen as masculine. (This idea is also problematic, for it sets anything that isn’t aggressive and assertive as female, and therefore negative.) The opposite applies to women; the same way our society associates fragile delicate things with femininity, a fragile, soft, low tone of voice is the acceptable range for a woman. And James and Taylor’s comments point to how race also changes the equation. Damned if we speak, damned if we don’t.
Over the years, I’ve become more comfortable with the way I sound. I’ve also become more comfortable switching between my aural codes, like I do with English, Spanish, and Spanglish. I know that there’s a volume that I use in certain spaces. I also know that in other spaces I don’t have to watch over how loud I am. If I am in a familiar space, with people I am close to, I feel less inclined to watch myself. I feel safe, not judged. I can be as loud as I want to be. But loudness is also an accepted way of speaking around my family. If I spoke in a low tone, I’d probably be picked on for that. My father, for one, has a booming, deep, loud voice, and so do many of my family members.
For me, embracing my voice is also a kind of body acceptance. My body, plus-sized and all, takes up space. My voice takes up space too. As a teenager and an adult I was constantly shamed for the way I look (skin too brown, voice too loud, face too painted, hair too short), and for a time tweaking my voice became a way to try to fit in. But I later learned how to respond to the remarks. I learned to be sarcastic. I learned to make jokes. I learned to talk back. I didn’t find my voice; I embraced my voice.
Dear readers, let us know in the comments: have you been chastised for being loud? Or for not speaking loudly enough?
Featured image: property of the author.
Liana M. Silva is co-founder and Managing Editor of Sounding Out!.
REWIND! . . .If you liked this post, you may also dig:
I Been On: BaddieBey and Beyoncé’s Sonic Masculinity–Regina Bradley
In 1912, British physicist Edmund Fournier d’Albe built a device that he called the optophone, which converted light into tones. The first model—“the exploring optophone”—was meant to be a travel aid; it converted light into a sound of analogous intensity. A subsequent model, “the reading optophone,” scanned print using lamp-light separated into beams by a perforated disk. The pattern of light reflected back from a given character triggered a corresponding set of tones in a telephone receiver. d’Albe initially worked with 8 beams, producing 8 tones based on a diatonic scale. He settled on 5 notes: lower G, and then middle C, D, E and G. (Sol, do, re, mi, sol.) The optophone became known as a “musical print” machine. It was popularized by Mary Jameson, a blind student who achieved reading speeds of 60 words per minute.
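The reading optophone’s tonal code can be sketched in a few lines. This is a hypothetical illustration only, not d’Albe’s actual circuitry: it assumes the “black-sounding” convention in which ink under a beam triggers its tone, and uses approximate frequencies for the five notes named above (lower G, then middle C, D, E, and G).

```python
# Illustrative sketch of the reading optophone's five-beam tonal code.
# Frequencies approximate the notes named in the text (sol, do, re, mi, sol).
BEAM_TONES = [196.00, 261.63, 293.66, 329.63, 392.00]  # Hz, bottom beam to top

def column_to_chord(column):
    """Map one scanned column (five booleans, True = ink present under
    that beam) to the set of frequencies heard, assuming the
    'black-sounding' convention in which ink triggers a tone."""
    return [freq for freq, inked in zip(BEAM_TONES, column) if inked]

def scan_letter(columns):
    """A printed letter is a sequence of columns; 'reading' it is the
    resulting sequence of chords as the scanner moves across it."""
    return [column_to_chord(col) for col in columns]

# A column of solid ink sounds all five tones; a blank column is silence.
print(column_to_chord([True] * 5))   # all five frequencies
print(column_to_chord([False] * 5))  # []
```

In practice a reader like Mary Jameson learned to recognize each letter by the temporal pattern of these chords, much as one learns Morse code by ear.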
In the field of media studies, the optophone has become renowned through its imaginary repurposings by a number of modernist artists. For one thing, the optophone finds brief mention in Finnegans Wake. In turn, Marshall McLuhan credited James Joyce’s novel with being a new medium, one that turned text into sound. In “New Media as Political Forms,” McLuhan says that Joyce’s own “optophone principle” releases us from “the metallic and rectilinear embrace of the printed page.” More familiar within media studies today, the Dada artist Raoul Hausmann patented (London, 1935), but did not successfully build, an optophone presumably inspired by d’Albe’s model, which he hoped would be employed in audiovisual performances. This optophone was meant to convert sound into light as well as the reverse. It was part of a broader contemporary impulse to produce color music and synaesthetic art. Hausmann also wrote optophonetic poetry, based on the sounds and rhythms of “pure phonemes” and non-linguistic noises. In response, Francis Picabia painted two optophone portraits in 1921 and 1922. Optophone I, below, is composed of lines that might be sound waves, with a pattern that disorders vision.
Theorists have repeatedly located Hausmann’s device at the origin of new media. Authors in the Audiovisuology, Media Archaeology, and Beyond Art: A Third Culture anthologies credit Hausmann’s optophone with bringing-into-being cybernetics, digitization, the CD-ROM, audiovisual experiments in video art, and “primitive computers.” It seems to have escaped notice that d’Albe also used the optophone to create electrical music. In his book, The Moon Element, he writes:
d’Albe’s device is typically portrayed as a historical cul-de-sac, with few users and no real technical influence. Yet optophones continued to be designed for blind people throughout the twentieth century; at least one model has users even today. Musical print machines, or “direct translators,” co-existed with more complex OCR-devices—optical character recognizers that converted printed words into synthetic speech. Both types of reading machine contributed to today’s procedures for scanning and document digitization. Arguably, reading optophones intervened more profoundly into the order of print than did Hausmann’s synaesthetic machine: they not only translated between the senses, they introduced a new symbolic system by which to read. Like braille, later vibrating models proposed that the skin could also read.
In December 1922, the Optophone was brought to the United States from the United Kingdom for a demonstration before a number of educators who worked with blind children; only two schools ordered the device. Reading machine development accelerated in the U.S. around World War II. In his position as chair of the National Defense Research Committee, Vannevar Bush established a Committee on Sensory Devices in 1944, largely for the purpose of rehabilitating blind soldiers. The other options for reading—braille and Talking Books—were relatively scarce and had a high cost of production. Reading machines promised to give blind readers access to magazines and ephemeral print (recipes, signs, mail), which was arguably more important than access to books.
At RCA (Radio Corporation of America), the television innovator Vladimir Zworykin became involved with this project. Zworykin had visited Fournier d’Albe in London in the 1910s and seen a demonstration of the optophone. Working with Les Flory and Winthrop Pike, Zworykin built an initial machine known as the A-2 that operated on the same principles, but used a different mechanism for scanning—an electric stylus, which was publicized as “the first pen that reads.” Following the trail of citations for RCA’s “Reading Aid for the Blind” patent (US 2420716A, filed 1944), it is clear that the “pen” became an aid in domains far afield from blindness. It was repurposed as an optical probe for measuring the oxygen content of blood (1958); an “optical system for facsimile scanners” (1972); and, in a patent awarded to Burroughs Corporation in 1964, a light gun. This gun, in turn, found its way into the handheld controls for the first home video game system, produced by Sanders Associates.
The A-2 optophone was tested on three blind research subjects, including ham radio enthusiast Joe Piechowski, who was more of a technical collaborator. According to the reports RCA submitted to the CSD, these readers were able to correlate the “chirping” or “tweeting” sounds of the machine with letters “at random with about eighty percent accuracy” after 60 hours of practice. Close spacing on a printed page made it difficult to differentiate between letters; readers also had difficulty moving the stylus at a steady pace and in a straight line. Piechowski achieved reading speeds of 20 words per minute, which RCA deemed too slow.
Attempts were made to incorporate “human factors” and create a more efficient tonal code, to reduce reading time as well as learning time and confusion between letters. One alternate auditory display was known as the compressed optophone. Rather than generate multiple tones or chords for a single printed letter, which was highly redundant and confusing to the ear, the compressed version identified only certain features of a printed letter: such as the presence of an ascender or descender. Below is a comparison between the tones of the original optophone and the compressed version, recorded by physicist Patrick Nye in 1965. The following eight lower case letters make up the source material: f, i, k, j, p, q, r, z.
Original record in the author’s possession. With thanks to Elaine Nye, who generously tracked down two of her personal copies at the author’s request. The second copy is now held at Haskins Laboratories.
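The compression idea described above can be sketched as a feature extractor. This is a loose illustration of the principle, not Nye’s actual design: the feature names (“ascender,” “descender,” “body”) and the beam-band boundaries are assumptions chosen for the example.

```python
# Hypothetical sketch of the "compressed" optophone principle: instead of
# sounding a tone for every beam, signal only a few distinctive features
# of a letter, such as the presence of an ascender or descender.
def compress(columns, x_band=(1, 3)):
    """columns: list of five-boolean scan columns (index 0 = bottom beam,
    index 4 = top beam). x_band marks the beams covering the x-height.
    Returns the reduced feature set for the letter."""
    features = set()
    for col in columns:
        if col[4]:                              # ink above the x-height band
            features.add("ascender")
        if col[0]:                              # ink below the baseline
            features.add("descender")
        if any(col[x_band[0]:x_band[1] + 1]):   # ink in the letter body
            features.add("body")
    return features

# A 'p'-like shape has a descender and a body, but no ascender.
print(compress([[True, True, True, False, False]]))
```

The reduction is drastic: eight letters that each produced a distinct run of chords on the original optophone collapse to a handful of feature tones, which is exactly what made the compressed display less redundant to the ear.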
Because of the seeming limitations of tonal reading, RCA engineers re-directed their research to add character recognition to the scanning process. This was controversial: direct translators like the optophone were perceived as too difficult because they required blind people to do something akin to learning to read print, that is, learning a symbolic tonal or tactile code. At an earlier moment, braille had been critiqued on similar grounds; many in the blind community have argued that mainstream anxieties about braille sprang from its symbolic difference. Speed, moreover, is relative. Reading machine users protested that direct translators like the optophone were inexpensive to build and already available—why wait for the refinement of OCR and synthetic speech? Nevertheless, between November 1946 and May 1947, Zworykin, Flory, and Pike worked on a prototype “letter reading machine,” today widely considered the first successful example of optical character recognition (OCR). Before reliable synthetic speech, this device spelled out words letter by letter using tape recordings. The Letter-Reader was too massive and expensive for personal use, however. It also had an operating speed of 20 words per minute, and was thus hardly an improvement over the A-2 translator.
Haskins Laboratories, another affiliate of the Committee on Sensory Devices, began working on the reading machine problem around the same time, ultimately completing an enormous amount of research into synthetic speech and—as argued by Donald Shankweiler and Carol Fowler—the “speech code” itself. In the 1940s, before workable text-to-speech, researchers at Haskins wanted to determine whether tones or artificial phonemes (“speech-like speech”) were easier to read by ear. They developed a “machine dialect of English,” named wuhzi: “a transliteration of written English which preserved the phonetic patterns of the words.” An example can be played below. The eight source words are: With, Will, Were, From, Been, Have, This, That.
Original record in the author’s possession. From Patrick Nye, “An Investigation of Audio Outputs for a Reading Machine” (1965). With thanks to Elaine Nye.
Based on the results of tests with several human subjects, the Haskins researchers concluded that aural reading via speech-like sounds was necessarily faster than reading musical tones. Like the RCA engineers, they felt that a requirement of these machines should be a fast rate of reading. Minimally, they felt that reading speed should keep pace with rapid speech, at about 200 words per minute.
Funded by the Veterans Administration, members of Mauch Laboratories in Ohio worked on both musical optophones and spelled-speech recognition machines from the 1950s into the 1970s. One of their many devices, the Visotactor, was a direct-translator with vibro-tactile output for four fingers. Another, the Visotoner, was a portable nine-channel optophone. All of the Mauch machines were tested by Harvey Lauer, a technology transfer specialist for the Veterans Administration for over thirty years, himself blind. Below is an excerpt from a Visotoner demonstration, recorded by Lauer in 1971.
Visotoner demonstration. Original 7” open reel tape in author’s possession. With thanks to Harvey Lauer for sharing items from his impressive collection and for collaborating with the author over many years.
Later on the same tape, Lauer discusses using the Visotoner to read mail, identify currency, check over his own typing, and read printed charts or graphics. He achieved reading speeds of 40 words per minute with the device. Lauer has also told me that he prefers the sound of the Visotoner to that of other optophone models—he compares its sound to Debussy, or the music for dream sequences in films.
Mauch also developed a spelled speech OCR machine called the Cognodictor, which was similar to the RCA model but made use of synthetic speech. In the recording below, Lauer demonstrates this device by reading a print-out about IBM fonts. He simultaneously reads the document with the Visotoner, which reveals glitches in the Cognodictor’s spelling.
Original 7” open reel tape in the author’s possession. With thanks to Harvey Lauer.
In 1972, at the request of Lauer and other blind reading machine users, Mauch assembled a stereo-optophone with ten channels, called the Stereotoner. This device was distributed through the VA but never marketed, and most of the documentation exists in audio format, specifically in sets of training tapes that were made for blinded veterans who were the test subjects. Some promotional materials, such as the short video below, were recorded for sighted audiences—presumably teachers, rehabilitation specialists, or funding agencies.
Video courtesy of Harvey Lauer.
Mary Jameson corresponded with Lauer about the Stereotoner, via tape and braille, in the 1970s. In the braille letter pictured below she comments, “I think that stereotoner signals are the clearest I have heard.”
In 1973, with the marketing of the Kurzweil Reader, funding for direct translation optophones ceased. The Kurzweil Reader was advertised as the first machine capable of multi-font OCR; it was made up of a digital computer and flatbed scanner and it could recognize a relatively large number of typefaces. Kurzweil recalls in his book The Age of Spiritual Machines that this technology quickly transferred to Lexis-Nexis as a way to retrieve information from scanned documents. As Lauer explained to me, the abandonment of optophones was a serious problem for people with print disabilities: the Kurzweil Readers were expensive ($10,000-$50,000 each); early models were not portable and were mostly purchased by libraries. Despite being advertised as omnifont readers, they could not in fact recognize most printed material. The very fact of captchas speaks to the continued failures of perfect character recognition by machines. And, as the “familiarization tapes” distributed to blind readers indicate, the early synthetic speech interface was not transparent—training was required to use the Kurzweil machines.
Original cassette in the author’s possession.
Lauer always felt that the ideal reading machine should have both talking OCR and direct-translation capabilities, the latter being used to get a sense of the non-text items on a printed page, or to “preview material and read unusual and degraded print.” Yet the long history of the optophone demonstrates that certain styles of decoding have been more easily naturalized than others—and symbols have increasingly been favored if they bear a close relation to conventional print or speech. Finally, as computers became widely available, the focus for blind readers shifted, as Lauer puts it, “from reading print to gaining access to computers.” Today, many electronic documents continue to be produced without OCR, and thus cannot be translated by screen readers; graphical displays and videos are largely inaccessible; and portable scanners are far from universal, leaving most “ephemeral” print still unreadable.
Mara Mills is an Assistant Professor of Media, Culture, and Communication at New York University, working at the intersection of disability studies and media studies. She is currently completing a book titled On the Phone: Deafness and Communication Engineering. Articles from this project can be found in Social Text, differences, the IEEE Annals of the History of Computing, and The Oxford Handbook of Sound Studies. Her second book project, Print Disability and New Reading Formats, examines the reformatting of print over the course of the past century by blind and other print disabled readers, with a focus on Talking Books and electronic reading machines. This research is supported by NSF Award #1354297.
The following video installation by Mandie O’Connell is part three of a four-part series, “Round Circle of Resonance,” by the Berlin-based arts collective La Mission, which performs connections between the theory of José Esteban Muñoz and sound art/study/theory/performance.
The first and second installments ran last Monday. The opening salvo, written by La Mission’s resident essayist / deranged propagandist LMGM (Luis-Manuel Garcia), provides a brief introduction to our collective, some reflections on Muñoz’s relevance to our activities, and a frame for the next three missives from our fellow cultists. It is backed with a rousing sermon-cum-manifesto from our charismatic cult-leader/prophet, El Jefe (Pablo Roman-Alcalá). Next Monday, our saucy Choir Boy/Linguist (Johannes Brandis) will close the forum with a dirge to our dearly departed José (August 9, 1967–December 4, 2013).
—LMGM a.k.a. Luis-Manuel Garcia (curator)
Concept and Performance: Mandie O’Connell
Filming and Editing: Piss Nelke
Music: Khrom Ju (La Mission)
Piss is Power.
Power exists in urination, in this basic and most crucial of bodily acts. Problems with urination can result in embarrassment, infection, hospitalization. And yet so many of us women encounter confining, unfair, cruel, and Puritan limitations to where, when, and how we can pee, while our male counterparts traipse around urinating wherever they please. It is time, brothers and sisters, to re-politicize piss.
Brother Muñoz taught us that utopian projects require fellow participants, not audiences. We need a Urinary Utopia, a Piss Paradise that is open to men, women, trans and intersex people of all colors. Let’s shower down a blissful piss, a rainbow-colored golden shower where we all can piss wherever the fuck we want to!
In my performance video, I attempt to create a Muñoz-inspired utopian sensibility through the enactment of a new modality of an everyday action. I use a Female Urination Device—which enables me to stand up and urinate—to take a Yellow Adventure around my neighborhood. I piss freely in places where my penis-having brethren piss. I piss in a urinal next to which “Piss on me Bitch” is crudely scrawled. I piss into the river Spree, symbolically owning it with my liquid gold. Finally, I write my name in piss, a macho action turned feminine, the power and privilege of said action redirected towards my vagina.
In “Standing Up,” three different sounds are mixed together to create the soundscape of the performance: ambient noise, music, and sound clips of urination. The ambient noise serves to locate the scene in space/time. The music by Khrom Ju was selected to give the performance an eerie, strange, and repetitive undertone. The sound of urination was recorded live and is the sound of female urination. We use this sound both as a cue and as comic relief. Piss is funny, piss is strange, and piss happens all around us.
Urination, and the female struggle around it, is a real struggle that really happens and really matters. Exceptionally long lines for the ladies’ room, the inability to publicly urinate at festivals due to feeling exposed and shamed, being charged money to use toilet facilities when males can piss outdoors for free, being forced to use a ladies’ room when your sexuality sways towards using the men’s room: the list of complaints goes on and on. So I say: pee where you want, not where others want you to. Pee on administrators, police, politicians, and oppressors of all kinds while you’re at it!
I refuse to adhere to these rules anymore, and I beg you to follow my lead.
Piss is Power.
Featured Image adapted from “Pee” by Flickr User Melissa Eleftherion Carr
Mandie O’Connell (yo) aka “Knuckle Cartel,” is a former big cheese and intellectual powerhouse behind the wildly successful Seattle-based experimental theater company Implied Violence. I, Mandie, have experienced the same “conservatism” and capitalistic partnership between Money and Art in the performance/theater scene. Witnessing firsthand the immense power that cash-wielding creeps hold over creatives is sickening, sad, and sordid. I’ve had enough, and so have you…right? Let’s fix a broken system. If we can’t fix it, let’s circumvent it.
REWIND!…If you liked this post, check out:
On Sound and Pleasure: Meditations on the Human Voice–Yvon Bonenfant