Tag Archive | Walter Benjamin

detritus 1 & 2 and V.F(i)n_1&2: The Sounds and Images of Postnational Violence in Mexico

This April forum, Acts of Sonic Intervention, explores what we over here at Sounding Out! are calling “Sound Studies 2.0”–the movement of the field beyond the initial excitement for and indexing of sound toward new applications and challenges to the status quo.

Two years ago at the first meeting of the European Sound Studies Association, I was inspired by the work of scholar and sound artist Linda O’Keeffe and her compelling application of the theories and methodologies of sound studies to immediate community issues. In what would later become a post for SO!, “(Sound)Walking Through Smithfield Square in Dublin,” O’Keeffe discussed her Smithfield Square project and how she taught local Dublin high school students field recording methodologies and then tasked them with documenting how they heard the space of the recently “refurbished” square and the displacement of their lives within it. For me, O’Keeffe’s ideas were electrifying, and I worked to enact a public praxis of my own via ReSounding Binghamton and the Binghamton Historical Soundwalk Project. Both are still in their initial stages; the work has been fascinating and rewarding, but arduous, slow, and uncharted. Acts of Sonic Intervention stems from my own hunger to hear more from scholars, artists, theorists, and/or practitioners to guide my efforts and to inspire others to take up this challenge. Given the exciting knowledge that the field has produced regarding sound and power (a good amount of it published here), can sound studies actually be a site for civic intervention, disruption, and resistance?

Last week, we heard from the Assistant Director at Binghamton University’s Center for Civic Engagement, Christie Zwahlen, who argues that any act of intervention must necessarily begin with self-reflexivity and an examination of how one listens. In coming weeks, we will catch up with Linda O’Keeffe’s newest project, a pilot workshop with older people at the U3A (University of the Third Age) centre in Foyle, Derry, “grounded in an examination of the digital divide, social inclusion and the formation of artists collectives.” We will also hear from artist, theorist, and writer Salomé Voegelin, who will treat us to a multimedia re-sonification of the keynote she gave at 2014’s Invisible Places, Sounding Cities conference in Viseu, Portugal, “Sound Art as Public Art,” which revivified the idea of the “civic” as a social responsibility enacted through sound and listening. This week, artist/scholar Luz María Sánchez gives us the privilege of a behind-the-scenes discussion of her latest work, detritus.2/V.F(i)n_1–first-prize winner at the 2015 Biennial of the Frontiers in Matamoros, Mexico–which uses found recordings and images to break the deleterious silence created by narco violence in Mexico.

–JS, Editor-in-Chief

There is no document of civilization which is not at the same time a document of barbarism.

Walter Benjamin, Illuminations

detritus is an open-ended art project I started in 2011 that takes as its main subject the portrayal of violence in Mexico. I introduce the sounds and images of what I call the Postnational Violence in Mexico using the concept of detritus as the nucleus; I use the cultural objects I produce through my artistic practice as the vehicle. detritus explores violence (1) as it is portrayed through media (radio, TV, newspapers and online platforms) and (2) as it is registered, manipulated and transmitted by its different participants–civilians, the government, NGOs, the military, the cartels.

detritus

The first stage of detritus deals with Mexican media, specifically online newspapers, radio and TV, during the presidency of Felipe Calderón (2006-2012). The whole strategy of [former] President Calderón–even before he took office–was to knock down the violence associated with drug trafficking in Mexico. Just a few days after taking his oath as President of Mexico, he declared war on drug trafficking, a war that ran from 11 December 2006–when Calderón started it by sending 5,000 soldiers and police officers to the state of Michoacán–until his last day in office: 30 November 2012.

During the six years that this war took place, former President Calderón appeared in military garments as “Mexico’s Drug War Commander in Chief.” The main goal of this military strategy was to reclaim control of those states where Mexican cartels were in charge. As Guillermo Pereyra argues in México: violencia criminal y “Guerra contra el narcotráfico” (2012), “Mexico’s Drug War” began as a decision to recover sovereignty in a context of political and social crisis. At the end of this period, there were more than 45,000 officers deployed in the states of Mexico, Baja California, Tamaulipas, Michoacán, Sinaloa and Durango, and more than 60,000 casualties. US media called this war “The Mexican War on Drugs” or “Mexico’s Drug War.”

The research for the visuals of detritus included every single [online] edition of Milenio and Jornada–Mexican national newspapers–from 11 December 2006 until 30 November 2012, and eventually it also included Proceso magazine and El Blog del Narco, an online independent news outlet. This research allowed me to investigate how the media steadily increased the volume of news and images dealing with this war, thereby contributing to the “normalization” of the very violence it covers. As Colombian artist Doris Salcedo states, the normalization of barbarism comes from the excessive number of deaths that violence leaves on society and, [I will add], from the excessive number of images and sounds that media and individuals put into circulation and make viral through social networks and independent online outlets. All of us, either as transmitters or as receivers, are building this texture of violence.

At the end of 2013 detritus was completed: more than 10,200 images, all of them categorized in a database that includes the title of the newspaper, section, header, author of the photograph, caption, and a brief description of the image itself. I used a very simple process of photographic manipulation to alter those 10,200 images. Once transformed, these images are projected, each for a very short period of time [2 seconds], on a large screen. We could stand in front of this projection for hours and never see any of those images repeated. For those who are drawn to numbers: at the beginning of this war, a whole weekend would yield four or five images related to the subject; by the end of 2012, there were more than 40 images over the same period of time.
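For readers who think in code, a minimal sketch of the projection logic described above might look like the following. This is my own illustration, not the artist’s actual software: the file name and database field names are hypothetical stand-ins for the categories listed in the text, and the two-second, non-repeating slideshow is simulated with a simple shuffle.

```python
import csv
import random
import time

def load_catalog(path="detritus_catalog.csv"):
    """Load the image database; each row holds the catalogued fields
    (newspaper, section, header, photographer, caption, description)."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def project(catalog, seconds_per_image=2):
    """Show every image once, in random order, two seconds each,
    so nothing repeats until the whole archive has cycled through."""
    order = list(range(len(catalog)))
    random.shuffle(order)
    for i in order:
        entry = catalog[i]
        # Stand-in for sending the altered image to the projector.
        print(entry.get("newspaper"), "|", entry.get("caption"))
        time.sleep(seconds_per_image)

if __name__ == "__main__":
    project(load_catalog())
```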

detritus.2

But the description of the horror through Mexican media does not include all the necessary voices. That is why civilians started a process of empowering themselves using the tools they have at hand–such as mobile phone cameras–a medium they can use without restrictions. Over the Internet, civilians circulated images, videos, and sounds of their day-to-day experiences dealing with extreme violence. They are not alone in this viralization of violence through audiovisual documents: members of drug cartels and self-defense groups are also uploading footage of their combat. The big difference is each group’s “agenda.” Civilians are in search of an arena to share their experiences; cartels and other armed groups are either in search of validation or in search of documenting the systematic violence used to control whole populations.

Therefore, the audio complement I designed for detritus, first detritus.2 and then its current iteration V.F(i)n_1, features the sounds of shootings recorded by civilians who happened to be at close range. Generally this footage was taken via mobile phone and uploaded onto YouTube, and, unlike the newspaper representations, the image is not necessarily what is most engaging, since the individual making the recording is usually at floor level, protected, in order to avoid being hit by a stray bullet. But the sounds are pristine: even if the image is almost motionless–in the corner of a room, looking through a small part of a window–the sound better describes what is at stake: violence at very close range. The sounds on these recordings are very similar: the shootings sit in the background, and we generally hear voices in the foreground.

guns close up

Each of the twenty recordings that combine to create detritus.2 was taken from YouTube. The shootings occurred in the cities of Nuevo Laredo, Reynosa, Zupango, Orizaba, Saltillo, Juarez, Changuitiro, Purépero, Xalapa, Jiquilpan, Santa María del Oro and Mexico City. All of them, played together, contribute to the assembly of what Salcedo calls a texture of sound. The recordings are reproduced/played by twenty portable digital speakers in the shape of guns. These sound-reproduction machines are completely autonomous–no power or sound cables attached–and each speaker is a sound component by itself. Once the battery is worn down, the sound is gone until the battery is recharged, restarting the cycle of performance/sound and waste/silence. Silence is one of the worst problems when dealing with violence. The government and the drug cartels alike don’t want anybody to openly discuss these issues. Working with families within specific communities in Mexico and the US will help make their stories visible–out of the anonymous data–and visibility could empower them.

The Inferno

But exploring the “normalization” of violence through media is not my only intervention with detritus and detritus.2. Far from the sound art movement, where soundscape often functions as a neutral label for organized sounds taken from the surroundings, detritus.2 deals with the sounds of contemporary Mexican cities, recorded and disseminated by the same individuals who live within these acoustic situations. Those are the sounds that [also] construct the Mexican landscape, telling the story of the failed nation. Taken together, the sounds of detritus.2 amplify the fact that we are standing before the failure of the Mexican state as we know it, and that its civilian population has been dealing with this irregular situation for many decades. We have witnessed drug cartels infiltrate every layer of life; and just because many civilians end up surviving–with and around it–does not make the problem disappear. On the contrary, every broken boundary makes the problem harder and harder to resolve.

The failure of the Mexican state, or the “inferno,” as it is being called now, is something Mexico can no longer hide. When I say Mexico here, I am not referring to its general population–already exhausted from decades in “survival mode”–but rather to the capital’s elite: the government, investors, intellectuals, and journalists alike. This situation is not new to civilians living outside of Mexico City. Entire communities in the north of Mexico have been abandoning their belongings, jobs, and lives in an extremely fast exodus, either to the US or to tranquil states like Yucatán. Thousands of mothers and fathers are looking for their sons and daughters taken by the cartels; in the best-case scenario they have been put to work as slaves at the drug camps or forced into prostitution, and in the worst they may lie in the thousands of mass graves that pollute the country. Civilians understood early in the story that any complaint to the police would result in an even worse situation. For years it has been known in the bus industry that many young male and female travelers have been kidnapped into this industry of slaves, and only recently has the industry started to admit it: tons of luggage at bus terminals in the northern states of Mexico speak for those who went missing, and nobody said a word. Just this past 19 October 2014, the corpse of a missing police officer’s mother was placed in front of the Ministry of the Interior’s building: the authorities never pursued an investigation into the disappearance of the young officer, and the last will of his ailing mother was that her coffin be placed in the street outside the Ministry of the Interior as an act of extreme protest.

Listening Ahead: V. (u)nF_2

In the next phase of detritus.2, V. (u)nF_2–an acronym for Vis. (un)necessary force–I am making sculptural objects and sounds to construct a multi-channel sound installation exploring the question: how do civilians in Mexico live through the extreme violence produced by the fight against drug cartels in a state that has revealed its own failure? The artwork consists of a series of custom-made ceramic sound devices/megaphones in the shape of human heads/faces, molded after living family members of civilians who are still on the “missing” lists, perhaps kidnapped and/or killed by drug cartels. In order to make an archive that includes each family’s data, I will collaborate with organizations that assist civilians in finding their relatives. To make a representative selection, I plan to analyze the data through a mathematical algorithm; chosen families will be invited to be part of the project. Each family will designate a member to participate symbolically as the “missing” person. A 3D-scan data portrait will be made of each participant, followed by a ceramic 3D print. I will then install an electronic circuit and megaphone inside each hollow ceramic head/face. To develop the sound element–a thick stratum of noise–I will digitally modify a multi-layered construction of sounds based on the stored data. The specifics of each story/participant will be presented at the exhibition space through an interactive database. The custom-made ceramic objects/megaphones will rest on the floor; in order to cross the exhibition space, visitors will have to carefully move these 3D ceramic portraits, each one representing an individual story.

V. (u)nF_2 is a gesture that listens forward, taking those 24,000–and counting–missing individuals out of the data archives and rehumanizing them through storytelling, 3D scan/print technology, and sound. The fact that I will use traditional methods to approach my subject–the horror of this war against civilians–but will also use state-of-the-art technology to shape the hardware needed for the sound installation combines a human-scale project with the possibilities of the digital world, which places this project within the so-called Third Industrial Revolution but grounds it in the real.

V.F(i)n_1 is now on view at the Museo de Arte Contemporáneo de Tamaulipas (MACT) in Matamoros (the border city across from Brownsville). It will open in August-September at the Museo de Arte Carrillo Gil in Mexico City.

 

Listen to other sound installations by Luz María Sánchez:

Frecuencias Policiacas // Police Frequencies: “The recordings that form part of the installation’s multichannel audio were made at the police radio-communication center in Nuevo Laredo and were provided to the artist by reporters from the newspaper El Mañana in August 2005. The audio registers a confrontation between the Nuevo Laredo police and an unidentified criminal group; given the character of the recordings, we can hear various police officers as well as the radio dispatchers. The re-transmission of these sounds in a multi-linear matrix places the work at new levels of codification in which the visual, auditory, and socio-political complexity of this reality becomes evident.” –Description by Roberto Arcaute and Manuel Rocha Iturbide

 

2487: “2487 speaks the names of the two thousand four hundred eighty-seven people who died crossing the U.S./Mexico border. The work employs digital technology and sound as a means for transborder memorialization and protest, imposing the absence of those lost into the public sphere. Sánchez’ immersive sound environment remaps social history as the names of the deceased fly across the border through soundscape and digital media. Drawing from data acquired from activist websites, Sánchez created a sound map of names which she recorded digitally. Her final score, along with the database, has been exhibited widely but lives permanently on the world wide web, in commemoration and quiet protest. Sánchez’ work connects the digital and geographic landscape to the listener’s body, gaining entry through sound and transcending political and physical barriers”–Description from UCR Critical Digital 8/19/2012

 

Sound and visual artist Luz María Sánchez studied both music and literature. Through her doctoral studies Sánchez has focused on the role of sound in art, from its inception in the 19th century through its evolution as an independent art practice in the 20th century. Sánchez then examined the radio plays of Samuel Beckett, linking them to the sound practices that emerged in the mid-20th century. Sánchez has continued her research on technologized sound: she was part of the conference Mapping Sound and Urban Space in the Americas at Cornell University, and her book Technological Epiphanies: Samuel Beckett’s Use of Audiovisual Machines will be published in 2015. Her artwork has been included in major sound and music festivals such as the Zeppelin Sound Art Festival (Spain), the Bourges International Festival of Electronic Music and Sonic Art (France), and the Festival Internacional de Arte Sonoro (Mexico), and she has presented exhibitions at the Marion Koogler McNay Art Museum, the Dallas Center for Contemporary Art, Galería de la Raza (San Francisco), the John Michael Kohler Arts Center (Sheboygan), the Illinois State Museum (Chicago/Springfield), and the Centro de Cultura Contemporánea (Barcelona), amongst others. She was granted a special distinction in the Nouvea Musiques category at the Phonurgia Nova Prix (Arles), was the recipient of a Círculo de Bellas Artes de Madrid grant, and was selected by Yuko Hasegawa for the Artpace International Artist-in-Residence program. She is a member of the Board of the Sound Experimentation Space at the Museum of Contemporary Art (MUAC). Sánchez was recently awarded first prize at the Biennial of the Frontiers (2015).

REWIND!…If you liked this post, you may also dig:

“A Listening Mind: Sound Learning in a Literature Classroom”–Nicole Brittingham Furlonge

“Soundscapes of Narco Silence”–Marci R. McMahon

“Listening to the Border: ‘2487: Giving Voice in Diaspora’ and the Sound Art of Luz María Sánchez”–D. Ines Casillas

A Brief History of Auto-Tune

This is the final article in Sounding Out!‘s April Forum on “Sound and Technology.” Every Monday this month, you’ve heard new insights on this age-old pairing from the likes of Sounding Out! veteranos Aaron Trammell and Primus Luta along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks have shared their thinking about everything from Auto-Tune to techie manifestos. Today, Marshall helps us understand just why we want to shift pitch-time so darn bad. Wait, let me clean that up a little bit. . .so darn badly. . .no wait, run that back one more time. . .jjuuuuust a little bit more. . .so damn badly. Whew! There! Perfect!–JS, Editor-in-Chief

A recording engineer once told me a story about a time when he was tasked with “tuning” the lead vocals from a recording session (identifying details have been changed to protect the innocent). Polishing up vocals is an increasingly common job in the recording business, with some dedicated vocal producers even making it their specialty. Being able to comp, tune, and repair the timing of a vocal take is now a standard skill set among engineers, but in this case things were not going smoothly. Whereas singers usually tend towards being either consistently sharp or flat (“men go flat, women go sharp,” as another engineer explained), in this case the vocalist was all over the map, making it difficult to always know exactly what note they were even trying to hit. Complicating matters further was the fact that this band had a decidedly lo-fi, garage-y reputation, making your standard-issue, Glee-grade tuning job entirely inappropriate.

Undaunted, our engineer pulled up the Auto-Tune plugin inside Pro-Tools and set to work tuning the vocal, to use his words, “artistically” – that is, not perfectly, but enough to keep it from being annoyingly off-key. When the band heard the result, however, they were incensed – “this sounds way too good! Do it again!” The engineer went back to work, this time tuning “even more artistically,” going so far as to pull the singer’s original performance out of tune here and there to compensate for necessary macro-level tuning changes elsewhere.

“Melodyne screencap” by Flickr user Ethan Hein, CC BY-NC-SA 2.0

The product of the tortuous process of tuning and re-tuning apparently satisfied the band, but the story left me puzzled… Why tune the track at all? If the band was so committed to not sounding overproduced, why go to such great lengths to make it sound like you didn’t mess with it? This, I was told, simply wasn’t an option. The engineer couldn’t in good conscience let the performance go un-tuned. Digital pitch correction, it seems, has become the rule, not the exception, so much so that the accepted solution for too much pitch correction is more pitch correction.

Since 1997, recording engineers have used Auto-Tune (or, more accurately, the growing pantheon of digital pitch correction plugins for which Auto-Tune, Kleenex-like, has become the household name) to fix pitchy vocal takes, lend T-Pain his signature vocal sound, and reveal the hidden vocal talents of political pundits. It’s the technology that can make the tone-deaf sing in key, make skilled singers perform more consistently, and make MLK sound like Akon. And at 17 years of age, “The Gerbil,” as some like to call Auto-Tune, is getting a little long in the tooth (certainly by meme standards). The next U.S. presidential election will include a contingent of voters who have never drawn air that wasn’t once rippled by Cher’s electronically warbling voice in the pre-chorus of “Believe.” A couple of years after that, the Auto-Tune patent will expire and its proprietary status will dissolve into the collective ownership of the public domain.


Growing pains aside, digital vocal tuning doesn’t seem to be leaving any time soon. Exact numbers are hard to come by, but it’s safe to say that the vast majority of commercial music produced in the last decade or so has most likely been digitally tuned. Future Music editor Daniel Griffiths has ballpark-estimated that, as early as 2010, pitch correction was used in about 99% of recorded music. Reports of its death are thus premature at best. If pitch correction seems banal, that doesn’t mean it’s on the decline; rather, it’s a sign that we are increasingly accepting its underlying assumptions and internalizing the habits of thought and listening that go along with them.

Headlines in tech journalism are typically reserved for the newest, most groundbreaking gadgets. Often, though, the really interesting stuff only happens once a technology begins to lose its novelty, recede into the background, and quietly incorporate itself into fundamental ways we think about, perceive, and act in the world. Think, for example, about all the ways your embodied perceptual being has been shaped by and tuned-in to, say, the very computer or mobile device you’re reading this on. Setting value judgments aside for a moment, then, it’s worth thinking about where pitch correction technology came from, what assumptions underlie the way it works and how we work with it, and what it means that it feels like “old news.”

“Anti-Tune symbol”

As is often the case with new musical technologies, digital pitch correction has been the target for no small amount of controversy and even hate. The list of indictments typically includes the homogenization of music, the devaluation of “actual talent,” and the destruction of emotional authenticity. Suffice to say, the technological possibility of ostensibly producing technically “pitch-perfect” performances has wreaked a fair amount of havoc on conventional ways of performing and evaluating music. As Primus Luta reminded us in his SO! piece on the powerful-yet-untranscribable “blue notes” that emerged from the idiosyncrasies of early hardware samplers, musical creativity is at least as much about digging-into and interrogating the apparent limits of a technology as it is about the successful removal of all obstacles to total control of the end result.

Paradoxically, it’s exactly in this spirit that others have come to the technology’s defense: Brian Eno, ever open to the unexpected creative agency of perplexing objects, credits the quantized sound of an overtaxed pitch corrector with renewing his interest in vocal performances. SO!’s own Osvaldo Oyola, channeling Walter Benjamin, has similarly offered a defense of Auto-Tune as a democratizing technology, one that both destabilizes conventional ideas about musical ability and allows everyone to sing in-tune, free from the “tyranny of talent and its proscriptive aesthetics.”

“Audiodatenkompression: Manowar, The Power of Thy Sword” by Wikimedia user Moehre1992, CC BY-SA 3.0

Jonathan Sterne, in his book MP3, offers an alternative to normative accounts of media technology (in this case, narratives either of the decline or rise of expressive technological potential) in the form of “compression histories” – accounts of how media technologies and practices directed towards increasing their efficiency, economy, and mobility can take on unintended cultural lives that reshape the very realities they were supposed to capture in the first place. The algorithms behind the MP3 format, for example, were based in part on psychoacoustic research into the nature of human hearing, framed primarily around the question of how many human voices the telephone company could fit into a limited bandwidth electrical cable while preserving signal intelligibility. The way compressed music files sound to us today, along with the way in which we typically acquire (illegally) and listen to them (distractedly), is deeply conditioned by the practical problems of early telephony. The model listener extracted from psychoacoustic research was created in an effort to learn about the way people listen. Over time, however, through our use of media technologies that have a simulated psychoacoustic subject built-in, we’ve actually learned collectively to listen like a psychoacoustic subject.

Pitch-time manipulation runs largely in parallel to Sterne’s bandwidth compression story. The ability to change a recorded sound’s pitch independently of its playback rate had its origins not in the realm of music technology, but in efforts to time-compress signals for faster communication. Instead of reducing a signal’s bandwidth, pitch manipulation technologies were pioneered to reduce the time required to push the message through the listener’s ears and into their brain. As early as the 1920s, the mechanism of the rotating playback head was being used to manipulate pitch and time interchangeably. By spinning a continuous playback head relative to the motion of the magnetic tape, researchers in electrical engineering, educational psychology, and pedagogy of the blind found that they could increase the playback rate of recorded voices without turning the speakers into chipmunks. Alternatively, they could rotate the head against a static piece of tape and allow a single moment of recorded sound to unfold continuously in time – a phenomenon that influenced the development of a quantum theory of information.
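For readers curious about how this works digitally, here is a bare-bones sketch of the modern descendant of that spinning-head trick: an overlap-add time stretcher that plays a recording faster or slower without changing its pitch. This is my own illustration under stated assumptions, not any historical device’s algorithm; a production tool (or a full phase vocoder) would do considerably more work to avoid phasing artifacts.

```python
import numpy as np

def time_stretch_ola(signal, rate, frame=2048, hop_out=512):
    """Change a mono signal's duration without changing its pitch by
    reading Hann-windowed grains at one hop size and overlap-adding
    them at another. rate > 1 shortens (faster playback), rate < 1
    lengthens. Simplified: no phase alignment, so artifacts remain."""
    hop_in = int(hop_out * rate)              # read grains faster/slower than we write them
    window = np.hanning(frame)
    n_frames = max(1, (len(signal) - frame) // hop_in)
    out = np.zeros(n_frames * hop_out + frame)
    for i in range(n_frames):
        grain = signal[i * hop_in : i * hop_in + frame] * window
        out[i * hop_out : i * hop_out + frame] += grain   # overlap-add at the output hop
    return out / (window.sum() / hop_out)                 # rough gain normalization

# Example: halve the duration of one second of a 220 Hz tone at 44.1 kHz;
# the result is ~0.5 s long but still sounds at ~220 Hz (no chipmunk effect).
sr = 44100
tone = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
faster = time_stretch_ola(tone, rate=2.0)
```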

In the early days of recorded sound some people had found a metaphor for human thought in the path of a phonograph’s needle. When the needle became a head and that head began to spin, ideas about how we think, listen, and communicate followed suit: In 1954 Grant Fairbanks, the director of the University of Illinois’ Speech Research Laboratory, put forth an influential model of the speech-hearing mechanism as a system where the speaker’s conscious intention of what to say next is analogized to a tape recorder full of instructions, its drive “alternately started and stopped, and when the tape is stationary a given unit of instruction is reproduced by a moving scanning head”(136). Pitch time changing was more a model for thinking than it was for singing, and its imagined applications were thus primarily non-musical.

Take for example the Eltro Information Rate Changer. The first commercially available dedicated pitch-time changer, the Eltro advertised its uses as including “pitch correction of helium speech as found in deep sea; Dictation speed testing for typing and steno; Transcribing of material directly to typewriter by adjusting speed of speech to typing ability; medical teaching of heart sounds, breathing sounds etc. by slow playback of these rapid occurrences.” (It was also, incidentally, used by Kubrick to produce the eerily deliberate vocal pacing of HAL 9000). In short, for the earliest “pitch-time correction” technologies, the pitch itself was largely a secondary concern, of interest primarily because it was desirable for the sake of intelligibility to pitch-change time-altered sounds into a more normal-sounding frequency range.


This coupling of time compression with pitch changing continued well into the era of digital processing. The Eventide Harmonizer, one of the first digital hardware pitch shifters, was initially used to pitch-correct episodes of “I Love Lucy” which had been time-compressed to free up broadcast time for advertising. Similar broadcast time compression techniques have proliferated and become common in radio and television (see, for example, David Foster Wallace’s account of the “cashbox” compressor in his essay on an LA talk radio station). Speed listening technology initially developed for the visually impaired has similarly become a way of producing the audio “fine print” at the end of radio advertisements.

“H910 Harmonizer” by Wikimedia user Nalzatron, CC BY-SA 3.0

Though the popular conversation about Auto-Tune often leaves this part out, it’s hardly a secret that pitch-time correction is as much about saving time as it is about hitting the right note. As Auto-Tune inventor Andy Hildebrand put it,

[Auto-Tune’s] largest effect in the community is it’s changed the economics of sound studios…Before Auto-Tune, sound studios would spend a lot of time with singers, getting them on pitch and getting a good emotional performance. Now they just do the emotional performance, they don’t worry about the pitch, the singer goes home, and they fix it in the mix.

Whereas early pitch-shifters aimed to speed-up our consumption of recorded voices, the ones now used in recording are meant to reduce the actual time spent tracking musicians in studio. One of the implications of this framing is that emotion, pitch, and the performer take on a very particular relationship, one we can find sketched out in the Auto-Tune patent language:

Voices or instruments are out of tune when their pitch is not sufficiently close to standard pitches expected by the listener, given the harmonic fabric and genre of the ensemble. When voices or instruments are out of tune, the emotional qualities of the performance are lost. Correcting intonation, that is, measuring the actual pitch of a note and changing the measured pitch to a standard, solves this problem and restores the performance. (Emphasis mine. Similar passages can be found in Auto-Tune’s technical documentation.)

In the world according to Auto-Tune, the engineer is in the business of getting emotional signals from place to place. Emotion is the message, and pitch is the medium. Incorrect (i.e. unexpected) pitch therefore causes the emotion to be “lost.” While this formulation may strike some people as strange (for example, does it mean that we are unable to register the emotional qualities of a performance from singers who can’t hit notes reliably? Is there no emotionally expressive role for pitched performances that defy their genre’s expectations?), it makes perfect sense within the current division of labor and affective economy of the recording studio. It’s a framing that makes it possible, intelligible, and at least somewhat compulsory to have singers “express emotion” as a quality distinct from the notes they hit and have vocal producers fix up the actual pitches after the fact. Both this emotional model of the voice and the model of the psychoacoustic subject are useful frameworks for the particular purposes they serve. The trick is to pay attention to the ways we might find ourselves bending to fit them.


Owen Marshall is a PhD candidate in Science and Technology Studies at Cornell University. His dissertation research focuses on the articulation of embodied perceptual skills, technological systems, and economies of affect in the recording studio. He is particularly interested in the history and politics of pitch-time correction, cybernetics, and ideas and practices about sensory-technological attunement in general. 

Featured image: “Epic iPhone Auto-Tune App” by Flickr user Photo Giddy, CC BY-NC 2.0

REWIND!…If you liked this post, you may also dig:

“From the Archive #1: It is art?”-Jennifer Stoever

“Garageland! Authenticity and Musical Taste”-Aaron Trammell

“Evoking the Object: Physicality in the Digital Age of Music”-Primus Luta

“HOW YOU SOUND??”: The Poet’s Voice, Aura, and the Challenge of Listening to Poetry

This post is dedicated to the memory of Amiri Baraka, who passed away on January 9, 2014 in Newark, New Jersey.

I began writing this post while my wife, Sarah, was at a conference on writing curriculum for high school literature. Over the phone one night she asked how to help students better understand the language of Shakespeare, and at a loss for suggestions (not only because I don’t study early modern drama), I recalled my own adolescent struggles with Macbeth, Hamlet, and Julius Caesar. I recalled well-intentioned teachers who gave me recordings, telling me that they would help me get an “ear” for Shakespeare’s language—yet all I remember, maybe all I learned, while listening to the Caedmon recording of Macbeth on vinyl, was that, to my mid-1990s ear, Shakespeare (anachronistically) sounded like Star Wars (which appeared 17 years after the 1960 Caedmon album).

My high school confusion has not completely faded when it comes to the sound of recorded poetic language, even more so when the notion of the poet’s voice is thrown into the mix. As opposed to verse recited by actors (the Caedmon Macbeth featured Anthony Quayle), or the sound of the syllables when we read a poem silently to ourselves, I find it tough to parse the idea of the sound of the poem in terms of the poet’s voice because “voice” is a slippery category—a constructed one, contingent upon the given historical moment of inscription and reception. It is tough because this idea of the sound of the poem, located in the voice of the poet, gets complicated with sonic technologies where voice is subject to the shifting conditions of fidelity.

“Record Player” by DeviantArt user SomeAreLove, CC-BY-NC-ND-3.0

The act of listening to recorded poetry thus poses particular analytic challenges, which become more complex when the politics of identity are brought to bear on these questions of voice and poetry. As a site for identity production, the recorded poetry performance projects a mediated voice that is a potential self. The “sound” of this poetic subjectivity is different from recording to recording, even of the same poem. In an effort to work through these complexities, this post takes up three different recordings of Amiri Baraka’s poem “Black Dada Nihilismus,” which offer variations in delivery and performance that each depend upon the social, political, and aesthetic dimensions of the soundscape that each recording is embedded within.

“Black Dada Nihilismus” is an excellent opportunity to consider the overlapping challenges of voice, performance and the politics of identity in recorded poetry. Published in the early 1960s, this poem was written before Baraka’s shift in politics, which was precipitated by the assassination of Malcolm X in 1965, yet the poem anticipates the intersection of aesthetics and politics during the Black Arts Movement in the late 1960s into the 70s. This shift can be tracked in the sonic details of the first two recordings, made in 1964 and 1965. In the third version, a 1993 remix by DJ Spooky, we can hear how this shift reverberates beyond its historical moment.

In a statement of poetics included in Donald Allen’s classic 1959 anthology The New American Poetry, Baraka (then LeRoi Jones) asked: “HOW YOU SOUND??” How a poet’s poem sounded mattered most for him: “you have to start and finish there … your own voice … how you sound” (425). Primarily referencing the poem on the page, he wasn’t whistling in the dark: often thought of as a vocal performance of language, poetry has a long history with sound. One thread of this history is the Homeric tradition of an “oral poetics,” a tradition where, as Albert Lord notes in The Singer of Tales, socialized performances of poetry were simultaneously modes of composition. The feel of language in the body remained inseparable from the poetry that relayed the heroic tales of the ancient world. In The Sounds of Poetry, Robert Pinsky offers a similar account of sound and voice, suggesting that the “sound” of language, the sensuous play of speech, is the material for poetic composition. Or as Charles Bernstein has it in Close Listening, “poetry needs to be sounded” because it is a way to understand it better (7).

“Le Roi Jones” by Flickr user UIC Digital Collections, CC BY-NC-ND 2.0

Poetry is often said to be difficult—but how would a poet’s “sounding” of a poem help a listener better understand it, as Bernstein suggests? How is the recorded voice resonating in air different from inert marks on a page? What is the status of that difference? Why or how would the sound recording signify differently than the poem on the silent page? In short, is listening easier than reading? My answer to the final question is a resounding “no.” For me, the challenge is how to consider the recorded poetry performance in both formal and aural terms so as to remain tuned in to the aesthetic and the poetic as well as the social and historical dimensions of a particular poet’s work. This is not easily done.

“Black Dada Nihilismus” was first published in The Dead Lecturer (1964) and later included in Transbluesency (1995). Written in two parts, it asserts a black aesthetic by critiquing the dominance of (white) light in Western art and suggesting a connection between this light, ethnic violence, and religious ideology. This is how the poem opens:

                                        Against what light
is false what breath
sucked, for deadness.

                                        Murder, the cleansed
purpose, frail, against
God, if they bring him

                                        Bleeding, I would not
forgive, or even call him
black dada nihilismus.

The protestant love, wide windows,
color blocked to Mondrian, and the
ugly silent deaths of jews […]

(Transbluesency 97)

Through critique the poem develops the connections between aesthetics and racial dominance and violence. These connections take on different inflections in each recorded version of the poem, and with each inflection another aspect of them is amplified.

The first version is a bootleg of a reading at the Asilomar Negro Writers Conference that was held in Pacific Grove, California, in early August, 1964.

Asilomar Conference Grounds, site of 1964 Negro Writers Conference

In addition to the preamble, where Baraka explains some of the poem’s key terms such as Dada, which he describes as a movement in France (rather than Germany or Switzerland), another sonic detail that marks this as “live” comes at the 2:59 mark, when we hear the flap of a turning page, reminding us that Baraka is treating the poem as a script in these recordings. In this version, the opening lines are sharply delivered, the voice fully pausing at the line breaks and acutely pronouncing the hard vowels (e.g. “sucked”). Against the continuous background hush of the original reel-to-reel recording, Baraka punches his words into the air, as if trying to find a rhythm between these harder vowels and the softer ones that often denote the poem’s object of critique (e.g. “light”).

The next version is off the A side of New York Art Quartet and Imamu Amiri Baraka (ESP Disk 1965), where the poem’s rhythm is immediately established by the musical accompaniment.

Between the first recording and this one, a shift began in Baraka’s development as a poet. The assassination of Malcolm X pushed him to think even more about race, politics, and art. In this version the opening lines, delivered with punch and pause in the bootleg, take on a different register when juxtaposed with the smooth coolness of the quartet. Overall, though, the poem is delivered more militantly here. In the first version the opening lines are delivered forcefully, but ultimately this forcefulness subsides over the course of the reading. The opposite is the case in this studio version, which slowly builds to the apex of the poem, its point of greatest force, in this stanza:

Black scream
and chant, scream,
and dull, un
earthly
hollering.

(Transbluesency 99)

In the bootleg, the turn of the page—between “earthly” and “hollering”—interrupts this stanza, and Baraka hesitates and slowly finds his way toward the poem’s close, while in the studio version, the musical accompaniment reaches a fevered pitch here, making it feel as if it is at the edge of the scream that it names. This prepares us for the closing litany of names of black figures of “black dada nihilismus,” which goes like this:

For tambo, willie best, dubois, patrice, mantan, the
bronze buckaroos.

For Jack Johnson, asbestos, tonto, buckwheat,
billie holiday.

(Ibid.)

In the final version, which is DJ Spooky’s remix of the second one, included on the CD Offbeat: A Red Hot Soundtrip (TVT Records 1996), this litany feels more like the outro it is meant as, set against Spooky’s beats and moody reverb.

One aspect of the poem amplified in the remix is the sequence of stanzas leading up to the apex stanza of the “black scream.” In a series of tercets that open the second section, the speaker addresses the experience of racial oppression and a growing need to strike back:

The razor. Our flail against them, why
you carry knives? Or brutaled lumps of

heart? Why you stay, where they can
reach? Why you sit, or stand, or walk
in this place, a window on a dark

warehouse […]

(Transbluesency 98)

The “why” is significantly amplified in the remix, forcing us to hear the ironic indictment of the oppressive “light”–explicit in Baraka’s tercets but not as audible in the other two tracks.

“IMG_0433″ by Flickr user Beyond Baroque, CC-BY-NC-SA-2.0

The original recordings of these versions of “Black Dada Nihilismus” are each in a different format: vinyl LP, reel-to-reel tape, and CD. I have been working with digitized versions, so the way I am hearing these recordings—through a smooth digitized MP3 file or YouTube clip—is not the same as the crackle of a needle running an LP’s groove or a nearly noiseless laser tracing a CD. These variations in format mean that the different ways these versions individually signify—their respective “sounds”—are flattened out by compression. Despite this loss of material context, Baraka still sounds different in each of these tracks. Each version of Baraka’s poem offers us another iteration of his “voice,” and of the poem, but listening to each of them does not necessarily provide a better understanding of it. We are, though, given different sonic experiences that depend upon the purpose of Baraka’s performance, the listener imagined during the reading, and the voice enunciated through the mediated environment.

Some vocal details do remain consistent across these recordings. For example, one of the poem’s most memorable phrases—“Hermes, the/the blacker art”—which occurs toward the close of the poem’s first section, is steadily delivered in a lower register, in the hush of an aside, and might be taken as the motif of each of these variations.

“Stage Microphone TTV” by Flickr user Keith Bloomfield, CC-BY-NC-ND-2.0

A vast archive of recorded poetry exists. Mid-century recording projects by Caedmon and Folkways made “voices” of well-known poets, such as Robert Frost and Dylan Thomas, available for mainstream consumption. More recent anthologies and series like Poetry Speaks and The Voice of the Poet suggest that the “voice of the poet” still holds appeal. The proliferation of online sound archives such as Penn Sound and From the Fishouse further attest to an ongoing investment in recording, storing, and making available sound files of poets reading their work. And this fascination with the “sound” of poetry is not limited to mainstream cultural spheres or web-based archives. Several scholarly collections on this convergence of sound, voice, and poetry such as Bernstein’s already-mentioned Close Listening, Adelaide Morris’s Sound States, and Marjorie Perloff’s and Craig Dworkin’s The Sound of Poetry/The Poetry of Sound have appeared over the last decade.

The idea of the sound of the poem, located in the mediated voice of the poet, therefore remains relevant today. In many of these instances, however, the poet’s voice falsely takes on an authoritative “aura,” as Walter Benjamin used that word in his (recently re-translated) “The Work of Art in the Age of Its Technological Reproducibility.” Benjamin uses “aura” to talk about authenticity in art and how that is lost when images (or sounds) can be reproduced and widely distributed, and this is not a bad thing: “technological reproducibility emancipates the work of art from its parasitic subservience to ritual. To an ever-increasing degree, the work produced becomes the reproduction of work designed to be reproduced” (24). When Benjamin’s concept is applied to recorded poetry, two key points emerge. First, the “sound” of a poet’s voice is the product of technological conditions. Second, just as a book editor makes aesthetic judgments based on a perceived audience, a listener is imagined when a poetry performance is recorded. Too bad I didn’t know this in high school.

“A Poem for Speculative Hipsters by Amiri Baraka” by Flickr user Shawn Calhoun, CC-BY-NC-2.0

Featured image: “Paula Varjack” by Flickr user Very Quiet, CC-BY-SA-2.0

John Hyland recently completed his dissertation on sound, poetics, and the black diaspora, titled “Atlantic Reverberations: The Sonic Performances of Black Diasporic Poetries,” at the University at Buffalo, SUNY. His poems, essays, and reviews have appeared (or are forthcoming) in a range of journals, such as The Journal of Postcolonial Writing, College Literature, and Borderlands. Recently, he has enjoyed performing with the Buffalo Poets Theater and co-edited a special issue of the poetry journal kadar koli on the relationship between violence and the expressive arts.  

REWIND! . . .If you liked this post, you may also dig:

Hearing the Tenor of the Vendler/Dove Conversation: Race, Listening, and the “Noise” of Texts-Christina Sharpe

Pretty, Fast, and Loud: The Audible Ali–Tara Betts

The Sounds of Anti-Anti-Essentialism: Listening to Black Consciousness in the Classroom-Carter Mathes

In Defense of Auto-Tune

Lil Wayne, I Am Still Music Tour, Photo by Matthew Eisman

I am here today to defend auto-tune. I may be late to the party, but if you watched Lil Wayne’s recent schizophrenic performance on MTV’s VMAs you know that auto-tune isn’t going anywhere.   The thoughtful and melodic opening song “How to Love” clashed harshly with the expletive-laden guitar-rocking “John” Weezy followed with. Regardless of how you judge that disjunction, what strikes me about the performance is that auto-tune made Weezy’s range possible. The studio magic transposed onto the live moment dared auto-tune’s many haters to revise their criticisms about the relationship between the live and the recorded. It suggested that this technology actually opens up possibilities, rather than marking a limitation.

Auto-tune is mostly synonymous with the intentionally mechanized vocal distortion effect of singers like T-Pain, but it has actually been used for clandestine pitch correction in the studio for over 15 years.  Cher’s voice on 1998’s “Believe” is probably the earliest well-known use of the device to distort rather than correct, though at the time her producers claimed to have used a vocoder pedal, probably in an attempt to hide what was then a trade secret—the Antares Auto-Tune machine is widely used to correct imperfections in studio singing. The corrective function of auto-tune is more difficult to note than the obvious distortive effect because when used as intended, auto-tuning is an inaudible process. It blends flubbed or off-key notes to the nearest true semi-tone to create the effect of perfect singing every time.  The more off-key a singer is, the harder it is to hide the use of the technology.  Furthermore, to make melody out of talking or rapping the sound has to be pushed to the point of sounding robotic.
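To make the distinction between correction and distortion concrete, here is a toy sketch of the basic “retune” idea described above: detect a pitch, find the nearest note on the equal-tempered scale, and pull the voice toward it. This is my own illustration, not Antares’ actual algorithm; real Auto-Tune also tracks pitch over time, respects a chosen key or scale, and applies a retune speed, which is part of what separates transparent correction from the robotic effect.

```python
import math

A4 = 440.0  # reference tuning, in Hz

def snap_to_semitone(freq_hz, strength=1.0):
    """Pull a detected pitch toward the nearest true semitone.
    strength=1.0 snaps hard (the audible, robotic effect);
    smaller values correct more gently and less detectably."""
    note = 12 * math.log2(freq_hz / A4)            # distance from A4 in semitones
    target = round(note)                           # nearest equal-tempered note
    corrected = note + strength * (target - note)  # move part or all of the way there
    return A4 * 2 ** (corrected / 12)

# A voice wavering at 452 Hz lands exactly on A4 (440 Hz) when snapped hard,
# and only partway there with a gentler setting.
print(round(snap_to_semitone(452.0), 1))                # 440.0
print(round(snap_to_semitone(452.0, strength=0.5), 1))  # ~446.0
```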

Antares Auto-Tune 7 Interface

The dismissal of auto-tuned acts is usually made in terms of a comparison between the modified recording and what is possible in live performance, like indie folk singer Neko Case’s extended tongue-lashing in Stereogum. Auto-tune makes it so that anyone can sing whether they have talent or not, or so the criticism goes, putting determination of talent before evaluation of the outcome. This simple critique conveniently ignores how recording technology has long shaped our expectations in popular music and for live performance. Do we consider how many takes were required for Patti LaBelle to record “Lady Marmalade” when we listen? Do we speculate on whether spliced tape made up for the effects of a fatiguing day of recording? Chances are that even your favorite and most gifted singer has benefited from some form of technology in recording their work. When someone argues that auto-tune allows anyone to sing, what they are really complaining about is that an illusion of authenticity has been dispelled. My question in response is: So what? Why would it be so bad if anyone could be a singer through auto-tuning technology? What is really so threatening about its use?

As Walter Benjamin writes in “The Work of Art in the Age of Mechanical Reproduction,” the threat to art presented by mechanical reproduction emerges from the fact that its authenticity cannot be reproduced—but authenticity is a shibboleth. He explains that what is really threatened is the authority of the original; but how do we determine what is original in a field where the influences of live performance and recorded artifact are so interwoven? Auto-tune represents just another step forward in undoing the illusion of art’s aura. It is not the quality of art that is endangered by mass access to its creation, but rather the authority of cultural arbiters and the ideological ends they serve.

Auto-tune supposedly obfuscates one of the indicators of authenticity: imperfections in the work of art. However, recording technology has already made error less notable as a sign of authenticity, to the point where the near-perfection of recorded music becomes the sign of authentic talent and the standard to which live performance is compared. We expect the artist to perform the song as we have heard it in countless replays of the single, ignoring that the corrective technologies of recording shaped the contours of our understanding of the song.

In this way, we can think of the audible auto-tune effect as actually re-establishing authenticity by making itself transparent. An auto-tuned song establishes its authority by casting into doubt the ability of any art to be truly authoritative and owning up to that lack. Listen to the auto-tuned hit “Blame It” by Jamie Foxx, featuring T-Pain, and note how their voices are made nearly indistinguishable by the auto-tune effect.

It might be the case that anyone is singing that song, but that doesn’t make it less bumping or less catchy—in fact, I’d argue the slippage makes it catchier. The auto-tuned voice is the sound of a democratic voice. There isn’t much precedent for actors becoming successful singers, but “Blame It” provides evidence of the transcendent power of auto-tune in allowing anyone to participate in art and culture making. As Benjamin reminds us, “The fact that the new mode of participation first appeared in a disreputable form must not confuse the spectator.” The fact that “anyone” can do it increases possibilities and casts the all-encompassing dismissal of auto-tune as reactionary and elitist.

Mechanical reproduction may “pry an object from its shell” and destroy its aura and authority–demonstrating the democratic possibilities in art as it is repurposed–but I contend that auto-tune goes one step further. It pries singing free from the tyranny of talent and its proscriptive aesthetics. It undermines the authority of the arbiters of talent and lets anyone potentially take part in public musical vocal expression. Even someone like Antoine Dodson can take part: his rant on the local news ended up a catchy internet hit thanks to the Songify project.

Auto-tune represents a democratic impulse in music. It is another step in the increasing access to cultural production, moving beyond the special classes of people whose social or economic position lets them determine what is worthy. Sure, not everyone can afford the Antares Auto-Tune machine, but recent history has demonstrated that such technologies become increasingly affordable and more widely available. Rather than cold and soulless, the mechanized voice can give direct access to the pathos of melody when used by those whose natural talent is not for singing. Listen to Kanye West’s 808s & Heartbreak, or (again) Lil Wayne’s “How To Love.” These artists aren’t trying to get one over on their listeners; just the opposite, they want to evoke an earnestness that they feel can only be expressed through the singing voice. Why would you want to resist a world where anyone could sing their hearts out?

 

Osvaldo Oyola is a regular contributor to Sounding Out! He is also an English PhD student at Binghamton University.
