“Music has always confounded value,” writes interdisciplinary artist and writer Jace Clayton in Uproot: Travels in 21st-Century Music and Digital Culture (FSG Originals, 2016, 22). Recounting his extensive international travels performing as DJ /rupture, Clayton presents a flow of cosmopolitan musical experiences that illustrate complex collisions between music and value around the world. Whether writing about homemade sound-systems in tropical clubs in Brooklyn, or about shellac preservation at the Arab Music Archiving and Research Foundation in Beirut, Clayton considers the technologies by which we make — and place value on — musical sounds in “a world where worth is created in radically different ways from what the market teaches us” (24).
Uproot is a narrative about the ways working musicians experience globalization. “Our music seems to sound the way global capital is — liquid, international, porous, and sped up,” the author writes (16). This homology between sound and economic processes echoes the theories of sociologists like Anthony Giddens and the late Zygmunt Bauman, both of whom argue that modern life is characterized by fluidity and fragmentation: employment is precarious, experience is mediated, and ethical decisions are full of ambiguity. These ideas clearly inspire Clayton’s narrative; that said, Uproot is not an academic publication. As Atossa Araxia Abrahamian writes at the Nation, the book evades genre, “at once travelogue and cultural ethnography, pop philosophy and memoir, a guide to contemporary music and a fanzine.”
The book begins with a discussion of the history of Auto-Tune. While Clayton’s claim that Auto-Tune was the “first truly new sound effect of the internet era” might be overstated, his distinction between “corrective Auto-Tune” and “cosmetic Auto-Tune” is useful, the first of many moments of clarity in parsing the ways we use and mis-use musical media today. “The robot voice signifies differently everywhere you go,” he writes, an observation that becomes central to the book (49). By refusing to take a deterministic stance toward technology, Clayton empowers the musicians he writes about, acknowledging the ways in which artists mold trends to their own regional and local purposes. Of collaboration with a violinist in Morocco, Clayton writes: “We may have thought similarly, yet our ‘default settings’ were so far apart as to be almost incompatible” (185).
Uproot offers intimate insights into a range of tools and techniques of production, such as compression artifacts, “refixes,” and dozens of music-making interfaces, including Clayton’s own “music software-as-art project,” Sufi Plug Ins.
Even language itself is conceived as a form of technological mediation, as when Clayton compares Arabizi — a phonetic spelling of colloquial Arabic — to the hybrid sounds of mahraganat music that the language is used to describe. Of these “wandering genealogies” that emerge from international conversations, Clayton suggests that any hybrid genre we can imagine likely already exists: “Accordions and African techno? It’s called funaná” (102). The book describes at least a dozen other music traditions and microgenres (some very old, some just coalesced), from dabke to zar, each the product of a unique fusion of vocabularies.
Clayton on Mazaher (182): “Umm Sameh, Umm Hassan, and Nour el Sabah: these three women are some of the only people in Egypt keeping zar alive.”
Clayton’s own prose style, replete with metaphor and fluent in informal language, mirrors the ethos of music production he explores in the book: eclectic, energetic, and bursting with detail. What better way to describe Auto-Tune’s effect than as the liquefaction of sound into a “bright neon stream, as if a dial-up modem and a river have fallen in love” (53)? Clayton’s technological travelogue extends beyond aural sensation alone. This is a story of “sidewalk vendors, radios, mosque loudspeakers” (106), but it is just as much about “jerk chicken, fish tea, goatskin soup” (73). When Clayton describes his surroundings, we can touch the orange blossoms and smell the cigarettes.
The book’s recurrent question is how DJ practices in different locations are both constrained and inspired by financial flows. In any context, Clayton argues, “[m]oney runs to the people with the least imagination” (24). Early on, he establishes this view that musical experience is priceless, more valuable than any profit derived from rhythms of supply and demand, which reward the wrong people. That said, Clayton isn’t naive about musicians’ inevitable need for income, and throughout the text, readers are asked to inhabit ethical dilemmas that artists encounter throughout the world. At one point, Clayton describes his own moral quandary when asked to perform in front of a giant Red Bull logo, a “glowing lump of techno-fascist DJ furniture.” Later, Clayton critiques the hegemony of “Red Bull patronage” and similar systems of support for artists who are desperate for funding (121). He makes clear his disdain for corporate sponsors, companies that “appear generous as they let us know that our music is literally worthless to them” (123).
A tradeoff emerges between pragmatism and idealism. Clayton pokes holes in the empty rhetoric of “authenticity” that marketers encourage and exploit, even as we sense that he hasn’t yet relinquished his belief in something essentially good about the human spirit. Listening is a powerful social practice that, in Clayton’s view, gives true meaning to music in a global economy that otherwise undervalues it. “The heavier the workaday grind to escape from, the more a party transports us” (73), he writes, suggesting that listeners extract their own surplus value.
At times, Clayton’s observations could benefit from an engagement with ethnographic methods that can help mitigate fieldwork biases. For example, although the book does involve open discussions of gendered inequalities, they are limited in scope. At one point Clayton calls attention to “macho wrangling over propriety and womanhood” among managers and producers in Agadir, Morocco (52); he describes his own futile attempt to acquire a frank interview with female singers amid the patriarchal structure there. But despite Clayton’s awareness of gendered power dynamics, he does not critique the male musicians and producers who propagate such imbalances.
When female figures do appear, they are often treated as side characters. Rihanna, for example, is presented as exemplary of the business model of “singer as mouthpiece” (50), a person for whom others do the work. Clayton isn’t wrong to call attention to the large networks of employees that work behind any celebrity brand, but it is risky to do so at the expense of female workers, especially in the midst of a book that elsewhere describes women as decoration for the musical environments in which men perform what are presumably more important tasks. “Naked girls on pedestals [who] got their bodies painted” (19), “photoshopped young women” (49) and “demure girls” (49) all set scenes for tales of male creativity. This is not to critique how some women may choose to participate in music scenes, but rather to point out that women’s concerns and perspectives are not Clayton’s focus in these passages, nor in much of the book.
On Berber Auto-tune star Saadia Tihihit (49-50): “Like Justin Bieber or any child groomed to be a media star, Saadia Tihihit occupies a place at least initially defined more by the commercial strategies of those around her than by any desire for artistic autonomy.”
Similarly, Clayton’s conception of music and global inequality is sometimes uneven. Drawing stark divisions between the “civilized” and otherwise, he resorts to clichéd language when he writes of “backwater Uzbekistan” (31) and “war-torn Africa” (81). When he describes towns and villages near Casablanca where “ancient rhythms of life still hold sway” (33), he reproduces exoticizing tropes of African music. Elsewhere in the book, Clayton addresses musical accusations of fetishism, stating: “I know that Africans and blacks have been fetishized for centuries now, perhaps millennia. Who cares? You simply exist in all your complexity and let them deal with it. Fetishism is so vague” (84). He also critiques what he calls the “spectacle of a so-called ancient culture” (99) that is often at the heart of “world music” scenes, but then describes Appalachian musical performance as “the old-timey way with banjos and fiddles and washtub percussion” (32), opposing these practices against technological advancement, a false dichotomy that ethnomusicologists work to complicate, if not avoid.
Clayton brings these issues to a head during the book’s extensive discussion of “world music” as a marketing category. His commentary on the conundrums of appropriation surrounding figures such as Paul Simon, M.I.A., and Moby feels familiar, but he surpasses the usual analysis of these common case studies with more personal insights into “world music,” beginning with crate-digging excursions at record shops with deep international selections, such as the now-defunct RRRecords in Lowell, Massachusetts. Clayton contrasts his own on-foot exploration of foreign sounds with what he calls “World Music 2.0,” an internet-driven network of musical discovery based around the commodification of information and attention, in which middlemen reign supreme. His ambivalence is exemplified by this claim: “At its worst, World Music 2.0 offers the clubland equivalent of a package vacation. At its best, it propels some of the most exciting music in the world” (104-105).
The book’s ideas occasionally undermine themselves, but there is no question that the author ultimately intends to advocate for people on the margins. As Max Pearl has noted at the LA Times, Clayton consistently defends lo-fi, lo-tech, and lo-res sonic expression — that which is “distorted, homespun, libidinous” (80) — as valuable in its own right. Further, Sukhdev Sandhu has suggested at the Guardian that the book’s attention to homologies between “the movement of sounds and of migrant bodies” serves to recognize the struggles of global refugees and affirm their humanity.
Among Uproot’s many mentions of transport, readers never receive a clear statement about what, precisely, the relationship between music and motion is, or how exactly value emerges from that pairing. Rather than a weakness of the book, however, maybe such equivocation should be taken as an accurate reflection of the nebulous circumstances in which many of us find ourselves — creators and listeners who are regularly uprooted, usually at the mercy of those whom the money follows. Faced with this precarity, let Clayton’s enthusiasm for all sounds ground you.
Uproot is accompanied by an online Listening Guide that includes audio and visual examples of music from the book: http://www.uprootbook.com.
Elizabeth Newton is a doctoral candidate in musicology. She has written for The New Inquiry, Tiny Mix Tapes, Real Life Magazine, the Quietus, and Leonardo Music Journal. Her research interests include musico-poetics, fidelity and reproduction, and affective histories of musical media. Her dissertation, in progress, is about “affective fidelity” in audio and print culture of the 1990s.
REWIND!…If you liked this post, you may also dig:
SO! Reads: Dolores Inés Casillas’s ¡Sounds of Belonging!–Monica De La Torre
SO! Reads: Roshanak Khesti’s Modernity’s Ear–Shayna Silverstein
This is the final article in Sounding Out!‘s April Forum on “Sound and Technology.” Every Monday this month, you’ve heard new insights on this age-old pairing from the likes of Sounding Out! veteranos Aaron Trammell and Primus Luta along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks have shared their thinking about everything from Auto-tune to techie manifestos. Today, Marshall helps us understand just why we want to shift pitch-time so darn bad. Wait, let me clean that up a little bit. . .so darn badly. . .no wait, run that back one more time. . .jjuuuuust a little bit more. . .so damn badly. Whew! There! Perfect!–JS, Editor-in-Chief
A recording engineer once told me a story about a time when he was tasked with “tuning” the lead vocals from a recording session (identifying details have been changed to protect the innocent). Polishing-up vocals is an increasingly common job in the recording business, with some dedicated vocal producers even making it their specialty. Being able to comp, tune, and repair the timing of a vocal take is now a standard skill set among engineers, but in this case things were not going smoothly. Whereas singers usually tend towards being either consistently sharp or flat (“men go flat, women go sharp” as another engineer explained), in this case the vocalist was all over the map, making it difficult to always know exactly what note they were even trying to hit. Complicating matters further was the fact that this band had a decidedly lo-fi, garage-y reputation, making your standard-issue, Glee-grade tuning job decidedly inappropriate.
Undaunted, our engineer pulled up the Auto-Tune plugin inside Pro-Tools and set to work tuning the vocal, to use his words, “artistically” – that is, not perfectly, but enough to keep it from being annoyingly off-key. When the band heard the result, however, they were incensed – “this sounds way too good! Do it again!” The engineer went back to work, this time tuning “even more artistically,” going so far as to pull the singer’s original performance out of tune here and there to compensate for necessary macro-level tuning changes elsewhere.
The product of the tortuous process of tuning and re-tuning apparently satisfied the band, but the story left me puzzled… Why tune the track at all? If the band was so committed to not sounding overproduced, why go to such great lengths to make it sound like you didn’t mess with it? This, I was told, simply wasn’t an option. The engineer couldn’t in good conscience let the performance go un-tuned. Digital pitch correction, it seems, has become the rule, not the exception, so much so that the accepted solution for too much pitch correction is more pitch correction.
Since 1997, recording engineers have used Auto-Tune (or, more accurately, the growing pantheon of digital pitch correction plugins for which Auto-Tune, Kleenex-like, has become the household name) to fix pitchy vocal takes, lend T-Pain his signature vocal sound, and reveal the hidden vocal talents of political pundits. It’s the technology that can make the tone-deaf sing in key, make skilled singers perform more consistently, and make MLK sound like Akon. And at 17 years of age, “The Gerbil,” as some like to call Auto-Tune, is getting a little long in the tooth (certainly by meme standards). The next U.S. presidential election will include a contingent of voters who have never drawn air that wasn’t once rippled by Cher’s electronically warbling voice in the pre-chorus of “Believe.” A couple of years after that, the Auto-Tune patent will expire and its proprietary status will dissolve into the collective ownership of the public domain.
Growing pains aside, digital vocal tuning doesn’t seem to be leaving any time soon. Exact numbers are hard to come by, but it’s safe to say that the vast majority of commercial music produced in the last decade or so has been digitally tuned. Future Music editor Daniel Griffiths has ballpark-estimated that, as early as 2010, pitch correction was used in about 99% of recorded music. Reports of its death are thus premature at best. If pitch correction seems banal, that doesn’t mean it’s on the decline; rather, it’s a sign that we are increasingly accepting its underlying assumptions and internalizing the habits of thought and listening that go along with them.
Headlines in tech journalism are typically reserved for the newest, most groundbreaking gadgets. Often, though, the really interesting stuff only happens once a technology begins to lose its novelty, recede into the background, and quietly incorporate itself into fundamental ways we think about, perceive, and act in the world. Think, for example, about all the ways your embodied perceptual being has been shaped by and tuned-in to, say, the very computer or mobile device you’re reading this on. Setting value judgments aside for a moment, then, it’s worth thinking about where pitch correction technology came from, what assumptions underlie the way it works and how we work with it, and what it means that it feels like “old news.”
As is often the case with new musical technologies, digital pitch correction has been the target for no small amount of controversy and even hate. The list of indictments typically includes the homogenization of music, the devaluation of “actual talent,” and the destruction of emotional authenticity. Suffice to say, the technological possibility of ostensibly producing technically “pitch-perfect” performances has wreaked a fair amount of havoc on conventional ways of performing and evaluating music. As Primus Luta reminded us in his SO! piece on the powerful-yet-untranscribable “blue notes” that emerged from the idiosyncrasies of early hardware samplers, musical creativity is at least as much about digging-into and interrogating the apparent limits of a technology as it is about the successful removal of all obstacles to total control of the end result.
Paradoxically, it’s exactly in this spirit that others have come to the technology’s defense: Brian Eno, ever open to the unexpected creative agency of perplexing objects, credits the quantized sound of an overtaxed pitch corrector with renewing his interest in vocal performances. SO!’s own Osvaldo Oyola, channeling Walter Benjamin, has similarly offered a defense of Auto-Tune as a democratizing technology, one that both destabilizes conventional ideas about musical ability and allows everyone to sing in-tune, free from the “tyranny of talent and its proscriptive aesthetics.”
Jonathan Sterne, in his book MP3, offers an alternative to normative accounts of media technology (in this case, narratives either of the decline or rise of expressive technological potential) in the form of “compression histories” – accounts of how media technologies and practices directed towards increasing their efficiency, economy, and mobility can take on unintended cultural lives that reshape the very realities they were supposed to capture in the first place. The algorithms behind the MP3 format, for example, were based in part on psychoacoustic research into the nature of human hearing, framed primarily around the question of how many human voices the telephone company could fit into a limited bandwidth electrical cable while preserving signal intelligibility. The way compressed music files sound to us today, along with the way in which we typically acquire (illegally) and listen to them (distractedly), is deeply conditioned by the practical problems of early telephony. The model listener extracted from psychoacoustic research was created in an effort to learn about the way people listen. Over time, however, through our use of media technologies that have a simulated psychoacoustic subject built-in, we’ve actually learned collectively to listen like a psychoacoustic subject.
Pitch-time manipulation runs largely in parallel to Sterne’s bandwidth compression story. The ability to change a recorded sound’s pitch independently of its playback rate had its origins not in the realm of music technology, but in efforts to time-compress signals for faster communication. Instead of reducing a signal’s bandwidth, pitch manipulation technologies were pioneered to reduce the time required to push the message through the listener’s ears and into their brain. As early as the 1920s, the mechanism of the rotating playback head was being used to manipulate pitch and time interchangeably. By spinning a continuous playback head relative to the motion of the magnetic tape, researchers in electrical engineering, educational psychology, and pedagogy of the blind found that they could increase playback rate of recorded voices without turning the speakers into chipmunks. Alternatively, they could rotate the head against a static piece of tape and allow a single moment of recorded sound to unfold continuously in time – a phenomenon that influenced the development of a quantum theory of information.
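The coupling that the rotating head undid can be illustrated with a toy digital sketch (my own illustration, not a model of any historical device). Plain resampling, the digital equivalent of changing tape speed, alters pitch and duration together; a crude granular overlap-add scheme, which reads short grains of the original waveform and lays them down at a different rate (loosely analogous to the spinning head sweeping across moving tape), changes duration while leaving the local waveform, and hence the pitch, roughly intact:

```python
import numpy as np

SR = 8000  # sample rate in Hz (an arbitrary choice for the demo)

def resample(x, rate):
    """Naive variable-speed playback: speeding up by `rate` shortens
    the signal AND raises its pitch by the same factor."""
    idx = np.arange(0, len(x) - 1, rate)
    return np.interp(idx, np.arange(len(x)), x)

def granular_stretch(x, rate, grain=512, hop=256):
    """Crude overlap-add time compression: grains are read from the
    input `rate` times faster than they are written to the output,
    so duration shrinks while the local waveform (and hence the
    perceived pitch) stays roughly the same."""
    window = np.hanning(grain)
    n_out = int(len(x) / rate)
    y = np.zeros(n_out + grain)
    pos = 0
    while pos < n_out:
        start = int(pos * rate)
        g = x[start:start + grain]
        if len(g) < grain:
            break
        y[pos:pos + grain] += g * window
        pos += hop
    return y[:n_out]

def dominant_freq(x):
    """Frequency of the largest FFT magnitude bin, in Hz."""
    spectrum = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(len(x), 1 / SR)[np.argmax(spectrum)]

t = np.arange(SR) / SR                   # one second of audio
tone = np.sin(2 * np.pi * 220 * t)       # a 220 Hz test tone

fast = resample(tone, 2.0)               # half as long, an octave up
stretched = granular_stretch(tone, 2.0)  # half as long, still ~220 Hz
```

Double-speed resampling halves the length and doubles the dominant frequency (220 Hz becomes 440 Hz, the chipmunk effect), while the granular version halves the length but keeps the tone near 220 Hz, at the cost of audible grain artifacts that real pitch-time processors work hard to suppress.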
In the early days of recorded sound some people had found a metaphor for human thought in the path of a phonograph’s needle. When the needle became a head and that head began to spin, ideas about how we think, listen, and communicate followed suit: in 1954 Grant Fairbanks, the director of the University of Illinois’ Speech Research Laboratory, put forth an influential model of the speech-hearing mechanism as a system where the speaker’s conscious intention of what to say next is analogized to a tape recorder full of instructions, its drive “alternately started and stopped, and when the tape is stationary a given unit of instruction is reproduced by a moving scanning head” (136). Pitch-time changing was more a model for thinking than it was for singing, and its imagined applications were thus primarily non-musical.
Take for example the Eltro Information Rate Changer. The first commercially available dedicated pitch-time changer, the Eltro advertised its uses as including “pitch correction of helium speech as found in deep sea; Dictation speed testing for typing and steno; Transcribing of material directly to typewriter by adjusting speed of speech to typing ability; medical teaching of heart sounds, breathing sounds etc. by slow playback of these rapid occurrences.” (It was also, incidentally, used by Kubrick to produce the eerily deliberate vocal pacing of HAL 9000.) In short, for the earliest “pitch-time correction” technologies, the pitch itself was largely a secondary concern, of interest primarily because it was desirable for the sake of intelligibility to pitch-change time-altered sounds into a more normal-sounding frequency range.
This coupling of time compression with pitch changing continued well into the era of digital processing. The Eventide Harmonizer, one of the first digital hardware pitch shifters, was initially used to pitch-correct episodes of “I Love Lucy” that had been time-compressed to free up broadcast time for advertising. Similar broadcast time compression techniques have proliferated and become common in radio and television (see, for example, David Foster Wallace’s account of the “cashbox” compressor in his essay on an LA talk radio station). Speed listening technology initially developed for the visually impaired has similarly become a way of producing the audio “fine print” at the end of radio advertisements.
Though the popular conversation about Auto-Tune often leaves this part out, it’s hardly a secret that pitch-time correction is as much about saving time as it is about hitting the right note. As Auto-Tune inventor Andy Hildebrand put it,
[Auto-Tune’s] largest effect in the community is it’s changed the economics of sound studios…Before Auto-Tune, sound studios would spend a lot of time with singers, getting them on pitch and getting a good emotional performance. Now they just do the emotional performance, they don’t worry about the pitch, the singer goes home, and they fix it in the mix.
Whereas early pitch-shifters aimed to speed-up our consumption of recorded voices, the ones now used in recording are meant to reduce the actual time spent tracking musicians in studio. One of the implications of this framing is that emotion, pitch, and the performer take on a very particular relationship, one we can find sketched out in the Auto-Tune patent language:
Voices or instruments are out of tune when their pitch is not sufficiently close to standard pitches expected by the listener, given the harmonic fabric and genre of the ensemble. When voices or instruments are out of tune, the emotional qualities of the performance are lost. Correcting intonation, that is, measuring the actual pitch of a note and changing the measured pitch to a standard, solves this problem and restores the performance. (Emphasis mine. Similar passages can be found in Auto-Tune’s technical documentation.)
In the world according to Auto-Tune, the engineer is in the business of getting emotional signals from place to place. Emotion is the message, and pitch is the medium. Incorrect (i.e. unexpected) pitch therefore causes the emotion to be “lost.” While this formulation may strike some people as strange (for example, does it mean that we are unable to register the emotional qualities of a performance from singers who can’t hit notes reliably? Is there no emotionally expressive role for pitched performances that defy their genre’s expectations?), it makes perfect sense within the current division of labor and affective economy of the recording studio. It’s a framing that makes it possible, intelligible, and at least somewhat compulsory to have singers “express emotion” as a quality distinct from the notes they hit and have vocal producers fix up the actual pitches after the fact. Both this emotional model of the voice and the model of the psychoacoustic subject are useful frameworks for the particular purposes they serve. The trick is to pay attention to the ways we might find ourselves bending to fit them.
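The patent’s first step, “measuring the actual pitch of a note,” is itself a classic signal-processing problem. One standard textbook approach (an illustration of the general idea, not a description of Antares’s actual detector) is autocorrelation: find the time lag, within the plausible vocal range, at which the signal most resembles a delayed copy of itself; that lag is one period of the fundamental.

```python
import numpy as np

SR = 8000  # sample rate in Hz (an arbitrary choice for the demo)

def estimate_pitch(x, fmin=80.0, fmax=1000.0):
    """Naive autocorrelation pitch estimator: the lag (restricted to
    the fmin..fmax range) with the strongest self-similarity
    corresponds to one period of the fundamental frequency."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    lo, hi = int(SR / fmax), int(SR / fmin)
    best_lag = lo + np.argmax(ac[lo:hi])
    return SR / best_lag

# A quarter second of a 250 Hz tone (period = exactly 32 samples)
tone = np.sin(2 * np.pi * 250 * np.arange(2000) / SR)
```

Real vocal signals are far messier than a test tone (octave errors, vibrato, breathiness), which is part of why commercial detectors are proprietary; but the basic move of turning a waveform into a single “measured pitch” that can then be judged against a standard is exactly the one the patent language takes for granted.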
Owen Marshall is a PhD candidate in Science and Technology Studies at Cornell University. His dissertation research focuses on the articulation of embodied perceptual skills, technological systems, and economies of affect in the recording studio. He is particularly interested in the history and politics of pitch-time correction, cybernetics, and ideas and practices about sensory-technological attunement in general.
Featured image: “Epic iPhone Auto-Tune App” by Flickr user Photo Giddy, CC BY-NC 2.0
REWIND!…If you liked this post, you may also dig:
“From the Archive #1: It is art?”-Jennifer Stoever
“Garageland! Authenticity and Musical Taste”-Aaron Trammell
I am here today to defend auto-tune. I may be late to the party, but if you watched Lil Wayne’s recent schizophrenic performance on MTV’s VMAs you know that auto-tune isn’t going anywhere. The thoughtful and melodic opening song “How to Love” clashed harshly with the expletive-laden guitar-rocking “John” Weezy followed with. Regardless of how you judge that disjunction, what strikes me about the performance is that auto-tune made Weezy’s range possible. The studio magic transposed onto the live moment dared auto-tune’s many haters to revise their criticisms about the relationship between the live and the recorded. It suggested that this technology actually opens up possibilities, rather than marking a limitation.
Auto-tune is mostly synonymous with the intentionally mechanized vocal distortion effect of singers like T-Pain, but it has actually been used for clandestine pitch correction in the studio for over 15 years. Cher’s voice on 1998’s “Believe” is probably the earliest well-known use of the device to distort rather than correct, though at the time her producers claimed to have used a vocoder pedal, probably in an attempt to hide what was then a trade secret—the Antares Auto-Tune machine is widely used to correct imperfections in studio singing. The corrective function of auto-tune is more difficult to notice than the obvious distortive effect because, when used as intended, auto-tuning is an inaudible process. It bends flubbed or off-key notes to the nearest true semitone to create the effect of perfect singing every time. The more off-key a singer is, the harder it is to hide the use of the technology. Furthermore, to make melody out of talking or rapping, the sound has to be pushed to the point of sounding robotic.
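That nearest-semitone logic is simple to state mathematically, assuming twelve-tone equal temperament and the conventional A4 = 440 Hz reference. The sketch below shows only this core quantization step; the actual Antares plugin also performs pitch detection, scale selection, and adjustable “retune speed,” none of which is modeled here:

```python
import math

A4 = 440.0  # reference pitch in Hz (the conventional choice)

def snap_to_semitone(freq_hz):
    """Quantize a detected frequency to the nearest equal-tempered
    semitone: twelve semitones per octave, each a factor of 2**(1/12)."""
    semitones = 12 * math.log2(freq_hz / A4)  # signed distance from A4
    return A4 * 2 ** (round(semitones) / 12)  # round, convert back

# A singer aiming at A4 but landing 30 cents (0.3 semitone) flat:
flat = 440 * 2 ** (-30 / 1200)        # ~432.4 Hz
corrected = snap_to_semitone(flat)    # snapped back up to 440.0 Hz

# 60 cents flat is past the halfway point, so the note snaps a
# full semitone DOWN instead, to G#4 (~415.3 Hz):
wrong_note = snap_to_semitone(440 * 2 ** (-60 / 1200))
```

The second case illustrates the essay’s point about off-key singers: the further a sung note drifts from a semitone boundary, the larger (and eventually the more wrong-headed) the correction jump, which is exactly when the processing stops being inaudible.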
The dismissal of auto-tuned acts is usually made in terms of a comparison between the modified recording and what is possible in live performance, like indie folk singer Neko Case’s extended tongue-lashing in Stereogum. Auto-tune makes it so that anyone can sing whether they have talent or not, or so the criticism goes, putting determination of talent before evaluation of the outcome. This simple critique conveniently ignores how recording technology has long shaped our expectations in popular music and for live performance. Do we consider how many takes were required for Patti LaBelle to record “Lady Marmalade” when we listen? Do we speculate on whether spliced tape made up for the effects of a fatiguing day of recording? Chances are that even your favorite and most gifted singer has benefited from some form of technology in recording their work. When someone argues that auto-tune allows anyone to sing, what they are really complaining about is that an illusion of authenticity has been dispelled. My question in response is: So what? Why would it be so bad if anyone could be a singer through Auto-tuning technology? What is really so threatening about its use?
As Walter Benjamin writes in “The Work of Art in the Age of Mechanical Reproduction,” the threat to art presented by mechanical reproduction emerges from the inability for its authenticity to be reproduced—but authenticity is a shibboleth. He explains that what is really threatened is the authority of the original; but how do we determine what is original in a field where the influences of live performance and record artifact are so interwoven? Auto-tune represents just another step forward in undoing the illusion of art’s aura. It is not the quality of art that is endangered by mass access to its creation, but rather the authority of cultural arbiters and the ideological ends they serve.
Auto-tune supposedly obfuscates one of the indicators of authenticity, imperfections in the work of art. However, recording technology already made error less notable as a sign of authenticity to the point where the near perfection of recorded music becomes the sign of authentic talent and the standard to which live performance is compared. We expect the artist to perform the song as we have heard it in countless replays of the single, ignoring that the corrective technologies of recording shaped the contours of our understanding of the song.
In this way, we can think of the audible auto-tune effect as actually re-establishing authenticity by making itself transparent. An auto-tuned song establishes its authority by casting into doubt the ability of any art to be truly authoritative and owning up to that lack. Listen to the auto-tuned hit “Blame It” by Jamie Foxx, featuring T-Pain, and note how their voices are made nearly indistinguishable by the auto-tune effect.
It might be the case that anyone is singing that song, but that doesn’t make it less bumping and less catchy—in fact, I’d argue the slippage makes it catchier. The auto-tuned voice is the sound of a democratic voice. There isn’t much precedent for actors becoming successful singers, but “Blame It” provides evidence of the transcendent power of auto-tune allowing anyone to participate in art and culture making. As Benjamin reminds us, “The fact that the new mode of participation first appeared in a disreputable form must not confuse the spectator.” The fact that “anyone” can do it increases possibilities and casts all-encompassing dismissal of auto-tune as reactionary and elitist.
Mechanical reproduction may “pry an object from its shell” and destroy its aura and authority–demonstrating the democratic possibilities in art as it is repurposed–but I contend that auto-tune goes one step further. It pries singing free from the tyranny of talent and its proscriptive aesthetics. It undermines the authority of the arbiters of talent and lets anyone potentially take part in public musical vocal expression. Even someone like Antoine Dodson can take part: his rant on the local news ended up a catchy internet hit thanks to the Songify project.
Auto-tune represents a democratic impulse in music. It is another step in the increasing access to cultural production, going beyond special classes of people in social or economic position to determine what is worthy. Sure, not everyone can afford the Antares Auto-Tune machine, but recent history has demonstrated that such technologies become increasingly affordable and more widely available. Rather than cold and soulless, the mechanized voice can give direct access to the pathos of melody when used by those whose natural talent is not for singing. Listen to Kanye West’s 808s & Heartbreak, or (again) Lil Wayne’s “How To Love.” These artists aren’t trying to get one over on their listeners, but just the opposite, they want to evoke an earnestness that they feel can only be expressed through the singing voice. Why would you want to resist a world where anyone could sing their hearts out?
Osvaldo Oyola is a regular contributor to Sounding Out! He is also an English PhD student at Binghamton University.