A Brief History of Auto-Tune

This is the final article in Sounding Out!‘s April Forum on “Sound and Technology.” Every Monday this month, you’ve heard new insights on this age-old pairing from the likes of Sounding Out! veteranos Aaron Trammell and Primus Luta along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks have shared their thinking about everything from Auto-Tune to techie manifestos. Today, Marshall helps us understand just why we want to shift pitch-time so darn bad. Wait, let me clean that up a little bit. . .so darn badly. . .no wait, run that back one more time. . .jjuuuuust a little bit more. . .so damn badly. Whew! There! Perfect!–JS, Editor-in-Chief

A recording engineer once told me a story about a time when he was tasked with “tuning” the lead vocals from a recording session (identifying details have been changed to protect the innocent). Polishing up vocals is an increasingly common job in the recording business, with some dedicated vocal producers even making it their specialty. Being able to comp, tune, and repair the timing of a vocal take is now a standard skill set among engineers, but in this case things were not going smoothly. Whereas singers usually tend towards being either consistently sharp or flat (“men go flat, women go sharp,” as another engineer explained), in this case the vocalist was all over the map, making it difficult to know exactly what note they were even trying to hit. Complicating matters further was the fact that this band had a decidedly lo-fi, garage-y reputation, making your standard-issue, Glee-grade tuning job entirely inappropriate.

Undaunted, our engineer pulled up the Auto-Tune plugin inside Pro Tools and set to work tuning the vocal, to use his words, “artistically” – that is, not perfectly, but enough to keep it from being annoyingly off-key. When the band heard the result, however, they were incensed – “this sounds way too good! Do it again!” The engineer went back to work, this time tuning “even more artistically,” going so far as to pull the singer’s original performance out of tune here and there to compensate for necessary macro-level tuning changes elsewhere.

"Melodyne screencap" by Flickr user Ethan Hein, CC BY-NC-SA 2.0

“Melodyne screencap” by Flickr user Ethan Hein, CC BY-NC-SA 2.0

The product of the tortuous process of tuning and re-tuning apparently satisfied the band, but the story left me puzzled… Why tune the track at all? If the band was so committed to not sounding overproduced, why go to such great lengths to make it sound like you didn’t mess with it? This, I was told, simply wasn’t an option. The engineer couldn’t in good conscience let the performance go un-tuned. Digital pitch correction, it seems, has become the rule, not the exception, so much so that the accepted solution for too much pitch correction is more pitch correction.

Since 1997, recording engineers have used Auto-Tune (or, more accurately, the growing pantheon of digital pitch correction plugins for which Auto-Tune, Kleenex-like, has become the household name) to fix pitchy vocal takes, lend T-Pain his signature vocal sound, and reveal the hidden vocal talents of political pundits. It’s the technology that can make the tone-deaf sing in key, make skilled singers perform more consistently, and make MLK sound like Akon. And at 17 years of age, “The Gerbil,” as some like to call Auto-Tune, is getting a little long in the tooth (certainly by meme standards). The next U.S. presidential election will include a contingent of voters who have never drawn air that wasn’t once rippled by Cher’s electronically warbling voice in the pre-chorus of “Believe.” A couple of years after that, the Auto-Tune patent will expire and its proprietary status will dissolve into the collective ownership of the public domain.

Growing pains aside, digital vocal tuning doesn’t seem to be going away any time soon. Exact numbers are hard to come by, but it’s safe to say that the vast majority of commercial music produced in the last decade or so has been digitally tuned. Future Music editor Daniel Griffiths has ballpark-estimated that, as early as 2010, pitch correction was used in about 99% of recorded music. Reports of its death are thus premature at best. If pitch correction seems banal, that doesn’t mean it’s on the decline; rather, it’s a sign that we are increasingly accepting its underlying assumptions and internalizing the habits of thought and listening that go along with them.

Headlines in tech journalism are typically reserved for the newest, most groundbreaking gadgets. Often, though, the really interesting stuff only happens once a technology begins to lose its novelty, recede into the background, and quietly incorporate itself into the fundamental ways we think about, perceive, and act in the world. Think, for example, about all the ways your embodied perceptual being has been shaped by and tuned in to, say, the very computer or mobile device you’re reading this on. Setting value judgments aside for a moment, then, it’s worth thinking about where pitch correction technology came from, what assumptions underlie the way it works and how we work with it, and what it means that it feels like “old news.”

"Anti-Tune symbol"

“Anti-Tune symbol”

As is often the case with new musical technologies, digital pitch correction has been the target of no small amount of controversy and even hate. The list of indictments typically includes the homogenization of music, the devaluation of “actual talent,” and the destruction of emotional authenticity. Suffice it to say, the technological possibility of producing ostensibly “pitch-perfect” performances has wreaked a fair amount of havoc on conventional ways of performing and evaluating music. As Primus Luta reminded us in his SO! piece on the powerful-yet-untranscribable “blue notes” that emerged from the idiosyncrasies of early hardware samplers, musical creativity is at least as much about digging into and interrogating the apparent limits of a technology as it is about the successful removal of all obstacles to total control of the end result.

Paradoxically, it’s exactly in this spirit that others have come to the technology’s defense: Brian Eno, ever open to the unexpected creative agency of perplexing objects, credits the quantized sound of an overtaxed pitch corrector with renewing his interest in vocal performances. SO!’s own Osvaldo Oyola, channeling Walter Benjamin, has similarly offered a defense of Auto-Tune as a democratizing technology, one that both destabilizes conventional ideas about musical ability and allows everyone to sing in-tune, free from the “tyranny of talent and its proscriptive aesthetics.”

"Audiodatenkompression: Manowar, The Power of Thy Sword" by Wikimedia user Moehre1992, CC BY-SA 3.0

“Audiodatenkompression: Manowar, The Power of Thy Sword” by Wikimedia user Moehre1992, CC BY-SA 3.0

Jonathan Sterne, in his book MP3, offers an alternative to normative accounts of media technology (in this case, narratives either of the decline or rise of expressive technological potential) in the form of “compression histories” – accounts of how media technologies and practices directed towards increasing their efficiency, economy, and mobility can take on unintended cultural lives that reshape the very realities they were supposed to capture in the first place. The algorithms behind the MP3 format, for example, were based in part on psychoacoustic research into the nature of human hearing, framed primarily around the question of how many human voices the telephone company could fit into a limited-bandwidth electrical cable while preserving signal intelligibility. The way compressed music files sound to us today, along with the way in which we typically acquire (illegally) and listen to them (distractedly), is deeply conditioned by the practical problems of early telephony. The model listener extracted from psychoacoustic research was created in an effort to learn about the way people listen. Over time, however, through our use of media technologies that have a simulated psychoacoustic subject built-in, we’ve actually learned collectively to listen like a psychoacoustic subject.

Pitch-time manipulation runs largely in parallel to Sterne’s bandwidth compression story. The ability to change a recorded sound’s pitch independently of its playback rate had its origins not in the realm of music technology, but in efforts to time-compress signals for faster communication. Instead of reducing a signal’s bandwidth, pitch manipulation technologies were pioneered to reduce the time required to push the message through the listener’s ears and into their brain. As early as the 1920s, the mechanism of the rotating playback head was being used to manipulate pitch and time interchangeably. By spinning a continuous playback head relative to the motion of the magnetic tape, researchers in electrical engineering, educational psychology, and pedagogy of the blind found that they could increase the playback rate of recorded voices without turning the speakers into chipmunks. Alternatively, they could rotate the head against a static piece of tape and allow a single moment of recorded sound to unfold continuously in time – a phenomenon that influenced the development of a quantum theory of information.
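To make the rotating-head trick concrete: the spinning head is, in effect, re-reading short overlapping stretches of tape and splicing them together at a new rate. Below is a minimal digital sketch of the same idea, using naive overlap-add time stretching; this is my own illustration, not any historical device’s mechanism, and real implementations add waveform or phase alignment to avoid audible artifacts.

```python
import numpy as np

def time_stretch_ola(signal, rate, frame_len=2048, hop_out=512):
    """Change a recording's duration without changing its pitch,
    much as a rotating playback head re-reads overlapping stretches
    of tape. rate > 1 shortens the output; rate < 1 lengthens it."""
    hop_in = int(hop_out * rate)  # how far each grain advances through the input
    window = np.hanning(frame_len)
    n_frames = (len(signal) - frame_len) // hop_in
    out = np.zeros(n_frames * hop_out + frame_len)
    for i in range(n_frames):
        grain = signal[i * hop_in : i * hop_in + frame_len] * window
        out[i * hop_out : i * hop_out + frame_len] += grain
    return out

# A 2-second, 440 Hz tone played back 1.5x faster: the result is
# about 1.35 seconds long but still oscillates at roughly 440 Hz,
# i.e., sped-up speech without the chipmunk effect.
sr = 44100
t = np.arange(2 * sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
faster = time_stretch_ola(tone, rate=1.5)
```

Resample the stretched output back to its original duration and you get the complementary operation, a pitch shift; that interchangeability of pitch and time is exactly what the rotating head mechanized.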

In the early days of recorded sound, some people found a metaphor for human thought in the path of a phonograph’s needle. When the needle became a head and that head began to spin, ideas about how we think, listen, and communicate followed suit: in 1954, Grant Fairbanks, the director of the University of Illinois’ Speech Research Laboratory, put forth an influential model of the speech-hearing mechanism as a system where the speaker’s conscious intention of what to say next is analogized to a tape recorder full of instructions, its drive “alternately started and stopped, and when the tape is stationary a given unit of instruction is reproduced by a moving scanning head” (136). Pitch-time changing was more a model for thinking than it was for singing, and its imagined applications were thus primarily non-musical.

Take, for example, the Eltro Information Rate Changer. The first commercially available dedicated pitch-time changer, the Eltro advertised its uses as including “pitch correction of helium speech as found in deep sea; Dictation speed testing for typing and steno; Transcribing of material directly to typewriter by adjusting speed of speech to typing ability; medical teaching of heart sounds, breathing sounds etc. by slow playback of these rapid occurrences.” (It was also, incidentally, used by Kubrick to produce the eerily deliberate vocal pacing of HAL 9000.) In short, for the earliest “pitch-time correction” technologies, the pitch itself was largely a secondary concern, of interest primarily because it was desirable, for the sake of intelligibility, to pitch-change time-altered sounds into a more normal-sounding frequency range.

This coupling of time compression with pitch changing continued well into the era of digital processing. The Eventide Harmonizer, one of the first digital hardware pitch shifters, was initially used to pitch-correct episodes of “I Love Lucy” that had been time-compressed to free up broadcast time for advertising. Similar broadcast time compression techniques have proliferated and become common in radio and television (see, for example, David Foster Wallace’s account of the “cashbox” compressor in his essay on an LA talk radio station). Speed-listening technology initially developed for the visually impaired has similarly become a way of producing the audio “fine print” at the end of radio advertisements.

"H910 Harmonizer" by Wikimedia user Nalzatron, CC BY-SA 3.0

“H910 Harmonizer” by Wikimedia user Nalzatron, CC BY-SA 3.0

Though the popular conversation about Auto-Tune often leaves this part out, it’s hardly a secret that pitch-time correction is as much about saving time as it is about hitting the right note. As Auto-Tune inventor Andy Hildebrand put it,

[Auto-Tune’s] largest effect in the community is it’s changed the economics of sound studios…Before Auto-Tune, sound studios would spend a lot of time with singers, getting them on pitch and getting a good emotional performance. Now they just do the emotional performance, they don’t worry about the pitch, the singer goes home, and they fix it in the mix.

Whereas early pitch-shifters aimed to speed up our consumption of recorded voices, the ones now used in recording are meant to reduce the actual time spent tracking musicians in the studio. One of the implications of this framing is that emotion, pitch, and the performer take on a very particular relationship, one we can find sketched out in the Auto-Tune patent language:

Voices or instruments are out of tune when their pitch is not sufficiently close to standard pitches expected by the listener, given the harmonic fabric and genre of the ensemble. When voices or instruments are out of tune, the emotional qualities of the performance are lost. Correcting intonation, that is, measuring the actual pitch of a note and changing the measured pitch to a standard, solves this problem and restores the performance. (Emphasis mine. Similar passages can be found in Auto-Tune’s technical documentation.)

In the world according to Auto-Tune, the engineer is in the business of getting emotional signals from place to place. Emotion is the message, and pitch is the medium. Incorrect (i.e., unexpected) pitch therefore causes the emotion to be “lost.” While this formulation may strike some people as strange (for example, does it mean that we are unable to register the emotional qualities of a performance from singers who can’t hit notes reliably? Is there no emotionally expressive role for pitched performances that defy their genre’s expectations?), it makes perfect sense within the current division of labor and affective economy of the recording studio. It’s a framing that makes it possible, intelligible, and at least somewhat compulsory to have singers “express emotion” as a quality distinct from the notes they hit and to have vocal producers fix up the actual pitches after the fact. Both this emotional model of the voice and the model of the psychoacoustic subject are useful frameworks for the particular purposes they serve. The trick is to pay attention to the ways we might find ourselves bending to fit them.
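The patent’s logic (measure the actual pitch of a note, then move it to the nearest standard pitch) is simple enough to sketch in a few lines. What follows is a deliberately naive illustration, assuming autocorrelation pitch detection and equal-temperament snapping; Auto-Tune’s actual detection and resynthesis methods are proprietary and far more sophisticated, and the function names here are my own.

```python
import numpy as np

def detect_pitch(frame, sr, fmin=80.0, fmax=1000.0):
    """Estimate a frame's fundamental frequency by autocorrelation,
    a textbook method standing in for whatever Auto-Tune really does."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)  # plausible period range, in samples
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

def nearest_standard_pitch(freq, a4=440.0):
    """Snap a frequency to the nearest equal-tempered semitone."""
    semitones = round(12 * np.log2(freq / a4))  # distance from A4
    return a4 * 2 ** (semitones / 12)

def correction_ratio(frame, sr):
    """The transposition ratio a pitch-shifter would apply so that the
    measured pitch lands on the 'standard pitch expected by the
    listener,' in the patent's terms."""
    f0 = detect_pitch(frame, sr)
    return nearest_standard_pitch(f0) / f0

# A slightly flat A (436 Hz) gets nudged up to 440 Hz:
sr = 44100
t = np.arange(4096) / sr
flat_a = np.sin(2 * np.pi * 436 * t)
print(correction_ratio(flat_a, sr))  # ~1.008
```

Note what even this toy version takes for granted: that every note has one correct target, and that correctness is a matter of measurement. The aesthetic questions are settled before the first sample is processed.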

Owen Marshall is a PhD candidate in Science and Technology Studies at Cornell University. His dissertation research focuses on the articulation of embodied perceptual skills, technological systems, and economies of affect in the recording studio. He is particularly interested in the history and politics of pitch-time correction, cybernetics, and ideas and practices about sensory-technological attunement in general. 

Featured image: “Epic iPhone Auto-Tune App” by Flickr user Photo Giddy, CC BY-NC 2.0

REWIND!…If you liked this post, you may also dig:

“From the Archive #1: It is art?”-Jennifer Stoever

“Garageland! Authenticity and Musical Taste”-Aaron Trammell

“Evoking the Object: Physicality in the Digital Age of Music”-Primus Luta

Tomahawk Chopped and Screwed: The Indeterminacy of Listening

I’m happy to introduce the final post in Guest Editor Justin Burton‘s three-part series for SO!, “The Wobble Continuum.” I’ll leave Justin to recap the series and reflect on it a little in his article below, but first I want to express our appreciation to him for his thoughtful curation of this exciting series, the first in the new Thursday stream at Sounding Out!. Thanks for getting the ball rolling!

Next month be sure to watch this space for a preview of sound at the upcoming Society for Cinema & Media Studies meeting in Seattle, and a new four part series on radio in Latin America by Guest Editor Tom McEnaney.

— Neil Verma, Special Editor for ASA/SCMS

I’m standing at a bus stop outside the Convention Center in downtown Indianapolis, whistling. The tune, “Braves,” is robust, a deep, oscillating comeuppance of the “Tomahawk Chop” melody familiar from my youth (the Braves were always on TBS). There’s a wobbly synthesizer down in the bass, a hi hat cymbal line pecking away at the Tomahawk Chop. This whistled remix of mine really sticks it to the original tune and the sports teams who capitalize on racist appropriations of indigenous cultures. All in all, it’s a sublime bit of musicality I’m bestowing upon the cold Indianapolis streets.

Until I become aware of the other person waiting for the bus. As I glance over at him, I can now hear my tune for what it is. The synthesizer and hi hat are all in my head, the bass nowhere to be heard. This isn’t the mix I intended, A Tribe Called Red’s attempt at defanging the Tomahawk Chop, at re-appropriating stereotypical sounds and spitting them back out on their own terms. Nope, this is just a guy on the street whistling those very stereotypes: it’s the Tomahawk Chop. I suddenly don’t feel like whistling anymore.

*****

As we conclude our Wobble Continuum guest series here at Sounding Out!, I want to think about the connective tissues binding together the previous posts from Mike D’Errico and Christina Giacona, joining A Tribe Called Red and the colonialist culture into which they release their music, and linking me to the guy at the bus stop who is not privy to the virtuosic sonic accompaniment in my head. In each case, I’ll pay attention to sound as material conjoining producers and consumers, and I’ll play with Karen Barad’s notion of performativity to hear the way these elements interact [Jason Stanyek and Ben Piekut also explore exciting possibilities from Barad in “Deadness” (TDR 54:1, 2010)].

"Sound Waves: Loud Volume" by Flickr user Tess Watson, CC BY 2.0

“Sound Waves: Loud Volume” by Flickr user Tess Watson, CC BY 2.0

Drawing from physicist Niels Bohr, Barad begins with the fact that matter is fundamentally indeterminate. This is formally laid out in the Heisenberg Uncertainty Principle, which notes that the more precisely we can determine (for instance) the position of a particle, the less we can say with certainty about its momentum (and vice versa). Barad points out that “‘position’ only has meaning when a rigid apparatus with fixed parts is used (eg, a ruler is nailed to a fixed table in the laboratory, thereby establishing a fixed frame of reference for establishing ‘position’)” (2003, 814).
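For reference (my addition, not Barad’s), the usual formal statement of that tradeoff relates the spreads in position and momentum:

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

Shrink Δx, the uncertainty in position, and Δp, the uncertainty in momentum, must grow to keep the product above the bound. The tradeoff is built into the formalism rather than being a flaw of any particular instrument, which is why the ruler nailed to the table matters so much to Barad’s reading: the measurement apparatus is part of what gets measured.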

This kind of indeterminacy is characteristic of sound, which vibrates along a cultural continuum, and which, in sliding back and forth along that continuum, allows us to tune into some information even as other information distorts or disappears. This can feel very limiting, but it can also be exhilarating, as what we are measuring are a variety of possibilities prepared to unfold before us as matter and sound become increasingly unpredictable and slippery. We can observe this continuum in the tissue connecting the previous posts in this series. In the first, Mike D’Errico tunes into the problematic hypermasculinity of brostep, pinpointing the ways music software interfaces can rehash tropes of control and dominance (Robin James has responded with productive expansions of these ideas), dropping some areas of music production right back into systems of patriarchy. In the second post, Giacona, in highlighting the anti-racist and anti-colonial work of A Tribe Called Red, speaks of the “impotence” visited upon the Tomahawk Chop by ATCR’s sonic interventions. Here, hypermasculinity is employed as a means of colonial reprimand for a hypermasculine, patriarchal culture. In sliding from one post to the other, we’ve tuned into different frequencies along a continuum, hearing the possibilities (both terrorizing and ameliorative) of patriarchal production methods unfolding before us.

"Skrillex at Forum, Copenhagen" by Flickr user Jacob Wang, CC-BY-SA-2.0

“Skrillex at Forum, Copenhagen” by Flickr user Jacob Wang, CC-BY-SA-2.0

Barad locates the performative upshot of this kind of indeterminacy in the fact that the scientist, the particle, and the ruler nailed to the table in the lab are all three bound together as part of a single phenomenon—they become one entity. To observe something is to become entangled with it, so that all of the unfolding possibilities of that particle become entwined with the unfolding possibilities of the scientist and the ruler, too. The entire phenomenon becomes indeterminate as the boundaries separating each entity bleed together, and these entities only detangle by performing—by acting out—boundaries among themselves.

Returning to Giacona’s discussion of “Braves,” it’s possible to mix and remix our components to perform them—to act them out—in more than one way. Giacona arranges it so that ATCR is the scientist, observing a particle that is a colonizing culture drunk on its own stereotypes. Here, “Braves” is the ruler that allows listeners to measure something about that culture. Is that something location? Direction? Even if we can hear clearly what Giacona leads us to—an uncovering of stereotypes so pernicious as to pervade, unchallenged, everyday activities—there’s an optimism available in indeterminacy. As we slide along the continuum to the present position of this colonialist culture, the certainty with which we can say anything about its trajectory lessens, opening the very possibility that motivates ATCR, namely the hope of something better.

"ATCR 1" by Flickr user MadameChoCho, CC BY-NC-SA 2.0

“ATCR 1” by Flickr user MadameChoCho, CC BY-NC-SA 2.0

But listening and sounding are tricky things. As I think about my whistling of “Braves” in Indianapolis, it occurs to me that Giacona’s account is easily subverted. It could be that ATCR is the particle, members of a group of many different nations reduced to a single voice in a colonial present populated by scientists (continuing the analogy) who believe in Manifest Destiny and Johnny Depp. Now the ruler is not “Braves” but the Tomahawk Chop melody ATCR attempts to critique, and the group is measured by the same lousy standard colonizers always use. In this scenario, people attend ATCR shows in redface and headdresses, and I stand on the street whistling a war chant. We came to the right place, but we heard—or in my case, re-sounded—the wrong thing.

"Knob Twiddler" by Flickr user Jes, CC BY-SA 2.0

“Knob Twiddler” by Flickr user Jes, CC BY-SA 2.0

Jennifer Stoever-Ackerman’s “listening ear” is instructive here. Cultures as steeped in indigenous stereotypes as the United States and Canada have conditioned their ears to hear ATCR through whiteness, through colonialism, making it difficult to perceive the subversive nature of “Braves.” ATCR plays a dangerous game in which they are vulnerable to being heard as a war chant rather than a critique; their material must be handled with care. There’s a simple enough lesson for me and my whistling: some sounds should stay in my head. But Barad offers something more fundamental to what we do as listeners. By recognizing that (1) there are connective tissues deeply entangling the materiality of our selves, musicians, and music, and (2) listening is a continuum revealing only some knowledge at any given moment, we can begin to imagine and perform the many possibilities that open up to us in the indeterminacy of listening.

If everything sounds certain to us when we listen, we’re doing it wrong. Instead, for music to function productively, we as listeners must find our places in a wobbly continuum whose tissues connect us to the varied appendages of music and culture. Once so entangled, we’ll ride those synth waves down to the low end as hi hats all the while tap out the infinite possibilities opening in front of us. 

Featured image: “a tribe called red_hall4_mozpics (2)_GF” by Flickr user Trans Musicales, CC BY-NC-ND 2.0

Justin Burton is a musicologist specializing in US popular music and culture. He is especially interested in hip hop and the ways it is sounded across regions, locating itself in specific places even as it expresses transnational and diasporic ideas. He is Assistant Professor of Music at Rider University, where he teaches in the school’s Popular Music and Culture program. He helped design the degree, which launched in the fall of 2012, and he is proud to be able to work in such a unique program. His book-length project – Posthuman Pop – blends his interests in hip hop and technology by engaging contemporary popular music through the lens of posthuman theory. Recent and forthcoming publications include an exploration of the Mozart myth as it is presented in Peter Shaffer’s Amadeus and then parodied in an episode of The Simpsons (Journal of Popular Culture 46:3, 2013), an examination of the earliest iPod silhouette commercials and the notions of freedom they are meant to convey (Oxford Handbook of Mobile Music Studies), and a long comparative review of Kanye and Jay Z’s Watch the Throne and the Roots’ Undun (Journal for the Society of American Music). He is also co-editing with Ali Colleen Neff a special issue of the Journal of Popular Music Studies titled “Sounding Global Southernness.” He currently serves on the executive committee of the International Association for the Study of Popular Music-US Branch and is working on an oral history project of the organization. From June 2011 through May 2013, he served as Editor of the IASPM-US website, expanding the site’s offerings with the cutting-edge work of popular music scholars from around the world. You can contact him at justindburton [at] gmail [dot] com.

REWIND!…If you liked this post, you may also dig:

“Musical Encounters and Acts of Audiencing: Listening Cultures in the American Antebellum”-Daniel Cavicchi

“Musical Objects, Variability and Live Electronic Performance”-Primus Luta

“Further Experiments in Agent-based Musical Composition”-Andreas Duus Pape