After a rockin’ (and seriously informative) series of podcasts from Leonard J. Paul, a Drrty South banger dropped by SO! Regular Regina Bradley, a screamtastic meditation from Yvon Bonenfant, and a heaping plate of food sounds from Steph Ceraso, our summer Sound and Pleasure series gets even louder with Kariann Goldschmitt’s work on live events in Brazil. Brasil Ao Vivo! --JS, Editor-in-Chief
Brazilians pray, cheer, and celebrate in public, often in close physical proximity to one another. From the nearly 3 million people who flocked to Copacabana Beach to hear Pope Francis lead a mass in 2013 to the huge crowds that regularly turn out for concerts at Maracanã stadium, Brazilians earn their global reputation for large-scale public events. Of course there is Carnival in Rio de Janeiro and Salvador; the largest LGBT Pride Parade in the world, held in São Paulo; and then there is football.
The relationship between large-scale public events and sound hit home as the country reacted to the national team’s humiliating loss to Germany in the semi-final round of the 2014 FIFA World Cup. The world witnessed a different kind of public outpouring as the Brazilian public mourned. Within hours of the initial shock at the lopsided score, images of Brazilian football fans weeping and screaming in the stadium and on the street became a humorous meme, with music and sound playing a prominent role. By the next day, most Brazilian football observers were taking pleasure in the public spectacle of weeping fans. Amid the abundance of images of hysteria, videos mocking the intensity of the crying, set to dramatic musical scores, went viral. One observer proclaimed: “essa capacidade de rir de nós mesmos é uma das melhores qualidades” [this capacity to laugh at ourselves is one of our best qualities]. That Brazilians express all varieties of emotion and annual passages together in public, for everyone to witness, even when they border on campy excess, allows everyone to feel the pleasures of community and the power of public performance.
All of this led me to believe that such a public culture has an effect on the aesthetics of what performance studies scholar Philip Auslander calls “liveness” in recorded music and related viral media. Auslander argues that the appeal of liveness for television broadcasts, concerts, and other stage performances allows audiences to feel the immediacy of the moment even if the presence of mediation, such as screens and on-air censorship, is obvious. The international spectacle of Brazilians emoting en masse, then, has a direct relationship with Brazilian sonic aesthetics. Nowhere, I argue, is this more prominent than in the (sometimes viral) popularity of live recordings.
That immediacy Auslander speaks of spreads to many aspects of Brazilian popular culture, including the popularity of concert DVDs and albums, which are regularly listed among the most popular domestic recordings. In fact, concert records tend to be more popular than the studio albums that inspire the tour. These live albums often carry the designation Ao Vivo (live) or MTV Acústico (the equivalent of the Unplugged albums popular in the United States), and they are often recorded in such a way as to feature the interaction of the crowds. In place of the draw of authenticity (a value that permeates the MTV Unplugged recordings) is a love for community, and for experiencing big emotions together, no matter how obviously they are mediated through cameras, microphones, and other technology. The continued popularity of live albums in Brazil opens the way to a different theorization of sounding liveness: in place of celebrating canonic performances and virtuosity, the valorization of liveness in Brazil reinforces the importance of crowds and the so-called “popular classes” at the root of the politicized singer-songwriter genre MPB, or Música Popular Brasileira.
The pleasure of, and preference for, live recordings also extends to social media. For meme chasers, a good example is Michel Teló’s 2011 hit “Ai Se Eu Te Pego.” The song and video were recorded ao vivo before a crowd dominated by young women. A close listen reveals that the sounds of Teló’s female audience members are just as important as his voice, even if his voice is only slightly louder in the mix. There is barely a moment in the recording when the audience stops making itself heard; the engineering revels in their presence. This is especially obvious during the opening seconds of the track, when Teló and his audience sing “Nossa, nossa / assim você me mata / Ai, se eu te pego / Ai, ai, se eu te pego” [Wow, wow / you kill me like that / Ah, if I could get you / ah, ah, if I could get you] in unison at nearly the same volume in the mix. When the accordion and electric bass (crucial instruments for the song’s forró style) finally enter over the screaming audience, there is a noticeable break in the tension set up by the audience and Teló singing together. Their cries, like those in other live recordings, illustrate Teló’s appeal to the crowd in that moment while also allowing other listeners to imagine themselves there.
Teló’s song went viral (as of this writing, the official version has nearly 580 million views on YouTube and over 72 million plays on Spotify), with alternate video versions teaching the song’s dance steps and others highlighting global football stars dancing and singing along. At one point Neymar, the national team’s biggest hope for World Cup victory, sang with Teló in front of a crowd. In general, Teló’s live songs easily outpace his studio recordings in terms of virality, and, I would argue, a major part of the appeal of “Ai Se Eu Te Pego” is its provenance in a concert setting. It is just as important that the screaming throngs of women are audible as it is for the dance steps to be easy and recognizable. The liveness of the recording is so important, in fact, that the screaming audience appears as sampled snippets in the Pitbull remix. In its viral form, Teló’s song united the popularity of live spectacle with Brazil’s enthusiasm for other live events, merging concert goers with football fans.
The popularity of Teló’s live song is not an isolated incident. Look, for example, at Brazil’s all-time record sales figures: among the albums that have sold more than 2 million copies in Brazil alone are live releases by Roberto Carlos (Acústico MTV) and the teen pop/rock duo Sandy and Júnior (As Quatro Estações ao Vivo and Era Uma Vez… Ao Vivo), artists who do not appear elsewhere on the list. In 2011, five of the top ten albums in Brazil fit the ao vivo mode with little regard to genre: MPB stars Caetano Veloso and Maria Gadú are there alongside sertanejo artists Paula Fernandes and Luan Santana. In 2012, three of the top 20 best-sellers were live albums. Meanwhile, DVDs of concerts in Brazil continue to be strong sellers. Thus, the communal pleasure palpable on-screen translates to that experienced in the home.
Compare this with the status of live records in the United States, where in the last few years they have rarely seen any chart success. If anything, liveness continues in YouTube clips and Spotify Sessions, but not in physical sales and downloads. This is probably because live albums by U.S.-based artists are embedded with different values, having to do with rock authenticity rather than communal pleasure. These performances demonstrate the chops of the musician and valorize the concerts (and tours) as events. Double live albums from the 1970s such as Frampton Comes Alive!, Lynyrd Skynyrd’s One More From The Road, and Kiss Alive! hold a prized place in the classic rock canon, often as much for their extended guitar solos as for the screaming throngs of fans. In the late ‘80s and early ’90s, live albums, especially MTV Unplugged, re-inscribed a love of liveness through acoustic instruments and songs that reached back into the roots of American popular music. Eric Clapton’s Unplugged (1992) even topped the Billboard album charts and won six Grammy awards, including Album of the Year, while other records such as Nirvana’s MTV Unplugged in New York and U2’s Rattle and Hum were multi-platinum hits. While there is the occasional top-40 live single, these songs are the exception in a genre that has moved liveness to YouTube rather than to streaming and MP3 markets.
SO! contributor Osvaldo Oyola has noted there is a tension between the efforts recording engineers often go through to make studio recordings sound as immediate as possible, and those that call attention to the recording process. Live records replace the need to sound polished with the need to sound spontaneous, often reveling in mistakes and banter. That immediacy is something I enjoy when listening to live recordings, and it has a parallel in the experience of the many people who participate in the reception of major events in real time through social media.
In Brazil, audiences enjoy the immense power of participation in live events. As part of a larger work in progress, I’m particularly fascinated by how this power and pleasure are mediated through the sonic experience of recordings and viral social media. Whether they are sharing tears over an international football loss or singing along to “Ai Se Eu Te Pego,” Brazilians extend Auslander’s liveness by prolonging and replaying the immediacy of the crowds to experience that shared sonic moment, again and again.
Kariann Goldschmitt is a Visiting Lecturer in the Faculty of Music at University of Cambridge. Her scholarly work focuses on Brazilian music, modes of listening, and sonic branding in the global cultural industries. She has published in the Oxford Handbook of Mobile Music Studies, Popular Music and Society, American Music, Yearbook for Traditional Music, and Luso-Brazilian Review and contributes to the South American cultural magazine, Sounds and Colours.
Featured image: Adapted from “Gloria” by Flickr user Lourenço Fabrino, CC BY-NC-SA 2.0
REWIND! . . .If you liked this post, you may also dig:
Sound-politics in São Paulo, Brazil– Leonardo Cardoso
klatsch \KLAHCH\ , noun: A casual gathering of people, esp. for refreshments and informal conversation [German Klatsch, from klatschen, to gossip, make a sharp noise, of imitative origin.] (Dictionary.com)
Dear Readers: So we’ve had two excellent posts on Autotune that have stirred up no small degree of controversy: Osvaldo Oyola’s “In Defense of Autotune” (9.12.11–our most popular post to date!) and Owen Marshall’s “A Brief History of Autotune” (4.21.2014). And now, we want to know how y’all feel and think about this most controversial of technological effects–with or without some of that T-Pain Effect. –J. Stoever-Ackerman, Editor-in-Chief
What’s your take on Autotune and what are we really talking about when we talk about it?
— Comment Klatsch logo courtesy of The Infatuated on Flickr.
This is the final article in Sounding Out!‘s April Forum on “Sound and Technology.” Every Monday this month, you’ve heard new insights on this age-old pairing from the likes of Sounding Out! veteranos Aaron Trammell and Primus Luta along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks have shared their thinking about everything from Auto-Tune to techie manifestos. Today, Marshall helps us understand just why we want to shift pitch-time so darn bad. Wait, let me clean that up a little bit. . .so darn badly. . .no wait, run that back one more time. . .jjuuuuust a little bit more. . .so damn badly. Whew! There! Perfect!–JS, Editor-in-Chief
A recording engineer once told me a story about a time when he was tasked with “tuning” the lead vocals from a recording session (identifying details have been changed to protect the innocent). Polishing up vocals is an increasingly common job in the recording business, with some dedicated vocal producers even making it their specialty. Being able to comp, tune, and repair the timing of a vocal take is now a standard skill set among engineers, but in this case things were not going smoothly. Whereas singers usually tend to be either consistently sharp or consistently flat (“men go flat, women go sharp,” as another engineer explained), this vocalist was all over the map, making it difficult to know exactly what note they were even trying to hit. Complicating matters further was the fact that this band had a decidedly lo-fi, garage-y reputation, making your standard-issue, Glee-grade tuning job decidedly inappropriate.
Undaunted, our engineer pulled up the Auto-Tune plugin inside Pro-Tools and set to work tuning the vocal, to use his words, “artistically” – that is, not perfectly, but enough to keep it from being annoyingly off-key. When the band heard the result, however, they were incensed – “this sounds way too good! Do it again!” The engineer went back to work, this time tuning “even more artistically,” going so far as to pull the singer’s original performance out of tune here and there to compensate for necessary macro-level tuning changes elsewhere.
The product of the tortuous process of tuning and re-tuning apparently satisfied the band, but the story left me puzzled… Why tune the track at all? If the band was so committed to not sounding overproduced, why go to such great lengths to make it sound like you didn’t mess with it? This, I was told, simply wasn’t an option. The engineer couldn’t in good conscience let the performance go un-tuned. Digital pitch correction, it seems, has become the rule, not the exception, so much so that the accepted solution for too much pitch correction is more pitch correction.
Since 1997, recording engineers have used Auto-Tune (or, more accurately, the growing pantheon of digital pitch correction plugins for which Auto-Tune, Kleenex-like, has become the household name) to fix pitchy vocal takes, lend T-Pain his signature vocal sound, and reveal the hidden vocal talents of political pundits. It’s the technology that can make the tone-deaf sing in key, make skilled singers perform more consistently, and make MLK sound like Akon. And at 17 years of age, “The Gerbil,” as some like to call Auto-Tune, is getting a little long in the tooth (certainly by meme standards). The next U.S. presidential election will include a contingent of voters who have never drawn air that wasn’t once rippled by Cher’s electronically warbling voice in the pre-chorus of “Believe.” A couple of years after that, the Auto-Tune patent will expire and its proprietary status will dissolve into the collective ownership of the public domain.
Growing pains aside, digital vocal tuning doesn’t seem to be leaving any time soon. Exact numbers are hard to come by, but it’s safe to say that the vast majority of commercial music produced in the last decade or so has most likely been digitally tuned. Future Music editor Daniel Griffiths has ballpark-estimated that, as early as 2010, pitch correction was used in about 99% of recorded music. Reports of its death are thus premature at best. If pitch correction seems banal, that doesn’t mean it’s on the decline; rather, it’s a sign that we are increasingly accepting its underlying assumptions and internalizing the habits of thought and listening that go along with them.
Headlines in tech journalism are typically reserved for the newest, most groundbreaking gadgets. Often, though, the really interesting stuff only happens once a technology begins to lose its novelty, recede into the background, and quietly incorporate itself into fundamental ways we think about, perceive, and act in the world. Think, for example, about all the ways your embodied perceptual being has been shaped by and tuned-in to, say, the very computer or mobile device you’re reading this on. Setting value judgments aside for a moment, then, it’s worth thinking about where pitch correction technology came from, what assumptions underlie the way it works and how we work with it, and what it means that it feels like “old news.”
As is often the case with new musical technologies, digital pitch correction has been the target of no small amount of controversy and even hate. The list of indictments typically includes the homogenization of music, the devaluation of “actual talent,” and the destruction of emotional authenticity. Suffice it to say, the technological possibility of producing technically “pitch-perfect” performances has wreaked a fair amount of havoc on conventional ways of performing and evaluating music. As Primus Luta reminded us in his SO! piece on the powerful-yet-untranscribable “blue notes” that emerged from the idiosyncrasies of early hardware samplers, musical creativity is at least as much about digging into and interrogating the apparent limits of a technology as it is about the successful removal of all obstacles to total control of the end result.
Paradoxically, it’s exactly in this spirit that others have come to the technology’s defense: Brian Eno, ever open to the unexpected creative agency of perplexing objects, credits the quantized sound of an overtaxed pitch corrector with renewing his interest in vocal performances. SO!’s own Osvaldo Oyola, channeling Walter Benjamin, has similarly offered a defense of Auto-Tune as a democratizing technology, one that both destabilizes conventional ideas about musical ability and allows everyone to sing in-tune, free from the “tyranny of talent and its proscriptive aesthetics.”
Jonathan Sterne, in his book MP3, offers an alternative to normative accounts of media technology (in this case, narratives either of the decline or rise of expressive technological potential) in the form of “compression histories” – accounts of how media technologies and practices directed towards increasing their efficiency, economy, and mobility can take on unintended cultural lives that reshape the very realities they were supposed to capture in the first place. The algorithms behind the MP3 format, for example, were based in part on psychoacoustic research into the nature of human hearing, framed primarily around the question of how many human voices the telephone company could fit into a limited bandwidth electrical cable while preserving signal intelligibility. The way compressed music files sound to us today, along with the way in which we typically acquire (illegally) and listen to them (distractedly), is deeply conditioned by the practical problems of early telephony. The model listener extracted from psychoacoustic research was created in an effort to learn about the way people listen. Over time, however, through our use of media technologies that have a simulated psychoacoustic subject built-in, we’ve actually learned collectively to listen like a psychoacoustic subject.
Pitch-time manipulation runs largely in parallel to Sterne’s bandwidth compression story. The ability to change a recorded sound’s pitch independently of its playback rate had its origins not in the realm of music technology, but in efforts to time-compress signals for faster communication. Instead of reducing a signal’s bandwidth, pitch manipulation technologies were pioneered to reduce the time required to push a message through the listener’s ears and into their brain. As early as the 1920s, the mechanism of the rotating playback head was being used to manipulate pitch and time interchangeably. By spinning a continuous playback head relative to the motion of the magnetic tape, researchers in electrical engineering, educational psychology, and pedagogy of the blind found that they could increase the playback rate of recorded voices without turning the speakers into chipmunks. Alternatively, they could rotate the head against a static piece of tape and allow a single moment of recorded sound to unfold continuously in time – a phenomenon that influenced the development of a quantum theory of information.
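For readers who want the rotating-head trick made concrete, here is a minimal sketch in Python (with numpy) of its basic digital descendant: overlap-add granular time stretching, which changes a recording’s duration while leaving its pitch alone. The function name, grain size, and hop values are my own illustration, not a reconstruction of any historical system described above, and real pitch-time processors add phase alignment that this crude version omits.

```python
import numpy as np

def time_stretch_ola(x, rate, grain=2048, hop_out=512):
    """Crude overlap-add (OLA) time stretch: rate > 1 shortens the recording,
    rate < 1 lengthens it. Pitch stays put because each grain is played back
    at its original speed; only the grains' spacing changes."""
    hop_in = max(1, int(round(hop_out * rate)))  # step through the input
    window = np.hanning(grain)
    n_out = int(len(x) / rate) + grain
    y = np.zeros(n_out)
    norm = np.zeros(n_out)
    out = 0
    for start in range(0, len(x) - grain, hop_in):
        if out + grain > n_out:
            break
        y[out:out + grain] += window * x[start:start + grain]  # lay the grain down
        norm[out:out + grain] += window                        # track window overlap
        out += hop_out
    norm[norm < 1e-8] = 1.0  # avoid dividing by zero at the edges
    return y / norm
```

Conceptually, letting the input step shrink toward zero while the output keeps advancing is the digital cousin of spinning the head against a stationary piece of tape: a single grain of sound unfolding indefinitely in time.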
In the early days of recorded sound, some people found a metaphor for human thought in the path of a phonograph’s needle. When the needle became a head and that head began to spin, ideas about how we think, listen, and communicate followed suit: in 1954, Grant Fairbanks, the director of the University of Illinois’ Speech Research Laboratory, put forth an influential model of the speech-hearing mechanism as a system where the speaker’s conscious intention of what to say next is analogized to a tape recorder full of instructions, its drive “alternately started and stopped, and when the tape is stationary a given unit of instruction is reproduced by a moving scanning head” (136). Pitch-time changing was more a model for thinking than it was for singing, and its imagined applications were thus primarily non-musical.
Take, for example, the Eltro Information Rate Changer. The first commercially available dedicated pitch-time changer, the Eltro advertised its uses as including “pitch correction of helium speech as found in deep sea; Dictation speed testing for typing and steno; Transcribing of material directly to typewriter by adjusting speed of speech to typing ability; medical teaching of heart sounds, breathing sounds etc. by slow playback of these rapid occurrences.” (It was also, incidentally, used by Kubrick to produce the eerily deliberate vocal pacing of HAL 9000.) In short, for the earliest “pitch-time correction” technologies, the pitch itself was largely a secondary concern, of interest primarily because it was desirable for the sake of intelligibility to pitch-change time-altered sounds into a more normal-sounding frequency range.
This coupling of time compression with pitch changing continued well into the era of digital processing. The Eventide Harmonizer, one of the first digital hardware pitch shifters, was initially used to pitch-correct episodes of “I Love Lucy” that had been time-compressed to free up broadcast time for advertising. Similar broadcast time compression techniques have proliferated and become common in radio and television (see, for example, David Foster Wallace’s account of the “cashbox” compressor in his essay on an LA talk radio station). Speed listening technology initially developed for the visually impaired has similarly become a way of producing the audio “fine print” at the end of radio advertisements.
Though the popular conversation about Auto-Tune often leaves this part out, it’s hardly a secret that pitch-time correction is as much about saving time as it is about hitting the right note. As Auto-Tune inventor Andy Hildebrand put it,
[Auto-Tune’s] largest effect in the community is it’s changed the economics of sound studios…Before Auto-Tune, sound studios would spend a lot of time with singers, getting them on pitch and getting a good emotional performance. Now they just do the emotional performance, they don’t worry about the pitch, the singer goes home, and they fix it in the mix.
Whereas early pitch-shifters aimed to speed-up our consumption of recorded voices, the ones now used in recording are meant to reduce the actual time spent tracking musicians in studio. One of the implications of this framing is that emotion, pitch, and the performer take on a very particular relationship, one we can find sketched out in the Auto-Tune patent language:
Voices or instruments are out of tune when their pitch is not sufficiently close to standard pitches expected by the listener, given the harmonic fabric and genre of the ensemble. When voices or instruments are out of tune, the emotional qualities of the performance are lost. Correcting intonation, that is, measuring the actual pitch of a note and changing the measured pitch to a standard, solves this problem and restores the performance. (Emphasis mine. Similar passages can be found in Auto-Tune’s technical documentation.)
In the world according to Auto-Tune, the engineer is in the business of getting emotional signals from place to place. Emotion is the message, and pitch is the medium. Incorrect (i.e. unexpected) pitch therefore causes the emotion to be “lost.” While this formulation may strike some people as strange (for example, does it mean that we are unable to register the emotional qualities of a performance from singers who can’t hit notes reliably? Is there no emotionally expressive role for pitched performances that defy their genre’s expectations?), it makes perfect sense within the current division of labor and affective economy of the recording studio. It’s a framing that makes it possible, intelligible, and at least somewhat compulsory to have singers “express emotion” as a quality distinct from the notes they hit, and to have vocal producers fix up the actual pitches after the fact. Both this emotional model of the voice and the model of the psychoacoustic subject are useful frameworks for the particular purposes they serve. The trick is to pay attention to the ways we might find ourselves bending to fit them.
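To see what “measuring the actual pitch of a note and changing the measured pitch to a standard” can look like in practice, here is a hedged toy sketch in Python. It assumes the open-source librosa library is available; the function name and parameters are my own, and where a real pitch corrector retunes note by note in real time, this version applies a single global offset toward the nearest equal-tempered pitch.

```python
import numpy as np
import librosa

def snap_to_standard_pitch(wav_path, fmin=80.0, fmax=1000.0):
    """Toy intonation 'correction': estimate the pitch frame by frame,
    measure how far it sits from the nearest standard (equal-tempered)
    semitone, then shift the whole take by the median deviation."""
    y, sr = librosa.load(wav_path, sr=None, mono=True)
    f0 = librosa.yin(y, fmin=fmin, fmax=fmax, sr=sr)  # frame-wise pitch in Hz
    midi = librosa.hz_to_midi(f0)                     # continuous MIDI pitch numbers
    deviation = midi - np.round(midi)                 # error vs. nearest semitone
    n_steps = -float(np.median(deviation))            # one global correction, in semitones
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps), sr
```

Even this trivial version encodes the patent’s framing: pitch is treated as a measurable carrier to be normalized against a standard, separate from whatever “emotion” the take contains.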
Owen Marshall is a PhD candidate in Science and Technology Studies at Cornell University. His dissertation research focuses on the articulation of embodied perceptual skills, technological systems, and economies of affect in the recording studio. He is particularly interested in the history and politics of pitch-time correction, cybernetics, and ideas and practices about sensory-technological attunement in general.
Featured image: “Epic iPhone Auto-Tune App” by Flickr user Photo Giddy, CC BY-NC 2.0
REWIND!…If you liked this post, you may also dig:
“From the Archive #1: It is art?”-Jennifer Stoever
“Garageland! Authenticity and Musical Taste”-Aaron Trammell