Archive | Synthesizers

Sculptural Dissonance: Hans Zimmer and the Composer as Engineer

Zimmer at work

Sculpting the Film Soundtrack

Welcome to our new series, Sculpting the Film Soundtrack, which brings you new perspectives on sound and filmmaking. We’re honored and delighted to have Katherine Spring, Associate Professor of Film Studies at Wilfrid Laurier University, as our Guest Editor. Spring is the author of an exciting and important new book, Saying It with Songs: Popular Music and the Coming of Sound to Hollywood Cinema. Read it! You’ll find an impeccably researched work that’s the definition of how the history of film sound and media convergence ought to be written.

But before rushing back to the early days, stick around here on SO! for the first of our three installments in Sculpting the Film Soundtrack.

NV

It’s been 35 years since film editor and sound designer Walter Murch used the sounds of whirring helicopter blades in place of an orchestral string section in Apocalypse Now, in essence blurring the boundary between two core components of the movie soundtrack: music and sound effects.  This blog series explores other ways in which filmmakers have treated the soundtrack as a holistic entity, one in which the traditional divisions between music, effects, and speech have been disrupted in the name of sculpting innovative sonic textures.

In three entries, Benjamin Wright, Danijela Kulezic-Wilson, and Randolph Jordan will examine the integrated soundtrack from a variety of perspectives, including technology, labor, aesthetic practice, and theoretical frameworks, and will suggest that the dissolution of the boundaries between soundtrack categories can prompt us to apprehend film sound in new ways. If, as Murch himself once said, “Listening to interestingly arranged sounds makes you hear differently,” then the time is ripe for considering how and what we might hear across the softening edges of the film soundtrack.

- Guest Editor Katherine Spring

Composing a sound world for Man of Steel (2013), Zack Snyder’s recent Superman reboot, had Hans Zimmer thinking about telephone wires stretching across the plains of Clark Kent’s boyhood home in Smallville. “What would that sound like?” he said in an interview last year. “That wind making those telephone wires buzz – how could I write a piece of music out of that?” The answer, as it turned out, was not blowing in the wind, but sliding up and down the scale of a pedal steel guitar, the twangy lap instrument of country music. In recording sessions, Zimmer instructed a group of pedal steel players to experiment with sustains, reverb, and pitches that, when mixed into the final track, accompany Superman leaping over tall buildings in a single bound.

His work on Man of Steel, just one of his most recent films in a long and celebrated career, exemplifies his unique take on composing for cinema. “I would have been just as happy being a recording engineer as a composer,” remarked Zimmer last year in an interview to commemorate the release of a percussion library he created in collaboration with Spitfire Audio, a British sample library developer. “Sometimes it’s very difficult to stop me from mangling sounds, engineering, and doing any of those things, and actually getting me to sit down and write the notes.” Dubbed the “HZ01 London Ensembles,” the library consists of a collection of percussion recordings featuring many of the same musicians who have performed for Zimmer’s film scores, playing everything from tamtams to taikos, buckets to bombos, timpani to anvils. According to Spitfire’s founders, the library recreates Zimmer’s approach to percussion recording by offering a “distillation of a decade’s worth of musical experimentation and innovation.”

In many ways, the collection is a reminder not just of the influence of Zimmer’s work on contemporary film, television, and video game composers but also of his distinctive approach to film scoring, one that emphasizes sonic experimentation and innovation. Having spent the early part of his career as a synth programmer and keyboardist for new wave bands such as The Buggles and Ultravox, then as a protégé of English film composer Stanley Myers, Zimmer has cultivated a hybrid electronic-orchestral aesthetic that uses a range of analog and digital oscillators, filters, and amplifiers to twist and augment solo instrument samples into a synthesized whole.

Zimmer played backup keyboards on “Video Killed the Radio Star.”

In a very short time, Zimmer has become a dominant voice in contemporary film music with a sound that blends melody with dissonance and electronic minimalism with rock and roll percussion. His early Hollywood successes, Driving Miss Daisy (1989) and Days of Thunder (1990), combined catchy themes and electronic passages with propulsive rhythms, while his score for Black Rain (1989), which featured taiko drums, electronic percussion, and driving ostinatos, laid the groundwork for an altogether new kind of action film score, one that Zimmer refined over the next two decades on projects such as The Rock (1996), Gladiator (2000), and The Pirates of the Caribbean series.

What is especially intriguing about Zimmer’s sound is the way in which he combines the traditional role of the composer, who fashions scores around distinct melodies (or “leitmotifs”), with that of the recording engineer, who focuses on sculpting sounds. Zimmer may not be the first person in the film business to experiment with synthesized tones and electronic arrangements – you’d have to credit Bebe and Louis Barron (Forbidden Planet, 1956), Vangelis (Chariots of Fire, 1981), Jerry Goldsmith (Logan’s Run, 1976), and Giorgio Moroder (Midnight Express, 1978) for pushing that envelope – but he has turned modern film composing into an engineering art, something that few other film composers can claim.


Zimmer’s studio

One thing that separates Zimmer’s working method from that of other composers is that he does not confine himself to pen and paper, or even keyboard and computer monitor. Instead, he invites musicians to his studio or a sound stage for an impromptu jam session to find and hone the musical syntax of a project. Afterwards, he returns to his studio and uses the raw samples from the sessions to compose the rest of the score, in much the same way that a recording engineer creates the architecture of a sound mix.

“There is something about that collaborative process that happens in music all the time,” Zimmer told an interviewer in 2010. “That thing that can only happen with eye contact and when people are in the same room and they start making music and they are fiercely dependent on each other. They cannot sound good without the other person’s part.”

Zimmer facilitates the social and aesthetic contours of these off-the-cuff performances and later sculpts the samples into the larger fabric of a score. In most cases, these partnerships have provided the equivalent of a pop hook to much of Zimmer’s output: Lebo M’s opening vocal in The Lion King (1994), Johnny Marr’s reverb-heavy guitar licks in Inception (2010), Lisa Gerrard’s ethereal vocals in Gladiator and Black Hawk Down (2002), and the recent contributions of the so-called “Magnificent Six” musicians to The Amazing Spider-Man 2 (2014).

The melodic hooks are simple but infectious – even Zimmer admits he writes “stupidly simple music” that can often be played with one finger on the piano. But what matters most are the colors that frame those notes and the performances that imbue those simple melodies with a personality. Zimmer’s work on Christopher Nolan’s Dark Knight trilogy revolves around a deceptively simple rising two-note motif that often signifies the presence of the caped crusader, but the pounding taiko hits and bleeding brass figures that surround it do as much to conjure up images of Gotham City as cinematographer Wally Pfister’s neo-noir photography. The heroic aspects of the Batman character are muted in Zimmer’s score except for the presence of the expansive brass figures and taiko hits, which reach an operatic crescendo in the finale, where the image of Batman escaping into the blinding light of the city is accompanied by a grand statement of the two-note figure backed by a driving string ostinato. Throughout the series, a string ostinato and taikos set the pace for action sequences and hint at the presence of Batman, who lies somewhere in the shadows of Gotham.

Zimmer’s expressive treatment of musical colors also characterizes his engineering practices, which are more commonly used in the recording industry. Music scholar Paul Théberge has noted that the recording engineer’s interest in an aesthetic of recorded musical “sound” led to an increased demand for control over the recording process, especially in the early days of multitrack rock recording where overdubbing created a separate, hierarchical space for solo instruments. Likewise for Zimmer, it’s not just about capturing individual sounds from an orchestra but also layering them into a synthesized product. Zimmer is also interested in experimenting with acoustic performances, pushing musicians to play their instruments in unconventional ways or playing his notes “the wrong way,” as he demonstrates here in the making of the Joker’s theme from The Dark Knight:

The significance of the cooperative aspects of these musical performances and their treatment as musical “colors” to be modulated, tweaked, and polished rests on a paradoxical treatment of sound. While he often finds his sound world among the wrong notes, mistakes, and impromptu performances of world musicians, Zimmer is also often criticized for removing traces of an original performance by obscuring it with synth drones and distortion. In some cases, like in The Peacemaker (1997), the orchestration is mushy and sounds overly processed. But in other cases, the trace of a solo performance can constitute a thematic motif in the same way that a melody serves to identify place, space, or character in classical film music. Compare, for instance, Danny Elfman’s opening title theme for Tim Burton’s Batman (1989) and Zimmer’s opening title music for The Dark Knight. While Elfman creates a suite of themes around a central Batman motif, Zimmer builds a sparse sound world that introduces a sustained note on the electric cello that will eventually be identified with the Joker.  It’s the timbre of the cello, not its melody, that carries its identifying features.

To texture the sounds in Man of Steel, Zimmer also commissioned Chas Smith, a Los Angeles-based composer, performer, and exotic instrument designer, to construct instruments from “junk” objects Smith found around the city that could be played with a bow or by hand while also functioning as metal art works. The highly abstract designs carry names that give some hint of their origins – “Bertoia 718,” named after modern sculptor and furniture designer Harry Bertoia; “Copper Box,” named for the copper rods that comprise its design; and “Tin Sheet,” which, when prodded, sounds like futuristic thunderclaps.

Smith’s performances of his exotic instruments are woven into the fabric of the score, providing it with a sort of musical sound design. Consider General Zod’s suite of themes and motifs, titled “Arcade” on the 2-disc version of the soundtrack. The motif is built around a call-and-answer ostinato for strings and brass that is interrupted by Smith’s sculptural dissonance. It’s the sound of an otherworldly menace, organic but processed, sculpted into a conventional motif-driven sound world.

Zimmer remains a fixture in contemporary film music partly because, as music critic Jon Burlingame has pointed out, he has a relentless desire to search for fresh approaches to a film’s musical landscape. This pursuit begins with extracting sounds and colors from live performances and engineering them electronically during the scoring process. Such heightened attention to sound texture and color motivated the creation of the Spitfire percussion library, but the library can only hint at the experimentation and improvisation that go into Zimmer’s work. In each of his film scores, the music tells a story that is tailored to the demands of the narrative, but the sounds reveal Zimmer’s urge to manipulate sound samples until they are, in his own words, “polished like a diamond.”

Zimmer at Work

Ben Wright  holds a Provost Postdoctoral Fellowship from the University of Southern California in the School of Cinematic Arts. In 2011, he received his Ph.D. in Cultural Studies from the Institute for Comparative Studies in Literature, Art and Culture at Carleton University. His research focuses on the study of production cultures, especially exploring the industrial, social, and technological effects of labor structures within the American film industry. His work on production culture, film sound and music, and screen comedy has appeared in numerous journals and anthologies. He is currently completing a manuscript on the history of contemporary sound production, titled Hearing Hollywood: Art, Industry, and Labor in Hollywood Film Sound.

All images creative commons.

REWIND! . . .If you liked this post, you may also dig:

Fade to Black, Old Sport: How Hip Hop Amplifies Baz Luhrmann’s The Great Gatsby– Regina Bradley

Quiet on the Set?: The Artist and the Sound of a Silent Resurgence– April Miller

Play It Again (and Again), Sam: The Tape Recorder in Film (Part Two on Walter Murch)– Jennifer Stoever

 

From Kitschy to Classy: Reviving the TR-808

"1980 Roland TR-808" by Flickr user Joseph Holmes, CC BY-NC-ND 2.0

Before Roland’s new TR-8 Rhythm Performer, a contemporary drum machine, was unveiled this year, the company released a series of promotional videos in which the machine’s designers sought out the original schematics and behavior of its predecessor, the TR-808, an iconic analog drum machine from the early 1980s. The TR-808 holds cultural cachet–most recently due to its use by Outkast, Baauer, and Kanye West–that Roland is interested in exploiting for the Rhythm Performer. The videos feature engineers closely examining the TR-808’s sound with an oscilloscope, trying to glean every last detail of the original’s personality.

"Roland TR-808" by Flickr user Ethan Hein, CC BY 2.0

“Roland TR-808″ by Flickr user Ethan Hein, CC BY 2.0

Things were not always this way. Upon its initial release, the TR-808 was widely dismissed. Because it did not sound like “normal” acoustic drums, many established musicians questioned its utility and ultimately disregarded it. However, its “cheap” circuit-produced sounds became bargain-bin treasures for emerging artists. Since its sounds now play such a large part in the landscape of electronic music, this essay takes a historical perspective on the TR-808 Rhythm Composer’s use and circulation. By analyzing how Juan Atkins and Marvin Gaye used the TR-808 in the early 1980s, I show how the TR-808 created a sonic space for drum machines in popular music.

Drum machines, though commonplace today, were once seen as kitschy tools for broke amateur musicians. As audio engineer Mitchell Sigman explains, the 808’s low, subsonic kick drum and “tick” snare marked a departure from the realistic, sampled drum sounds produced by high-end drum machines in the early 1980s. The 808 uses analog oscillators and white noise generators to make sounds resembling the components of a drum set (kick, snare, hi-hats, etc.). And although these sounds are now commonplace, most contemporary artists use them precisely because they sound robotic, not because they sound like drums. Even though the 808 at first seemed a failed imitation of “real” drums, the comparatively low cost of the 808, which originally retailed around $1,195, attracted musicians who were unable to afford similar machines such as the LinnDrum, which retailed at more than twice that price. Roland advertised the machine as a “studio” for musicians on a budget, and even as the company began to disinvest from the 808–as evidenced by its decision to put marketing and research money into other products–the 808’s so-called noises began their movement into mainstream American popular culture. In Detroit, electronic musician Juan Atkins, now known as one of the innovators of Detroit Techno, began experimenting with the machine’s sonic capabilities as early as 1981, while other artists such as Afrika Bambaataa were also using it in the Bronx by 1982.
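For readers curious how oscillator-based drum voices work in principle, here is a minimal Python sketch rather than analog circuitry: a sine oscillator with a fast downward pitch sweep and decaying amplitude approximates the 808-style booming kick, and a short burst of decaying noise stands in for a “tick”-like snare. The parameter values are illustrative guesses, not Roland’s actual circuit tunings.

```python
# Minimal sketch of oscillator-style drum synthesis (not Roland's circuit).
import numpy as np

SR = 44100  # sample rate in Hz

def kick(dur=0.5, f_start=150.0, f_end=50.0, pitch_decay=30.0):
    """Decaying sine with a fast downward pitch sweep (illustrative values)."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    freq = f_end + (f_start - f_end) * np.exp(-pitch_decay * t)  # pitch envelope
    phase = 2 * np.pi * np.cumsum(freq) / SR                     # integrate frequency
    return np.exp(-6.0 * t) * np.sin(phase)                      # amplitude envelope

def snare_tick(dur=0.2):
    """White-noise burst with a sharp decay, standing in for a 'tick' snare."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return np.exp(-25.0 * t) * np.random.uniform(-1, 1, t.size)

kick_sound, snare_sound = kick(), snare_tick()  # arrays ready to write to a WAV file
```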

"Industrial Records Studio 1980" by Flickr user Chris Carter, CC BY-NC-ND 2.0

“Industrial Records Studio 1980″ by Flickr user Chris Carter, CC BY-NC-ND 2.0

A landmark year for the 808, 1982 saw the release of Juan Atkins’ “Clear” and Marvin Gaye’s “Sexual Healing,” tracks that illuminate the key features each musician realized in the 808.  For Atkins, the machine was something he felt could embody his early career; Atkins’ use of the 808 represented a pivotal moment in the American musical landscape, in which the futurism of the sound of synthesizers echoed other segments of the nation’s sonic imagination.  Gaye’s use of the 808 was a clear departure from his body of Motown work.  Although the instrument enabled different sorts of experimentation for the two, the new sorts of sounds the machine produced allowed them both to explore new possibilities for musical meaning.  Just as Trevor Pinch and Frank Trocco argue in Analog Days that analog synthesizers required validation by musicians such as Geoff Downes and Keith Emerson a decade before, the 808 broke into the mainstream through artistic experimentation.

Juan Atkins

In the early ‘80s, Juan Atkins was learning all he could about electronic music. As an able musician and the son of a concert promoter, Atkins was poised to couple his musical knowledge with a new breed of electronic musical instruments such as the 808. Together with a tightly knit group from Detroit, Atkins succeeded in promoting techno from a subculture to part of a global dance music scene. According to Atkins, the popularity of Detroit Techno came from its adoption in European urban centers like London and Berlin, which lent the music additional meaning stateside. In an interview with Dollop UK, Atkins emphasizes that the 808 was central to this musical development, as he calls the 808 (among other machines) “the foundation[s] of electronic dance music.”

"Cybotron-Clear" by Flickr user Alan Read, CC BY-NC-SA 2.0

“Cybotron-Clear” by Flickr user Alan Read, CC BY-NC-SA 2.0

Under the moniker Cybotron, Atkins released the song “Clear” in 1982. “Clear”’s proto-techno soundscape pushes the 808 to the front of the mix and provides the track’s backbone. The solid, resonant kick, swishy open hi-hat, and piercing snare are decidedly machinic, departing from most rhythmic trends in popular music to date, since, as music scholar John Mowitt points out, a sense of “human feeling” comes hand-in-hand with drumming.

Atkins embraced these machine sounds and considered the 808 his “secret weapon.” Its ability to be programmed, manipulated, and warped on the fly lent itself to a very particular kind of performance and music making that Atkins exploited. Rather than rely on the breaks that DJs could find on records, the 808 allowed Atkins to create beats to his own liking, placing kick, snare, and hi-hat hits where he found them to be most effective. Because of this flexibility, the kitsch of the 808’s sounds sharpened the difference between his music and other artists’ creations. The breaks Atkins produced on the 808, for example, were obviously impossible to find on vinyl.

"Juan Atkins" by Flickr user Rene Passet, CC BY-NC-ND 2.0

“Juan Atkins” by Flickr user Rene Passet, CC BY-NC-ND 2.0

As Bleep43, an online EDM collective, notes, Atkins’ vision for electronic music would eventually pick up in London, where he relocated in the late eighties. Although Detroit Techno had achieved regional success in the US, record sales and performance dates in London signaled techno had found a larger audience abroad.  Although Atkins considers himself an eclectically “Detroit” artist,  he recognizes the impact of his work globally, and thinks of the modern Berlin flavor of minimal techno as a notably clever offshoot.

Marvin Gaye

Marvin Gaye’s struggles with depression, drug use, and relationship issues were the context for the subtle and understated 808 rhythmic backing he used in “Sexual Healing.” Gaye’s use of the 808 in “Sexual Healing” differs vastly from Atkins’ in “Clear,” operating as a tool of texture and punctuation, from the noticeable timbral changes to the clever placement of handclaps and clave in the composition. While Gaye recovered from his personal crises in Belgium, Columbia Records sent him an 808 because it was more portable than a studio drummer. It also offered sonic capabilities new and exciting to Gaye’s seasoned ears.

“Synths of Yesteryear 5/5″ by Flickr user Jochen Wolters, CC BY-NC-ND 2.0

The drum machine’s prevalence in “Sexual Healing” shows how culturally marginal sounds move into mainstream musical culture. Gaye and his producers, already squarely in the center of popular American music, experimented with the sound of the 808 not in an attempt to break through, but rather to exercise musical flexibility. Since he was already an extremely successful pop artist, Gaye’s use of the 808 marks him as a sonic risk-taker and innovator, weaving the machine sounds of the 808 seamlessly but noticeably into R and B.

The machine’s normally powerful snare is invoked only at the quietest of velocities, often being replaced by the now iconic handclap. Unlike many contexts in which the 808 is heard, such as “Clear” and Afrika Bambaataa’s “Planet Rock,” “Sexual Healing” manages to keep everything low key. Matching lyrics that espouse peace, harmony, and a sense of internal struggle (Whenever blue tear drops are falling/And my emotional stability is leaving me/Honey I know you’ll be there to relieve me/The love you give to me will free me), Gaye uses the 808 to evoke a surprisingly contemplative and serene atmosphere. It is this use that best shows the machine’s strange versatility: it has been both a harbinger of radically innovative musical genres and a source of tranquil rhythmic textures for popular music.

Transformation

"Roland TR 909 Drum Machine Classic" by Flickr user Juliana Luz, CC BY-NC 2.0

“Roland TR 909 Drum Machine Classic” by Flickr user Juliana Luz, CC BY-NC 2.0

Although Atkins’s and Gaye’s work exemplifies the TR-808’s early adoption, a long road toward mainstream popularity remained because of the more “realistic” sampled drum sounds included in Roger Linn’s high-end machines. The LM-1 and its successors (famous for hit singles like Billy Idol’s “White Wedding,” Hall and Oates’s “Maneater,” and Don Henley’s “Dirty Laundry”) made sampled drums the gold standard of computerized rhythmic backing. In fact, Roland’s next drum machine, the TR-909, implemented samples alongside synthesis. As a result, 808s couldn’t be given away until musical innovators gave their sounds gravitas (Sigman, 2011, 46).

The 808’s shift from sonically trashy and undesirable to ostensibly hip signifies a culturally important moment within the history of music technology. As shown in the examples above, subtle moments of economic, emotional, and geographic necessity seeded the popular music industry for the eventual 808 boom of today. When techno eventually broke through to global popularity, the 808 was so central to the canon of the genre that it has retained a place of fundamental sonic importance for musicians and producers.

 11:40, 6/11/14: This essay was re-edited for clarity, grammar, and flow by Jennifer Stoever.

Ian Dunham is a musician and music scholar originally from northeast Ohio. He earned a B.S. from Middle Tennessee State University in the Recording Industry within the College of Mass Communications, and then worked as a recording engineer in Nashville and Germany. Afterward, he earned an M.M. in Ethnomusicology from the University of Texas at Austin, where he also operated a home recording studio. He will start a PhD in Media Studies at Rutgers in the fall, where he will pursue research related to music and copyright.

Featured image: “1980 Roland TR-808” by Flickr user Joseph Holmes, CC BY-NC-ND 2.0

REWIND!…If you liked this post, you may also dig:

“Into the Woods: A Brief History of Wood Paneling on Synthesizers*”-Tara Rodgers

“The Blue Notes of Sampling”-Primus Luta

“Revising the Future of Music Technology”-Aaron Trammell

Sounding Out! Podcast #29: Game Audio Notes I: Growing Sounds for Sim Cell


A pair of firsts! This is both the lead post in our summer Sound and Pleasure series, and the first podcast in a three part series by Leonard J. Paul. What is the connection between sound and enjoyment, and how are pleasing sounds designed? Pleasure is, after all, what brings y’all back to Sounding Out! weekly, is it not?

In today’s installment Leonard peels back the curtain of game audio design and reveals his creative process. For anyone curious as to what creative decisions lead to the bloops, bleeps, and ambient soundscapes of video games, this is essential listening. Stay tuned for next Monday’s installment on the process of designing sound for Retro City Rampage, and next month’s episode which focuses on the game Vessel. Today, Leonard begins by picking apart his design process at a cellular level. Literally! -AT, Multimedia Editor

-

CLICK HERE TO DOWNLOAD: Game Audio Notes I: Growing Sounds for Sim Cell

SUBSCRIBE TO THE SERIES VIA ITUNES

ADD OUR PODCASTS TO YOUR STITCHER FAVORITES PLAYLIST

-

Game Audio Notes I: Growing Sounds for Sim Cell

Sim Cell is an educational game released in Spring 2014 by Strange Loop Games. Published by Amplify, a branch of News Corp, it teaches students how the human cell works. Players take control of a small vessel that is shrunk down to the size of a cell and solve the tasks set by the game while also learning about the human cell. This essay unpacks the design decisions behind the simulation of a variety of natural phenomena (motion, impact, and voice) in Sim Cell.

For the design of this game I decided to focus on the elements of life itself and attempted to “grow” the music and sound design from synthetic sounds. I used the visual scripting language Pure Data (PD) to program both the sounds and the music. The music is generated from a set of rules and patterns that cause each playback of a song to be slightly different. Each of the sound effects is crafted from a small program based on the design of analogue modular synthesizers. Basic synthetic audio elements such as filtered noise, sawtooth waves, and sine waves were all used in the game’s sound design.
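As a rough illustration of those building blocks, the following Python sketch generates the same three raw materials named above: a sine wave, a (naive, aliasing) sawtooth, and low-pass-filtered noise. It is a stand-in for what the PD patches do, not the actual Sim Cell code, and the parameter values are arbitrary.

```python
# Minimal stand-in for the basic synthetic elements described above.
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100  # sample rate in Hz

def sine(freq, dur):
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t)

def sawtooth(freq, dur):
    t = np.arange(int(SR * dur)) / SR
    return 2.0 * (t * freq - np.floor(0.5 + t * freq))  # naive saw, aliases at high freqs

def filtered_noise(cutoff, dur):
    noise = np.random.uniform(-1, 1, int(SR * dur))
    b, a = butter(2, cutoff / (SR / 2), btype="low")     # 2nd-order low-pass
    return lfilter(b, a, noise)
```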


A screenshot of Sim Cell’s microscopic space-like world. Used with permission (c) 2014 Amplify.

The visuals of the game give a feeling of being in an “inner space” that mirrors outer space. I took my inspiration for Sim Cell’s sound effects from Louis and Bebe Barron’s score to Forbidden Planet. I used simple patches in PD to assemble all of the sounds for the game from synthesis. There aren’t any recorded samples in the sound design – all of the sound effects are generated from mathematics.

The digital effects used in the sound design were built to emulate the effects available in vintage studios, such as plate reverb and analog delay. I adapted a plate reverb simulation from a modified open-source patch from the RjDj project, and constructed an analogue delay by using a low-pass filter on a delayed signal.
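A minimal sketch of that analogue-style delay idea, written here in Python rather than PD: a feedback delay line with a simple one-pole low-pass filter on the delayed signal, so each repeat comes back slightly darker, as on a tape or bucket-brigade delay. Delay time, feedback, and damping values are illustrative only.

```python
# Feedback delay with a one-pole low-pass on the delayed signal (illustrative values).
import numpy as np

def analog_delay(x, sr=44100, delay_s=0.35, feedback=0.5, damp=0.3, mix=0.4):
    x = np.asarray(x, dtype=float)
    d = int(sr * delay_s)                  # delay length in samples
    buf = np.zeros(d)                      # circular delay buffer
    lp = 0.0                               # one-pole low-pass state
    out = np.zeros_like(x)
    for n in range(len(x)):
        delayed = buf[n % d]               # signal written d samples ago
        lp += damp * (delayed - lp)        # low-pass the delayed signal
        buf[n % d] = x[n] + feedback * lp  # write input plus filtered feedback
        out[n] = (1 - mix) * x[n] + mix * lp
    return out
```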

https://www.youtube.com/watch?v=3m86ftny1uY&feature=youtu.be

In keeping with the vintage theme, I used elements of sound design from the early days of video games as well. Early arcade games such as Space Invaders used custom audio synthesis microchips for each of their sounds. In order to emulate this, I gave every sound its own patch when doing the synthesis for Sim Cell. I learned to appreciate this ethic of design when playing Combat on the Atari 2600 while growing up. The Atari 2600 could only output two sounds at once and thus had a very limited palette of tones. All of the source code for the sounds and music for Sim Cell is less than 1 megabyte, which shows how powerful the efficient coding of mathematics can be.


Vector image on the left, raster graphics on the right. Photo used with permission by the author.

Another cool thing about generating the sounds from mathematics is that users can “zoom in” on sounds in the same way that one can zoom in to a vector drawing. In vector drawings the lines are smooth when you zoom in, as opposed to rasterized pictures (such as a JPEG) which reveal a blurry set of pixels upon zooming. When code can change sounds in real-time, it makes them come alive and lends a sense of flexibility to the composition.

For a human feeling I often filter the sounds using formant frequencies, which simulate the resonant qualities of vowel sounds and thus offer a vocal quality to the sample. For alarm sounds in Sim Cell I used minor second intervals. This lent a sense of dissonance which informed players that they needed to adjust their gameplay in order to navigate treacherous areas. Motion was captured through a whoosh sound that used filtered and modulated noise. Some sounds were pitched up to make things seem like they were traveling towards the player; pitched down, they exploited a sense of Doppler shift, giving the feel that a sound was traveling away from the player. Together, these techniques produced a sense of immersion for the player while simultaneously building a realistic soundscape.
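To make the formant idea concrete, here is a rough Python sketch: a bright sawtooth source passed through parallel band-pass filters centered near textbook formant regions for an “ah” vowel. The band edges are generic approximations, not the frequencies used in the actual Sim Cell patches.

```python
# Rough formant filtering: parallel band-pass filters over a bright source.
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100

def bandpass(x, lo, hi):
    b, a = butter(2, [lo / (SR / 2), hi / (SR / 2)], btype="band")
    return lfilter(b, a, x)

def vowel_ah(dur=1.0, freq=110.0):
    t = np.arange(int(SR * dur)) / SR
    saw = 2.0 * (t * freq - np.floor(0.5 + t * freq))    # bright source signal
    formants = [(700, 900), (1100, 1300), (2500, 2700)]  # approximate "ah" formant bands
    return sum(bandpass(saw, lo, hi) for lo, hi in formants) / len(formants)
```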

Our decision to use libPD for this synthesis proved problematic: its processing requirements were too high. To remedy this, we decided to convert our audio into samples. Consider a photograph of a sculpture. Our sounds, like the sculpture in the photograph, could now only be viewed from one direction. This meant that the music now had only a single version and that sound effects would repeat as well. A small fix was exporting each sound at a set of five different intensities from 0 to 1.0. Like taking a photograph from several angles, this meant that the game could play a sample at intensity levels of 20%, 40%, 60%, 80%, and 100%. Although a truly random sense of variation was lost, this method still conveyed the intensity of impacts (and other similar events) generated by the physics engine of the game.
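A minimal sketch of that workaround: five pre-rendered intensity levels, with the physics engine’s continuous impact value clamped and mapped to the nearest bucket at playback time. The file names and mapping here are hypothetical, not taken from the shipped game.

```python
# Hypothetical mapping from a continuous impact intensity to pre-rendered samples.
IMPACT_SAMPLES = {
    0.2: "impact_020.wav",
    0.4: "impact_040.wav",
    0.6: "impact_060.wav",
    0.8: "impact_080.wav",
    1.0: "impact_100.wav",
}

def sample_for_intensity(intensity: float) -> str:
    """Clamp the physics intensity to [0, 1] and pick the closest pre-rendered level."""
    intensity = max(0.0, min(1.0, intensity))
    return IMPACT_SAMPLES[min(IMPACT_SAMPLES, key=lambda level: abs(level - intensity))]

print(sample_for_intensity(0.55))  # -> "impact_060.wav"
```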


An image of the inexpensive open-source computer, the Raspberry Pi. Photo used with permission by the author.

PD is a great way to learn and play with digital audio since you can change the patch while it is running, just like you might do with a real analogue synthesizer. There’s plenty of other neat stuff that PD can do, like running on the Raspberry Pi, so you could code your own effects pedals and make your own synths using PD for around $50 or so. For video games, you can use libPD to integrate PD patches into your Android or iOS apps as well. I hope this essay has offered some insight into my process when using PD. I’ve included some links below for those interested in learning more.

Additional resources:

-

Leonard J. Paul attained his Honours degree in Computer Science at Simon Fraser University in BC, Canada with an Extended Minor in Music concentrating in Electroacoustics. He began his work in video games on the Sega Genesis and Super Nintendo Entertainment System and has a twenty-year history in composing, sound design and coding for games. He has worked on over twenty major game titles totalling over 6.4 million units sold since 1994, including award-winning AAA titles such as EA’s NBA Jam (2010), NHL 11, Need for Speed: Hot Pursuit 2, and NBA Live ’95, as well as the indie award-winning title Retro City Rampage.

He is the co-founder of the School of Video Game Audio and has taught game audio students from over thirty different countries online since 2012. His new media works have been exhibited in cities including Surrey, Banff, Victoria, São Paulo, Zürich and San Jose. As a documentary film composer, he had the good fortune of scoring the original music for the multi-award-winning documentary The Corporation, which remains the highest-grossing Canadian documentary in history to date. He has performed live electronic music in cities such as Osaka, Berlin, San Francisco, Brooklyn and Amsterdam under the name Freaky DNA.

He is an internationally renowned speaker on the topic of video game audio and has been invited to speak in Vancouver, Lyon, Berlin, Bogotá, London, Banff, San Francisco, San Jose, Porto, Angoulême and other locations around the world.

His writings and presentations are available at http://VideoGameAudio.com

-

Featured image: Concept art for Sim Cell. Used with permission (c) 2014 Amplify.

REWIND! . . .If you liked this post, you may also dig:

Sounding Out! Podcast #10: Interview with Theremin Master Eric Ross- Aaron Trammell

Papa Sangre and the Construction of Immersion in Audio Games- Enongo Lumumba-Kasongo

Playing with Bits, Pieces, and Lightning Bolts: An Interview with Sound Artist Andrea Parkins

A Brief History of Auto-Tune


This is the final article in Sounding Out!‘s April Forum on “Sound and Technology.” Every Monday this month, you’ve heard new insights on this age-old pairing from the likes of Sounding Out! veteranos Aaron Trammell and Primus Luta along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks have shared their thinking about everything from Auto-Tune to techie manifestos. Today, Marshall helps us understand just why we want to shift pitch-time so darn bad. Wait, let me clean that up a little bit. . .so darn badly. . .no wait, run that back one more time. . .jjuuuuust a little bit more. . .so damn badly. Whew! There! Perfect!–JS, Editor-in-Chief

A recording engineer once told me a story about a time when he was tasked with “tuning” the lead vocals from a recording session (identifying details have been changed to protect the innocent). Polishing-up vocals is an increasingly common job in the recording business, with some dedicated vocal producers even making it their specialty. Being able to comp, tune, and repair the timing of a vocal take is now a standard skill set among engineers, but in this case things were not going smoothly. Whereas singers usually tend towards being either consistently sharp or flat (“men go flat, women go sharp” as another engineer explained), in this case the vocalist was all over the map, making it difficult to always know exactly what note they were even trying to hit. Complicating matters further was the fact that this band had a decidedly lo-fi, garage-y reputation, making your standard-issue, Glee-grade tuning job decidedly inappropriate.

Undaunted, our engineer pulled up the Auto-Tune plugin inside Pro-Tools and set to work tuning the vocal, to use his words, “artistically” – that is, not perfectly, but enough to keep it from being annoyingly off-key. When the band heard the result, however, they were incensed – “this sounds way too good! Do it again!” The engineer went back to work, this time tuning “even more artistically,” going so far as to pull the singer’s original performance out of tune here and there to compensate for necessary macro-level tuning changes elsewhere.

"Melodyne screencap" by Flickr user Ethan Hein, CC BY-NC-SA 2.0

“Melodyne screencap” by Flickr user Ethan Hein, CC BY-NC-SA 2.0

The product of the tortuous process of tuning and re-tuning apparently satisfied the band, but the story left me puzzled… Why tune the track at all? If the band was so committed to not sounding overproduced, why go to such great lengths to make it sound like you didn’t mess with it? This, I was told, simply wasn’t an option. The engineer couldn’t in good conscience let the performance go un-tuned. Digital pitch correction, it seems, has become the rule, not the exception, so much so that the accepted solution for too much pitch correction is more pitch correction.

Since 1997, recording engineers have used Auto-Tune (or, more accurately, the growing pantheon of digital pitch correction plugins for which Auto-Tune, Kleenex-like, has become the household name) to fix pitchy vocal takes, lend T-Pain his signature vocal sound, and reveal the hidden vocal talents of political pundits. It’s the technology that can make the tone-deaf sing in key, make skilled singers perform more consistently, and make MLK sound like Akon. And at 17 years of age, “The Gerbil,” as some like to call Auto-Tune, is getting a little long in the tooth (certainly by meme standards). The next U.S. presidential election will include a contingent of voters who have never drawn air that wasn’t once rippled by Cher’s electronically warbling voice in the pre-chorus of “Believe.” A couple of years after that, the Auto-Tune patent will expire and its proprietary status will dissolve into the collective ownership of the public domain.

.

Growing pains aside, digital vocal tuning doesn’t seem to be leaving any time soon. Exact numbers are hard to come by, but it’s safe to say that the vast majority of commercial music produced in the last decade or so has most likely been digitally tuned. Future Music editor Daniel Griffiths has ballpark-estimated that, as early as 2010, pitch correction was used in about 99% of recorded music. Reports of its death are thus premature at best. If pitch correction seems banal, it doesn’t mean it’s on the decline; rather, it’s a sign that we are increasingly accepting its underlying assumptions and internalizing the habits of thought and listening that go along with them.

Headlines in tech journalism are typically reserved for the newest, most groundbreaking gadgets. Often, though, the really interesting stuff only happens once a technology begins to lose its novelty, recede into the background, and quietly incorporate itself into fundamental ways we think about, perceive, and act in the world. Think, for example, about all the ways your embodied perceptual being has been shaped by and tuned-in to, say, the very computer or mobile device you’re reading this on. Setting value judgments aside for a moment, then, it’s worth thinking about where pitch correction technology came from, what assumptions underlie the way it works and how we work with it, and what it means that it feels like “old news.”

"Anti-Tune symbol"

“Anti-Tune symbol”

As is often the case with new musical technologies, digital pitch correction has been the target for no small amount of controversy and even hate. The list of indictments typically includes the homogenization of music, the devaluation of “actual talent,” and the destruction of emotional authenticity. Suffice to say, the technological possibility of ostensibly producing technically “pitch-perfect” performances has wreaked a fair amount of havoc on conventional ways of performing and evaluating music. As Primus Luta reminded us in his SO! piece on the powerful-yet-untranscribable “blue notes” that emerged from the idiosyncrasies of early hardware samplers, musical creativity is at least as much about digging-into and interrogating the apparent limits of a technology as it is about the successful removal of all obstacles to total control of the end result.

Paradoxically, it’s exactly in this spirit that others have come to the technology’s defense: Brian Eno, ever open to the unexpected creative agency of perplexing objects, credits the quantized sound of an overtaxed pitch corrector with renewing his interest in vocal performances. SO!’s own Osvaldo Oyola, channeling Walter Benjamin, has similarly offered a defense of Auto-Tune as a democratizing technology, one that both destabilizes conventional ideas about musical ability and allows everyone to sing in-tune, free from the “tyranny of talent and its proscriptive aesthetics.”

"Audiodatenkompression: Manowar, The Power of Thy Sword" by Wikimedia user Moehre1992, CC BY-SA 3.0

“Audiodatenkompression: Manowar, The Power of Thy Sword” by Wikimedia user Moehre1992, CC BY-SA 3.0

Jonathan Sterne, in his book MP3, offers an alternative to normative accounts of media technology (in this case, narratives either of the decline or rise of expressive technological potential) in the form of “compression histories” – accounts of how media technologies and practices directed towards increasing their efficiency, economy, and mobility can take on unintended cultural lives that reshape the very realities they were supposed to capture in the first place. The algorithms behind the MP3 format, for example, were based in part on psychoacoustic research into the nature of human hearing, framed primarily around the question of how many human voices the telephone company could fit into a limited bandwidth electrical cable while preserving signal intelligibility. The way compressed music files sound to us today, along with the way in which we typically acquire (illegally) and listen to them (distractedly), is deeply conditioned by the practical problems of early telephony. The model listener extracted from psychoacoustic research was created in an effort to learn about the way people listen. Over time, however, through our use of media technologies that have a simulated psychoacoustic subject built-in, we’ve actually learned collectively to listen like a psychoacoustic subject.

Pitch-time manipulation runs largely in parallel to Sterne’s bandwidth compression story. The ability to change a recorded sound’s pitch independently of its playback rate had its origins not in the realm of music technology, but in efforts to time-compress signals for faster communication. Instead of reducing a signal’s bandwidth, pitch manipulation technologies were pioneered to reduce the time required to push the message through the listener’s ears and into their brain. As early as the 1920s, the mechanism of the rotating playback head was being used to manipulate pitch and time interchangeably. By spinning a continuous playback head relative to the motion of the magnetic tape, researchers in electrical engineering, educational psychology, and pedagogy of the blind found that they could increase the playback rate of recorded voices without turning the speakers into chipmunks. Alternatively, they could rotate the head against a static piece of tape and allow a single moment of recorded sound to unfold continuously in time – a phenomenon that influenced the development of a quantum theory of information.

In the early days of recorded sound some people had found a metaphor for human thought in the path of a phonograph’s needle. When the needle became a head and that head began to spin, ideas about how we think, listen, and communicate followed suit: In 1954 Grant Fairbanks, the director of the University of Illinois’ Speech Research Laboratory, put forth an influential model of the speech-hearing mechanism as a system where the speaker’s conscious intention of what to say next is analogized to a tape recorder full of instructions, its drive “alternately started and stopped, and when the tape is stationary a given unit of instruction is reproduced by a moving scanning head”(136). Pitch time changing was more a model for thinking than it was for singing, and its imagined applications were thus primarily non-musical.

Take for example the Eltro Information Rate Changer. The first commercially available dedicated pitch-time changer, the Eltro advertised its uses as including “pitch correction of helium speech as found in deep sea; Dictation speed testing for typing and steno; Transcribing of material directly to typewriter by adjusting speed of speech to typing ability; medical teaching of heart sounds, breathing sounds etc. by slow playback of these rapid occurrences.” (It was also, incidentally, used by Kubrick to produce the eerily deliberate vocal pacing of HAL 9000.) In short, for the earliest “pitch-time correction” technologies, the pitch itself was largely a secondary concern, of interest primarily because it was desirable for the sake of intelligibility to pitch-change time-altered sounds into a more normal-sounding frequency range.

.

This coupling of time compression with pitch changing continued well into the era of digital processing. The Eventide Harmonizer, one of the first digital hardware pitch shifters, was initially used to pitch-correct episodes of “I Love Lucy” which had been time-compressed to free up broadcast time for advertising. Similar broadcast time compression techniques have proliferated and become common in radio and television (see, for example, David Foster Wallace’s account of the “cashbox” compressor in his essay on an LA talk radio station.) Speed listening technology initially developed for the visually impaired has similarly become a way of producing the audio “fine print” at the end of radio advertisements.

"H910 Harmonizer" by Wikimedia user Nalzatron, CC BY-SA 3.0

“H910 Harmonizer” by Wikimedia user Nalzatron, CC BY-SA 3.0

Though the popular conversation about Auto-Tune often leaves this part out, it’s hardly a secret that pitch-time correction is as much about saving time as it is about hitting the right note. As Auto-Tune inventor Andy Hildebrand put it,

[Auto-Tune’s] largest effect in the community is it’s changed the economics of sound studios…Before Auto-Tune, sound studios would spend a lot of time with singers, getting them on pitch and getting a good emotional performance. Now they just do the emotional performance, they don’t worry about the pitch, the singer goes home, and they fix it in the mix.

Whereas early pitch-shifters aimed to speed-up our consumption of recorded voices, the ones now used in recording are meant to reduce the actual time spent tracking musicians in studio. One of the implications of this framing is that emotion, pitch, and the performer take on a very particular relationship, one we can find sketched out in the Auto-Tune patent language:

Voices or instruments are out of tune when their pitch is not sufficiently close to standard pitches expected by the listener, given the harmonic fabric and genre of the ensemble. When voices or instruments are out of tune, the emotional qualities of the performance are lost. Correcting intonation, that is, measuring the actual pitch of a note and changing the measured pitch to a standard, solves this problem and restores the performance. (Emphasis mine. Similar passages can be found in Auto-Tune’s technical documentation.)

In the world according to Auto-Tune, the engineer is in the business of getting emotional signals from place to place. Emotion is the message, and pitch is the medium. Incorrect (i.e. unexpected) pitch therefore causes the emotion to be “lost.” While this formulation may strike some people as strange (for example, does it mean that we are unable to register the emotional qualities of a performance from singers who can’t hit notes reliably? Is there no emotionally expressive role for pitched performances that defy their genre’s expectations?), it makes perfect sense within the current division of labor and affective economy of the recording studio. It’s a framing that makes it possible, intelligible, and at least somewhat compulsory to have singers “express emotion” as a quality distinct from the notes they hit and have vocal producers fix up the actual pitches after the fact. Both this emotional model of the voice and the model of the psychoacoustic subject are useful frameworks for the particular purposes they serve. The trick is to pay attention to the ways we might find ourselves bending to fit them.
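For readers curious what “measuring the actual pitch of a note and changing the measured pitch to a standard” amounts to arithmetically, here is a minimal Python sketch that snaps a measured frequency to the nearest twelve-tone equal-tempered pitch. It is only the bare retuning math, not Antares’s detection or resynthesis algorithm.

```python
# Snap a measured frequency to the nearest equal-tempered "standard" pitch.
import math

A4 = 440.0  # reference pitch in Hz

def snap_to_semitone(freq_hz: float) -> float:
    """Return the equal-tempered frequency closest to the measured one."""
    semitones_from_a4 = 12 * math.log2(freq_hz / A4)
    return A4 * 2 ** (round(semitones_from_a4) / 12)

measured = 452.3                        # a slightly sharp A4
corrected = snap_to_semitone(measured)  # -> 440.0
ratio = corrected / measured            # the pitch-shift ratio a corrector must apply
```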

.

Owen Marshall is a PhD candidate in Science and Technology Studies at Cornell University. His dissertation research focuses on the articulation of embodied perceptual skills, technological systems, and economies of affect in the recording studio. He is particularly interested in the history and politics of pitch-time correction, cybernetics, and ideas and practices about sensory-technological attunement in general. 

Featured image: “Epic iPhone Auto-Tune App” by Flickr user Photo Giddy, CC BY-NC 2.0

REWIND!…If you liked this post, you may also dig:

“From the Archive #1: It is art?”-Jennifer Stoever

“Garageland! Authenticity and Musical Taste”-Aaron Trammell

“Evoking the Object: Physicality in the Digital Age of Music”-Primus Luta
