Archive | Sampling and Remix Culture

Sounding Out! Podcast #31: Game Audio Notes III: The Nature of Sound in Vessel

This post continues our summer Sound and Pleasure series as the third and final podcast in a three-part series by Leonard J. Paul. What is the connection between sound and enjoyment, and how are pleasing sounds designed? Pleasure is, after all, what brings y'all back to Sounding Out! weekly, is it not?

Part of the goal of this series of podcasts has been to reveal the interesting and invisible labor practices involved in sound design. In this final entry, Leonard J. Paul breaks down his process in designing living sounds for the game Vessel. How does one design empathetic or aggressive sounds? If you need to catch up, read Leonard's last entry, where he breaks down the vintage sounds of Retro City Rampage. Also, be sure to check out last week's edition, where Leonard breaks down his process in designing sound for Sim Cell. But first, listen to this! -AT, Multimedia Editor

-

CLICK HERE TO DOWNLOAD: Game Audio Notes III: The Nature of Sound in Vessel

SUBSCRIBE TO THE SERIES VIA ITUNES

ADD OUR PODCASTS TO YOUR STITCHER FAVORITES PLAYLIST

-
Game Audio Notes III: The Nature of Sound in Vessel

Strange Loop Games' Vessel is set in an alternate world history where a servant class of liquid automatons (called fluros) has gone out of control. The player explores the world and solves puzzles in an effort to restore order. While working on Vessel, I personally recorded all of the game's samples with a Zoom H4n portable recorder so that I could have full control over the soundscape. This emphasis on real sounds was intended to deepen the player's immersion in the game.

This realistic soundscape was supplemented with a variety of techniques that produced sounds responding dynamically to changes in the physics engine. Water and other fluids in the game were difficult to model with both the physics engine and the audio engine (FMOD Designer). Because fluids are fundamentally connected to the game's physics engine, they take on a variety of dynamic forms as players interact with them in different ways. To address this, Kieran Lord, the audio coder, and I considered factors like the amount of liquid in a collision, the hardness of the surface it strikes, the type of liquid in motion, whether the player experiences an extreme version of the sound because the liquid is colliding with their head, and, of course, how fast the liquid is travelling.
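
As an illustration of this kind of parameter mapping, here is a minimal Python sketch. It is hypothetical pseudologic, not Strange Loop's actual FMOD Designer setup, and every name and coefficient in it is invented:

```python
import random

# Hypothetical sketch of the parameter mapping described above: turn
# physics-engine collision data into sample choice, volume, and pitch.
# Not the actual Vessel/FMOD Designer logic.

def fluid_collision_sound(amount, hardness, speed, on_player_head):
    """amount, hardness, speed are normalized 0.0..1.0."""
    # Bigger, faster collisions get louder.
    volume = min(1.0, 0.3 + 0.4 * amount + 0.3 * speed)
    # Hard surfaces call for a sharper splash layer.
    layer = "splash_hard" if hardness > 0.6 else "splash_soft"
    if on_player_head:
        layer += "_close"                    # exaggerated close-up variant
        volume = min(1.0, volume + 0.2)
    # Faster liquid reads as higher-pitched; add slight random variation.
    pitch = 1.0 + 0.25 * speed + random.uniform(-0.03, 0.03)
    return layer, volume, pitch

print(fluid_collision_sound(0.8, 0.9, 0.7, False))
```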

Although there was a musical score, I designed the effects to be played without music. Each element of the game, for instance the footsteps of a lava fluro (one of the game's rebellious automatons), required layers of sound. The footsteps were composed of water sizzling on a hot pan, a gloopy slap of oatmeal, and a wet rag hitting the ground. Finding the correct emotional balance to support the game's story was fundamental to my work as a sound designer. The game's sound effects were constantly competing with the adaptive music (which is also contingent on player action) that plays throughout the game, so it was important to give them an informative quality. The sound effects inform you about the environment while the music sets the emotional underscore of the gameplay and helps guide you in the puzzles.

The lava fluro footsteps in FMOD Designer. Used with permission (c) 2014 Strange Loop Games

Defining the character of the fluros was difficult because I wanted players to have empathy for them. This was important to me because there is often no way to avoid destroying them when solving the game's puzzles. While recording sounds in the back of an antique shop, I came across a vintage Dick Tracy gun that made a fantastic clanking sound along with its siren. Since the gun allowed me to control how quickly the siren rose and fell, it was a great way to produce vocalizations for the fluros. I simply recorded the gun's siren, chopped the recording into smaller pieces, and then played back different segments randomly. The metal clanking gave a mechanical feel and the siren's tone gave a vocal quality to the resulting sound that was perfect for the fluros. I could make the fluros sound excited by choosing grains from a higher pitch range, informing the player when they approached their goal.
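
The chop-and-shuffle idea is easy to sketch in code. Below is a minimal Python illustration of the technique described above; grain sizes and ranges are guesses, not the values used in Vessel:

```python
import random
import numpy as np

# Sketch of the chop-and-shuffle vocalization technique described above.

def make_grains(audio, sr, grain_ms=80):
    n = int(sr * grain_ms / 1000)
    return [audio[i:i + n] for i in range(0, len(audio) - n, n)]

def fluro_voice(grains, n_grains=12, excited=False):
    # Assuming the siren rises over the course of the recording, later
    # grains sit higher in pitch; "excited" fluros draw from that region.
    lo = len(grains) // 2 if excited else 0
    return np.concatenate([random.choice(grains[lo:]) for _ in range(n_grains)])

sr = 44100
t = np.arange(sr * 2) / sr
siren = np.sin(2 * np.pi * (400 + 300 * t / 2) * t)   # stand-in rising siren
voice = fluro_voice(make_grains(siren, sr), excited=True)
```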

I wanted a fluid-based scream to announce a fluro's death. I tried screaming underwater, screaming into a glass of water, and a few other things, but nothing worked. Eventually, while recording a rubber ear syringe, I found that squeezing it quickly lent a real shriek as it spat out the last of the water. Not only did this sound cut through the din of the clanking gears in the mix, but it also bonded a watery yell with the sense of being crushed and running out of breath.

Vessel’s Lava boss with audio debug output. Used with permission (c) 2014 Strange Loop Games

For the final boss, I tried many combinations of glurpy sounds to signify its lava form. Eventually I recorded a nail in a board being dragged across a large rusty metal sheet. Though the raw recording was quite excruciating to listen to, I pitched it down and combined it with a pitched-down and granulated recording of myself growling into a cup of water. This sound perfectly captured the emotion I wanted to feel when encountering a final boss. Although it can take a long time to arrive at the "obvious" sound, simplicity is often the key.

Anticipation is fundamental to a player's sense of immersion. It carves out space for tension to build: a small crescendo of creaking, for instance, can develop tension that pays off in a sudden, large impact. A whoosh before a punch lands adds extra weight to the force of the punch. These cues are often naturally present in real-world sounds, such as a rush of air sweeping in before a door slams. A small pause might be included just for added suspense, intensifying the effect of the door slamming. Dreading the impact is half of the emotion of a large hit.

Recording inside of a clock tower with my H4n recorder for Vessel. Used with permission by the author.

Recording all of the sounds for Vessel was a large undertaking, but since I viewed each recording as a performance, I was able to make the feeling of the world very cohesive. Each sound was designed to immerse the player in the soundscape, but also to allow players enough time to solve puzzles without becoming annoyed with the audio. All sounds have a life of their own and a resonance of memory and time that stays with them during each playthrough of a game. In Retro City Rampage I left a sonic space for the player to wax nostalgic. In Sim Cell, I worked to breathe life into a set of sterile and synthesized sounds. Each recorded sound in Vessel is alive in comparison, telling stories of time, place, and recording that are all its own.

The common theme of my audio work on Retro City Rampage, Sim Cell and Vessel is that I enjoy putting constraints on myself to inspire my creativity. I focus on what works and remove non-essential elements. Exploring the limits of constraints often provokes interesting and unpredictable results. I like "sculpting" sounds and will often proceed from a rough sketch, polishing and reducing elements until I like what I hear. Typically I remove layers that don't add an emotive aspect to the sound design. In games many sounds can play at once, so clarity and focus are necessary to keep sounds from getting lost in a sonic goo.

Cherry blossoms for new beginnings. Used with permission by the author.

In this post I have shown how play and experimentation are fundamental to my creative process. For aspiring sound artists, spending time with Pure Data, FMOD Studio, or Wwise and a personal recorder is a great way to build skill with game audio. This series of articles has aimed to reveal the tacit decisions behind the production of game audio that get obscured by the fun of the creative process. Plus, I hope it offers a bit of inspiration to those creating their own sounds in the future.

Additional Resources:

-

Leonard J. Paul attained his Honours degree in Computer Science at Simon Fraser University in BC, Canada with an Extended Minor in Music concentrating in Electroacoustics. He began his work in video games on the Sega Genesis and Super Nintendo Entertainment System and has a twenty-year history in composing, sound design and coding for games. He has worked on over twenty major game titles totalling over 6.4 million units sold since 1994, including award-winning AAA titles such as EA's NBA Jam 2010, NHL 11, Need for Speed: Hot Pursuit 2, and NBA Live '95, as well as the indie award-winning title Retro City Rampage.

He is the co-founder of the School of Video Game Audio and has taught game audio students from over thirty different countries online since 2012. His new media works have been exhibited in cities including Surrey, Banff, Victoria, São Paulo, Zürich and San Jose. As a documentary film composer, he had the good fortune of scoring the original music for the multi-award-winning documentary The Corporation, which remains the highest-grossing Canadian documentary in history to date. He has performed live electronic music in cities such as Osaka, Berlin, San Francisco, Brooklyn and Amsterdam under the name Freaky DNA.

He is an internationally renowned speaker on the topic of video game audio and has been invited to speak in Vancouver, Lyon, Berlin, Bogotá, London, Banff, San Francisco, San Jose, Porto, Angoulême and other locations around the world.

His writings and presentations are available at http://VideoGameAudio.com

-

Featured image: Courtesy of Vblank Entertainment (c)2014 – Artwork by Maxime Trépanier.

REWIND! . . . If you liked this post, you may also dig:

Sounding Out! Podcast #29: Game Audio Notes I: Growing Sounds for Sim Cell- Leonard J. Paul

Sounding Out! Podcast #30: Game Audio Notes II: Hand Made Music in Retro City Rampage- Leonard J. Paul

Papa Sangre and the Construction of Immersion in Audio Games- Enongo Lumumba-Kasongo 

Sounding Out! Podcast #29: Game Audio Notes I: Growing Sounds for Sim Cell

A pair of firsts! This is both the lead post in our summer Sound and Pleasure series, and the first podcast in a three-part series by Leonard J. Paul. What is the connection between sound and enjoyment, and how are pleasing sounds designed? Pleasure is, after all, what brings y'all back to Sounding Out! weekly, is it not?

In today’s installment Leonard peels back the curtain of game audio design and reveals his creative process. For anyone curious as to what creative decisions lead to the bloops, bleeps, and ambient soundscapes of video games, this is essential listening. Stay tuned for next Monday’s installment on the process of designing sound for Retro City Rampage, and next month’s episode which focuses on the game Vessel. Today, Leonard begins by picking apart his design process at a cellular level. Literally! -AT, Multimedia Editor

-

CLICK HERE TO DOWNLOAD: Game Audio Notes I: Growing Sounds for Sim Cell

SUBSCRIBE TO THE SERIES VIA ITUNES

ADD OUR PODCASTS TO YOUR STITCHER FAVORITES PLAYLIST

-

Game Audio Notes I: Growing Sounds for Sim Cell

Sim Cell is an educational game released in Spring 2014 by Strange Loop Games. Published by Amplify, a branch of News Corp, it teaches students how the human cell works. Players take control of a small vessel that is shrunk down to the size of a cell and solve the game's tasks while learning how the cell functions. This essay unpacks the design decisions behind the simulation of a variety of natural phenomena (motion, impact, and voice) in Sim Cell.

For the design of this game I decided to focus on the elements of life itself and attempted to "grow" the music and sound design from synthetic sounds. I used the visual scripting language Pure Data (PD) to program both the sounds and the music. The music is generated from a set of rules and patterns that make each playback of a song slightly different. Each of the sound effects is crafted from small programs that are based on the design of analogue modular synthesizers. Basic synthetic audio elements such as filtered noise, sawtooth waves and sine waves were all used in the game's sound design.
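
As a toy illustration of that rule-based approach (not the actual Sim Cell PD patches), here is a tiny Python melody generator whose output differs slightly each run while keeping its shape:

```python
import random

# Rule-based variation: each playback draws slightly different notes
# from a fixed scale, so the melody differs every run while keeping
# its identity. Scale and rules here are invented for illustration.

SCALE = [0, 2, 3, 5, 7, 10]                 # minor-flavored pitch classes

def phrase(length=8, root=57):              # 57 = A3 in MIDI
    notes, degree = [], 0
    for _ in range(length):
        # Rule: mostly stepwise motion with an occasional leap.
        degree += random.choice([-1, -1, 1, 1, 1, 3])
        degree = max(0, min(len(SCALE) - 1, degree))
        notes.append(root + SCALE[degree])
    return notes

print(phrase())   # a slightly different melody on each playback
```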

A screenshot of Sim Cell’s microscopic space-like world. Used with permission (c) 2014 Amplify.

The visuals of the game give a feeling of being in an "inner space" that mirrors outer space. I took my inspiration for Sim Cell's sound effects from Louis and Bebe Barron's score for Forbidden Planet. I used simple patches in PD to assemble all of the sounds for the game from synthesis. There aren't any recorded samples in the sound design – all of the sound effects are generated from mathematics.

The digital effects used in the sound design were built to emulate the effects available in vintage studios, such as plate reverb and analogue delay. I built a plate reverb simulation from a modified open-source patch from the RjDj project, and constructed an analogue delay by applying a low-pass filter to a delayed signal.
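
That delay design is simple to sketch outside PD. Here is a minimal Python version, assuming a one-pole low-pass in the feedback loop; the coefficients are illustrative:

```python
import numpy as np

# Minimal sketch of an analogue-style delay: a feedback delay line with
# a one-pole low-pass filter in the loop, so each repeat gets darker,
# as tape and bucket-brigade delays do.

def analog_delay(x, sr, delay_ms=350, feedback=0.5, damp=0.3):
    d = int(sr * delay_ms / 1000)
    y = x.astype(float).copy()
    lp = 0.0
    for n in range(d, len(y)):
        # Low-pass the delayed signal before feeding it back.
        lp += damp * (y[n - d] - lp)
        y[n] += feedback * lp
    return y

sr = 44100
click = np.zeros(sr * 2)
click[0] = 1.0                    # impulse input to hear the echo tail
echoes = analog_delay(click, sr)
```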

https://www.youtube.com/watch?v=3m86ftny1uY&feature=youtu.be

In keeping with the vintage theme, I used elements of sound design from the early days of video games as well. Early arcade games such as Space Invaders used custom audio synthesis microchips for each of their sounds. In order to emulate this, I gave every sound its own patch when doing the synthesis for Sim Cell. I learned to appreciate this ethic of design when playing Combat on the Atari 2600 while growing up. The Atari 2600 could only output two sounds at once and thus had a very limited palette of tones. All of the source code for the sounds and music of Sim Cell adds up to less than 1 megabyte, which shows how powerful efficient mathematical coding can be.

Vector image on the left, raster graphics on the right. Photo used with permission by the author.

Another cool thing about generating the sounds from mathematics is that users can “zoom in” on sounds in the same way that one can zoom in to a vector drawing. In vector drawings the lines are smooth when you zoom in, as opposed to rasterized pictures (such as a JPEG) which reveal a blurry set of pixels upon zooming. When code can change sounds in real-time, it makes them come alive and lends a sense of flexibility to the composition.

For a human feeling I often filter the sounds using formant frequencies, which simulate the resonant qualities of vowel sounds and thus offer a vocal quality to the sample. For alarm sounds in Sim Cell I used minor second intervals, whose dissonance informed players that they needed to adjust their gameplay in order to navigate treacherous areas. Motion was captured through a whoosh sound built from filtered and modulated noise. Sounds pitched up seemed to be travelling towards the player; pitched down, they exploited a sense of Doppler shift and seemed to be travelling away. Together, these techniques produced a sense of immersion for the player while simultaneously building a realistic soundscape.
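
Formant filtering of this kind can be approximated with a few band-pass filters. The sketch below uses textbook formant estimates for an "ah" vowel, not the exact frequencies used in Sim Cell:

```python
import numpy as np
from scipy.signal import butter, lfilter

# Sum a few band-pass filters centered on vowel formant frequencies to
# lend noise a vocal quality.

def bandpass(x, center_hz, sr, q=8.0):
    bw = center_hz / q
    lo = (center_hz - bw / 2) / (sr / 2)
    hi = (center_hz + bw / 2) / (sr / 2)
    b, a = butter(2, [lo, hi], btype="band")
    return lfilter(b, a, x)

def vowel_ah(x, sr):
    formants = [(700, 1.0), (1220, 0.5), (2600, 0.25)]   # (Hz, gain)
    return sum(g * bandpass(x, f, sr) for f, g in formants)

sr = 44100
noise = np.random.uniform(-1, 1, sr)    # one second of white noise
vocal = vowel_ah(noise, sr)
```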

Our decision to use libPD for this synthesis proved problematic: its processing requirements were too high. To remedy this, we decided to convert our audio into samples. Consider a photograph of a sculpture: our sounds, like the sculpture in the photograph, could now only be viewed from one angle. This meant that the music now had only a single version and that sound effects would repeat as well. A partial fix was exporting each sound at a set of five different intensities from 0 to 1.0. Like photographing the sculpture from several angles, this meant the game could play a sample at intensity levels of 20%, 40%, 60%, 80% and 100%. Although a truly random sense of variation was lost, this method still conveyed the intensity of impacts (and other similar events) generated by the physics engine of the game.
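
The five-layer export is easy to mimic. Below is a hypothetical Python sketch of the approach, with a stand-in renderer in place of the game's PD patches:

```python
import numpy as np

# Render a sound at fixed intensity levels, then at runtime play the
# layer nearest the physics engine's continuous intensity value.

LEVELS = [0.2, 0.4, 0.6, 0.8, 1.0]

def impact(intensity, sr=44100, dur=0.3):
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    decay = np.exp(-t * (20 - 10 * intensity))   # harder hits ring longer
    return intensity * np.random.uniform(-1, 1, t.size) * decay

layers = {lvl: impact(lvl) for lvl in LEVELS}

def play(intensity):
    nearest = min(LEVELS, key=lambda lvl: abs(lvl - intensity))
    return layers[nearest]

hit = play(0.53)    # falls back to the pre-rendered 0.6 layer
```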

An image of the inexpensive open-source computer, the Raspberry Pi. Photo used with permission by the author.

PD is a great way to learn and play with digital audio since you can change a patch while it is running, just as you might with a real analogue synthesizer. PD can do plenty of other neat things too, like running on the Raspberry Pi, so you could code your own effects pedals and build your own synths for around $50. For video games, you can use libPD to integrate PD patches into your Android or iOS apps as well. I hope this essay has offered some insight into my process when using PD. I've included some links below for those interested in learning more.

Additional resources:

-


Featured image: Concept art for Sim Cell. Used with permission (c) 2014 Amplify.

REWIND! . . . If you liked this post, you may also dig:

Sounding Out! Podcast #10: Interview with Theremin Master Eric Ross- Aaron Trammell

Papa Sangre and the Construction of Immersion in Audio Games- Enongo Lumumba-Kasongo

Playing with Bits, Pieces, and Lightning Bolts: An Interview with Sound Artist Andrea Parkins

The Blue Notes of Sampling

This is article 2.0 in Sounding Out!'s April Forum on "Sound and Technology." Every Monday this month, you'll be hearing new insights on this age-old pairing from the likes of Sounding Out! veterano Aaron Trammell along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks will share their thinking about everything from Auto-Tune to techie manifestos. So, turn on your quantizing for Sounding Out! and enjoy today's supersonic in-depth look at sampling from SO! Regular Writer Primus Luta. –JS, Editor-in-Chief

My favorite sample-based composition? No question about it: "Stroke of Death" by Ghostface, produced by The RZA.


Supposedly the story goes that RZA was playing records in the studio when he put on the Harlem Underground Band's album. It is a go-to album in a sample-based composer's collection because of its open drum breaks. One such break appears in the band's cover of Bill Withers's "Ain't No Sunshine," notably used by A Tribe Called Quest on "Everything is Fair."


RZA, a known break beat head, listened as the song approached the open drums, when the unthinkable happened: a scratch in his copy of the record. Suddenly, right before the open drums dropped, the vinyl created its own loop, one that caught RZA’s ear. He recorded it right there and started crafting the beat.

This sample is the only source material for the track. RZA throws a slight turntable backspin in for emphasis, adding to the jarring feel that drives the beat. That backspin provides a pitch shift for the horn that dominates the sample, changing it from a single sound into a three-note melody. RZA also captures some of the open drums so that the track can breathe a bit before coming back to the jarring loop. As accidental as the discovery may have been, it is a very precisely arranged track, tailor-made for the attacking vocals of Ghostface, Solomon Childs, and the RZA himself.

"How to: fix a scratched record" by Flickr user Fred Scharmen, CC BY-NC-SA 2.0

“How to: fix a scratched record” by Flickr user Fred Scharmen, CC BY-NC-SA 2.0

"Stroke of Death" exemplifies how transformative sample-based composition can be. Other than by knowing the source material, the sample is hard to identify. You cannot figure out that the original composition is Withers's "Ain't No Sunshine" from the one note RZA sampled, especially considering the note has been manipulated into a three-note melody that appears nowhere in either rendition of the composition. It is sample-based, yes, but also completely original.

Classifying a composition like this as a "happy accident" downplays just how important the ear is in sample-based composition, particularly on the transformative end of the spectrum. J Dilla once said that finding the mistakes in a record excited him, and that it was often those mistakes he would try to capture in his production style. Working with vinyl as a source went a long way in that regard, as each piece of vinyl had the potential to have its own physical characteristics that affected what one heard. It's hard to imagine "Stroke of Death" being inspired by a digital source. While digital files can have their own glitches, one that would create an internal loop on playback would be rare.

*****

"Unpacking of the Proteus 2000" by Flickr user Anders Dahnielson, CC BY-NC-SA 2.0

“Unpacking of the Proteus 2000″ by Flickr user Anders Dahnielson, CC BY-NC-SA 2.0

There has been a change in the sound of sampling over the past few decades. It is subtle but still perceptible; one can hear it even without knowing exactly what one is hearing. It is akin to the difference between hearing a blues man play and hearing a music student play the blues. They are technically both still the blues, but the music student misses all of the blue notes.

The 'blue notes' of the blues were those aspects of the music that could not be transcribed yet were directly related to how the song conveyed emotion. It might be the fact that the instrument was not fully in tune, or the way certain notes were bent but others were not; it could even be the way a finger hit the body of a guitar right after the string was strummed. It goes back farther than the blues and ultimately is not exclusive to the African American tradition from which the phrase derives; most folk music traditions around the world have parallels. "The Rite of Spring" can be understood as Stravinsky 'sampling' the blue notes of Transylvanian folk music. In many regards sample-based composing is a modern folk tradition, so it should come as no surprise that it has its own blue notes.

The sample-based composition work of today is still sampling, but much of it lacks the blue notes that helped define the golden era of the art. I attribute this discrepancy to the evolution of technology over the last two decades. Many of the things that could be understood as the blue notes of sampling were merely ways around the limits of the technology, in the same way that the blue notes of most folk music happened when the emotion went beyond the standards of the instrument (or, alternately, the limits imposed upon it by the literal analysis of western theory). By looking at how the technology has evolved, we can see how the blue notes of sampling are being lost as key limitations are overcome by "advances."

First, let's consider the E-Mu SP-1200, which is still thought of as the definitive-sounding sampler for hip-hop styled sample-based compositions, particularly for drums. The primary reason for this is its low-resolution sampling and conversion rates. The SP-1200's Analog to Digital (A/D) and Digital to Analog (D/A) converters were 12-bit at a sample rate of 26.04 kHz (CD quality is 16-bit at 44.1 kHz). No matter the quality of the source material, there would be a loss in quality once it was sampled into and played out of the SP-1200. This loss proved desirable for drum sounds, particularly when combined with the analog filtering available in the unit, giving them a grit that reflected the environments from which the music was emerging.

On top of this, individual samples could only be 2.5 seconds long, with a total available sample time of only 10 seconds. While the sample and conversion rates directly affected the sound of the samples, the time limits drove the way that composers sampled. Instead of finding loops, beatmakers focused on individual sounds or phrases, using the sequencer to arrange those elements into loops. There were workarounds for the sample time constraints; for example, playing a 33-rpm record at 45 rpm to sample it, then pitching it back down post-sampling, was a quick way to increase the sample time. Doing this would further reduce the effective sample rate, but again, that could be sonically appealing.
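
The arithmetic of the 45-rpm trick is worth seeing. A 33 1/3-rpm record played at 45 rpm runs 1.35 times faster, so a quick calculation (sketched in Python) shows what the workaround buys and what it costs:

```python
import math

# Worked numbers for the 33-to-45 rpm workaround described above.
speedup = 45 / (100 / 3)             # 33 1/3 rpm played at 45 rpm = 1.35x
captured = 2.5 * speedup             # music fit into a 2.5 s SP-1200 sample
effective_rate = 26040 / speedup     # playback rate after pitching back down
shift = 12 * math.log2(speedup)      # turntable pitch shift in semitones

print(f"{captured:.2f} s of music in 2.5 s")      # 3.38 s
print(f"effective rate ~{effective_rate:.0f} Hz") # ~19289 Hz
print(f"pitch shift: {shift:.1f} semitones")      # 5.2
```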

An underappreciated limitation of the SP-1200, however, was the visual feedback for editing samples. The display of the SP-1200 was completely alphanumeric; there were no visual representations of the sample other than numbers controlled by the faders on the interface. The composer had to find the start and end points of the sample solely by ear. Two producers might edit the exact same kick drum with start times 100 samples (a few milliseconds) apart. Had one of the composers recorded the kick at 45 rpm and pitched it down, the actual resolution for the start and end times would be different again. When played in a sequence, those 100 samples affect the groove, contributing directly to the feel of the composition. The timing of the sample's playback is combined with the quantization setting and the swing percentage of the sequencer, and the difference of 100 samples in the edit further offsets the trigger times, which, even with quantization turned off, fit into the machine's 24 parts-per-quarter-note grid.
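
A quick calculation shows why 100 samples matters against that grid. The tempo below is chosen only for illustration:

```python
# 100 samples at the SP-1200's 26.04 kHz rate, against its 24 PPQ grid.

rate = 26040
offset_ms = 100 / rate * 1000        # ~3.8 ms difference in start point

bpm = 95
tick_ms = 60_000 / (bpm * 24)        # one 24-PPQ tick, ~26.3 ms

print(f"edit offset:    {offset_ms:.1f} ms")
print(f"sequencer tick: {tick_ms:.1f} ms")
# The offset is a sizable fraction of a tick: small, but audible as feel.
```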

Akai's MPC-60 was the next evolution in sampling technology. It raised the sample and conversion rates to 16-bit and 40 kHz. Sample time increased to a total of 13.1 seconds (upgradable to 26.2). Sequencing resolution increased to 96 parts per quarter note. Gone was the crunch of the SP-1200, but the precision went up both in sampling and in sequencing. The main trademark of the MPC series was the swing and groove that came to Akai from Roger Linn's Linn Drum. For years shrouded in mystery and considered a myth by many, the timing difference was real: Linn says it was achieved by delaying certain notes by a number of samples. Combined with the greater PPQ resolution in unquantized mode, the MPC, even with more precision than the SP-1200, lent itself to capturing user variation.
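
Linn's delay-the-notes description translates directly into code. Here is a minimal sketch of swing as note delay; the percentages and PPQ follow common MPC conventions, but the implementation is illustrative, not Linn's:

```python
# Swing as note delay: push every off-beat 16th note later in time.

def apply_swing(events, swing_pct=58, ppq=96):
    """events: list of (tick, note). Delays every off-beat 16th note.

    At 96 PPQ a 16th note is 24 ticks; 50% swing is straight time, and
    higher percentages push the off-beat 16th later.
    """
    sixteenth = ppq // 4
    delay = int(sixteenth * (swing_pct - 50) / 50)
    return [(tick + delay, note) if (tick // sixteenth) % 2 else (tick, note)
            for tick, note in events]

hats = [(t * 24, "hat") for t in range(8)]        # straight 16th-note hats
print(apply_swing(hats, swing_pct=62))
```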

Despite these technological advances, sample time and editing limitations, combined with the fact that the higher-resolution sampling lacked the character of the SP-1200, kept the MPC from being the complete package sample composers desired. For this reason it was often paired with Akai's S-950 rack sampler. The S-950 was a 12-bit sampler with a variable sample rate between 7.5 kHz and 40 kHz. The stock memory could hold 750 KB of samples, which at the lowest sample rate garnered upwards of 60 seconds of sampling and at the higher sample rates around 10 seconds; this was expandable to up to 2.5 MB of sample memory.

The editing capabilities made the S-950 such a powerful sampler. Being able to create internal sample loops, key-map samples to a keyboard, modify envelopes for playback, and take advantage of early time stretching (which would come of age with the S-1000)—not to mention the filter on the unit—helped take sampling deeper into sound design territory. This again increased the variable possibilities from composer to composer, even when working from the same source material. Combined with the MPC for sequencing, the S-950 gave composers the ultimate sample-based composition workstation.

Today, there are virtually no limitations on sampling. Perhaps the subtlest advances have been in the precision with which samples can be edited, and with that precision the biggest shift has been a reduced reliance on the ear. Recycle was an early software program that started to replace the ears in the editing process. With Recycle an audio file could be loaded, and the software would chop the sample into component parts by searching for the transients. Using Recycle on the same source, two different composers were far more likely to arrive at a kick sample truncated identically.
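
Transient-based slicing of this kind is now a few lines of code. The sketch below uses librosa's onset detector as a stand-in for Recycle's proprietary transient search; "break.wav" is a hypothetical file, and the default thresholds would need tuning:

```python
import librosa

# Recycle-style slicing via onset detection: find transients, then cut
# the file into its component hits at those points.

y, sr = librosa.load("break.wav", sr=None, mono=True)
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")

bounds = list(onsets) + [len(y)]                  # close the final slice
slices = [y[a:b] for a, b in zip(bounds, bounds[1:])]
print(f"{len(slices)} slices from {len(y) / sr:.1f} s of audio")
```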

Another factor has been the waveform visualization of samples for editing. Some earlier hardware samplers featured a waveform display for truncating samples, but the graphic resolution within the computer made this even more precise. By looking at the waveform you are able to edit samples at the point where the waveform crosses the midpoint between the negative and positive sides of the signal, known as the zero-crossing. The advantage of editing at zero-crossings is that it prevents the errors that happen when playback jumps from one side of the zero point to another point within a single sample, breaks in the waveform that can make the edit point audible. The end result of zero-crossing-edited samples is a seamlessness that makes samples sound like they naturally fit into a sequence without audible errors. In many audio applications, snap-to settings mean that edits automatically snap to zero-crossings—no ears needed to get a "perfect" sounding sample.
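
The zero-crossing snap itself is a small computation. Here is a minimal Python sketch of the idea:

```python
import numpy as np

# Snap-to-zero-crossing: move a chosen edit point to the nearest sample
# where the waveform changes sign, so the cut avoids an audible click.

def nearest_zero_crossing(audio, index):
    crossings = np.where(np.diff(np.sign(audio)) != 0)[0]
    if crossings.size == 0:
        return index
    return int(crossings[np.argmin(np.abs(crossings - index))])

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)            # test signal
cut = nearest_zero_crossing(tone, 10_000)     # snap an arbitrary edit point
print(cut, f"{tone[cut]:.5f}")                # amplitude is near zero
```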

It is interesting to note that with digital files it's not about recording the sample but editing it out of the original file, which is much different from having to put the turntable on 45 rpm to fit a sample into 2.5 seconds. Another differentiation between digital sample sources is the quality of the files: original digital files (CD quality or higher), lossless compression (FLAC), lossy compression (MP3, AAC) or, least desirable though most accessible, transcodes (lossy compression recompressed, such as YouTube rips). These all degrade quality differently than the SP-1200 did. Where the SP-1200's downsampling often led to fatter sounds, these forms of compression trend toward thinner-sounding samples.

Some producers have created their own sound using thinned-out samples with the same level of sonic intent as The RZA's on "Stroke of Death." The lo-fi aesthetic is often an attempt to capture a sound that parallels the golden era of hardware-based sampling. Some software samplers, for example, have an SP-1200 emulation button that reduces the bit depth to 12 bits. Most software sequencers have groove templates that emulate grooves like the MPC's timing.
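
A crude version of that emulation button is easy to sketch: quantize to 12 bits and resample toward 26.04 kHz. The real unit's converters and analog filters do more than this, so treat it as a gesture at the grit, not a faithful model:

```python
import numpy as np

# "SP-1200 button" sketch: bit reduction plus naive downsampling.

def sp1200ify(audio, sr, target_sr=26040, bits=12):
    # Nearest-sample downsampling; the aliasing is part of the point.
    n_out = int(len(audio) * target_sr / sr)
    idx = (np.arange(n_out) * sr / target_sr).astype(int)
    down = audio[idx]
    # Quantize to 12-bit amplitude levels.
    levels = 2 ** (bits - 1)
    return np.round(down * levels) / levels

sr = 44100
t = np.arange(sr) / sr
drum = np.sin(2 * np.pi * 100 * t) * np.exp(-t * 8)   # stand-in drum hit
crunchy = sp1200ify(drum, sr)
```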

Perhaps the most important part of the sample-based composition process, however, cannot be emulated: the ear. The ear in this case is not so much about the identification of the hot sample; decades of history should tell us that the hot sample is truly a dime a dozen. It takes a keen composer's ear to hear how to manipulate those sounds into something uniquely theirs. Being able to listen for that and then create that unique sound—utilizing whatever tools—that is the blue note of sampling. And there is simply no way to automate that process.

Featured image: “Blue note inverted” by Flickr user Tim, CC BY-ND 2.0

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. He maintains his own AvantUrb site. Luta was a regular presenter for Rhythm Incursions. As an artist, he is a founding member of the collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012. Recently Concrète released the second part of their Ultimate Break Beats series for Shocklee.

REWIND!…If you liked this post, you may also dig:

“SO! Reads: Jonathan Sterne’s MP3: The Meaning of a Format”-Aaron Trammell

“Remixing Girl Talk: The Poetics and Aesthetics of Mashups”-Aram Sinnreich

“Sound as Art as Anti-environment”-Steven Hammer

Tomahawk Chopped and Screwed: The Indeterminacy of Listening

I'm happy to introduce the final post in Guest Editor Justin Burton's three-part series for SO!, "The Wobble Continuum." I'll leave Justin to recap the series and reflect on it a little in his article below, but first I want to express our appreciation to him for his thoughtful curation of this exciting series, the first in the new Thursday stream at Sounding Out!. Thanks for getting the ball rolling!

Next month be sure to watch this space for a preview of sound at the upcoming Society for Cinema & Media Studies meeting in Seattle, and a new four part series on radio in Latin America by Guest Editor Tom McEnaney.

– Neil Verma, Special Editor for ASA/SCMS

I’m standing at a bus stop outside the Convention Center in downtown Indianapolis, whistling. The tune, “Braves,” is robust, a deep, oscillating comeuppance of the “Tomahawk Chop” melody familiar from my youth (the Braves were always on TBS). There’s a wobbly synthesizer down in the bass, a hi hat cymbal line pecking away at the Tomahawk Chop. This whistled remix of mine really sticks it to the original tune and the sports teams who capitalize on racist appropriations of indigenous cultures. All in all, it’s a sublime bit of musicality I’m bestowing upon the cold Indianapolis streets.

Until I become aware of the other person waiting for the bus. As I glance over at him, I can now hear my tune for what it is. The synthesizer and hi hat are all in my head, the bass nowhere to be heard. This isn't the mix I intended: A Tribe Called Red's attempt at defanging the Tomahawk Chop, at re-appropriating stereotypical sounds and spitting them back out on their own terms. Nope, this is just a guy on the street whistling those very stereotypes: it's the Tomahawk Chop. I suddenly don't feel like whistling anymore.

*****

As we conclude our Wobble Continuum guest series here at Sounding Out!, I want to think about the connective tissues binding together the previous posts from Mike D’Errico and Christina Giacona, joining A Tribe Called Red and the colonialist culture into which they release their music, and linking me to the guy at the bus stop who is not privy to the virtuosic sonic accompaniment in my head. In each case, I’ll pay attention to sound as material conjoining producers and consumers, and I’ll play with Karen Barad’s notion of performativity to hear the way these elements interact [Jason Stanyek and Ben Piekut also explore exciting possibilities from Barad in “Deadness” (TDR 54:1, 2010)].

"Sound Waves: Loud Volume" by Flickr user Tess Watson, CC BY 2.0

“Sound Waves: Loud Volume” by Flickr user Tess Watson, CC BY 2.0

Drawing from physicist Niels Bohr, Barad begins with the fact that matter is fundamentally indeterminate. This is formally laid out in the Heisenberg Uncertainty Principle, which notes that the more precisely we can determine (for instance) the position of a particle, the less we can say with certainty about its momentum (and vice versa). Barad points out that “‘position’ only has meaning when a rigid apparatus with fixed parts is used (eg, a ruler is nailed to a fixed table in the laboratory, thereby establishing a fixed frame of reference for establishing ‘position’)” (2003, 814).

This kind of indeterminacy is characteristic of sound, which vibrates along a cultural continuum, and which, in sliding back and forth along that continuum, allows us to tune into some information even as other information distorts or disappears. This can feel very limiting, but it can also be exhilarating, as what we are measuring are a variety of possibilities prepared to unfold before us as matter and sound become increasingly unpredictable and slippery. We can observe this continuum in the tissue connecting the previous posts in this series. In the first, Mike D’Errico tunes into the problematic hypermasculinity of brostep, pinpointing the ways music software interfaces can rehash tropes of control and dominance (Robin James has responded with productive expansions of these ideas), dropping some areas of music production right back into systems of patriarchy. In the second post, Giacona, in highlighting the anti-racist and anti-colonial work of A Tribe Called Red, speaks of the “impotence” visited upon the Tomahawk Chop by ATCR’s sonic interventions. Here, hypermasculinity is employed as a means of colonial reprimand for a hypermasculine, patriarchal culture. In sliding from one post to the other, we’ve tuned into different frequencies along a continuum, hearing the possibilities (both terrorizing and ameliorative) of patriarchal production methods unfolding before us.

"Skrillex at Forum, Copenhagen" by Flickr user Jacob Wang, CC-BY-SA-2.0

“Skrillex at Forum, Copenhagen” by Flickr user Jacob Wang, CC-BY-SA-2.0

Barad locates the performative upshot of this kind of indeterminacy in the fact that the scientist, the particle, and the ruler nailed to the table in the lab are all three bound together as part of a single phenomenon—they become one entity. To observe something is to become entangled with it, so that all of the unfolding possibilities of that particle become entwined with the unfolding possibilities of the scientist and the ruler, too. The entire phenomenon becomes indeterminate as the boundaries separating each entity bleed together, and these entities only detangle by performing—by acting out—boundaries among themselves.

Returning to Giacona’s discussion of “Braves,” it’s possible to mix and remix our components to perform them—to act them out—in more than one way. Giacona arranges it so that ATCR is the scientist, observing a particle that is a colonizing culture drunk on its own stereotypes. Here, “Braves” is the ruler that allows listeners to measure something about that culture. Is that something location? Direction? Even if we can hear clearly what Giacona leads us to—an uncovering of stereotypes so pernicious as to pervade, unchallenged, everyday activities—there’s an optimism available in indeterminacy. As we slide along the continuum to the present position of this colonialist culture, the certainty with which we can say anything about its trajectory lessens, opening the very possibility that motivates ATCR, namely the hope of something better.

"ATCR 1" by Flickr user MadameChoCho, CC BY-NC-SA 2.0

“ATCR 1″ by Flickr user MadameChoCho, CC BY-NC-SA 2.0

But listening and sounding are tricky things. As I think about my whistling of “Braves” in Indianapolis, it occurs to me that Giacona’s account is easily subverted. It could be that ATCR is the particle, members of a group of many different nations reduced to a single voice in a colonial present populated by scientists (continuing the analogy) who believe in Manifest Destiny and Johnny Depp. Now the ruler is not “Braves” but the Tomahawk Chop melody ATCR attempts to critique, and the group is measured by the same lousy standard colonizers always use. In this scenario, people attend ATCR shows in redface and headdresses, and I stand on the street whistling a war chant. We came to the right place, but we heard—or in my case, re-sounded—the wrong thing.

"Knob Twiddler" by Flickr user Jes, CC BY-SA 2.0

“Knob Twiddler” by Flickr user Jes, CC BY-SA 2.0

Jennifer Stoever-Ackerman's "listening ear" is instructive here. Cultures as steeped in indigenous stereotypes as the United States and Canada have conditioned their ears to hear ATCR through whiteness, through colonialism, making it difficult to perceive the subversive nature of "Braves." ATCR plays a dangerous game in which they are vulnerable to being heard as a war chant rather than a critique; their material must be handled with care. There's a simple enough lesson for me and my whistling: some sounds should stay in my head. But Barad offers something more fundamental to what we do as listeners. By recognizing that (1) there are connective tissues deeply entangling the materiality of our selves, musicians, and music, and (2) listening is a continuum revealing only some knowledge at any given moment, we can begin to imagine and perform the many possibilities that open up to us in the indeterminacy of listening.

If everything sounds certain to us when we listen, we're doing it wrong. Instead, for music to function productively, we as listeners must find our places in a wobbly continuum whose tissues connect us to the varied appendages of music and culture. Once so entangled, we'll ride those synth waves down to the low end while hi hats tap out the infinite possibilities opening in front of us.

Featured image: “a tribe called red_hall4_mozpics (2)_GF” by Flickr user Trans Musicales, CC BY-NC-ND 2.0

Justin Burton is a musicologist specializing in US popular music and culture. He is especially interested in hip hop and the ways it is sounded across regions, locating itself in specific places even as it expresses transnational and diasporic ideas. He is Assistant Professor of Music at Rider University, where he teaches in the school's Popular Music and Culture program. He helped design the degree, which launched in the fall of 2012, and he is proud to be able to work in such a unique program. His book-length project – Posthuman Pop – blends his interests in hip hop and technology by engaging contemporary popular music through the lens of posthuman theory. Recent and forthcoming publications include an exploration of the Mozart myth as it is presented in Peter Shaffer's Amadeus and then parodied in an episode of The Simpsons (Journal of Popular Culture 46:3, 2013), an examination of the earliest iPod silhouette commercials and the notions of freedom they are meant to convey (Oxford Handbook of Mobile Music Studies), and a long comparative review of Kanye and Jay Z's Watch the Throne and the Roots' Undun (Journal for the Society of American Music). He is also co-editing with Ali Colleen Neff a special issue of the Journal of Popular Music Studies titled "Sounding Global Southernness." He currently serves on the executive committee of the International Association for the Study of Popular Music-US Branch and is working on an oral history project of the organization. From June 2011 through May 2013, he served as Editor of the IASPM-US website, expanding the site's offerings with the cutting-edge work of popular music scholars from around the world. You can contact him at justindburton [at] gmail [dot] com.

REWIND!…If you liked this post, you may also dig:

Musical Encounters and Acts of Audiencing: Listening Cultures in the American Antebellum-Daniel Cavicchi

Musical Objects, Variability and Live Electronic Performance-Primus Luta

"Further Experiments in Agent-based Musical Composition"-Andreas Duus Pape
