Tag Archive | Creativity

Sounding Out! Podcast #31: Game Audio Notes III: The Nature of Sound in Vessel

This post continues our summer Sound and Pleasure series, as the third and final podcast in a three-part series by Leonard J. Paul. What is the connection between sound and enjoyment, and how are pleasing sounds designed? Pleasure is, after all, what brings y’all back to Sounding Out! weekly, is it not?

Part of the goal of this series of podcasts has been to reveal the interesting and invisible labor practices involved in sound design. In this final entry Leonard J. Paul breaks down his process of designing living sounds for the game Vessel. How does one design empathetic or aggressive sounds? If you need to catch up, read Leonard’s last entry, where he breaks down the vintage sounds of Retro City Rampage. Also, be sure to check out last week’s edition, where Leonard breaks down his process of designing sound for Sim Cell. But first, listen to this! -AT, Multimedia Editor

CLICK HERE TO DOWNLOAD: Game Audio Notes III: The Nature of Sound in Vessel

SUBSCRIBE TO THE SERIES VIA ITUNES

ADD OUR PODCASTS TO YOUR STITCHER FAVORITES PLAYLIST


Game Audio Notes III: The Nature of Sound in Vessel

Strange Loop Games’ Vessel is set in an alternate world history where a servant class of liquid automatons (called fluros) has gone out of control. The player explores the world and solves puzzles in an effort to restore order. While working on Vessel, I personally recorded all of the sounds so that I could have full control over the soundscape. I recorded all of the game’s samples with a Zoom H4n portable recorder. This emphasis on real sounds was intended to deepen the player’s immersion in the game.

This realistic soundscape was supplemented with a variety of techniques that made sounds respond dynamically to changes in the physics engine. Water and other fluids in the game were difficult to model with both the physics engine and the audio engine (FMOD Designer). Because fluids are fundamentally connected to the game’s physics engine, they take on a variety of dynamic forms as players interact with them in different ways. To address this, Kieran Lord, the audio coder, and I considered factors like the amount of liquid involved in a collision, the hardness of the surface it was colliding with, the type of liquid in motion, whether the player is experiencing an extreme form of that sound because the liquid is colliding with their head, and, of course, how fast the liquid is travelling.
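To make the idea concrete, here is a minimal Python sketch of that kind of physics-to-audio mapping. The function name, parameters and curves are all my own illustrative choices, not Vessel's actual code:

```python
import math

def fluid_collision_params(volume, surface_hardness, speed, hits_player_head=False):
    """Map physics-engine collision data to playback parameters (hypothetical).

    volume: amount of liquid involved in the collision (litres)
    surface_hardness: 0.0 (soft) .. 1.0 (hard)
    speed: impact speed in m/s
    """
    # Louder for more liquid and faster impacts; a log curve avoids huge jumps.
    gain = min(1.0, 0.2 * math.log1p(volume * speed))
    # Harder surfaces favour a brighter "splat" layer over a dull "gloop" layer.
    layer = "splat" if surface_hardness > 0.5 else "gloop"
    # A collision with the player's head gets an exaggerated, close-up variant.
    if hits_player_head:
        gain = min(1.0, gain * 1.5)
        layer += "_close"
    return gain, layer
```

A real implementation would feed values like these into FMOD event parameters, but the principle of turning a handful of physics quantities into gain and layer choices is the same.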

Although there was a musical score, I designed the effects to be played without music. Each element of the game, for instance a lava fluro’s (one of the game’s rebellious automatons) footsteps, required layers of sound. The footsteps were composed of water sizzling on a hot pan, a gloopy slap of oatmeal and a wet rag hitting the ground. Finding the correct emotional balance to support the game’s story was fundamental to my work as a sound designer. The game’s sound effects were constantly competing with the adaptive music (which is also contingent on player action) that plays throughout the game, so it was important to give them an informative quality. The sound effects inform you about the environment while the music sets the emotional underscore of the gameplay and helps guide you through the puzzles.


The lava fluro footsteps in FMOD Designer. Used with permission (c) 2014 Strange Loop Games

Defining the character of the fluros was difficult because I wanted players to have empathy for them. This was important to me because there is often no way to avoid destroying them when solving the game’s puzzles. While recording sounds in the back of an antique shop, I came across a vintage Dick Tracy gun that made a fantastic clanking sound along with its siren. Since the gun allowed me to control how quickly the siren rose and fell, it was a great way to produce vocalizations for the fluros. I simply recorded the gun’s siren sound, chopped the recording into smaller pieces, and then played back different segments randomly. The metal clanking gave a mechanical feel and the siren’s tone gave a vocal quality to the resulting sound that was perfect for the fluros. I could make the fluros sound excited by choosing a higher pitch range from the sample grains, informing the player when they approached their goal.
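The chop-and-shuffle technique described here can be sketched in a few lines of Python. This is an illustration of the general idea, not the actual Vessel implementation; in particular, treating the later grains as the higher-pitched pool is an assumption made purely for the example:

```python
import random

def chop(samples, grain_len):
    """Split a recording into fixed-length grains (last partial grain dropped)."""
    return [samples[i:i + grain_len]
            for i in range(0, len(samples) - grain_len + 1, grain_len)]

def fluro_voice(grains, n, excited=False, rng=random):
    """Pick n grains at random; an 'excited' fluro draws only from the
    assumed higher-pitched half of the recording (the later grains here)."""
    pool = grains[len(grains) // 2:] if excited else grains
    return [rng.choice(pool) for _ in range(n)]
```

Playing the chosen grains back-to-back gives a vocalization that never repeats exactly, which is the effect the randomized siren segments were after.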

I wanted a fluid-based scream to announce a fluro’s death. I tried screaming underwater, screaming into a glass of water, and a few other things, but nothing worked. Eventually, while recording a rubber ear syringe, I found that squeezing it quickly produced a real shriek as it spit out the last of the water. Not only did this sound really cut through the din of the clanking gears in the mix, but it also bonded a watery yell with the sense of being crushed and running out of breath.


Vessel’s Lava boss with audio debug output. Used with permission (c) 2014 Strange Loop Games

For the final boss, I tried many combinations of glurpy sounds to signify its lava form. Eventually I recorded a nail in a board being dragged across a large rusty metal sheet. Though the result was quite excruciating to listen to, I pitched the recording down and combined it with a pitched-down, granulated recording of myself growling into a cup of water. This sound perfectly captured the emotion I wanted to feel when encountering a final boss. Although it can take a long time to arrive at the “obvious” sound, simplicity is often the key.

Anticipation is fundamental to a player’s sense of immersion. It carves out a larger space for tension to build: a small crescendo of creaking, for instance, can develop a tension that builds to a sudden and large impact. A whoosh before a punch lands adds extra weight to the force of the punch. These cues are often naturally present in real-world sounds, such as a rush of air sweeping in before a door slams. A small pause might be included purely for suspense, intensifying the effect of the door slamming. Dreading the impact is half of the emotion of a large hit.


Recording inside of a clock tower with my H4n recorder for Vessel. Used with permission by the author.

Recording all of the sounds for Vessel was a large undertaking, but since I viewed each recording as a performance, I was able to make the feeling of the world very cohesive. Each sound was designed to immerse the player in the soundscape, but also to allow players enough time to solve puzzles without becoming annoyed with the audio. All sounds have a life of their own and a resonance of memory and time that stays with them during each playthrough of a game. In Retro City Rampage I left a sonic space for the player to wax nostalgic. In Sim Cell, I worked to breathe life into a set of sterile and synthesized sounds. Each recorded sound in Vessel is alive by comparison, telling its own story of time, place and recording.

The common theme of my audio work on Retro City Rampage, Sim Cell and Vessel is that I enjoy putting constraints on myself to inspire my creativity. I focus on keeping what works and removing non-essential elements. Exploring the limits of constraints often provokes interesting and unpredictable results. I like “sculpting” sounds and will often proceed from a rough sketch, polishing and reducing elements until I like what I hear. Typically I remove layers that don’t add an emotive aspect to the sound design. In games, many sounds can play at once, so clarity and focus are necessary to prevent sounds from getting lost in a sonic goo.


Cherry blossoms for new beginnings. Used with permission by the author.

In this post I have shown how play and experimentation are fundamental to my creative process. For aspiring sound artists, spending time with Pure Data, FMOD Studio or Wwise and a personal recorder is a great way to improve their skills with game audio. This series of articles has aimed to reveal the tacit decisions behind the production of game audio that get obscured by the fun of the creative process. Plus, I hope they offer a bit of inspiration to those creating their own sounds in the future.

Additional Resources:

Leonard J. Paul attained his Honours degree in Computer Science at Simon Fraser University in BC, Canada with an Extended Minor in Music concentrating in Electroacoustics. He began his work in video games on the Sega Genesis and Super Nintendo Entertainment System and has a twenty year history in composing, sound design and coding for games. He has worked on over twenty major game titles totalling over 6.4 million units sold since 1994, including award-winning AAA titles such as EA’s NBA Jam 2010, NHL 11, Need for Speed: Hot Pursuit 2 and NBA Live ’95, as well as the indie award-winning title Retro City Rampage.

He is the co-founder of the School of Video Game Audio and has taught game audio students from over thirty different countries online since 2012. His new media works have been exhibited in cities including Surrey, Banff, Victoria, São Paulo, Zürich and San Jose. As a documentary film composer, he had the good fortune of scoring the original music for the multi-award-winning documentary The Corporation, which remains the highest-grossing Canadian documentary in history to date. He has performed live electronic music in cities such as Osaka, Berlin, San Francisco, Brooklyn and Amsterdam under the name Freaky DNA.

He is an internationally renowned speaker on the topic of video game audio and has been invited to speak in Vancouver, Lyon, Berlin, Bogotá, London, Banff, San Francisco, San Jose, Porto, Angoulême and other locations around the world.

His writings and presentations are available at http://VideoGameAudio.com

Featured image: Courtesy of Vblank Entertainment (c)2014 – Artwork by Maxime Trépanier.

REWIND! . . . If you liked this post, you may also dig:

Sounding Out! Podcast #30: Game Audio Notes I: Growing Sounds for Sim Cell- Leonard J. Paul

Sounding Out! Podcast #31: Hand Made Music in Retro City Rampage– Leonard J. Paul

Papa Sangre and the Construction of Immersion in Audio Games- Enongo Lumumba-Kasongo 

Sounding Out! Podcast #29: Game Audio Notes I: Growing Sounds for Sim Cell

A pair of firsts! This is both the lead post in our summer Sound and Pleasure series and the first podcast in a three-part series by Leonard J. Paul. What is the connection between sound and enjoyment, and how are pleasing sounds designed? Pleasure is, after all, what brings y’all back to Sounding Out! weekly, is it not?

In today’s installment Leonard peels back the curtain of game audio design and reveals his creative process. For anyone curious as to what creative decisions lead to the bloops, bleeps, and ambient soundscapes of video games, this is essential listening. Stay tuned for next Monday’s installment on the process of designing sound for Retro City Rampage, and next month’s episode which focuses on the game Vessel. Today, Leonard begins by picking apart his design process at a cellular level. Literally! -AT, Multimedia Editor

CLICK HERE TO DOWNLOAD: Game Audio Notes I: Growing Sounds for Sim Cell

SUBSCRIBE TO THE SERIES VIA ITUNES

ADD OUR PODCASTS TO YOUR STITCHER FAVORITES PLAYLIST

Game Audio Notes I: Growing Sounds for Sim Cell

Sim Cell is an educational game released in Spring 2014 by Strange Loop Games. Published by Amplify, a branch of News Corp, it teaches students how the human cell works. Players take control of a small vessel that is shrunk down to the size of a cell, solving the tasks set by the game while learning about the cell along the way. This essay unpacks the design decisions behind the simulation of a variety of natural phenomena (motion, impact, and voice) in Sim Cell.

For the design of this game I decided to focus on the elements of life itself and attempted to “grow” the music and sound design from synthetic sounds. I used the visual scripting language Pure Data (PD) to program both the sounds and the music. The music is generated from a set of rules and patterns that make each playback of a song slightly different. Each of the sound effects is crafted from a small program based on the design of analogue modular synthesizers. Basic synthetic audio elements such as filtered noise, sawtooth waves and sine waves were all used in the game’s sound design.
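As a rough illustration of this kind of rule-based generative music, here is a sketch in Python rather than PD, with a scale and rules invented purely for the example:

```python
import random

SCALE = [60, 62, 64, 67, 69]  # C major pentatonic, as MIDI note numbers

def generate_phrase(length=8, rng=random):
    """Rule-based phrase: mostly stepwise motion with occasional leaps,
    so each playback is slightly different but stays in character."""
    phrase = [rng.choice(SCALE)]
    for _ in range(length - 1):
        i = SCALE.index(phrase[-1])
        step = rng.choice([-1, -1, 1, 1, 2])  # weighted to favour small steps
        phrase.append(SCALE[max(0, min(len(SCALE) - 1, i + step))])
    return phrase
```

The point is that the composer writes the rules, not the notes: every run produces a new phrase, yet every phrase obeys the same constraints.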


A screenshot of Sim Cell’s microscopic space-like world. Used with permission (c) 2014 Amplify.

The visuals of the game give a feeling of being in an “inner space” that mirrors outer space. I took my inspiration for Sim Cell‘s sound effects from Louis and Bebe Barron’s score to Forbidden Planet. I used simple patches in PD to assemble all of the sounds for the game from synthesis. There aren’t any recorded samples in the sound design – all of the sound effects are generated from mathematics.

The digital effects used in the sound design were built to emulate the effects available in vintage studios, such as plate reverb and analogue delay. I found a plate reverb simulation in a modified open-source patch from the RjDj project, and constructed an analogue-style delay by applying a low-pass filter to a delayed signal.
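The delay construction is simple enough to sketch. Below is a hypothetical Python version of the same idea: a delay line whose feedback path passes through a one-pole low-pass filter, so each repeat comes back a little darker, as it does on tape and bucket-brigade delays:

```python
def analogue_delay(x, delay, feedback=0.5, damp=0.5):
    """Feedback delay with a one-pole low-pass in the loop.

    x: input samples; delay: delay length in samples;
    damp: low-pass coefficient (1.0 = no filtering, smaller = darker repeats).
    """
    buf = [0.0] * delay       # circular delay buffer
    lp = 0.0                  # low-pass filter state
    out = []
    for n, s in enumerate(x):
        d = buf[n % delay]            # read the signal from `delay` samples ago
        lp += damp * (d - lp)         # one-pole low-pass on the delayed signal
        y = s + feedback * lp         # mix input with filtered feedback
        buf[n % delay] = y            # write output back into the delay line
        out.append(y)
    return out
```

Feeding an impulse through it shows the echoes decaying by the feedback amount on each pass.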

https://www.youtube.com/watch?v=3m86ftny1uY&feature=youtu.be

In keeping with the vintage theme, I used elements of sound design from the early days of video games as well. Early arcade games such as Space Invaders used custom audio synthesis microchips for each of their sounds. In order to emulate this, I gave every sound its own patch when doing the synthesis for Sim Cell. I learned to appreciate this ethic of design when playing Combat on the Atari 2600 while growing up. The Atari 2600 could only output two sounds at once and thus had a very limited palette of tones. All of the source code for the sounds and music of Sim Cell is less than 1 megabyte, which shows how powerful the efficient coding of mathematics can be.


Vector image on the left, raster graphics on the right. Photo used with permission by the author.

Another cool thing about generating the sounds from mathematics is that users can “zoom in” on sounds in the same way that one can zoom in to a vector drawing. In vector drawings the lines are smooth when you zoom in, as opposed to rasterized pictures (such as a JPEG) which reveal a blurry set of pixels upon zooming. When code can change sounds in real-time, it makes them come alive and lends a sense of flexibility to the composition.

For a human feeling I often filter sounds using formant frequencies, which simulate the resonant qualities of vowel sounds and thus lend a vocal quality to the sample. For alarm sounds in Sim Cell I used minor-second intervals. This lent a sense of dissonance which informed players that they needed to adjust their gameplay in order to navigate treacherous areas. Motion was captured through a whoosh sound that used filtered and modulated noise. Some sounds were pitched up to make them seem to be travelling towards the player; likewise, pitching sounds down exploited a sense of Doppler shift, giving the feel that a sound was travelling away from the player. Together, these techniques produced a sense of immersion for the player while simultaneously building a realistic soundscape.
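Two of these techniques are easy to illustrate numerically. A minor second is one equal-tempered semitone, a frequency ratio of 2^(1/12) ≈ 1.06, and the same ratio can serve as a crude stand-in for a Doppler-style pitch shift. A hypothetical Python sketch (not the game's actual PD patches):

```python
import math

SR = 44100                    # sample rate in Hz
SEMITONE = 2 ** (1 / 12)      # equal-tempered semitone (minor second) ratio

def alarm(freq, seconds=0.5):
    """Two sine tones a minor second apart; the beating between them
    gives the dissonant, 'adjust your course' quality described above."""
    n = int(SR * seconds)
    return [0.5 * math.sin(2 * math.pi * freq * t / SR)
            + 0.5 * math.sin(2 * math.pi * freq * SEMITONE * t / SR)
            for t in range(n)]

def doppler_pitch(base_freq, approaching):
    """Crude Doppler stand-in: raise pitch for approaching sources,
    lower it for receding ones."""
    return base_freq * (SEMITONE if approaching else 1 / SEMITONE)
```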

Our decision to use libPD for this synthesis proved problematic: its processing requirements were too high. To remedy this, we decided to convert our audio into samples. Consider a photograph of a sculpture. Our sounds, like the sculpture in the photograph, could now only be viewed from one direction. This meant that the music now had only a single version and that sound effects would repeat as well. A small fix was exporting each sound at a set of five different intensities from 0 to 1.0. Like taking a photograph from several angles, this meant that the game could play a sample at intensity levels of 20%, 40%, 60%, 80% and 100%. Although a truly random sense of variation was lost, this method still conveyed the intensity of impacts (and other similar events) generated by the physics engine of the game.
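Selecting which of the five pre-rendered intensities to play is then a nearest-neighbor lookup. A sketch of how a game might do it (illustrative only, not the Sim Cell code):

```python
def pick_intensity_sample(intensity, levels=(0.2, 0.4, 0.6, 0.8, 1.0)):
    """Choose the pre-rendered sample whose intensity level is closest
    to the physics engine's reported impact intensity (0.0 .. 1.0)."""
    return min(levels, key=lambda lv: abs(lv - intensity))
```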


An image of the inexpensive open-source computer, the Raspberry Pi. Photo used with permission by the author.

PD is a great way to learn and play with digital audio since you can change a patch while it is running, just as you might with a real analogue synthesizer. There’s plenty of other neat stuff that PD can do, like running on the Raspberry Pi, so you could code your own effects pedals and make your own synths using PD for around $50 or so. For video games, you can use libPD to integrate PD patches into your Android or iOS apps as well. I hope this essay has offered some insight into my process when using PD. I’ve included some links below for those interested in learning more.

Additional resources:

Leonard J. Paul attained his Honours degree in Computer Science at Simon Fraser University in BC, Canada with an Extended Minor in Music concentrating in Electroacoustics. He began his work in video games on the Sega Genesis and Super Nintendo Entertainment System and has a twenty year history in composing, sound design and coding for games. He has worked on over twenty major game titles totalling over 6.4 million units sold since 1994, including award-winning AAA titles such as EA’s NBA Jam 2010, NHL 11, Need for Speed: Hot Pursuit 2 and NBA Live ’95, as well as the indie award-winning title Retro City Rampage.

He is the co-founder of the School of Video Game Audio and has taught game audio students from over thirty different countries online since 2012. His new media works have been exhibited in cities including Surrey, Banff, Victoria, São Paulo, Zürich and San Jose. As a documentary film composer, he had the good fortune of scoring the original music for the multi-award-winning documentary The Corporation, which remains the highest-grossing Canadian documentary in history to date. He has performed live electronic music in cities such as Osaka, Berlin, San Francisco, Brooklyn and Amsterdam under the name Freaky DNA.

He is an internationally renowned speaker on the topic of video game audio and has been invited to speak in Vancouver, Lyon, Berlin, Bogotá, London, Banff, San Francisco, San Jose, Porto, Angoulême and other locations around the world.

His writings and presentations are available at http://VideoGameAudio.com

Featured image: Concept art for Sim Cell. Used with permission (c) 2014 Amplify.

REWIND! . . . If you liked this post, you may also dig:

Sounding Out! Podcast #10: Interview with Theremin Master Eric Ross- Aaron Trammell

Papa Sangre and the Construction of Immersion in Audio Games- Enongo Lumumba-Kasongo

Playing with Bits, Pieces, and Lightning Bolts: An Interview with Sound Artist Andrea Parkins

The Blue Notes of Sampling

This is article 2.0 in Sounding Out!‘s April Forum on “Sound and Technology.” Every Monday this month, you’ll be hearing new insights on this age-old pairing from the likes of Sounding Out! veterano Aaron Trammell along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks will share their thinking about everything from Auto-Tune to techie manifestos. So, turn on your quantizing for Sounding Out! and enjoy today’s supersonic in-depth look at sampling from SO! Regular Writer Primus Luta. –JS, Editor-in-Chief

My favorite sample-based composition? No question about it: “Stroke of Death” by Ghostface, produced by The RZA.


As the story supposedly goes, RZA was playing records in the studio when he put on the Harlem Underground Band’s album. It is a go-to album in a sample-based composer’s collection because of its open drum breaks. One such break appears in the cover of Bill Withers’s “Ain’t No Sunshine,” notably used by A Tribe Called Quest on “Everything is Fair.”


RZA, a known break beat head, listened as the song approached the open drums, when the unthinkable happened: a scratch in his copy of the record. Suddenly, right before the open drums dropped, the vinyl created its own loop, one that caught RZA’s ear. He recorded it right there and started crafting the beat.

This sample is the only source material for the track. RZA throws a slight turntable backspin in for emphasis, adding to the jarring feel that drives the beat. That backspin provides a pitch shift for the horn that dominates the sample, changing it from a single sound into a three-note melody. RZA also captures some of the open drums so that the track can breathe a bit before coming back to the jarring loop. As accidental as the discovery may have been, it is a very precisely arranged track, tailor-made for the attacking vocals of Ghostface, Solomon Childs, and the RZA himself.

"How to: fix a scratched record" by Flickr user Fred Scharmen, CC BY-NC-SA 2.0

“How to: fix a scratched record” by Flickr user Fred Scharmen, CC BY-NC-SA 2.0

“Stroke of Death” exemplifies how transformative sample-based composition can be. Unless you know the source material, the sample is hard to identify. You cannot figure out that the original composition is Withers’s “Ain’t No Sunshine” from the one note RZA sampled, especially considering the note has been manipulated into a three-note melody that appears nowhere in either rendition of the composition. It is sample-based, yes, but also completely original.

Classifying a composition like this as a ‘happy accident’ downplays just how important the ear is in sample-based composition, particularly on the transformative end of the spectrum. J Dilla once said finding the mistakes in a record excited him and that it was often those mistakes he would try to capture in his production style. Working with vinyl as a source went a long way in that regard, as each piece of vinyl had the potential to have its own physical characteristics that affected what one heard. It’s hard to imagine “Stroke of Death” being inspired from a digital source. While digital files can have their own glitches, one that would create an internal loop on playback would be rare.

*****

"Unpacking of the Proteus 2000" by Flickr user Anders Dahnielson, CC BY-NC-SA 2.0

“Unpacking of the Proteus 2000” by Flickr user Anders Dahnielson, CC BY-NC-SA 2.0

There has been a change in the sound of sampling over the past few decades. It is subtle but still perceptible; one can hear it even if a person does not know what it is they are hearing. It is akin to the difference between hearing a blues man play and hearing a music student play the blues. They technically are both still the blues, but the music student misses all of the blue notes.

The ‘blue notes’ of the blues were those aspects of the music that could not be transcribed yet were directly related to how the song conveyed emotion. It might be the fact that the instrument was not fully in tune, or the way certain notes were bent but others were not, it could even be the way a finger hit the body of a guitar right after the string was strummed. It goes back farther than the blues and ultimately is not exclusive to the African American tradition from which the phrase derives; most folk music traditions around the world have parallels. “The Rite of Spring” can be understood as Stravinsky ‘sampling’ the blue notes of Transylvanian folk music. In many regards sample-based composing is a modern folk tradition, so it should come as no surprise that it has its own blue notes.

The sample-based composition work of today is still sampling, but much of it lacks the blue notes that helped define the golden era of the art. I attribute this discrepancy to the evolution of technology over the last two decades. Many of the things that could be understood as the blue notes of sampling were merely ways around the limits of the technology. In the same way, the blue notes of most folk music happened when the emotion went beyond the standards of the instrument (or alternately the limits imposed upon it by the literal analysis of western theory). By looking at how the technology has evolved we can see how blue notes of sampling are being lost as key limitations are being overcome by “advances.”

First, let’s consider the E-Mu SP-1200, which is still thought to be the most definitive-sounding sampler for hip-hop styled sample-based compositions, particularly for drums. The primary reason for this is its low-resolution sampling and conversion rates. For the SP-1200, the Analog to Digital (A/D) and Digital to Analog (D/A) converters were 12-bit at a sample rate of 26.04 kHz (CD quality is 16-bit at 44.1 kHz). No matter the quality of the source material, there would be a loss in quality once it was sampled into and played out of the SP-1200. This loss proved desirable for drum sounds, particularly when combined with the analog filtering available in the unit, giving them a grit that reflected the environments from which the music was emerging.
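The character of that loss can be approximated in code. The following Python sketch quantizes samples to a reduced bit depth and naively lowers the sample rate by holding values; the integer hold factor is a simplification for illustration, since the SP-1200's actual converters and 26.04 kHz rate behave differently:

```python
def sp1200ish(x, bits=12, factor=2):
    """Crude lo-fi: quantize to `bits` of resolution and hold every
    `factor`-th sample (naive downsampling with no anti-alias filter,
    which is part of the grit). Input samples assumed in -1.0 .. 1.0."""
    steps = 2 ** (bits - 1)
    out = []
    held = 0.0
    for n, s in enumerate(x):
        if n % factor == 0:                  # sample-rate reduction by holding
            held = round(s * steps) / steps  # bit-depth reduction
        out.append(held)
    return out
```

With extreme settings the staircase effect is obvious; at 12 bits it is subtle, which is exactly the gentle grit the paragraph describes.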

On top of this, individual samples could only be 2.5 seconds long, with a total available sample time of only 10 seconds. While the sample and conversion rates directly affected the sound of the samples, the time limits drove the way that composers sampled. Instead of finding loops, beatmakers focused on individual sounds or phrases, using the sequencer to arrange those elements into loops. There were workarounds for the sample time constraints; for example, playing a 33-rpm record at 45 rpm to sample it, then pitching it back down after sampling, was a quick way to increase the sample time. Doing this would further reduce the sample rate, but again, that could be sonically appealing.

An underappreciated limitation of the SP-1200, however, was the visual feedback for editing samples. The display of the SP-1200 was completely alphanumeric; there were no visual representations of the sample, only numbers controlled by the faders on the interface. The composer had to find the start and end points of the sample solely by ear. Two producers might edit the exact same kick drum with start times 100 samples (a fraction of a millisecond) apart. Had one of the composers recorded the kick at 45 rpm and pitched it down, the actual resolution of the start and end times would be different. When played in a sequence, these 100 samples affect the groove, contributing directly to the feel of the composition. The timing of when the sample starts playback combines with the quantization setting and the swing percentage of the sequencer. That difference of 100 samples in the edit further offsets the trigger times, which, even with quantization turned off, fit into the machine’s 24 parts-per-quarter grid.

Akai’s MPC-60 was the next evolution in sampling technology. It raised the sample and conversion rates to 16-bit and 40 kHz. Sample time increased to a total of 13.1 seconds (upgradable to 26.2). Sequencing resolution increased to 96 parts per quarter. Gone was the crunch of the SP-1200, but the precision went up both in sampling and in sequencing. The main trademark of the MPC series was the swing and groove that came to Akai from Roger Linn’s Linn Drum. For years it was shrouded in mystery and considered a myth by many, but in truth there was a timing difference that Linn says was achieved by delaying certain notes by a number of samples. Combined with the greater PPQ resolution in unquantized mode, even with more precision than the SP-1200, the MPC lent itself to capturing user variation.
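Linn-style swing can be sketched as a simple operation on sequencer ticks: every off-beat 16th note is delayed by a fixed amount. This Python toy illustrates the principle only, not Linn's or Akai's actual implementation, and assumes the input notes are already quantized to the 16th-note grid:

```python
def apply_swing(ticks, ppq=96, swing=0.6):
    """Delay every off-beat 16th note by a fraction of a 16th.

    ticks: note-on times in sequencer ticks; ppq: parts per quarter note;
    swing: 0.5 is straight, 0.6 pushes off-beats 20% of a 16th late.
    """
    sixteenth = ppq // 4
    out = []
    for t in ticks:
        pos = t % (2 * sixteenth)        # position within an 8th-note pair
        if pos == sixteenth:             # this is an off-beat 16th
            t += int((swing - 0.5) * 2 * sixteenth)
        out.append(t)
    return out
```

At 96 PPQ a 60% swing shifts each off-beat by 4 ticks, a few milliseconds at typical tempos, which is all it takes to change the feel of a groove.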

Despite these technological advances, sample time and editing limitations, combined with the fact that the higher resolution sampling lacked the character of the SP-1200, kept the MPC from being the complete package sample composers desired. For this reason it was often paired with Akai’s S-950 rack sampler. The S-950 was a 12-bit sampler but had a variable sample rate between 7.5 kHz and 40 kHz. The stock memory could hold 750 KB of samples which at the lowest sample rate could garner upwards of 60 seconds of sampling and at the higher sample rates around 10 seconds. This was expandable to up to 2.5 MB of sample memory.


Its editing capabilities are what made the S-950 such a powerful sampler. Being able to create internal sample loops, key-map samples to a keyboard, modify envelopes for playback, and take advantage of early time stretching (which would come of age with the S-1000)—not to mention the filter on the unit—helped take sampling deeper into sound design territory. This again increased the variable possibilities from composer to composer, even when working from the same source material. Often paired with the MPC for sequencing, the S-950 gave composers the ultimate sample-based composition workstation.

Today, there are virtually no limitations on sampling. Perhaps the subtlest advances have been in the precision with which samples can be edited. With these advances, the biggest shift has been the reduced reliance on ears. Recycle was an early software program that started to replace the ears in the editing process. With Recycle an audio file could be loaded, and the software would chop the sample into component parts by searching for the transients. Utilizing Recycle on the same source, two different composers were far more likely to arrive at a kick sample that was truncated identically.
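The principle behind transient-based slicing can be sketched crudely: scan the audio for sudden jumps in amplitude and cut there. This toy Python version illustrates the idea only; Recycle's real detection is far more sophisticated:

```python
def chop_at_transients(x, threshold=0.5):
    """Split audio at transients, detected (very naively) as a jump in
    absolute amplitude from one sample to the next."""
    slices, start = [], 0
    for n in range(1, len(x)):
        if abs(x[n]) - abs(x[n - 1]) > threshold:  # sudden onset: cut here
            slices.append(x[start:n])
            start = n
    slices.append(x[start:])                        # keep the final slice
    return slices
```

Because the algorithm, not the ear, picks the cut points, two producers running it on the same break get the same slices, which is exactly the loss of individual variation the paragraph describes.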

Another factor has been the waveform visualization of samples for editing. Some earlier hardware samplers featured a waveform display for truncating samples, but the graphic resolution within the computer made this even more precise. By looking at the waveform you are able to edit samples at the point where the waveform crosses the midpoint between the negative and positive sides of the signal, known as the zero-crossing. The advantage of zero-crossing editing is that it prevents the errors that occur when playback jumps from one side of the zero point to another point in a single sample, which can make the edit point audible because of the break in the waveform. The end result of zero-crossing-edited samples is a seamlessness that makes samples sound like they naturally fit into a sequence without audible errors. In many audio applications, snap-to settings mean that edits automatically snap to zero-crossings—no ears needed to get a “perfect” sounding sample.
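Finding the nearest zero-crossing is straightforward to express in code. A minimal Python sketch of the snap-to behavior described above:

```python
def snap_to_zero_crossing(x, pos):
    """Move an edit point to the nearest sample index where the signal
    crosses zero, so truncation does not leave an audible click."""
    candidates = [n for n in range(1, len(x))
                  if (x[n - 1] <= 0.0 <= x[n]) or (x[n] <= 0.0 <= x[n - 1])]
    # Fall back to the original position if the region never crosses zero.
    return min(candidates, key=lambda n: abs(n - pos)) if candidates else pos
```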

It is interesting to note that with digital files it’s not about recording the sample, but editing it out of the original file. It is much different from having to put the turntable on 45 rpm to fit a sample into 2.5 seconds. Another differentiation between digital sample sources is the quality of the files, whether original digital files (CD quality or higher), lossless compression (FLAC), lossy compression (MP3, AAC) or the least desirable though most accessible, transcoded files (lossy compression recompressed, such as YouTube rips). These all result in a different degradation of quality than the SP-1200’s. Where the SP-1200’s downsampling often led to fatter sounds, these forms of compression trend toward thinner-sounding samples.


Some producers have created their own sound using thinned-out samples with the same level of sonic intent as The RZA’s on “Stroke of Death.” The lo-fi aesthetic is often an attempt to capture a sound that parallels the golden era of hardware-based sampling. Some software-based samplers, for example, have an SP-1200 emulation button that reduces the bit depth to 12 bits. Most software sequencers have groove templates that allow them to emulate grooves like the MPC’s timing.

Perhaps the most important part of the sample-based composition process, however, cannot be emulated: the ear. The ear in this case is not so much about the identification of the hot sample. Decades of history should tell us that the hot sample is truly a dime a dozen. It takes a keen composer’s ear to hear how to manipulate those sounds into something uniquely theirs. Being able to listen for that and then create that unique sound—utilizing whatever tools—that is the blue note of sampling. And there is simply no way to automate that process.

Featured image: “Blue note inverted” by Flickr user Tim, CC BY-ND 2.0

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. He maintains his own AvantUrb site. Luta was a regular presenter for Rhythm Incursions. As an artist, he is a founding member of the collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012. Recently, Concrète released the second part of their Ultimate Break Beats series for Shocklee.

REWIND!…If you liked this post, you may also dig:

“SO! Reads: Jonathan Sterne’s MP3: The Meaning of a Format”-Aaron Trammell

“Remixing Girl Talk: The Poetics and Aesthetics of Mashups”-Aram Sinnreich

“Sound as Art as Anti-environment”-Steven Hammer

A Series of Mistakes: Nullsleep and the Art of 8-bit Composition

8-bit rendition of NYC, by Alex Bond.

Three weeks ago I got to meet one of my musical heroes. I attended an 8-bit game design workshop at NYU focused on programming games for developing nations. It was organized as a series of tutorials, each focusing on a different element of the game design process. The tutorial on music design was hosted by 8bitpeoples' Nullsleep, Jeremiah Johnson, one of my two favorite chiptune artists! As he instructed the room on the finer points of using the Famitracker software to script authentic 8-bit music, I was struck by some of the nuance in his process. Creativity is a messy and fluid endeavor where mistakes and successes remain ambiguous until they can be contextualized within a final draft.

When Jeremiah programmed Famitracker, his instrument, I watched as he pushed notes around, made arbitrary decisions, and deliberately turned his attention away from tasks that became too arduous. His demo was still awesome, but I was struck by how unstructured his creative process seemed. Famitracker is a music scripting instrument; the notes are organized and prearranged, yet despite this formal quality there remains a good deal of negotiation between the artist and its interface. I had forever stereotyped music composition as a fairly sterile and surgical art, far from the authentic feedback between an artist and their instrument. I had always imagined live music as the moment of the authentic, and pigeonholed studio compositions as somehow stale. Watching Jeremiah work helped me see that all artists hold a unique relationship to their instrument, no matter how mechanical, electronic, or mundane that instrument might seem. Even static compositions bring with them history, negotiation, and risk. These were liberating ideas; when it came time for me to compose a song in Famitracker, I was able to rip in and rapidly sift ideas from my mind to the canvas.

Eventually, I tried to program in a portamento effect (think: the keyboard intro to The Cars' "My Best Friend's Girl"), and needed some help. Jeremiah came over and started to fiddle with the options, but he was having trouble getting it to work as well. It took about five minutes of trial and error before we figured out how to get the effect just right. These mistakes, bad notes, even misspelled words are all part of the creative process, and they inscribe themselves into the larger work, even if they only remain in spirit. Understanding these hiccups and nuances let me view composition from a new perspective where I could recognize all of the skirmishes and textures which have been made invisible in the final product. Live music is often constructed as a space of possibility, where these odd textures and negotiations are given the opportunity to appear. How is this presumption challenged if studio compositions can be read as a series of mistakes leading to an arbitrary but coherent whole?
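For the curious, a portamento is simply a continuous glide between two pitches. A rough, hypothetical Python sketch of the underlying idea follows; note that Famitracker itself implements the effect quite differently, as stepwise per-frame pitch slides on the NES sound hardware:

```python
import numpy as np

def portamento(f_start, f_end, duration, sample_rate=44100):
    # sine tone whose pitch glides linearly from f_start to f_end
    n = int(duration * sample_rate)
    freqs = np.linspace(f_start, f_end, n)
    # integrate the instantaneous frequency to get a continuous phase
    phase = 2 * np.pi * np.cumsum(freqs) / sample_rate
    return np.sin(phase)

tone = portamento(220.0, 440.0, 0.5)   # half-second slide from A3 up to A4
```

Integrating the frequency curve into phase, rather than plugging the changing frequency straight into a sine, is what keeps the slide free of clicks.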

My big song is called Clever Fishies (click to hear it!); it will be the soundtrack to a game called Math Shark.

Check out Nullsleep’s Her Lazer Light Eyes to hear why I’m so psyched!

AT


