The Blue Notes of Sampling

This is article 2.0 in Sounding Out!'s April Forum on "Sound and Technology." Every Monday this month, you'll be hearing new insights on this age-old pairing from the likes of Sounding Out! veterano Aaron Trammell along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks will share their thinking about everything from Auto-tune to techie manifestos. So, turn on your quantizing for Sounding Out! and enjoy today's supersonic in-depth look at sampling from SO! Regular Writer Primus Luta. –JS, Editor-in-Chief
—
My favorite sample-based composition? No question about it: "Stroke of Death" by Ghostface, produced by The RZA.
As the story goes, RZA was playing records in the studio when he put on the Harlem Underground Band's album. It is a go-to album in any sample-based composer's collection because of its open drum breaks. One such break appears in the band's cover of Bill Withers's "Ain't No Sunshine," notably used by A Tribe Called Quest on "Everything is Fair."
RZA, a known break-beat head, listened as the song approached the open drums. Then the unthinkable happened: a scratch in his copy of the record. Suddenly, right before the open drums dropped, the vinyl created its own loop, one that caught RZA's ear. He recorded it right there and started crafting the beat.
This sample is the only source material for the track. RZA throws a slight turntable backspin in for emphasis, adding to the jarring feel that drives the beat. That backspin provides a pitch shift for the horn that dominates the sample, changing it from a single sound into a three-note melody. RZA also captures some of the open drums so that the track can breathe a bit before coming back to the jarring loop. As accidental as the discovery may have been, it is a very precisely arranged track, tailor-made for the attacking vocals of Ghostface, Solomon Childs, and the RZA himself.

“How to: fix a scratched record” by Flickr user Fred Scharmen, CC BY-NC-SA 2.0
"Stroke of Death" exemplifies how transformative sample-based composition can be. Unless you already know the source material, the sample is hard to identify. You cannot tell that the original composition is Withers's "Ain't No Sunshine" from the one note RZA sampled, especially considering the note has been manipulated into a three-note melody that appears nowhere in either rendition of the composition. It is sample-based, yes, but also completely original.
Classifying a composition like this as a 'happy accident' downplays just how important the ear is in sample-based composition, particularly on the transformative end of the spectrum. J Dilla once said that finding the mistakes in a record excited him, and that it was often those mistakes he would try to capture in his production style. Working with vinyl as a source went a long way in that regard, as each piece of vinyl had the potential to have its own physical characteristics that affected what one heard. It's hard to imagine "Stroke of Death" being inspired by a digital source. While digital files can have their own glitches, one that would create an internal loop on playback would be rare.
*****
There has been a change in the sound of sampling over the past few decades. It is subtle but still perceptible; one can hear it even without knowing exactly what one is hearing. It is akin to the difference between hearing a blues man play and hearing a music student play the blues. Technically they are both still playing the blues, but the music student misses all of the blue notes.
The 'blue notes' of the blues were those aspects of the music that could not be transcribed yet were directly related to how the song conveyed emotion. It might be the fact that the instrument was not fully in tune, or the way certain notes were bent while others were not; it could even be the way a finger hit the body of a guitar right after the string was strummed. The practice goes back farther than the blues and ultimately is not exclusive to the African American tradition from which the phrase derives; most folk music traditions around the world have parallels. "The Rite of Spring" can be understood as Stravinsky 'sampling' the blue notes of Lithuanian folk music. In many regards sample-based composing is a modern folk tradition, so it should come as no surprise that it has its own blue notes.
The sample-based composition work of today is still sampling, but much of it lacks the blue notes that helped define the golden era of the art. I attribute this discrepancy to the evolution of technology over the last two decades. Many of the things that can be understood as the blue notes of sampling were merely ways around the limits of the technology, in the same way that the blue notes of most folk music happened when the emotion went beyond the standards of the instrument (or, alternatively, beyond the limits imposed upon it by the literal analysis of western theory). By looking at how the technology has evolved, we can see how the blue notes of sampling are being lost as key limitations are overcome by "advances."
First, let's consider the E-Mu SP-1200, still widely considered the definitive-sounding sampler for hip-hop styled sample-based compositions, particularly where drums are concerned. The primary reason for this is its low-resolution sampling and conversion rates. On the SP-1200, the Analog-to-Digital (A/D) and Digital-to-Analog (D/A) converters were 12-bit at a sample rate of 26.04 kHz (CD quality is 16-bit at 44.1 kHz). No matter the quality of the source material, there would be a loss in quality once it was sampled into and played out of the SP-1200. This loss proved desirable, particularly for drum sounds, when combined with the analog filtering available in the unit, giving them a grit that reflected the environments from which the music was emerging.
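To make that loss concrete, here is a minimal sketch of the two degradations at work: a lower sample rate and a lower bit depth. It illustrates the general principle only; it is not a model of the SP-1200's actual converters or filters, and the function name and defaults are my own.

```python
import numpy as np

def sp1200_style_crunch(signal, in_rate=44100, out_rate=26040, bits=12):
    """Sketch of sampler-style degradation: naive downsampling followed
    by requantization to a lower bit depth. (No anti-alias filter here,
    and real converters differ; this only illustrates the principle.)"""
    # Nearest-neighbor resample down to the lower rate
    n_out = int(len(signal) * out_rate / in_rate)
    indices = (np.arange(n_out) * in_rate / out_rate).astype(int)
    downsampled = signal[indices]
    # Requantize the -1.0..1.0 range to 2**bits discrete levels
    levels = 2 ** (bits - 1)
    return np.round(downsampled * levels) / levels

# One second of a 440 Hz tone, crunched from CD quality to SP-style grit
t = np.arange(44100) / 44100.0
crunched = sp1200_style_crunch(np.sin(2 * np.pi * 440 * t))
```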
On top of this, individual samples could only be 2.5 seconds long, with a total available sample time of only 10 seconds. While the sample and conversion rates directly affected the sound of the samples, the time limits drove the way that composers sampled. Instead of finding loops, beatmakers focused on individual sounds or phrases, using the sequencer to arrange those elements into loops. There were workarounds for the sample time constraints; for example, playing a 33-rpm record at 45 rpm to sample, then pitching it back down post sampling was a quick way to increase the sample time. Doing this would further reduce the sample rate, but again, that could be sonically appealing.
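The arithmetic behind the 45 rpm trick is worth spelling out; a quick back-of-the-envelope sketch, assuming the nominal turntable speeds:

```python
# Sampling a 33 1/3 rpm record at 45 rpm squeezes more music into the
# SP-1200's 2.5-second window, at the cost of effective resolution once
# the sample is pitched back down.
speedup = 45 / (100 / 3)             # 33 1/3 rpm -> 45 rpm, a 1.35x speedup
music_captured = 2.5 * speedup       # seconds of source material captured
effective_rate = 26040 / speedup     # effective sample rate after re-pitching
print(round(music_captured, 2), round(effective_rate))  # 3.38 19289
```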
An underappreciated limitation of the SP-1200, however, was the visual feedback for editing samples. The display of the SP-1200 was completely alphanumeric; there were no visual representations of the sample other than numbers controlled by the faders on the interface. The composer had to find the start and end points of the sample solely by ear. Two producers might edit the exact same kick drum with start times 100 samples (a few milliseconds) apart. Were one of the composers to have recorded the kick at 45 rpm and pitched it down, the actual resolution for the start and end times would be different. When played in a sequence, those 100 samples affect the groove, contributing directly to the feel of the composition. The timing of when the sample starts playback combines with the quantization setting and the swing percentage of the sequencer; the 100-sample difference in the edit further offsets the trigger times, which, even with quantization turned off, fit into the machine's 24 parts-per-quarter grid.
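Some rough numbers put that in perspective (the tempo here is arbitrary, chosen only for illustration):

```python
# A 100-sample edit difference at the SP-1200's rate is a few
# milliseconds: small, but audible as feel when it offsets triggers
# that otherwise land on the sequencer's 24-ppq grid.
sample_rate = 26040
edit_offset_ms = 100 / sample_rate * 1000     # ~3.8 ms
tick_ms = 60_000 / (93 * 24)                  # one 24-ppq tick at 93 bpm
print(f"{edit_offset_ms:.1f} ms edit offset vs {tick_ms:.1f} ms grid tick")
```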
Akai's MPC-60 was the next evolution in sampling technology. It raised the sample rate to 40 kHz (the converters remained 12-bit, but the signal path was noticeably cleaner). Sample time increased to a total of 13.1 seconds (upgradable to 26.2). Sequencing resolution increased to 96 parts per quarter. Gone was the crunch of the SP-1200, but precision went up, both in sampling and in sequencing. The main trademark of the MPC series was the swing and groove that came to Akai from Roger Linn's Linn Drum. For years it was shrouded in mystery and considered a myth by many, but in truth there was a timing difference, which Linn says was achieved by delaying certain notes by a small number of samples. Combined with the greater PPQ resolution in unquantized mode, the MPC, even with more precision than the SP-1200, lent itself to capturing user variation.
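One common way of describing MPC-style swing, though not necessarily Linn's exact implementation, is that every other sixteenth note is delayed so that the first sixteenth of each pair occupies the swing percentage of the eighth note. A sketch under that assumption:

```python
def swing_delay_samples(bpm, swing_percent, sample_rate=40000):
    """Delay applied to every second sixteenth note, in samples.
    50% is straight time; around 66% approaches a triplet feel."""
    eighth_sec = 60.0 / bpm / 2
    delay_sec = (swing_percent / 100.0 - 0.5) * eighth_sec
    return delay_sec * sample_rate

# At 96 bpm and 58% swing, every other sixteenth lands ~1000 samples late
print(round(swing_delay_samples(96, 58)))  # 1000
```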
Despite these technological advances, sample time and editing limitations, combined with the fact that the higher-resolution sampling lacked the character of the SP-1200, kept the MPC from being the complete package sample composers desired. For this reason it was often paired with Akai's S-950 rack sampler. The S-950 was a 12-bit sampler with a variable sample rate between 7.5 kHz and 40 kHz. The stock memory could hold 750 KB of samples, which at the lowest sample rate garnered upwards of 60 seconds of sampling and at the higher rates around 10 seconds; memory was expandable to 2.5 MB.
The editing capabilities were what made the S-950 such a powerful sampler. Being able to create internal sample loops, key-map samples to a keyboard, modify envelopes for playback, and take advantage of early time stretching (which would come of age with the S-1000), not to mention the filter on the unit, helped take sampling deeper into sound design territory. This again increased the variable possibilities from composer to composer, even when working from the same source material. Often paired with the MPC for sequencing, the combination gave composers the ultimate sample-based composition workstation.
Today, there are virtually no limitations on sampling. Perhaps the subtlest advances have been in the precision with which samples can be edited, and with those advances the biggest shift has been a reduced reliance on the ear. Recycle was an early software program that started to replace the ear in the editing process. Load an audio file into Recycle and the software would chop the sample into component parts by searching for transients. Using Recycle on the same source, two different composers were far more likely to arrive at identically truncated kick samples.
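A minimal sketch of that kind of transient chopping, using a simple energy-jump heuristic rather than whatever detector Recycle actually used; the threshold and hop size are arbitrary:

```python
import numpy as np

def chop_on_transients(signal, threshold=4.0, hop=256):
    """Sketch of Recycle-style chopping: start a new slice wherever the
    short-term energy jumps well above a slow-moving average."""
    n_hops = len(signal) // hop
    energies = np.array([np.sqrt(np.mean(signal[i * hop:(i + 1) * hop] ** 2))
                         for i in range(n_hops)])
    slices, avg = [], float(energies[0]) + 1e-9
    for i, e in enumerate(energies):
        if e > threshold * avg:
            slices.append(i * hop)       # transient detected: slice point
        avg = 0.9 * avg + 0.1 * e        # update the running average
    return slices
```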
Another factor has been waveform visualization for sample editing. Some earlier hardware samplers featured waveform displays for truncating samples, but the graphic resolution of the computer made this far more precise. Looking at the waveform, you can place an edit at a point where the signal crosses from the negative to the positive side, known as a zero-crossing. Cutting at zero-crossings prevents the click that occurs when playback jumps from one side of the zero point to a distant value in a single sample step; such a discontinuity makes the edit point audible. The result is a seamlessness that makes samples sound like they naturally fit into a sequence, without audible errors. In many audio applications, snap-to settings mean edits automatically land on zero-crossings: no ears needed to get a "perfect" sounding sample.
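A sketch of what a snap-to setting does under the hood, assuming the sample lives in a numpy buffer; the function is illustrative, not any particular editor's implementation:

```python
import numpy as np

def snap_to_zero_crossing(signal, edit_point):
    """Move an edit point to the nearest sample where the waveform
    changes sign, so the cut avoids an audible click."""
    crossings = np.where(np.diff(np.sign(signal)) != 0)[0]
    if len(crossings) == 0:
        return edit_point                # no crossing found; leave it alone
    return int(crossings[np.argmin(np.abs(crossings - edit_point))])
```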
It is interesting to note that with digital files it is no longer about recording the sample but about editing it out of the original file. That is much different from having to put the turntable on 45 rpm to fit a sample into 2.5 seconds. Another differentiation between digital sample sources is the quality of the files: original digital files (CD quality or higher), lossless compression (FLAC), lossy compression (MP3, AAC), or, least desirable though most accessible, transcodes (lossy compression recompressed, such as YouTube rips). These all degrade quality differently than the SP-1200 did. Where the SP-1200's downsampling often led to fatter sounds, these forms of compression trend toward thinner-sounding samples.
Some producers have created their own sound using thinned-out samples with the same level of sonic intent as The RZA's on "Stroke of Death." The lo-fi aesthetic is often an attempt to capture a sound that parallels the golden era of hardware-based sampling. Some software samplers, for example, have an SP-1200 emulation button that reduces the bit depth to 12 bits. Most software sequencers have groove templates that let them emulate grooves like the MPC's timing.
Perhaps the most important part of the sample-based composition process, however, cannot be emulated: the ear. The ear in this case is not so much about identifying the hot sample; decades of history should tell us that hot samples are truly a dime a dozen. It takes a keen composer's ear to hear how to manipulate those sounds into something uniquely theirs. Being able to listen for that and then create that unique sound, utilizing whatever tools: that is the blue note of sampling. And there is simply no way to automate that process.
—
Featured image: “Blue note inverted” by Flickr user Tim, CC BY-ND 2.0
—
Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. He maintains his own AvantUrb site. Luta was a regular presenter for Rhythm Incursions. As an artist, he is a founding member of the collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012. Recently Concrète released the second part of their Ultimate Break Beats series for Shocklee.
—
REWIND!…If you liked this post, you may also dig:
“SO! Reads: Jonathan Sterne’s MP3: The Meaning of a Format”-Aaron Trammell
“Remixing Girl Talk: The Poetics and Aesthetics of Mashups”-Aram Sinnreich
“Sound as Art as Anti-environment”-Steven Hammer
Musical Objects, Variability and Live Electronic Performance

This is part two of a three-part series on live electronic music. To review part one, click here.
In the first part of this series, "Toward a Practical Language for Live Electronic Performance," I established the language of variability as an index for objectively measuring the quality of musical performances. From this we were able to rethink traditional musical instruments as musical objects with variables that can be used to evaluate performer proficiency, both universally for the instrument (proficiency on the trumpet) and within genre and style constraints (proficiency as a Shrill C trumpet player). I propose that this language can also describe the performance of electronic music, making it easier to parallel with traditional western forms. Having proven useful for traditional instruments, we'll now see if the language can describe the musical objects of electronic performance.
We'll start with a DJ set. While not necessarily an instrument, a performed DJ set is a live musical object comprising a number of variables. First is the hardware. A vinyl DJ rig consists of at least two turntables, a mixer, and a selection of vinyl. A CDJ rig uses two CD decks and a mixer. Serato and other vinyl-controller software require only one turntable, a mixer, and a laptop. Laptop mixing can be done with or without a controller. One could also do a cassette mix, a reel-to-reel mix, or mix in some other hardware format. What is critical is a means of combining audio from separate sources into one uniform mix. The other variables involved include selection, transitions, and effects.

DJ Lush, FORWARD Winter Session 2012, Image by JHG Photography (c)
Because DJ sets are expected to be filled with pre-recorded sounds, the selection of sounds available is as broad as all of the sounds ever recorded. Specific styles of DJ sets, like an ambient DJ set, limit the selection down to a subset of recorded music. The choice of hardware can limit that even more. An all-vinyl DJ set of ambient music presents more of a challenge, in terms of selection, than a laptop set in the same style, because there are fewer ambient records pressed to vinyl than are available in a digital format.
Connected to selection are transitions, which could be said to define a DJ. Transitions have two component factors: the playlist and the move from one song to the next. The playlist is obviously directly tied to selection; however, even if you select the most popular songs for the style, unless they are put into a logical order, the transitions between them can make the set horrible.
One of the transitional keys to keeping a mix flowing is beat matching. In a turntable DJ set the degree of difficulty for beat matching is high, because the tempos have to be matched manually by adjusting the speed of the two selections on the spinning turntables. When the tempos are synchronized, transitioning from one to the other can be accomplished via a simple crossfade. With digital setups such as the laptop, Serato, and even CDJs, there is commonly a way to automatically match beats between selections, which makes the degree of difficulty in these formats much lower.
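The arithmetic underneath beat matching is simple, which is exactly why it automates so well. A sketch, with arbitrary tempos:

```python
def pitch_adjust_percent(source_bpm, target_bpm):
    """Pitch-fader change needed to match one tempo to another."""
    return (target_bpm / source_bpm - 1) * 100

# To lay a 123 bpm record under a 117 bpm one, slow it by about 4.9%;
# a sync button just computes this ratio continuously.
print(f"{pitch_adjust_percent(123, 117):+.1f}%")  # -4.9%
```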
Effects, another variable, depend on what's available through the hardware. With the turntable DJ set, the mixer is the primary source of effects, and until recent years those were limited to disc manipulation (e.g. scratching), crossfader cuts, and EQ effects. Many of the non-vinyl setups, and even some vinyl setups, now include a variety of digital effects such as delay, reverb, samplers, granular effects, and more.
With these variables so defined, it becomes easier to objectively analyze the expressed variability of a live DJ set. But while the variables themselves are objective, the value placed on them, and even how they are evaluated, is not. The language only provides the common ground for analysis and discussion. So the next time you're at an event and the person next to you says, "this DJ is a hack!" you can say, "well, they've got a pretty diverse selection with rather seamless transitions; maybe you just don't like the music," to which they'll reply, "yeah, I don't think I like this music," which is decent progress in the scheme of things. If we really want to talk about live electronic performance, however, we will need to move beyond the DJ set to show how this variable language can accurately describe the other musical objects that appear at a live electronic performance.

Joe Nice at Reconstrvct in Brooklyn NY on 2-23-2013, Photo by Kyle Rober, Courtesy of Electrogenic
Take, for example, another electronic instrument: the keyboard. The keyboard itself is a challenging instrument to define; in fact I would argue that the keyboard is not actually an instrument but a musical object. It is a component part of a group of instruments commonly referred to as keyboards, but the keyboard itself is not the instrument. What it is, rather, is one of the earliest examples of controllerism.
On a piano, fingers typically press keys on the keyboard, which trigger hammers to hit strings and produce sound. The range of the instrument spans seven octaves from A0 to C8, and it can theoretically sound 88-voice polyphony, though in practice that polyphony is limited to ten fingers. It can play a wide range of dynamics and includes pedals that modify the sustain of pitches. With a pipe organ, the keyboard controls pipes with completely different timbres, ranges, and dynamics; the polyphony increases and the foot pedals can perform radically different functions. The differences from the piano grow even larger once we enter the realm where the term "keyboard," as instrument, is most commonly used: the synthesizer keyboard.
The first glaring difference is that, even with an encyclopedic knowledge of keyboard synthesizers, when you see a performer with one on stage you simply cannot know, by looking, what sounds it will produce. Pressing a key on a synthesizer keyboard can produce an infinite number of sounds, which can change not just from song to song but from second to second and key to key. A performer's left thumb can produce an entirely different sound than their left index finger. Using a keyboard equipped with a sequencer, the performer's fingers may not press any keys at all but can still be active in the performance.

Minimoog Voyager Electric Blue, Image by Flickr User harald walker
When the keyboard synthesizer was first introduced, it was used by traditional piano players in standard band configurations, like a piano or organ, with timbres limited to one per song and the performance limited to fingers pressing keys. Some keyboardists, however, used the instrument more as a bank for sound effects and textures. They may have been playing the same keys, but one wouldn't necessarily expect to hear a I IV V chord progression. Rather than listening for the physical dexterity of the player's fingers, the key to listening to a keyboard in this context was evaluating the sounds produced first, and then how they were played to fit into the surrounding musical context.
Could one of these performers be seen as more competent than the other? Possibly. The first could be among the most amazing keyboard players in the piano-player sense, but insofar as they aren't maximizing the variability potential of the instrument, they fall short as a keyboard synthesizer performer. The second performer, on the other hand, may not even know what a I IV V chord progression is, and would thus be considered incompetent in the piano-player sense, but the ways in which they exploit the variable possibilities show their mastery of the keyboard synthesizer as an instrument.
Well, almost.
While, generally speaking, there isn't a single set of variables that defines the keyboard synthesizer as an instrument, if we think of keyboard synthesizers as a group of musical instruments, each individual type comes with its own set of fixed variables which can be defined. Many of these variables are consistent across the various keyboards, but not always in a standard arrangement.
As such, while the umbrella term "keyboard" persists, it is perhaps more practical to define the instruments and their players individually. There are Juno 60 players, ARP Odyssey players, MiniMoog players, Crumar Spirit players and more. Naturally an individual player can be well versed in more than one of these instruments and thus be thought of as a keyboardist, but their ability as a keyboardist would have to be properly contextualized per instrument in their keyboard repertoire. Using the MiniMoog as an example, we can show how its variability as an instrument defines it and plays into how a performance on the instrument can be perceived.

Minimoog, Image by Flickr User Francesco Romito
The first variable worth considering when evaluating the MiniMoog is that it is a monophonic instrument. This is radically different from the piano: despite one's ability to use ten fingers (or another extremity), only one note will sound at a time. The keyboard section of the instrument is only three and a half octaves long, though the range itself is variable. On the left-hand side there is a pitch wheel and a modulation wheel. The pitch wheel can vary the pitch of the currently playing note, while the modulation wheel can alter the actual sound design.
As a monophonic instrument, one does not need both hands on the keyboard, as only one note will ever sound at a time. This frees a hand to modify the sound being triggered by the keyboard, most obviously via the pitch and modulation wheels, but all of the exposed sound-design controls are also available. This means that in performance, every aspect of the sound design and the triggering can be variable. Of course these changes are limited to what one can do with two hands, but the MiniMoog also features a function common in analog synths: a Control Voltage (CV) input, which allows an external source to control aspects of the sound design and/or the triggering of the instrument.
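One way to make such a "set of fixed variables" concrete is to write it down as data. A sketch of the MiniMoog's variables as just described; the field names are mine and the list is far from exhaustive:

```python
from dataclasses import dataclass, field

@dataclass
class InstrumentVariables:
    """A (partial) variable set defining one instrument."""
    name: str
    polyphony: int                                   # simultaneous notes
    keyboard_octaves: float                          # physical key range
    performance_controls: list = field(default_factory=list)
    external_control: list = field(default_factory=list)

minimoog = InstrumentVariables(
    name="MiniMoog",
    polyphony=1,                                     # monophonic
    keyboard_octaves=3.5,
    performance_controls=["pitch wheel", "modulation wheel",
                          "exposed sound-design knobs"],
    external_control=["control voltage (CV) input"],
)
```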
Despite this obvious difference from the piano, playing the MiniMoog does not have to be any less of a physical act. A player using their right hand to play the keyboard while modulating the sound with their left, plays with a different level of dexterity than the piano player. The right and left hand are performing different motions; while the right hand uses fingers to press keys as the arm moves it up and down the keyboard, the left hand can be adjusting the pitch or modulation wheels with a pushing action or alternately adjusting the knobs with a turning action. Like patting your head and rubbing your belly, controlling a well-timed filter sweep while simultaneously playing a melody is nowhere near as easy as it sounds.
At the same time, playing the MiniMoog doesn't have to be very physical at all. A sequencer could be responsible for all of the note triggering, leaving both hands free to modulate the sound. Similarly, the performer may not touch the MiniMoog at all, instead playing the sequencer itself as an intermediary between them and the sound of the instrument. In this case the MiniMoog is not being used as a keyboard, yet it retains its instrument status, as all of the sounds are generated from it, with the sequencer as the controller. Despite not having any physical contact with the instrument itself, the performer can still play it.
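A toy sketch of that sequencer-as-controller relationship, with a plain sine oscillator and a decay envelope standing in for the MiniMoog's voice; here the "performance" lives entirely in the programmed pitch list:

```python
import numpy as np

def render_sequence(pitches_hz, step_sec=0.25, sample_rate=44100):
    """A sequencer 'playing' a monophonic voice: each pitch in the list
    stands in for a control voltage, gated for one step."""
    t = np.arange(int(step_sec * sample_rate)) / sample_rate
    envelope = np.exp(-6 * t / step_sec)    # simple per-step decay
    return np.concatenate([np.sin(2 * np.pi * hz * t) * envelope
                           for hz in pitches_hz])

audio = render_sequence([110.0, 110.0, 220.0, 164.8])  # the program is the performance
```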

Minimoog in Live DJ Performance, Image by Flickr Users Huba and Silica
Taking it one step further: if a performer were only to touch a sequencer at the start of the performance to press play, and never touch the instrument, could they still be said to be playing the MiniMoog live? There is little doubt that the MiniMoog is indeed still performing, because it does not have the mechanism to play by itself; it requires agency to elicit a sonic response. In this example that agency comes from the sequencer, but that does not eliminate the performer. The sequencer has to be programmed to provide the instrument with the proper control voltages, and the instrument has to be set up sonically, with a designed sound receptive to the sequencer's control. If the performer is not physically manipulating either device, however, they are not performing live; the machines are.
From this we can establish the first dichotomy of electronic performance: the layers of variability in an electronic performance can be isolated into two specific categories, physical variability and sonic variability. While these two aspects are also present in traditional instrument performance, they are generally thought to not be mutually exclusive without additional devices. The vibrato of an acoustic guitar is only accomplished by physically modulating the strings to produce the effect. With an electronic instrument, however, vibrato can be performed by an LFO modulating the pitch. That LFO can be controlled physically, but there does not have to be a physical motion (such as a knob turn) associated with it for it to be a live modulation or performance. The benefit of it running without physical aid is that it frees the body to control other things, increasing the variability of the performance.
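A minimal sketch of that kind of machine-performed vibrato, with a numpy sine standing in for the synthesizer voice:

```python
import numpy as np

def vibrato_tone(freq=220.0, lfo_hz=5.0, depth_hz=4.0,
                 seconds=2.0, sample_rate=44100):
    """An LFO modulating an oscillator's pitch with no physical gesture.
    (Route the same LFO to amplitude instead and you get tremolo.)"""
    t = np.arange(int(seconds * sample_rate)) / sample_rate
    inst_freq = freq + depth_hz * np.sin(2 * np.pi * lfo_hz * t)
    # Integrate instantaneous frequency for a click-free phase
    phase = 2 * np.pi * np.cumsum(inst_freq) / sample_rate
    return np.sin(phase)

tone = vibrato_tone()
```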
In a situation where all aspects of the performance are controlled by electronic functions, the agency shifts from the artist performing live to the artist establishing the parameters by which the machines perform live. Is the artist who calls this a live performance a hack? Absolutely not, but the context of the performance must be understood for it to be evaluated. Just as it is unfair to evaluate the monophonic MiniMoog performer by the criteria of the polyphonic pianist, it is unfair to evaluate a machine performance based on physical criteria.

Daniel Carter on horn during Overcast Radio’s set at EPICENTER: 02, Photo courtesy of Raymond Angelo (c)
In the evaluation of a machine performance, just as with a physical one, variability still plays an important role. At the most basic level the machine has to actually be performing, and this is best measured by the potential variability of the sound. This gets tricky with digital instruments since, barring outside influences, it is completely possible to repeat the exact same performance in the digital domain, with no variation between iterations. But even such cases, with a digital sequencer controlling a digital instrument and no physical interaction, are still machine performances; they just exhibit very little variability. The performance aspect of the machine only disappears when the possibility for variability is completely removed, at which point the machine is no longer a performance instrument but a playback device, as is the case with a CD player playing a backing track. A CD player that is not being manipulated physically or by external control is not a performance instrument, as the sound contained within it can only be heard as one fixed, recorded performance, not a live one. It is only when these fixed performances are manipulated, whether physically (i.e., a DJ set) or by other means, that they go from fixed performances to potentially live ones.

Mala and Hatcha in Detroit, Image by Tom Selekta
From all of this we arrive at four basic distinctions for live electronic performances:
• The electro/mechanical manipulation of fixed sonic performances
• The physical manipulation of electronic instruments
• The mechanized manipulation of electronic instruments
• A hybrid of physical and mechanized manipulation of electronic instruments
These help set up the context for evaluating electronic performances: before we can determine the quality of a performance, we must first be able to distinguish what type of performance we are observing. So far we've only dealt with a monophonic instrument, but even with its limitations we can see how the potential variability is quite high. As we get into the laptop as a performance instrument, that variability increases exponentially.
This is part two of a three-part series. In the next part we will begin to examine the laptop as a performance instrument, using this language to show the breadth of variability available in electronic performance, and perhaps show that, where that variability continues to be explored, there is indeed merit to the potential of live electronic music as an extension of jazz.
—
Native Frequencies at the Trocadero 2013, Featured Image Courtesy of Raymond Angelo (C)
—
Primus Luta is a husband and father of three. He is a writer, technologist and an artist exploring the intersection of technology and art, and their philosophical implications. He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions Podcast series with his show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.
—
REWIND! . . .If you liked this post, you may also dig:
Experiments in Agent-based Sonic Composition–Andreas Pape
Evoking the Object: Physicality in the Digital Age of Music–-Primus Luta
Sound as Art as Anti-environment–Steven Hammer