
Me & My Rhythm Box

Paula Sheppard in Liquid Sky

I’m fortunate to have quite a few friends with eclectic musical tastes, who continually expose me to some of the best, albeit often obscure, sources for inspiration. They arrive as random selections sent with a simple “you’d appreciate this” note attached. Good friends that they are, they rarely miss the mark. Most intriguing is when a cluster of things from different people carries a similar theme, converging into a need on my part for some sort of musical action.

The Inspiration

A few years back I received a huge dump of gigabytes of audio and video. Within it was concert footage and performances this friend and I had been discussing; I consumed those quickly in an effort to keep that conversation going. Tucked amidst that dump, however, was a copy of the movie Liquid Sky. I asked the friend about it because the description of the plot–“heroin-pushing aliens invade 80’s New York”–led me to believe it wasn’t really my thing (not a big fan of needles). Although my friend insisted I’d enjoy it, it took me several months, if not a whole year, before I finally pressed play.

Even though Liquid Sky was not my favorite movie by any measure, it was immediately apparent to my ears why my friend insisted I check it out. The film’s score was performed completely on a Fairlight CMI, capturing the synthesized undercurrent of the early 80’s New York music scene, more popularly seen in the cult classic Downtown 81, starring Jean-Michel Basquiat. While the performances in that movie are perhaps closer to my tastes, none of them compare to one scene from Liquid Sky that I fell in love with, instantly:

The song grabbed me so much, I quickly churned out a cover version.

Primus Luta “Me & My Rhythm Box (V1)”

 

While it felt good to make, there remained something less than satisfying about it. The cover had captured my sound, but at a moment of transition. More specifically, the means by which I was trying to achieve my sound at the time had shifted from a DAW-in-the-box aesthetic to a live performance feel, one that I had already begun writing about here on Sounding Out! in 2013. Interestingly, the inspiration to cover the song pushed me back to my in-the-box comfort zone.

It was good, but I knew I could do more.

As I said, these inspirations tend to group around a theme. Prior to receiving the Liquid Sky dump, I had received an email out of the blue from Hank Shocklee, producer and member of the Bomb Squad. I’ve been a longtime fan, and we had the opportunity to meet a few years prior. Since then he’s played a bit of a mentoring role for me. In the email he asked if I wanted to join, as the drummer, an experimental electronic jazz project he was pulling together.

I was taken aback. Hank Shocklee asking me to be his drummer. Honestly, I was shook.

Not that I didn’t know why he might think to ask me, but immediately I started to question whether I was good enough. Rather than dwell on those feelings, though, I started stepping up my game. While the project itself never came to fruition, Shocklee’s email led me to build my drmcrshr set of digital instruments.

A year or so later, I ran into Shocklee again when he was in Philadelphia for King Britt’s Afrofuturism event with mutual friend and artist HPrizm. By this time I had already recorded the “Me and My Rhythm Box” cover. Serendipitously, HPrizm ended up dropping a sample from it in the midst of his set that night. A month or so later, HPrizm and I met up in the studio with longtime collaborator Takuma Kanaiwa to record a live set on which I played my drmcrshr instruments.

Primus Luta x HPrizm x Takuma Kanaiwa – “Excerpt”

 

Not too long after, I received an email from NYC-based electronic musician Elucid, saying he was digging for samples on this awesome soundtrack…Liquid Sky.

The final convergence point had been hanging over my head for a while. Having finished the first part of my “Toward a Practical Language for Live Electronic Performance” series, I knew I wanted the next part to focus on electronic instruments, but wasn’t yet sure how to approach it. I had an inkling about a practicum on the actual design and development of an electronic instrument, but I didn’t yet have a project in mind.

As all of these things, people, and sounds came together–Liquid Sky, Shocklee, HPrizm, Elucid–it became clear that I needed to build a rhythm box.

The History

What stands out in Paula Sheppard’s performance from Liquid Sky is the visual itself. She stands in the warehouse performance space surrounded by 80’s scenesters, posing with one hand in the air and a mic in the other, while strapped to her side is her rhythm box, the Roland CR-78, wires dangling from it to the venue’s sound system. She hits play to start the beat and launches into her ode to the rhythm machine.

Paula Sheppard performing “Me & My Rhythm Box” in Liquid Sky

Contextually, it’s far more performance art than music performance. There isn’t much evidence from the clip that the CR-78 is any more than a prop, as the synthesizer lines indicate the use of a backing track. The commentary in the lyrics, however, homes in on an intent to present the rhythm box as the perfect musical companion, reminiscent of comments Raymond Scott often made about his desire to make a machine to replace musicians.

My rhythm box is sweet

Never forgets a beat

It does its rule

Do you want to know why?

It is pre-programmed

Rhythm machines such as the CR-78 were originally designed as accompaniment machines, specifically for organ players. They came pre-programmed with a number of traditional rhythm patterns–the standards being rock, swing, waltz and samba–though the CR-78 had many more variations. Such machines were not designed to be instruments themselves; rather, musicians would play other instruments along with them.

A period advertisement for the Roland CR-78

In 1978 when the CR-78 was introduced, rhythm machines were becoming quite sophisticated. The CR-78 included automatic fills that could be set to play at set intervals, providing natural breaks for songs. As with a few other machines, selecting multiple rhythms could combine patterns into new rhythms. The CR-78 also had mute buttons and a small mixer, which allowed slight customization of patterns, but what truly set the CR-78 apart was the fact that users could program their own patterns and even save them.


TR-808 (top) and TR-909

By the time it appeared in Liquid Sky, the CR-78 had already been succeeded by other CR lines, culminating in the CR-8000. Roland also had the TR series, including the TR-808 and the TR-909, the latter arriving in 1983, the year after Liquid Sky premiered.

In 1980, however, Roger Linn’s LM-1 premiered. What distinguished the LM-1 from other drum machines was that it used drum samples–rather than analog sounds–giving it more “real” sounding drum rhythms (for the time). The LM-1 and its successor, the LinnDrum, both had individual drum triggers for their sounds that could be programmed into user sequences or played live. These features in particular marked the shift from rhythm machines to drum machines.

In the post-MIDI decades since, we’ve come to think less and less about rhythm machines. With the rise of in-the-box virtual instruments, the idea of drum programming limitations (such as those found on most rhythm machines) seems absurd or arcane to modern tastes. People love the sounds of these older machines, as evidenced by the tons of analog drum samples and virtual and hardware clones/remakes on the market, but they want the level of control that modern technologies have made them accustomed to.

Controlling the Roland CR-5000 from an Akai MPC-1000 using a custom built converter

 

The general assumption is that rhythm machines aren’t traditionally playable and, considering how outdated their rhythms tend to seem, that they lack modern sensibility. My challenge thus became clearer: I set out to build a rhythm machine that would challenge this notion while retaining the spirit of the traditional rhythm box.

Challenges and Limitations

At the outset, I wanted to base my rhythm machine on analog circuitry. I had previously built a number of digital drum machines–both sample and synthesis-based–for my Heads collection. Working in the analog arena allowed me to approach the design of my instrument in a way that respected the limitations my rhythm machine predecessors worked with and around.

By this time I had spent a couple of years mentoring with Jeff Blenkinsopp at The Analog Lab in New York, a place devoted to helping people from all over the world gain a further understanding of the inner workings of their musical equipment. I had already designed a rather complex analog signal processor, so I felt comfortable in the format. However, I hadn’t truly honed my skills around instrument design. In many ways, I wanted this project to be the testing ground for my own ability to create instruments, but prior experience taught me that going into such a complex project without the proper skills would be self-defeating. Even more, my true goal was centered more on functionality than on details like circuit board designs for individual sounds.

To avoid those rabbit holes–at least temporarily; I’ve since gone full circuit design on my analog sound projects–I chose to use DIY designs from the modular synth community as the basis for my rhythm box. That said, I limited myself to designs that featured analog sound sources and were available as PCBs only. I would source all my own parts, solder all of my boards, and configure them into the rhythm machine of my dreams.

Features

The wonderful thing about the modular synth community is that there is a lot of stuff out there. The difficult thing about the modular synth community is that there’s a lot of stuff out there. If you’ve got enough rack space, you can pretty much put together a modular that will perform whatever functionality you want. How modules patch together fundamentally defines your instrument, making module selection the most essential process. I was aiming for a semi-modular configuration, forgoing the patch cables, but that didn’t make my selection any easier. I wanted to have three sound sources (nominally: kick, snare and hi-hat), a sequencer, and some sort of filter, which would all flow into a simple monophonic mixer design of my own.

For the sounds I chose a simple kick module from Barton and the Jupiter Storm unit from Hex Inverter. The sound of the kick module was rooted enough in the classic analog sound while offering enough modulation points to make it mutable. The triple square wave design of the Jupiter Storm really excited me, as it had the range to pull off hi-hat and snare sounds in addition to other percussive and drone sounds, plus it featured two outputs, giving me all three of my voices in two PCB sets.

Filters are often considered the heart of a modular setup, as the way they shape the sound tends to define its character. In choosing one for my rhythm machine, the main thing I wanted was control over multiple frequency bands. Because there would be three different sound sources, I needed to be able to tailor the filter for a wide spectrum of sounds. As such I chose the AM2140 Resonant Filter.


The AM2140 PCB layout, based on the classic E-mu filter

 

I had no plans to include triggers for the sounds on my rhythm machine, so the sequencer was going to be the heart of the performance, responsible for any and all triggering of sounds. Needing to control three sounds simultaneously without any stored memory was quite a tall order, but fortunately I found the perfect solution in the amazing Turing Machine modules. With its expansion board, the Turing Machine can put out four different patterns based on its main pattern creator, which can generate fully random patterns or patterns that mutate as they progress.
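To give a rough sense of how that works, here is a minimal software sketch of the shift-register idea behind the Turing Machine (an illustration of the behavior as I understand it, not the module’s actual circuit or firmware): a fixed-length register of bits recirculates on every clock tick, and a probability control decides whether the recirculated bit gets flipped along the way.

```python
import random

class TuringMachineSketch:
    """Software sketch of a Turing Machine-style shift-register sequencer.
    mutate=0.0 locks the pattern into a repeating loop; 1.0 is fully random."""

    def __init__(self, length=16, mutate=0.1, seed=None):
        self.rng = random.Random(seed)
        self.mutate = mutate
        self.register = [self.rng.randint(0, 1) for _ in range(length)]

    def clock(self):
        bit = self.register.pop()              # bit leaving the end of the register
        if self.rng.random() < self.mutate:
            bit ^= 1                           # occasionally flip it as it recirculates
        self.register.insert(0, bit)
        return bit                             # treat the recirculated bit as this step's gate

    def taps(self, positions=(0, 3, 7, 11)):
        """Read several register positions at once, loosely the way an
        expander derives multiple related gate patterns from one register."""
        return [self.register[p] for p in positions]

# A locked loop versus a slowly mutating one, clocked for one bar of 16 steps:
locked = TuringMachineSketch(mutate=0.0, seed=1)
drift = TuringMachineSketch(mutate=0.15, seed=1)
print([locked.clock() for _ in range(16)])
print([drift.clock() for _ in range(16)])
```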

The Results

I spent a couple of weeks after getting all the PCBs, parts, and hardware together, wiring and rewiring connections until I got comfortable with how all of these parts were interacting with each other. I was fortunate to happen upon a vintage White Instruments box, which formerly housed an attenuation meter, that was perfect for my machine. After testing with cardboard, I laid out my own faceplates and put everything in the box. As soon as I plugged it in and started playing, I knew I had succeeded.

Early test of RIDM before it went in the Box

 

I call it the RIDM Box (Rhythmically Intelligent Drum Machine Box). I’ve been playing it now for over two years, to the point where today I would say it is my primary instrument. Almost immediately afterward I built a companion piece called the Snare Bender, which works both as a standalone instrument and as a controller for the RIDM Box. That one I built from scratch, hand-wired with no layouts.


My current live rig with the RIDM Box and the Snare Bender (on the right)

 

While this is by no means a standard approach to modern electronic instrument design (if a standard approach even exists), what I learned through the process is really the value of looking back. With so much of modern technology being future-forward in its approach, the assumption is that we’re at better starting positions for innovation than our predecessors. While we have so many more resources at our disposal, I think the limitations of the past were often more conducive to truly innovative approaches. Exploring those limitations with modern eyes opened up a doorway for me, the result of which is an instrument like no other, past or present.

I will probably continue playing these two instruments together for a while, but ultimately I’m leaning toward a new original design that takes the learnings from these projects and fully fleshes out the performing-instrument aspect of analog design. In the meantime, my process would not be complete if I did not return to the original inspiration. So I’ll leave you with the RIDM Box version of “Me & My Rhythm Box”–available on my library sessions release for the instrument.

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications.  

REWIND!…If you liked this post, you may also dig:

Heads: Reblurring The Lines–Primus Luta

Into the Woods: A Brief History of Wood Paneling on Synthesizers*–Tara Rodgers

Afrofuturism, Public Enemy, and Fear of a Black Planet at 25–andré carrington

 

Heads: Reblurring The Lines

Primus Luta "Heads"

I don’t intend to discuss the “Blurred Lines” case in this post. There are plenty of folks already committing their thoughts on the ruling to writing. While the circumstances of the recent Thicke/Williams/Gaye case are not explicitly about sampling, they are indicative of the direction sample/copyright litigation can go in the future. When samples from a composition infringe upon the copyrights for the song, it is dangerous territory. Rather than focus on those dangers, however, I’d like to exemplify the possibilities of a more open (and arguably the intended) interpretation of copyright laws, by doing something I should have done seven years ago – put out my project Heads (dropping on April 1st, 2015).

My position has not changed from previous writings on sample laws – transformative sampling produces original work. My intent here is to present an artist’s statement on Heads that illustrates how transformative sampling and derivatives of it require broader interpretation; they should be legally covered as original compositions.


Cover art for Proto-Heads project from 2009

I’ve kept Heads in the vaults since 2007 while continuing in its artistic direction, all the while doing little tinkerings to convince myself it wasn’t done yet (it was). I had been pursuing analog technologies I swore would be the finishing touches it needed, to convince myself it wasn’t ready yet (it was). Then I lost 4 TB of files in a power surge that killed four hard drives. The last Heads masters were among the 500 GB that survived.

The project was born in response to comments made by Wynton Marsalis, dismissing hip-hop and denying its connection to the legacy of black music.

It’s mostly sung in triplets. So what? And as for sampling, it just shows you that the drummer has been replaced by a loop. The drum – the central instrument in African-American music, the sound of freedom – has been replaced by a repetitive loop. What does that tell you about hip-hop’s respect for African-American tradition? – Wynton Marsalis

I was offended as both a hip-hop and jazz head, so I set out to produce a body of work that showed the artistic originality of sampling and tied the practice to black musical traditions.

Prior to the analog experiments, I was modeling a series of digital Open Sound Control (OSC) instruments based on the monome, starting with a sampler but expanding into drum machines, synthesizers, and other noise makers. Together I called them the Heads Instruments. 95% of the composition work on Heads began with these instruments, all of which were built around the concept of sampling.

The title Heads comes from the musical head, a fundamental part of the jazz tradition. The head is the thematic phrase or group of phrasings that signifies a song; heads can comprise melody, harmony, and/or rhythm. Jazz musicians use the head as a foundation for improvisation, the traditional form alternating between the head and improvised solos. Oftentimes in jazz, the head comes from popular songs re-envisioned through improvisation in a jazz context, such as John Coltrane’s famous refiguring of “My Favorite Things” from The Sound of Music. In addition to being covers, these versions are transformations of the original into a different musical context. The Heads Instruments were designed specifically as instruments that could perform a head in a transformative manner.

Hip-hop attacks itself. It has no merit, rhythmically, musically, lyrically. What is there to discuss? – Wynton Marsalis

Tony Wynn

I was a bit annoyed at Marsalis; just how much is illustrated by the opening track of Heads, “Tony Wynn,” named after the contemporary jazz saxophonist who, like Marsalis, feels that hip hop is not music. In it a character berates his friend for bringing up Wynn’s position. On the surface the song talks trash, but musically it makes layers of references.

First, the song’s format (down to the title) is a nod to the Prince tune “Bob George.” In his song, Prince parodies a character berating a girlfriend for being with Bob George. The voice of the character in “Tony Wynn” and some of his comments come straight from Prince’s song, but the work as a whole is not a direct cover of “Bob George.”


“Tony Wynn” is undeniably influenced by the Minneapolis sound, that eclectic late-1970s and early-80s blend of funk, rock, and synthpop, but how the track arrives there is complicated. It does contain a Prince sample, but not from “Bob George.” The sample is played in a transformative manner, chopping a new riff different from the source material. It also includes a hit from another song, a sample of only one note, yet one identifiable as signature. The drums are ‘played’ in what could be described as the Minneapolis vibe. You can also hear a refrain that mimics yet another song. All of these sampled parts create a new head, to which I added instrumental embellishments with co-conspirator Dolphin on bass, synth, and the killer Prince-esque guitar solo.

The track represents a hodgepodge of Prince influences, but because those influences are so varied, none can be individually identified as the heart of “Tony Wynn.” Furthermore, at the bridge all of the samples get flipped on each other, some re-sampled and performed anew. Nothing can be pinned down as an infringement on technicalities without taking into account the full context of the transformation. While “Tony Wynn” is heavily influenced by Prince, it is not a Prince song.

Rap Rap Rap

The second track on Heads, “Rap Rap Rap,” features Murda Miles and Killa Trane. I chose its title and head to reference the 1936 Louis Prima song “Sing, Sing, Sing,” made popular by the Benny Goodman Band. Coming out of the big band era, the song is closer to a traditionally composed Western standard; the heavy percussion, however, distinguishes it. While you will find no samples of sound recordings from any version of “Sing, Sing, Sing” in “Rap Rap Rap,” it still represents the primary sample head used.

The opening percussive phrases are influenced by rhythmic hand games–an important but often overlooked precursor to hip hop discussed in Kyra Gaunt’s The Games Black Girls Play: Learning the Ropes from Double-Dutch to Hip-Hop. Here the rhythm sets the pace before charging into the head with a swing-type groove as the two featured artists, Murda Miles on trumpet and Killa Trane on sax, call out the head. What distinguishes these horns, however, is that they are both sample-based.

The song’s head is still based on “Sing, Sing, Sing,” but for the dueling horn parts the samples come from recordings of Miles Davis and John Coltrane. While Davis and Coltrane played together at a fair number of sessions, these samples come from two divergent sources in their individual catalogs. I chopped, tuned, and arranged them for performance so that they could play in tune with the head.

The opening half of “Rap Rap Rap” sees both sticking to the head with little flourishes, but at the halfway mark the accompaniment changes to a distinct hip-hop beat still firmly rooted in the head. The two horns shift here as well, trading bars in a way that nods to both jazz and rap. The phrasing of the sample performance itself mimics a rapping cadence here, bridging the gap between the two traditions.

La Botella

The head for the next track, “La Botella” (The Bottle), uses a popular salsa motif, accentuated by a son-influenced percussive wall of sound. The percussion varies from live-tracked drums to percussion samples to percussive synthesis. I performed many of the percussive sounds utilizing the Heads Instruments sequencer, which lends itself to the slightly off–while still in the pocket–swing.

The format of this particular head allowed for an expanded arrangement, through which I nod to the Afro-Cuban influence in the African American tradition, from jazz to hard soul/funk to rock and roll. Son evolved from drumming traditions that have their own forms of the head. There is a duality in these two traditions that pairs a desire for tightness with a looseness in spirit, and this tension continues into musics influenced by them. The percussion on “La Botella” carries that duality. The collective drums sound as one instrument, while each individual drum can be aurally isolated.

The actual samples in the song come from vocal bits of The Fania All-Stars, but the true Fania mark I emulate on “La Botella” is the horn section. They sound nowhere near as good—let’s just get that out of the way—but the role they play comes directly from the feel of a classic Fania release. Could the horns actually be attributable to a single source? I doubt it, but more importantly, they operate only as a component of the song itself, placing this inspiration in a different musical context.

Sound Power

“Sound Power” fully embraces ‘sound’ as a fundamental musical object. Sounds in and of themselves can be understood as heads. The primary instrument I used on “Sound Power” is the sound generator of the 4|5 Ccls Heads Instrument. 4|5 Ccls is an arpeggiator modeled after John Coltrane’s sketches on the cycle of fifths. I tend to think of such sounds in relationship to the later Coltrane years, when he was using his instrument as a sound generator, clustering notes together and condensing melody.

Similarly, arpeggiators group notes into singular phrases which can be interpreted as heads. The head on “Sound Power” does not push the possibilities to the extreme, as Coltrane did; it remains constrained within a rhythmic framework.  However, it shows the power of sound as fundamental. All of the drums, percussive elements, bass and harmonies flow from the head, accentuated by heavyweight vocal chops from the Heads Instrument scratch emulator.

Come Clean

The intro to “Come Clean” marks a turning point in the album. The first four tracks present technical feats to illustrate the point. “Come Clean” doesn’t slack off. Musically this track is the closest to the “Blurred Lines” case; notably, other than the intro, it contains no samples. Its head, however, comes from the Jeru the Damaja song “Come Clean,” produced by DJ Premier. I did an extensive breakdown of the technical details of “Come Clean” on AvantUrb a few years ago; my online installation shows how (and for how long) I have been contemplating this track. But to paraphrase the sample here, the true power of music is helping the listener realize the breadth of their own existence in this universe. My use of the song is very intentional, and I deliberately change its themes for the album.

For “Come Clean,” I worked with percussionists Zach and Claudia, who studied in the Olatunji line of drumming. They noted the physical timing challenges of getting used to the song’s unique head, but, once they locked in, the head held its own. That exemplifies the power of this means of composing – new original ideas which can push music’s possibilities.

As an artist, I advocate for an interpretation of copyright laws under which someone cannot sue because three notes of a song appear in one they own, or because a sound from a recording the record company convinced the artist to sign over for pennies was repitched and played into a melody. I know that arriving at music via these methods can push the traditions further, which is everything copyright laws were written to encourage. If we don’t change the way we think about copyright, the ability to create in this manner will be lost in litigation.

Heads comes out on April 1, 2015

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. He maintains his own AvantUrb site. Luta was a regular presenter for Rhythm Incursions. As an artist, he is a founding member of the collective Concrète Sound System. Recently Concrète released the second part of their Ultimate Break Beats series for Shocklee.

REWIND!…If you liked this post, you may also dig:

“SO! Reads: Jonathan Sterne’s MP3: The Meaning of a Format”-Aaron Trammell

“Remixing Girl Talk: The Poetics and Aesthetics of Mashups”-Aram Sinnreich

The Blue Notes of Sampling– Primus Luta

The Blue Notes of Sampling


This is article 2.0 in Sounding Out!‘s April Forum on “Sound and Technology.” Every Monday this month, you’ll be hearing new insights on this age-old pairing from the likes of Sounding Out! veterano Aaron Trammell along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks will share their thinking about everything from Auto-tune to techie manifestos. So, turn on your quantizing for Sounding Out! and enjoy today’s supersonic in-depth look at sampling from SO! Regular Writer Primus Luta. –JS, Editor-in-Chief

My favorite sample-based composition? No question about it: “Stroke of Death” by Ghostface and produced by The RZA.


Supposedly the story goes: RZA was playing records in the studio when he put on the Harlem Underground Band’s album. It is a go-to album in a sample-based composer’s collection because of the open drum breaks. One such break appears in the cover of Bill Withers’s “Ain’t No Sunshine,” notably used by A Tribe Called Quest on “Everything is Fair.”


RZA, a known break beat head, listened as the song approached the open drums, when the unthinkable happened: a scratch in his copy of the record. Suddenly, right before the open drums dropped, the vinyl created its own loop, one that caught RZA’s ear. He recorded it right there and started crafting the beat.

This sample is the only source material for the track. RZA throws a slight turntable backspin in for emphasis, adding to the jarring feel that drives the beat. That backspin provides a pitch shift for the horn that dominates the sample, changing it from a single sound into a three-note melody. RZA also captures some of the open drums so that the track can breathe a bit before coming back to the jarring loop. As accidental as the discovery may have been, it is a very precisely arranged track, tailor-made for the attacking vocals of Ghostface, Solomon Childs, and the RZA himself.

"How to: fix a scratched record" by Flickr user Fred Scharmen, CC BY-NC-SA 2.0

“How to: fix a scratched record” by Flickr user Fred Scharmen, CC BY-NC-SA 2.0

“Stroke of Death” exemplifies how transformative sample-based composition can be. Other than by knowing the source material, the sample is hard to identify. You cannot figure out that the original composition is Withers’s “Ain’t No Sunshine” from the one note RZA sampled, especially considering the note has been manipulated into a three-note melody that appears nowhere in either rendition of the composition. It is sample-based, yes, but also completely original.

Classifying a composition like this as a ‘happy accident’ downplays just how important the ear is in sample-based composition, particularly on the transformative end of the spectrum. J Dilla once said finding the mistakes in a record excited him and that it was often those mistakes he would try to capture in his production style. Working with vinyl as a source went a long way in that regard, as each piece of vinyl had the potential to have its own physical characteristics that affected what one heard. It’s hard to imagine “Stroke of Death” being inspired from a digital source. While digital files can have their own glitches, one that would create an internal loop on playback would be rare.

*****

"Unpacking of the Proteus 2000" by Flickr user Anders Dahnielson, CC BY-NC-SA 2.0

“Unpacking of the Proteus 2000” by Flickr user Anders Dahnielson, CC BY-NC-SA 2.0

There has been a change in the sound of sampling over the past few decades. It is subtle but still perceptible; one can hear it even if a person does not know what it is they are hearing. It is akin to the difference between hearing a blues man play and hearing a music student play the blues. They technically are both still the blues, but the music student misses all of the blue notes.

The ‘blue notes’ of the blues were those aspects of the music that could not be transcribed yet were directly related to how the song conveyed emotion. It might be the fact that the instrument was not fully in tune, or the way certain notes were bent but others were not; it could even be the way a finger hit the body of a guitar right after the string was strummed. It goes back farther than the blues and ultimately is not exclusive to the African American tradition from which the phrase derives; most folk music traditions around the world have parallels. “The Rite of Spring” can be understood as Stravinsky ‘sampling’ the blue notes of Lithuanian folk music. In many regards sample-based composing is a modern folk tradition, so it should come as no surprise that it has its own blue notes.

The sample-based composition work of today is still sampling, but much of it lacks the blue notes that helped define the golden era of the art. I attribute this discrepancy to the evolution of technology over the last two decades. Many of the things that could be understood as the blue notes of sampling were merely ways around the limits of the technology, in the same way that the blue notes of most folk music happened when the emotion went beyond the standards of the instrument (or, alternately, the limits imposed upon it by the literal analysis of Western theory). By looking at how the technology has evolved we can see how the blue notes of sampling are being lost as key limitations are overcome by “advances.”

First, let’s consider the E-mu SP-1200, which is still thought of as the definitive-sounding sampler for hip-hop styled sample-based compositions, particularly where drums are concerned. The primary reason for this is its low-resolution sampling and conversion rates. For the SP-1200 the Analog to Digital (A/D) and Digital to Analog (D/A) converters were 12-bit at a sample rate of 26.04 kHz (CD quality is 16-bit, 44.1 kHz). No matter the quality of the source material, there would be a loss in quality once it was sampled into and played out of the SP-1200. This loss proved desirable for drum sounds, particularly when combined with the analog filtering available in the unit, giving them a grit that reflected the environments from which the music was emerging.
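For readers who think in code, here is a very rough sketch of that kind of degradation: naive sample-and-hold decimation down to roughly the SP-1200’s rate, followed by requantization to 12 bits. It is an illustration of why lower rates and bit depths add grit, not a model of the actual converters or filters in the machine.

```python
import numpy as np

def lofi_crunch(signal, src_rate=44100, target_rate=26040, bits=12):
    """Crude downsample (sample-and-hold, no anti-alias filter) plus
    requantization of a float signal in the -1.0..1.0 range."""
    ratio = src_rate / target_rate
    indices = (np.arange(int(len(signal) / ratio)) * ratio).astype(int)
    down = signal[indices]                                 # decimate to the lower rate
    levels = 2 ** (bits - 1)
    return np.round(down * levels) / levels, target_rate   # requantize to 12-bit steps

# Example: crunch one second of a 440 Hz sine
t = np.linspace(0, 1, 44100, endpoint=False)
crunched, rate = lofi_crunch(np.sin(2 * np.pi * 440 * t))
```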

On top of this, individual samples could only be 2.5 seconds long, with a total available sample time of only 10 seconds. While the sample and conversion rates directly affected the sound of the samples, the time limits drove the way that composers sampled. Instead of finding loops, beatmakers focused on individual sounds or phrases, using the sequencer to arrange those elements into loops. There were workarounds for the sample time constraints; for example, playing a 33-rpm record at 45 rpm to sample, then pitching it back down post-sampling, was a quick way to increase the sample time. Doing this would further reduce the sample rate, but again, that could be sonically appealing.
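The arithmetic behind that workaround, roughly (assuming a 33 1/3 rpm source and the stock 2.5-second limit):

```python
speedup = 45 / (100 / 3)              # playing a 33 1/3 rpm record at 45 rpm: ~1.35x
material_captured = 2.5 * speedup     # ~3.4 seconds of the original recording fit in 2.5 s
effective_rate = 26040 / speedup      # ~19.3 kHz effective rate once pitched back down
print(speedup, material_captured, effective_rate)
```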

An underappreciated limitation of the SP-1200, however, was the visual feedback for editing samples. The display of the SP-1200 was completely alphanumeric; there were no visual representations of the sample other than numbers that were controlled by the faders on the interface. The composer had to find the start and end points of the sample solely by ear. Two producers might edit the exact same kick drum with start times 100 samples (a few milliseconds) apart. Were one of the composers to have recorded the kick at 45 rpm and pitched it down, the actual resolution for the start and end times would be different. When played in a sequence, those 100 samples affect the groove, contributing directly to the feel of the composition. The timing of when the sample starts playback combines with the quantization setting and the swing percentage of the sequencer. That difference of 100 samples in the edit further offsets the trigger times, which even with quantization turned off fit into the 24 parts-per-quarter grid limitations of the machine.
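To put numbers on that (assuming a tempo of 90 BPM purely for illustration): a 100-sample difference at the SP-1200’s rate is only a few milliseconds, far finer than anything the 24 PPQ grid itself can represent, which is exactly where the feel lives.

```python
sample_rate = 26040                    # SP-1200 playback rate in Hz
offset_ms = 100 / sample_rate * 1000   # ~3.8 ms difference between the two edits
bpm = 90                               # assumed tempo for illustration
tick_ms = 60 / bpm / 24 * 1000         # one step of the 24 PPQ grid: ~27.8 ms
print(offset_ms, tick_ms)              # the edit offset sits well inside a single grid step
```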

Akai’s MPC-60 was the next evolution in sampling technology. It raised the sample and conversion rates to 16-bit and 40 kHz. Sample time increased to a total of 13.1 seconds (upgradable to 26.2). Sequencing resolution increased to 96 parts per quarter. Gone was the crunch of the SP-1200, but the precision went up both in sampling and in sequencing. The main trademark of the MPC series was the swing and groove that came to Akai from Roger Linn’s Linn Drum. For years it was shrouded in mystery and considered a myth by many, but in truth there was a timing difference, one Linn says was achieved by delaying certain notes by a number of samples. Combined with the greater PPQ resolution in unquantized mode, the MPC, even with more precision than the SP-1200, lent itself to capturing user variation.
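A common way to describe that kind of swing, and the way I think about it here (illustrative only, not Linn’s or Akai’s actual firmware), is that every second 16th-note step gets pushed late by a fraction of the step length:

```python
def apply_swing(step_times, step_dur, swing=0.54):
    """Delay every second 16th-note step by a fraction of the step length.
    swing=0.50 is straight time; values around 0.66 approach a triplet shuffle."""
    swung = []
    for i, t in enumerate(step_times):
        if i % 2 == 1:                            # the off-beat 16ths
            t = t + (swing - 0.5) * 2 * step_dur  # push them later
        swung.append(t)
    return swung

# One bar of straight 16ths at 120 BPM (step = 0.125 s), then with 58% swing:
step = 60 / 120 / 4
straight = [i * step for i in range(16)]
print(apply_swing(straight, step, swing=0.58))
```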

Despite these technological advances, sample time and editing limitations, combined with the fact that the higher-resolution sampling lacked the character of the SP-1200, kept the MPC from being the complete package sample composers desired. For this reason it was often paired with Akai’s S-950 rack sampler. The S-950 was a 12-bit sampler but had a variable sample rate between 7.5 kHz and 40 kHz. The stock memory could hold 750 KB of samples, which at the lowest sample rate could garner upwards of 60 seconds of sampling and at the higher sample rates around 10 seconds. This was expandable to up to 2.5 MB of sample memory.
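As a rough sanity check on those figures (assuming the 750 KB holds one mono sample and that 12-bit words pack into roughly 1.5 bytes each, which is my assumption rather than a documented spec), the available time scales inversely with the sample rate:

```python
memory_bytes = 750 * 1024
bytes_per_sample = 1.5                  # assumed packing for 12-bit samples
total_samples = memory_bytes / bytes_per_sample
print(total_samples / 7500)             # ~68 seconds at the 7.5 kHz floor
print(total_samples / 40000)            # ~12.8 seconds at the 40 kHz ceiling
```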


The editing capabilities are what made the S-950 such a powerful sampler. Being able to create internal sample loops, key-map samples to a keyboard, modify envelopes for playback, and take advantage of early time stretching (which would come of age with the S-1000)–not to mention the filter on the unit–helped take sampling deeper into sound design territory. This again increased the variable possibilities from composer to composer, even when working from the same source material. With the S-950 often paired with the MPC for sequencing, composers had the ultimate sample-based composition workstation.

Today, there are effectively no limitations on sampling. Perhaps the subtlest advances have been in the precision with which samples can be edited. With these advances, the biggest shift has been the reduced reliance on ears. Recycle was an early software program that started to replace the ears in the editing process. With Recycle an audio file could be loaded, and the software would chop the sample into component parts by searching for the transients. Utilizing Recycle on the same source, it was more likely that two different composers could arrive at a kick sample that was truncated identically.
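The idea behind that kind of automatic chopping can be sketched in a few lines (a toy version of transient slicing, not Recycle’s actual algorithm): look for short frames whose energy jumps well above the frame before them and mark those points as slice starts.

```python
import numpy as np

def slice_on_transients(signal, frame=256, threshold=4.0):
    """Return sample offsets where a short frame's energy jumps well above
    the previous frame's, a crude stand-in for transient detection."""
    n_frames = len(signal) // frame
    energy = np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n_frames)])
    slices = [0]
    for i in range(1, n_frames):
        if energy[i] > threshold * (energy[i - 1] + 1e-9):
            slices.append(i * frame)    # a hit likely starts around here
    return slices

# e.g. slices = slice_on_transients(drum_loop)  # drum_loop: mono float array
```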

Another factor has been the waveform visualization of samples for editing. Some earlier hardware samplers featured a waveform display for truncating samples, but the graphic resolution within the computer made this even more precise. By looking at the waveform you are able to edit samples at the point where the waveform crosses the middle point between the negative and positive sides of the signal, known as the zero-crossing. The advantage of editing at zero-crossings is that it prevents the errors that happen when playback jumps from one side of the zero point to a distant point in a single sample, which can make the edit point audible because of the break in the waveform. The end result of zero-crossing edited samples is a seamlessness that makes samples sound like they naturally fit into a sequence without audible errors. In many audio applications, snap-to settings mean that edits automatically snap to zero-crossings–no ears needed to get a “perfect” sounding sample.
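A snap-to-zero-crossing edit can be sketched the same way (a minimal illustration, not any particular editor’s implementation): from the chosen edit point, walk outward until two neighboring samples straddle zero and cut there, so playback never jumps across a discontinuity.

```python
import numpy as np

def snap_to_zero_crossing(signal, index, search=2000):
    """Return the nearest sample index to 'index' where the waveform
    crosses (or touches) zero, searching outward in both directions."""
    for offset in range(search):
        for i in (index + offset, index - offset):
            if 0 < i < len(signal):
                if signal[i - 1] == 0 or (signal[i - 1] < 0) != (signal[i] < 0):
                    return i
    return index  # no crossing found nearby; keep the original edit point

# e.g. start = snap_to_zero_crossing(kick, rough_start)  # kick: mono float array
```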

It is interesting to note that with digital files it’s not about recording the sample, but editing it out of the original file. It is much different from having to put the turntable on 45 rpm to fit a sample into 2.5 seconds. Another differentiation between digital sample sources is the quality of the files, whether original digital files (CD quality or higher), lossless compression (FLAC), lossy compressed (MP3, AAC) or the least desirable though most accessible, transcoded (lossy compression recompressed such as YouTube rips). These all result in a different degradation of quality than the SP-1200. Where the SP-1200’s downsampling often led to fatter sounds, these forms of compression trend toward thinner-sounding samples.


Some producers have created their own sound using thinned-out samples with the same level of sonic intent as The RZA’s on “Stroke of Death.” The lo-fi aesthetic is often an attempt to capture a sound to parallel the golden era of hardware-based sampling. Some software-based samplers, for example, have an SP-1200 emulation button that reduces the bit depth to 12-bit. Most software sequencers have groove templates that allow them to emulate grooves like the MPC timing.

Perhaps the most important part of the sample-based composition process, however, cannot be emulated: the ear. The ear in this case is not so much about the identification of the hot sample. Decades of history should tell us that the hot sample is truly a dime a dozen. It takes a keen composer’s ear to hear how to manipulate those sounds into something uniquely theirs. Being able to listen for that and then create that unique sound–utilizing whatever tools–that is the blue note of sampling. And there is simply no way to automate that process.

Featured image: “Blue note inverted” by Flickr user Tim, CC BY-ND 2.0

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. He maintains his own AvantUrb site. Luta was a regular presenter for Rhythm Incursions. As an artist, he is a founding member of the collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012. Recently Concrète released the second part of their Ultimate Break Beats series for Shocklee.

REWIND!…If you liked this post, you may also dig:

“SO! Reads: Jonathan Sterne’s MP3: The Meaning of a Format”-Aaron Trammell

“Remixing Girl Talk: The Poetics and Aesthetics of Mashups”-Aram Sinnreich

“Sound as Art as Anti-environment”-Steven Hammer

Live Electronic Performance: Theory And Practice


This is the third and final installment of a three-part series on live electronic music. To review part one, “Toward a Practical Language for Live Electronic Performance,” click here. To peep part two, “Musical Objects, Variability and Live Electronic Performance,” click here.

“So often these laptop + controller sets are absolutely boring but this was a real performance – none of that checking your emails on stage BS. Dude rocked some Busta, Madlib, Aphex Twin, Burial and so on…”

This quote, from a blogger about Flying Lotus’ 2008 Mutek set, speaks volumes about audience expectations of live laptop performances. First, this blogger acknowledges that the perception of laptop performances is that they are generally boring, using the “checking your email” adage to drive home the point. He then goes on to express what he perceived to set Lotus’s performance apart from that standard. Oddly enough, it wasn’t the individuality of his sound; rather, it was Lotus’s particular selection and configuration of other artists’ work into his mix – a trademark of the DJ.

Contrast this with the review of the 2011 Flying Lotus set that began this series: both reveal how important context and expectations are to the evaluation of live electronic performance. While the 2008 piece praises Lotus for a DJ-like approach to his live set, the 2011 critique was that the performance was more of a DJ set than a live electronic performance. What changed in the years between these two sets was the familiarity with the style of performance (from Lotus and the various other artists on the scene with similar approaches), which produced a shift in expectations. What both lack, however, is a language to provide the musical context for their praise or critique; a language which this series has sought to elucidate.

To put live electronic performances into the proper musical context, one must determine what type of performance is being observed. In the last part of this series, I arrive at four helpful distinctions to compare and describe live electronic performance, continuing this project of developing a usable aesthetic language and enabling a critical conversation about the artform. The first of the four distinctions between types of live electronic music performance concerns the manipulation of fixed pre-recorded sound sources into variable performances. The second concerns the physical manipulation of electronic instruments into variable performances. The third demarcates the manipulation of electronic instruments into variable performances by the programming of machines. The last is an integrated category that can be expanded to include any and all combinations of the previous three.

Essential to all categories of live electronic music performance, however, is the performance’s variability, without which music–and its concomitant listening practices–transforms from a “live” event to a fixed musical object. The trick to any analysis of such performances, however, is to remember that, while these distinctions are easy to maintain in theory, in performance they quickly blur into one another, and often the intensity and pleasure of live electronic music performance comes from their complex combinations.


Flying Lotus at Treasure Island, San Francisco on 10-15-2011, image by Flickr User laviddichterman

For example, an artist who performs a set using solely vinyl, with nothing but two turntables and a manual crossfading mixer, falls under the first distinction between live electronic music performances. Technically, the turntables and manual crossfading mixer are machines, but they are being controlled manually rather than performing on their own as machines. If the artist includes a drum machine in the set, however, it becomes a hybrid (the fourth distinction), depending on whether the drum machine is being triggered by the performer (physical manipulation) or playing sequences (machine manipulation) or both. Furthermore, if the drum machine triggers samples, it becomes machine manipulation (third distinction) of fixed pre-recorded sounds (first distinction). If the drum machine is used to play back sequences while the artist performs a turntablist routine, the turntable becomes the performance instrument while the drum machine holds as a fixed source. All of these relationships can be realized by a single performer over the course of a single performance, making the whole set of the hybrid variety.

While in practice the hybrid set is perhaps the most common, it’s important to understand the other three distinctions, as each comes with its own set of limitations which define its potential variability. Critical listening to a live performance includes identifying when these shifts happen and how they change the variability of the set. Through combination, their individual limitations can be overcome, increasing the overall variability of the performance. One can see a performer playing the drum machine with pads and correlate that physicality with the sound produced, then see them shift to playing the turntable and know that the drum machine has shifted to a machine performance. In this example the visual cues would be clear indicators, but if one is familiar with the distinctions, the shifts can be noticed just from the audio.

This blending of physical and mechanical elements in live music performance exposes the modular nature of live electronic performance and its instruments, teaching us that the instruments themselves shouldn’t be looked at as distinction qualifiers; rather, it is their combination in the live rig, and the variability that it offers, that matters. Where we typically think of an instrument as singular, within live electronic music it is perhaps best to think of the individual components (e.g., turntables and drum machines) as the musical objects of the live rig as instrument.


Flying Lotus at the Echoplex, Los Angeles, Image by Flickr User sunny_J

Percussionists are a close acoustic parallel to the modular musical rig of electronic performers. While there are percussion players who use a single percussive instrument for their performances, others will have a rig of component elements to use at various points throughout a set. The electronic performer inherits such a configuration from keyboardists, who typically have a rig of keyboards, each with different sounds, to be used throughout a set. Availing themselves of a palette of sounds allows keyboardists to break out of the limitations of timbre and verge toward the realm of multi-instrumentalists.  For electronic performers, these limitations in timbre only exist by choice in the way the individual artists configure their rigs.

From the perspective of users of traditional instruments, a multi-instrumentalist is one who goes beyond the standard of single-instrument musicianship, representing a musician well versed in performing on a number of different instruments, usually of different categories. In the context of electronic performance, the definition of instrument is so changed that it is more practical to think not of multi-instrumentalists but of multi-timbralists. The multi-timbralist can be understood as the standard in electronic performance. This is not to say there are no single-instrument electronic performers; however, it is practical to think about the live electronic musician’s instrument not as a singular musical object, but rather a group of musical objects (timbres) organized into the live rig. Because these rigs can be comprised of a nearly infinite number of musical objects, the electronic performer has the opportunity to craft a live rig that is uniquely their own. The choices they make in the configuration of their rig will define not just the sound of their performance, but the degrees of variability they can control.

Because the electronic performer’s instrument is the live rig comprised of multiple musical objects, one of the primary factors involved in the configuration of the rig is how the various components interact with one another over the time-dependent course of a performance. In a live tape loop performance, the musician may use a series of cassette players with an array of effects units and a small mixer. In such a rig, the mixer is the primary means of communication between objects. In this type of rig, however, the communication isn’t direct. The objects cannot communicate with each other directly; rather, the artist is the mediator. It is the artist who determines when the sound from any particular tape loop is fed to an effect, or at what levels the effects return in relation to the loops. While watching a performance such as this, one would expect the performer to be very involved in physically manipulating the various musical objects. We can categorize this as an unsynchronized electronic performance, meaning that the musical objects employed are not locked into the same temporal relations.


Big Tape Loops, Image by Flickr User choffee

The key difference between unsynchronized and synchronized performance rigs is the amount of control over the performance that can be left to the machines. The benefit of synchronized performance rigs is that they allow for greater complexity, either in configuration or control. The value of unsynchronized performance rigs is that they have a more natural and less mechanized feel, as the timing flows from the performer’s physical body. Neither could be understood as better than the other, but in general they do make for separate kinds of listening experiences, which the listener should be aware of in evaluating a performance. Expectations should shift depending upon whether or not a performance rig is synchronized.

This notion of a synchronized performance rig should not only be understood as an inter-machine relationship. With the rise of digital technology, many manufacturers developed workstation-style hardware which could perform multiple functions on multiple sound sources with central synchronized control. The Roland SP-404 is a popular sampling workstation, used by many artists in a live setting. Within this modest box you get twelve voices of sample polyphony, which can be organized with the internal sequencer and processed with onboard effects. However, a performer may choose not to utilize the sequencer at all, and as such it can be performed unsynchronized, just triggering the pads. In fact, in recent years there has been a rise of drum pad players, or finger drummers, who perform using hardware machines without synchronization. Going back to our distinctions, a performance such as this would be a hybrid of the physical manipulation of fixed sources with the physical manipulation of an electronic instrument. From this qualification, we know to look for extensive physical effort in such performances as an indicator of the artist’s agency in the variable performance.

Now that we’ve brought synchronization into the discussion, it makes sense to talk about what is often the main means of communication in the live performance rig – the computer. The ultimate benefit of a computer is the ability to process a large number of calculations per computational cycle. Put another way, it allows users to act on a number of musical variables in single functions. Practically, this means the ability to store, organize, recall, and even perform a number of complex variables. With the advent of digital synthesizers, computers were being used in workstations to control everything from sequencing to patch sound-design data. In studios, computers quickly replaced mixing consoles and tape machines (even their digital equivalents like the ADAT), becoming the nerve center of the recording process. Eventually all of these functions and more were able to fit into the small and portable laptop computer, bringing that processing power in a practical way to the performance stage.


Flying Lotus and his Computer, All Tomorrow’s Parties 2011, Image by Flickr User jaswooduk

A laptop can be understood as a rig in and of itself, comprised of a combination of software and plugins as musical objects, which can be controlled internally or via external controllers. If there were only two software choices and ten plugins available for laptops, there would be over seven million permutations possible. While it is entirely possible (and for many artists practical) for the laptop to be the sole object of a live rig, the laptop is often paired with one or more controllers. The number of controllers available is nowhere near the volume of software on the market, but the possible combinations of hardware controllers take the laptop + controller + software combination possibilities toward infinity. With both hardware and software there is also the possibility of building custom musical objects that add to the potential uniqueness of a rig.
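For what it’s worth, one way to arrive at that figure (my assumption about how it was counted, rather than anything stated in the original) is to treat the ten plugins as orderable within either of the two software environments:

```python
from math import factorial

combinations = 2 * factorial(10)   # two hosts, ten plugins arranged in any order
print(combinations)                # 7,257,600
```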

Unfortunately, quite often it is impossible to know exactly what range of tools is being utilized within a laptop strictly by looking at an artist on stage. This is what leads to probably the biggest misconception about the performing laptop musician. As common as the musical object may look on the stage, housed inside of it can be the most unique and intricate configurations music (yes, all of music) has ever seen. The reductionist thought that laptop performers aren’t “doing anything but checking email” is directly tied to the acousmatic nature of the objects as instruments. We can hear the sounds, but determining the sources and understanding the processes required to produce them is often shrouded in mystery. Technology has arrived at the point where what one performs live can precisely replicate what one hears in recorded form, making it easy to leap to the conclusion that all laptop musicians do is press play.

Indeed some of them do, but to varying degrees a large number of artists are actively doing more during their live sets. A major reason for this is that one of the leading Digital Audio Workstations (DAWs) of today also doubles as a performance environment. Designed with the intent of taking the DAW to the stage, Ableton Live allows artists to have an interface that facilitates the translation of electronic concepts from the studio to the stage. There is a world of things that are possible just by learning the Live basics, but there’s also a rabbit hole of advanced functions, all the way to the modular Max for Live environment, which lies on the frontier of discovering new variables for sound manipulation. For many people, however, the software is powerful enough at the basic level of use to create effective live performances.


Sample Screenshot from a performer’s Ableton Live set up for an “experimental and noisy performance” with no prerecorded material, Image by Flickr User Furibond

In its most basic use case, Ableton Live is set up much like a DJ rig, with a selection of pre-existing tracks queued up as clips which the performer blends into a uniform mix, with transitions and effects handled within the software. The possibilities expand out from there: multi-track parts of a song separated into different clips so the performer can take different parts in and out over the course of the performance; a plugin drum machine providing an additional sound source on top of the track(s); or, alternately, the drum machine holding a sequence while track elements are laid on top of it. With the multitude of plugins available, just the combination of multi-track Live clips with a single soft-synth plugin lends itself to near-infinite combinations. The variable possibilities of this type of set, even while not exploiting the breadth of variable possibilities presented by the gear, clearly point to the artist’s agency in performance.

Within the context of both the DJ set and the Ableton Live set, synchronization plays a major role in contextualization. Both categories of performance can be either synchronized or unsynchronized. The tightest of unsynchronized sets will sound synchronized, while the loosest of synchronized sets will sound unsynchronized. This plays very much into audience perception of what they are hearing and the performers’ choice of synchronization and tightness can be heavily influenced by those same audience expectations.


A second performance screen capture by the same artist, this time using pre-recorded midi sequences, Image by Flickr User Furibond

A techno set is expected to maintain somewhat of a locked groove, indicative of a synchronized performance. A synchronized rig, either on the DJ side (Serato utilizing automated beat matching) or on the Ableton side (time stretch and auto BPM detection synced to MIDI), can make this a non-factor for the physical performance, and so, listening to such a performance, it is the variability of other factors that reveals the artist’s control over the performance. For the DJ, those factors would include the selection, transitions, and effects use. For the Ableton user, they can include all of those things as well as control over the individual elements in tracks and potentially other sound sources.

On the unsynchronized end of the spectrum, a vinyl DJ could accomplish the same mix as the synchronized DJ set, but it would physically require more effort on their part to keep all of the selections in time. This would mean they might have to limit the control they exert on the other variables. An unsynchronized Live set would utilize the software primarily as a sound source, without MIDI, placing the timing in the hands of the performer. With the human element added to the timing, it would be more difficult to produce the machine-like timing of the other sets. This doesn’t mean that it couldn’t be effective, but there would be an audible difference between this type of set and the others.

What we’ve established is that through the modular nature of the electronic musician’s rig as an instrument, from synthesizer keyboards to Ableton Live, every variable consideration combines to produce infinite possibilities. Where the trumpet is limited in timbre, range, and dynamics, the turntable has infinite timbres; its range is the full spectrum of human hearing; and its dynamics are directly proportional to the output. The limitations of the electronic musician’s instrument appear only in electrical current constraints, processor speed limits, the selection of components, and the limitations of the human body.


Flying Lotus at Electric Zoo, 2010, Image by Flickr User TheMusic.FM

Within these constraints, however, we have only begun scratching the surface of possibilities. There are combinations that have never happened, variables that haven’t come close to their full potential, and a multitude of variables that have yet to be discovered. One thing that the electronic artist can learn from jazz toward maximizing that potential is the notion of play, as epitomized by jazz improvisation. For jazz, improvisation opened up the possibilities of the form, which impacted performance and composition. I contend that the electronic artist can push the boundaries of variable interaction by incorporating the ability to play from the rig, both in their physical performance and by giving the machine its own sense of play. Within this play lie the variables which I believe can push electronic music into the jazz of tomorrow.

Featured Image by Flickr User Juha van ‘t Zelfde

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. In 2014 he will be releasing an expanded version of this series as a book entitled “Toward a Practical Language: Live Electronic Performance”. He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions podcast series with his monthly show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.

REWIND! . . . If you liked this post, you may also dig:

Evoking the Object: Physicality in the Digital Age of Music-Primus Luta

Experiments in Agent-based Sonic Composition–Andreas Pape

Calling Out To (Anti)Liveness: Recording and the Question of Presence–Osvaldo Oyola
