The Blue Notes of Sampling

This is article 2.0 in Sounding Out!'s April Forum on "Sound and Technology." Every Monday this month, you'll be hearing new insights on this age-old pairing from the likes of Sounding Out! veterano Aaron Trammell along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks will share their thinking about everything from Auto-Tune to techie manifestos. So, turn on your quantizing for Sounding Out! and enjoy today's supersonic in-depth look at sampling from SO! Regular Writer Primus Luta. –JS, Editor-in-Chief

My favorite sample-based composition? No question about it: "Stroke of Death" by Ghostface, produced by The RZA.

Supposedly the story goes like this: RZA was playing records in the studio when he put on the Harlem Underground Band's album. It is a go-to album in a sample-based composer's collection because of its open drum breaks. One such break appears in the band's cover of Bill Withers's "Ain't No Sunshine," notably used by A Tribe Called Quest on "Everything is Fair."

RZA, a known break beat head, listened as the song approached the open drums, when the unthinkable happened: a scratch in his copy of the record. Suddenly, right before the open drums dropped, the vinyl created its own loop, one that caught RZA’s ear. He recorded it right there and started crafting the beat.

This sample is the only source material for the track. RZA throws a slight turntable backspin in for emphasis, adding to the jarring feel that drives the beat. That backspin provides a pitch shift for the horn that dominates the sample, changing it from a single sound into a three-note melody. RZA also captures some of the open drums so that the track can breathe a bit before coming back to the jarring loop. As accidental as the discovery may have been, it is a very precisely arranged track, tailor-made for the attacking vocals of Ghostface, Solomon Childs, and the RZA himself.

"How to: fix a scratched record" by Flickr user Fred Scharmen, CC BY-NC-SA 2.0

“How to: fix a scratched record” by Flickr user Fred Scharmen, CC BY-NC-SA 2.0

“Stroke of Death” exemplifies how transformative sample-based composition can be. Unless you know the source material, the sample is hard to identify. You cannot figure out that the original composition is Withers's "Ain't No Sunshine" from the one note RZA sampled, especially considering the note has been manipulated into a three-note melody that appears nowhere in either rendition of the composition. It is sample-based, yes, but also completely original.

Classifying a composition like this as a 'happy accident' downplays just how important the ear is in sample-based composition, particularly on the transformative end of the spectrum. J Dilla once said that finding the mistakes in a record excited him, and that it was often those mistakes he would try to capture in his production style. Working with vinyl as a source went a long way in that regard, as each piece of vinyl could have its own physical characteristics that affected what one heard. It's hard to imagine "Stroke of Death" being inspired by a digital source. While digital files can have their own glitches, one that would create an internal loop on playback would be rare.

*****

"Unpacking of the Proteus 2000" by Flickr user Anders Dahnielson, CC BY-NC-SA 2.0

“Unpacking of the Proteus 2000” by Flickr user Anders Dahnielson, CC BY-NC-SA 2.0

There has been a change in the sound of sampling over the past few decades. It is subtle but still perceptible; one can hear it even without knowing exactly what one is hearing. It is akin to the difference between hearing a blues man play and hearing a music student play the blues. Technically they are both still playing the blues, but the music student misses all of the blue notes.

The ‘blue notes’ of the blues were those aspects of the music that could not be transcribed yet were directly related to how the song conveyed emotion. It might be the fact that the instrument was not fully in tune, or the way certain notes were bent while others were not; it could even be the way a finger hit the body of a guitar right after the string was strummed. The practice goes back farther than the blues and ultimately is not exclusive to the African American tradition from which the phrase derives; most folk music traditions around the world have parallels. “The Rite of Spring” can be understood as Stravinsky ‘sampling’ the blue notes of Russian and Lithuanian folk music. In many regards sample-based composing is a modern folk tradition, so it should come as no surprise that it has its own blue notes.

The sample-based composition work of today is still sampling, but much of it lacks the blue notes that helped define the golden era of the art. I attribute this discrepancy to the evolution of technology over the last two decades. Many of the things that can be understood as the blue notes of sampling were merely ways around the limits of the technology, in the same way that the blue notes of most folk music happened when the emotion went beyond the standards of the instrument (or, alternately, the limits imposed upon it by the literal analysis of Western theory). By looking at how the technology has evolved, we can see how the blue notes of sampling are being lost as key limitations are overcome by “advances.”

First, let's consider the E-Mu SP-1200, still considered the definitive-sounding sampler for hip-hop-styled sample-based compositions, particularly where drums are concerned. The primary reason for this is its low-resolution sampling and conversion rates. The SP-1200's analog-to-digital (A/D) and digital-to-analog (D/A) converters were 12-bit at a sample rate of 26.04 kHz (CD quality is 16-bit at 44.1 kHz). No matter the quality of the source material, there would be a loss in quality once it was sampled into and played out of the SP-1200. This loss proved desirable for drum sounds, particularly when combined with the analog filtering available in the unit, giving them a grit that reflected the environments from which the music was emerging.
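
To make those two stages concrete, here is a minimal sketch of downsampling plus bit reduction, assuming a mono NumPy array scaled to the -1.0 to 1.0 range. The function name and parameters are mine; this illustrates the general idea, not E-Mu's actual converter design.

```python
import numpy as np

def sp_style_crush(signal, src_rate=44_100, target_rate=26_040, bits=12):
    """Naive sample-and-hold downsampling followed by bit reduction.
    Illustrative only: the SP-1200's real converter and filter chain is
    more involved; this just shows where the grit comes from."""
    # Keep roughly every 1.69th sample to approximate the lower rate.
    # No anti-alias filtering, which is part of why the result sounds
    # gritty rather than merely duller.
    step = src_rate / target_rate
    n_out = int(len(signal) * target_rate / src_rate)
    held = signal[(np.arange(n_out) * step).astype(int)]
    # Quantize: snap each sample to one of 2**bits discrete levels.
    levels = 2 ** (bits - 1) - 1
    return np.round(held * levels) / levels
```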

On top of this, individual samples could only be 2.5 seconds long, with a total available sample time of only 10 seconds. While the sample and conversion rates directly affected the sound of the samples, the time limits drove the way composers sampled. Instead of finding loops, beatmakers focused on individual sounds or phrases, using the sequencer to arrange those elements into loops. There were workarounds for the sample-time constraints; for example, playing a 33-rpm record at 45 rpm to sample it, then pitching it back down post-sampling, was a quick way to increase the sample time. Doing this would further reduce the effective sample rate, but again, that could be sonically appealing.
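
To put rough numbers on that workaround (assuming a 33⅓-rpm source): playing the record at 45 rpm speeds it up by a factor of 45 ÷ 33⅓ ≈ 1.35, so the 2.5-second buffer captures about 2.5 × 1.35 ≈ 3.4 seconds of the original material. Pitching the sample back down restores the tempo but spreads the recorded samples over that longer duration, for an effective rate of roughly 26,040 ÷ 1.35 ≈ 19.3 kHz, hence the further loss in fidelity.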

An underappreciated limitation of the SP-1200, however, was the visual feedback for editing samples. The display was completely alphanumeric; there were no visual representations of the sample other than numbers controlled by the faders on the interface. The composer had to find the start and end points of the sample solely by ear. Two producers might edit the exact same kick drum with start times 100 samples (a few milliseconds) apart. Were one of the composers to have recorded the kick at 45 rpm and pitched it down, the actual resolution for the start and end times would be different again. When played in a sequence, those 100 samples affect the groove, contributing directly to the feel of the composition. The timing of when the sample starts playback combines with the quantization setting and the swing percentage of the sequencer: the trigger times, even with quantization turned off, fit into the machine's 24-parts-per-quarter grid, and that difference of 100 samples in the edit offsets playback further still.
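
A quick back-of-the-envelope calculation shows the scales involved (the 90 BPM tempo is an arbitrary example; the sample rate and grid come from the specs above):

```python
SAMPLE_RATE = 26_040          # SP-1200 rate, Hz
PPQ = 24                      # sequencer grid: parts per quarter note
BPM = 90                      # hypothetical tempo

quarter = 60 / BPM            # ~0.667 s per quarter note
tick = quarter / PPQ          # ~27.8 ms: the coarsest step the sequencer can place
offset = 100 / SAMPLE_RATE    # ~3.8 ms: a 100-sample difference in truncation

print(f"grid tick ~{tick * 1000:.1f} ms, edit offset ~{offset * 1000:.1f} ms")
```

The truncation difference is an order of magnitude finer than anything the grid itself can express, which is precisely the kind of micro-timing that lived in the editor's ear.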

Akai's MPC-60 was the next evolution in sampling technology. It raised the sample and conversion rates to 16-bit and 40 kHz. Sample time increased to a total of 13.1 seconds (upgradable to 26.2). Sequencing resolution increased to 96 parts per quarter. Gone was the crunch of the SP-1200, but the precision went up both in sampling and in sequencing. The main trademark of the MPC series was the swing and groove that came to Akai from Roger Linn's LinnDrum. For years it was shrouded in mystery and considered a myth by many, but in truth there was a timing difference, one Linn says he achieved by delaying certain notes by a number of samples. Combined with the greater PPQ resolution in unquantized mode, this meant that the MPC, even with more precision than the SP-1200, lent itself to capturing user variation.
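
The general idea of swing as delayed off-beats can be sketched in a few lines. This is a generic illustration, not Roger Linn's actual implementation; the tempo and the 58% swing value are arbitrary examples.

```python
def swung_sixteenths(count, bpm=100.0, swing=0.58):
    """Start times (seconds) for a run of 16th notes where every off-beat
    16th is pushed late; 0.50 is straight time, higher values swing harder."""
    sixteenth = 60.0 / bpm / 4
    times = []
    for i in range(count):
        t = i * sixteenth
        if i % 2 == 1:                           # off-beat of each 8th-note pair
            t += (swing - 0.5) * 2 * sixteenth   # delay by a slice of the pair
        times.append(round(t, 4))
    return times

print(swung_sixteenths(8))   # [0.0, 0.174, 0.3, 0.474, 0.6, 0.774, 0.9, 1.074]
```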

Despite these technological advances, sample time and editing limitations, combined with the fact that the higher-resolution sampling lacked the character of the SP-1200, kept the MPC from being the complete package sample composers desired. For this reason it was often paired with Akai's S-950 rack sampler. The S-950 was a 12-bit sampler but had a variable sample rate between 7.5 kHz and 40 kHz. The stock memory could hold 750 KB of samples, which at the lowest sample rate could garner upwards of 60 seconds of sampling and at the higher rates around 10 seconds. This was expandable to 2.5 MB of sample memory.
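
Those sample-time figures follow from simple memory arithmetic, sketched below; the bytes-per-sample value is an assumption (the S-950's actual internal packing may differ, which would shift the results somewhat):

```python
memory_bytes = 750 * 1024      # stock sample memory
bytes_per_sample = 1.5         # assumed: 12-bit samples packed into 1.5 bytes
total_samples = memory_bytes / bytes_per_sample

print(total_samples / 7_500)   # ~68 s of audio at the 7.5 kHz minimum rate
print(total_samples / 40_000)  # ~13 s at the 40 kHz maximum rate
```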

The editing capabilities are what made the S-950 such a powerful sampler. Being able to create internal sample loops, key-map samples to a keyboard, modify envelopes for playback, and take advantage of early time stretching (which would come of age with the S-1000)—not to mention the filter on the unit—helped take sampling deeper into sound-design territory. This again increased the variable possibilities from composer to composer, even when working from the same source material. Often combined with the MPC for sequencing, it gave composers the ultimate sample-based composition workstation.

Today there are effectively no limitations on sampling. Perhaps the subtlest advances have been in the precision with which samples can be edited, and with that precision the biggest shift has been the reduced reliance on the ear. Recycle was an early software program that started to replace the ear in the editing process. An audio file could be loaded into Recycle, and the software would chop the sample into component parts by searching for the transients. Using Recycle on the same source, two different composers were far more likely to arrive at a kick sample truncated identically.
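
The underlying idea is easy to sketch, though the sketch below is a toy of my own, illustrating transient-based slicing in general rather than Recycle's actual algorithm:

```python
import numpy as np

def naive_slices(signal, sample_rate, jump_threshold=0.3, min_gap_ms=80):
    """Toy transient slicer: measure loudness in 10 ms frames and mark a
    slice point wherever the loudness jumps sharply."""
    frame = int(sample_rate * 0.01)                        # 10 ms frames
    n = len(signal) // frame
    rms = np.sqrt((signal[:n * frame].reshape(n, frame) ** 2).mean(axis=1))
    jumps = np.diff(rms, prepend=rms[0])                   # frame-to-frame change
    slices, last_ms = [], -min_gap_ms
    for i, jump in enumerate(jumps):
        if jump > jump_threshold and i * 10 - last_ms >= min_gap_ms:
            slices.append(i * frame)                       # slice point in samples
            last_ms = i * 10
    return slices
```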

Another factor has been the waveform visualization of samples for editing. Some earlier hardware samplers featured waveform displays for truncating samples, but the graphic resolution available on the computer made this even more precise. By looking at the waveform, you are able to edit samples at the point where the signal crosses the midpoint between its negative and positive sides, known as the zero-crossing. The advantage of editing at zero-crossings is that it prevents the errors that occur when playback jumps from one non-zero point in the waveform to another, a break in the waveform that makes the edit point audible as a click. The end result of zero-crossing-edited samples is a seamlessness that makes samples sound like they naturally fit into a sequence without audible errors. In many audio applications, snap-to settings mean that edits automatically land on zero-crossings—no ears needed to get a "perfect"-sounding sample.
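
The snap itself is simple enough to sketch (a minimal illustration assuming a mono NumPy array, not any particular DAW's implementation):

```python
import numpy as np

def snap_to_zero_crossing(signal, edit_point):
    """Move an edit point to the nearest sample where the waveform changes
    sign, so playback doesn't jump mid-swing and produce an audible click."""
    crossings = np.where(np.diff(np.sign(signal)) != 0)[0]
    if crossings.size == 0:
        return edit_point                 # silence or pure DC: nothing to snap to
    return int(crossings[np.argmin(np.abs(crossings - edit_point))])
```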

It is interesting to note that with digital files it's not about recording the sample but editing it out of the original file. It is much different from having to put the turntable on 45 rpm to fit a sample into 2.5 seconds. Another differentiation between digital sample sources is the quality of the files: original digital files (CD quality or higher), lossless compression (FLAC), lossy compression (MP3, AAC), or, the least desirable though most accessible, transcodes (lossy files recompressed, such as YouTube rips). These all result in a different kind of quality degradation than the SP-1200's. Where the SP-1200's downsampling often led to fatter sounds, these forms of compression trend toward thinner-sounding samples.

Some producers have created their own sound using thinned-out samples with the same level of sonic intent as The RZA's on "Stroke of Death." The lo-fi aesthetic is often an attempt to capture a sound that parallels the golden era of hardware-based sampling. Some software samplers, for example, have an SP-1200 emulation button that reduces the bit depth to 12-bit. Most software sequencers have groove templates that allow them to emulate grooves like the MPC's timing.

Perhaps the most important part of the sample-based composition process, however, cannot be emulated: the ear. The ear in this case is not so much about the identification of the hot sample. Decades of history should tell us that the hot sample is truly a dime a dozen. It takes a keen composer's ear to hear how to manipulate those sounds into something uniquely theirs. Being able to listen for that and then create that unique sound—utilizing whatever tools—that is the blue note of sampling. And there is simply no way to automate that process.

Featured image: “Blue note inverted” by Flickr user Tim, CC BY-ND 2.0

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. He maintains his own AvantUrb site. Luta was a regular presenter for Rhythm Incursions. As an artist, he is a founding member of the collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012. Recently Concrète released the second part of their Ultimate Break Beats series for Shocklee.

REWIND!…If you liked this post, you may also dig:

“SO! Reads: Jonathan Sterne’s MP3: The Meaning of a Format”-Aaron Trammell

“Remixing Girl Talk: The Poetics and Aesthetics of Mashups”-Aram Sinnreich

“Sound as Art as Anti-environment”-Steven Hammer

Live Electronic Performance: Theory And Practice

This is the third and final installment of a three-part series on live electronic music. To review part one, "Toward a Practical Language for Live Electronic Performance," click here. To peep part two, "Musical Objects, Variability and Live Electronic Performance," click here.

“So often these laptop + controller sets are absolutely boring but this was a real performance – none of that checking your emails on stage BS. Dude rocked some Busta, Madlib, Aphex Twin, Burial and so on…”

This quote, from a blogger reviewing Flying Lotus's 2008 Mutek set, speaks volumes about audience expectations of live laptop performances. First, the blogger acknowledges that the perception of laptop performances is that they are generally boring, using the "checking your email" adage to drive home the point. He then goes on to express what he perceived to set Lotus's performance apart from that standard. Oddly enough, it wasn't the individuality of Lotus's sound; rather, it was his particular selection and configuration of other artists' work into his mix, a trademark of the DJ.

Contrasting this with the review of the 2011 Flying Lotus set that began this series reveals how important context and expectations are to the evaluation of live electronic performance. While the 2008 piece praises Lotus for a DJ-like approach to his live set, the 2011 critique was that the performance was more of a DJ set than a live electronic performance. What changed in the years between these two sets was familiarity with the style of performance (from Lotus and the various other artists on the scene with similar approaches), which shifted expectations. What both pieces lack, however, is a language to provide the musical context for their praise or critique, a language this series has sought to elucidate.

To put live electronic performances into the proper musical context, one must determine what type of performance is being observed. In the last part of this series, I arrive at four helpful distinctions to compare and describe live electronic performance, continuing this project of developing a usable aesthetic language and enabling a critical conversation about the art form. The first of the four distinctions concerns the manipulation of fixed pre-recorded sound sources into variable performances. The second concerns the physical manipulation of electronic instruments into variable performances. The third concerns the manipulation of electronic instruments into variable performances through the programming of machines. The last is an integrated category that can be expanded to include any and all combinations of the previous three.

Essential to all categories of live electronic music performance, however, is the performance's variability, without which music—and its concomitant listening practices—transforms from a "live" event into a fixed musical object. The trick to any analysis of such performance is to remember that, while these distinctions are easy to maintain in theory, in performance they quickly blur one into the other, and often the intensity and pleasure of live electronic music performance comes from their complex combinations.

Flying Lotus at Treasure Island, San Francisco on 10-15-2011, image by Flickr User laviddichterman

For example, an artist who performs a set using solely vinyl, with nothing but two turntables and a manual crossfading mixer, falls into the first distinction. Technically, the turntables and the mixer are machines, but they are being controlled manually rather than performing on their own as machines. If the artist includes a drum machine in the set, however, it becomes a hybrid (the fourth distinction), depending on whether the drum machine is being triggered by the performer (physical manipulation), playing sequences (machine manipulation), or both. Furthermore, if the drum machine triggers samples, it becomes machine manipulation (third distinction) of fixed pre-recorded sounds (first distinction). If the drum machine is used to play back sequences while the artist performs a turntablist routine, the turntable becomes the performance instrument while the drum machine holds as a fixed source. All of these relationships can be realized by a single performer over the course of a single performance, making the whole set of the hybrid variety.

While in practice the hybrid set is perhaps the most common, it's important to understand the other three distinctions, as each comes with its own set of limitations which define its potential variability. Critical listening to a live performance includes identifying when these shifts happen and how they change the variability of the set. Through combination, their individual limitations can be overcome, increasing the overall variability of the performance. One can see a performer playing the drum machine with the pads and correlate that physicality with the sound produced, then see them shift to playing the turntable and know that the drum machine has shifted to a machine performance. In this example the visual cues would be clear indicators, but for someone familiar with the distinctions the shifts can be noticed from the audio alone.

This blending of physical and mechanical elements in live music performance exposes the modular nature of live electronic performance and its instruments, teaching us that the instruments themselves shouldn't be looked at as distinction qualifiers, but rather their combination in the live rig and the variability it offers. Where we typically think of an instrument as singular, within live electronic music it is perhaps best to think of the individual components (e.g. turntables and drum machines) as the musical objects of the live rig as instrument.

Flying Lotus at the Echoplex, Los Angeles, Image by Flickr User sunny_J

Percussionists are a close acoustic parallel to the modular musical rig of electronic performers. While there are percussion players who use a single percussive instrument for their performances, others will have a rig of component elements to use at various points throughout a set. The electronic performer inherits such a configuration from keyboardists, who typically have a rig of keyboards, each with different sounds, to be used throughout a set. Availing themselves of a palette of sounds allows keyboardists to break out of the limitations of timbre and verge toward the realm of multi-instrumentalists. For electronic performers, these limitations in timbre exist only by choice, in the way individual artists configure their rigs.

From the perspective of users of traditional instruments, a multi-instrumentalist is one who goes beyond the standard of single-instrument musicianship, a musician well versed in performing on a number of different instruments, usually of different categories. In the context of electronic performance, the definition of instrument is so changed that it is more practical to think not of multi-instrumentalists but of multi-timbralists. The multi-timbralist can be understood as the standard in electronic performance. This is not to say there are no single-instrument electronic performers; however, it is practical to think of the live electronic musician's instrument not as a singular musical object, but rather as a group of musical objects (timbres) organized into the live rig. Because these rigs can comprise a nearly infinite number of musical objects, the electronic performer has the opportunity to craft a live rig that is uniquely their own. The choices they make in the configuration of their rig will define not just the sound of their performance, but the degrees of variability they can control.

Because the electronic performer's instrument is the live rig, comprised of multiple musical objects, one of the primary factors in the configuration of the rig is how the various components interact with one another over the time-dependent course of a performance. In a live tape loop performance, the musician may use a series of cassette players with an array of effects units and a small mixer. In such a rig, the mixer is the primary means of communication between objects. The communication isn't direct, however; the objects cannot communicate with each other directly, so the artist is the mediator. It is the artist who determines when the sound from any particular tape loop is fed to an effect, or at what levels the effects return sound in relation to the loops. Watching a performance such as this, one would expect the performer to be very involved in physically manipulating the various musical objects. We can categorize this as an unsynchronized electronic performance, meaning that the musical objects employed are not locked into the same temporal relations.

Big Tape Loops, Image by Flickr User choffee

The key difference between unsynchronized and synchronized performance rigs is the amount of control over the performance that can be left to the machines. The benefit of synchronized performance rigs is that they allow for greater complexity, either in configuration or in control. The value of unsynchronized performance rigs is that they have a more natural and less mechanized feel, as the timing flows from the performer's physical body. Neither can be understood as better than the other, but in general they make for separate kinds of listening experiences, which the listener should be aware of in evaluating a performance. Expectations should shift depending upon whether or not a performance rig is synchronized.

This notion of a synchronized performance rig should not be understood only as an inter-machine relationship. With the rise of digital technology, many manufacturers developed workstation-style hardware that could perform multiple functions on multiple sound sources with central synchronized control. The Roland SP-404 is a popular sampling workstation used by many artists in a live setting. Within this modest box you get twelve voices of sample polyphony, which can be organized with the internal sequencer and processed with onboard effects. However, a performer may choose not to use the sequencer at all, in which case the unit can be performed unsynchronized, just by triggering the pads. In fact, in recent years there has been a rise of drum pad players, or finger drummers, who perform using hardware machines without synchronization. Going back to our distinctions, a performance such as this would be a hybrid of the physical manipulation of fixed sources with the physical manipulation of an electronic instrument. From this qualification, we know to look for extensive physical effort in such performances as an indicator of the artist's agency over the variable performance.

Now that we've brought synchronization into the discussion, it makes sense to talk about what is often the main means of communication in the live performance rig: the computer. The ultimate benefit of a computer is the ability to process a large number of calculations per computational cycle. Put another way, it allows users to act on a number of musical variables with single functions. Practically, this means the ability to store, organize, recall, and even perform a number of complex variables. With the advent of digital synthesizers, computers were used in workstations to control everything from sequencing to patch-level sound design data. In studios, computers quickly replaced mixing consoles and tape machines (even their digital equivalents, like the ADAT), becoming the nerve center of the recording process. Eventually all of these functions and more fit into the small and portable laptop computer, bringing that processing power to the performance stage in a practical way.

Flying Lotus and his Computer, All Tomorrow’s Parties 2011, Image by Flickr User jaswooduk

A laptop can be understood as a rig in and of itself, comprised of a combination of software and plugins as musical objects, which can be controlled internally or via external controllers. If there were only two software choices and ten plugins available for laptops, there would be over seven million permutations possible. While it is entirely possible (and for many artists practical) for the laptop to be the sole object of a live rig, it is often paired with one or more controllers. The number of controllers available is nowhere near the volume of software on the market, but the possible combinations of hardware controllers push the laptop + controller + software possibilities toward infinity. With both hardware and software there is also the possibility of building custom musical objects that add to the potential uniqueness of a rig.
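
That figure presumably counts ordered arrangements: two software choices multiplied by the 10! = 3,628,800 possible orderings of ten plugins gives 2 × 3,628,800 = 7,257,600 permutations, comfortably over seven million.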

Unfortunately, it is quite often impossible to know exactly what range of tools is being utilized within a laptop strictly by looking at an artist on stage. This leads to probably the biggest misconception about the performing laptop musician. As common as the musical object may look on the stage, housed inside it can be one of the most unique and intricate configurations music (yes, all of music) has ever seen. The reductionist thought that laptop performers aren't "doing anything but checking email" is directly tied to the acousmatic nature of the objects as instruments. We can hear the sounds, but the sources, and the processes required to produce them, are often shrouded in mystery. Technology has arrived at the point where what one performs live can precisely replicate what one hears in recorded form, making it easy to leap to the conclusion that all laptop musicians do is press play.

Indeed some of them do, but to varying degrees a large number of artists are actively doing more during their live sets. A major reason for this is that one of the leading Digital Audio Workstations (DAWs) of today also doubles as a performance environment. Designed with the intent of taking the DAW to the stage, Ableton Live gives artists an interface that facilitates the translation of electronic concepts from the studio to the stage. There is a world of things possible just by learning the Live basics, but there is also a rabbit hole of advanced functions, all the way to the modular Max for Live environment, which sits on the frontier of discovering new variables for sound manipulation. For many people, however, the software is powerful enough at the basic level of use to create effective live performances.

Sample Screenshot from a performer’s Ableton Live set up for an “experimental and noisy performance” with no prerecorded material, Image by Flickr User Furibond

In its most basic use case, Ableton Live is set up much like a DJ rig, with a selection of pre-existing tracks queued up as clips which the performer blends into a uniform mix, with transitions and effects handled within the software. The possibilities expand out from there: the multi-track parts of a song separated into different clips so the performer can take different parts in and out over the course of the performance; a plugin drum machine providing an additional sound source on top of the track(s); or, alternately, the drum machine holding a sequence while track elements are laid on top of it. With the multitude of plugins available, just the combination of multi-track Live clips with a single soft-synth plugin lends itself to near-infinite combinations. The variable possibilities of this type of set, even without exploiting the full breadth of variable possibilities presented by the gear, clearly point to the artist's agency in performance.

In the context of both the DJ set and the Ableton Live set, synchronization plays a major role in contextualizing what the audience hears. Both categories of performance can be either synchronized or unsynchronized. The tightest of unsynchronized sets will sound synchronized, while the loosest of synchronized sets will sound unsynchronized. This plays very much into the audience's perception of what they are hearing, and the performer's choice of synchronization and tightness can be heavily influenced by those same audience expectations.

A second performance screen capture by the same artist, this time using pre-recorded midi sequences, Image by Flickr User Furibond

A techno set is expected to maintain something of a locked groove, indicative of a synchronized performance. A synchronized rig, whether on the DJ side (Serato using automated beat matching) or on the Ableton side (time stretch and auto BPM detection synced to MIDI), can make this a non-factor for the physical performance, so in listening to such a performance it is the variability of other factors that reveals the artist's control. For the DJ, those factors would include selection, transitions, and effects use. For the Ableton user, they can include all of those things as well as control over the individual elements in tracks and potentially other sound sources.

On the unsynchronized end of the spectrum, a vinyl DJ could accomplish the same mix as the synchronized DJ set, but it would require more physical effort on their part to keep all of the selections in time. This might mean limiting the control they exert over the other variables. An unsynchronized Live set would use the software primarily as a sound source, without MIDI, placing the timing in the hands of the performer. With the human element added to the timing, it would be more difficult to produce the machine-like timing of the other sets. This doesn't mean it couldn't be effective, but there would be an audible difference between this type of set and the others.

What we've established is that through the modular nature of the electronic musician's rig as an instrument, from synthesizer keyboards to Ableton Live, every variable consideration combines to produce infinite possibilities. Where the trumpet is limited in timbre, range, and dynamics, the turntable has infinite timbres; its range is the full spectrum of human hearing; and its dynamics are directly proportional to the output. The limitations of the electronic musician's instrument appear only in electrical current constraints, processor speed limits, the selection of components, and the limitations of the human body.

Flying Lotus at Electric Zoo, 2010, Image by Flickr User TheMusic.FM

Within these constraints, however, we have only begun to scratch the surface of possibilities. There are combinations that have never happened, variables that haven't come close to their full potential, and a multitude of variables that have yet to be discovered. One thing the electronic artist can learn from jazz toward maximizing that potential is the notion of play, as epitomized by jazz improvisation. For jazz, improvisation opened up the possibilities of the form, which impacted performance and composition. I contend that the electronic artist can push the boundaries of variable interaction by incorporating the ability to play into the rig, both in their physical performance and by giving the machine its own sense of play. Within this play lie the variables which I believe can push electronic music into the jazz of tomorrow.

Featured Image by Flickr User Juha van ‘t Zelfde

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. In 2014 he will be releasing an expanded version of this series as a book entitled "Toward a Practical Language: Live Electronic Performance". He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions podcast series with his monthly show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.

REWIND! . . .If you liked this post, you may also dig:

Evoking the Object: Physicality in the Digital Age of Music-Primus Luta

Experiments in Agent-based Sonic Composition–Andreas Pape

Calling Out To (Anti)Liveness: Recording and the Question of Presence-Osvaldo Oyola