
Live Electronic Performance: Theory And Practice

This is the third and final installment of a three-part series on live electronic music. To review part one, “Toward a Practical Language for Live Electronic Performance,” click here. To peep part two, “Musical Objects, Variability and Live Electronic Performance,” click here.

“So often these laptop + controller sets are absolutely boring but this was a real performance – none of that checking your emails on stage BS. Dude rocked some Busta, Madlib, Aphex Twin, Burial and so on…”

This quote, from a blogger about Flying Lotus’ 2008 Mutek set, speaks volumes about audience expectations of live laptop performances. First, the blogger acknowledges that laptop performances are generally perceived as boring, using the “checking your email” adage to drive home the point. He then goes on to express what he perceived to set Lotus’s performance apart from that standard. Oddly enough, it wasn’t the individualism of his sound but rather Lotus’s particular selection and configuration of other artists’ work into his mix – a trademark of the DJ.

Contrasted with the review of the 2011 Flying Lotus set that began this series, both pieces reveal how important context and expectations are to the evaluation of live electronic performance. While the 2008 piece praises Lotus for a DJ-like approach to his live set, the 2011 critique was that the performance was more of a DJ set than a live electronic performance. What changed in the years between these two sets was familiarity with the style of performance (from Lotus and the various other artists on the scene with similar approaches), producing a shift in expectations. What both lack, however, is a language to provide the musical context for their praise or critique; a language which this series has sought to elucidate.

To put live electronic performances into the proper musical context, one must determine what type of performance is being observed. In the last installment of this series, I arrived at four helpful distinctions for comparing and describing live electronic performance, continuing this project of developing a usable aesthetic language and enabling a critical conversation about the artform. The first of the four distinctions concerns the manipulation of fixed pre-recorded sound sources into variable performances. The second concerns the physical manipulation of electronic instruments into variable performances. The third demarcates the manipulation of electronic instruments into variable performances by the programming of machines. The last is an integrated category that can be expanded to include any and all combinations of the previous three.

Essential to all categories of live electronic music performance, however, is the performance’s variability, without which music, and its concomitant listening practices, transforms from a “live” event into a fixed musical object. The trick to any analysis of such a performance is to remember that, while these distinctions are easy to maintain in theory, in performance they quickly blur one into the other, and often the intensity and pleasure of live electronic music performance comes from their complex combinations.


Flying Lotus at Treasure Island, San Francisco on 10-15-2011, image by Flickr User laviddichterman

For example, an artist who performs a set using nothing but vinyl, two turntables, and a manual crossfading mixer falls under the first distinction. Technically, the turntables and manual crossfading mixer are machines, but they are being controlled manually rather than performing on their own as machines. If the artist adds a drum machine to the set, however, it becomes a hybrid (the fourth distinction), depending on whether the drum machine is being triggered by the performer (physical manipulation), playing sequences (machine manipulation), or both. Furthermore, if the drum machine triggers samples, it becomes machine manipulation (third distinction) of fixed pre-recorded sounds (first distinction). If the drum machine is used to play back sequences while the artist performs a turntablist routine, the turntable becomes the performance instrument while the drum machine holds steady as a fixed source. All of these relationships can be realized by a single performer over the course of a single performance, making the whole set of the hybrid variety.

While in practice the hybrid set is perhaps the most common, it’s important to understand the other three distinctions, as each comes with its own set of limitations which define its potential variability. Critical listening to a live performance includes identifying when these shifts happen and how they change the variability of the set. Through combination, their individual limitations can be overcome, increasing the overall variability of the performance. One can see a performer playing the drum machine with pads, correlate that physicality with the sound produced, and then, when they shift to playing the turntable, know that the drum machine has shifted to a machine performance. In this example the visual cues would be clear indicators, but if one is familiar with the distinctions, the shifts can be noticed from the audio alone.

This blending of physical and mechanical elements in live music performance exposes the modular nature of live electronic performance and its instruments, teaching us that the instruments themselves shouldn’t be looked at as distinction qualifiers; rather, what matters is their combination in the live rig and the variability it offers. Where we typically think of an instrument as singular, within live electronic music it is perhaps best to think of the individual components (e.g. turntables and drum machines) as the musical objects of the live rig as instrument.


Flying Lotus at the Echoplex, Los Angeles, Image by Flickr User sunny_J

Percussionists are a close acoustic parallel to the modular musical rig of electronic performers. While there are percussion players who use a single percussive instrument for their performances, others will have a rig of component elements to use at various points throughout a set. The electronic performer inherits such a configuration from keyboardists, who typically have a rig of keyboards, each with different sounds, to be used throughout a set. Availing themselves of a palette of sounds allows keyboardists to break out of the limitations of timbre and verge toward the realm of multi-instrumentalists.  For electronic performers, these limitations in timbre only exist by choice in the way the individual artists configure their rigs.

From the perspective of users of traditional instruments, a multi-instrumentalist is one who goes beyond the standard of single-instrument musicianship, representing a musician well versed in performing on a number of different instruments, usually of different categories. In the context of electronic performance, the definition of instrument is so changed that it is more practical to think not of multi-instrumentalists but multi-timbralists. The multi-timbralist can be understood as the standard in electronic performance. This is not to say there are not single-instrument electronic performers; however, it is practical to think about the live electronic musician’s instrument not as a singular musical object, but rather a group of musical objects (timbres) organized into the live rig. Because these rigs can be composed of a nearly infinite number of musical objects, the electronic performer has the opportunity to craft a live rig that is uniquely their own. The choices they make in the configuration of their rig will define not just the sound of their performance, but the degrees of variability they can control.

Because the electronic performer’s instrument is the live rig comprised of multiple musical objects, one of the primary factors involved in the configuration of the rig is how the various components interact with one another over the time-dependent course of a performance. In a live tape loop performance, the musician may use a series of cassette players with an array of effects units and a small mixer. In such a rig, the mixer is the primary means of communication between objects. In this type of rig, however, the communication isn’t direct. The objects cannot communicate with each other directly; rather, the artist is the mediator. It is the artist who determines when the sound from any particular tape loop is fed to an effect, or at what levels the effects return sound in relation to the loops. While watching a performance such as this, one would expect the performer to be very involved in physically manipulating the various musical objects. We can categorize this as an unsynchronized electronic performance, meaning that the musical objects employed are not locked into the same temporal relations.


Big Tape Loops, Image by Flickr User choffee

The key difference between unsynchronized and synchronized performance rigs is the amount of control over the performance that can be left to the machines. The benefit of synchronized performance rigs is that they allow for greater complexity, either in configuration or control. The value of unsynchronized performance rigs is that they have a more natural and less mechanized feel, as the timing flows from the performer’s physical body. Neither could be understood as better than the other, but in general they do make for separate kinds of listening experiences, which the listener should be aware of in evaluating a performance. Expectations should shift depending upon whether or not a performance rig is synchronized.

This notion of a synchronized performance rig should not be understood only as an inter-machine relationship. With the rise of digital technology, many manufacturers developed workstation-style hardware which could perform multiple functions on multiple sound sources with a central synchronized control. The Roland SP-404 is a popular sampling workstation, used by many artists in a live setting. Within this modest box you get twelve voices of sample polyphony, which can be organized with the internal sequencer and processed with onboard effects. However, a performer may choose not to utilize the sequencer at all, in which case the unit can be performed unsynchronized, just by triggering the pads. In fact, in recent years there has been a rise of drum pad players, or finger drummers, who perform using hardware machines without synchronization. Going back to our distinctions, a performance such as this would be a hybrid of the physical manipulation of fixed sources with the physical manipulation of an electronic instrument. From this qualification, we know to look for extensive physical effort in such performances as an indicator of the artist’s agency over the variable performance.

Now that we’ve brought synchronization into the discussion, it makes sense to talk about what is often the main means of communication in the live performance rig – the computer. The ultimate benefit of a computer is its ability to process a large number of calculations per computational cycle. Put another way, it allows users to act on a number of musical variables in single functions. Practically, this means the ability to store, organize, recall and even perform a number of complex variables. With the advent of digital synthesizers, computers were being used in workstations to control everything from sequencing to patch sound design data. In studios, computers quickly replaced mixing consoles and tape machines (even their digital equivalents like the ADAT), becoming the nerve center of the recording process. Eventually all of these functions and more were able to fit into the small and portable laptop computer, bringing that processing power to the performance stage in a practical way.


Flying Lotus and his Computer, All Tomorrow’s Parties 2011, Image by Flickr User jaswooduk

A laptop can be understood as a rig in and of itself, comprised of a combination of software and plugins as musical objects, which can be controlled internally or via external controllers. If there were only two software choices and ten plugins available for laptops, there would be over seven million permutations possible. While it is entirely possible (and for many artists practical) for the laptop to be the sole object of a live rig, the laptop is often paired with one or more controllers. The number of controllers available is nowhere near the volume of software on the market, but the possible combinations of hardware controllers take the laptop + controller + software combination possibilities toward infinity. With both hardware and software there is also the possibility of building custom musical objects that add to the potential uniqueness of a rig.
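To give a feel for the scale of that claim, here is a quick back-of-the-envelope sketch in Python. The reading is mine, not a published calculation: treat the ten plugins as an ordered signal chain inside either of two host programs, and the count lands just past seven million.

```python
from math import factorial

# A hypothetical inventory: two host programs, ten plugins.
hosts = 2
plugins = 10

# Read "permutations" literally: every possible ordering of all ten
# plugins (a full signal chain) inside either host.
ordered_chains = hosts * factorial(plugins)      # 2 * 10! = 7,257,600
print(f"ordered plugin chains: {ordered_chains:,}")

# For comparison, if order didn't matter and each plugin were simply
# switched on or off, the count would be far smaller.
on_off_combinations = hosts * 2 ** plugins       # 2 * 1,024 = 2,048
print(f"unordered on/off combinations: {on_off_combinations:,}")
```

Either way the point stands: once real-world catalogues of software, plugins and controllers replace these toy numbers, the space of possible rigs is effectively unbounded.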

Unfortunately, it is often impossible to know exactly what range of tools is being utilized within a laptop strictly by looking at an artist on stage. This leads to probably the biggest misconception about the performing laptop musician. As common as the musical object may look on the stage, housed inside of it can be some of the most unique and intricate configurations music (yes, all of music) has ever seen. The reductionist thought that laptop performers aren’t “doing anything but checking email” is directly tied to the acousmatic nature of the objects as instruments. We can hear the sounds, but determining the sources and understanding the processes required to produce them is often shrouded in mystery. Technology has arrived at the point where what one performs live can precisely replicate what one hears in recorded form, making it easy to leap to the conclusion that all laptop musicians do is press play.

Indeed some of them do, but to varying degrees a large number of artists are actively doing more during their live sets. A major reason for this is that one of the leading Digital Audio Workstations (DAWs) of today also doubles as a performance environment. Designed with the intent of taking the DAW to the stage, Ableton Live gives artists an interface that facilitates the translation of electronic concepts from the studio to the stage. There is a world of things that are possible just by learning the Live basics, but there’s also a rabbit hole of advanced functions, all the way to the modular Max for Live environment, which lies on the frontier of discovering new variables for sound manipulation. For many people, however, the software is powerful enough at the basic level of use to create effective live performances.


Sample Screenshot from a performer’s Ableton Live set up for an “experimental and noisy performance” with no prerecorded material, Image by Flickr User Furibond

In its most basic use case, Ableton Live is set up much like a DJ rig, with a selection of pre-existing tracks queued up as clips which the performer blends into a uniform mix, with transitions and effects handled within the software. The possibilities expand out from there: multi-track parts of a song separated into different clips so the performer can take different parts in and out over the course of the performance; a plugin drum machine providing an additional sound source on top of the track(s); or, alternatively, the drum machine holding a sequence while track elements are laid on top of it. With the multitude of plugins available, just the combination of multi-track Live clips with a single soft synth plugin lends itself to near-infinite combinations. The variable possibilities of this type of set, even while not exploiting the full breadth of variability presented by the gear, clearly point to the artist’s agency in performance.

Within the context of both the DJ set and the Ableton Live set, synchronization plays a major role in contextualization. Both categories of performance can be either synchronized or unsynchronized. The tightest of unsynchronized sets will sound synchronized, while the loosest of synchronized sets will sound unsynchronized. This plays very much into audience perception of what they are hearing, and the performer’s choice of synchronization and tightness can be heavily influenced by those same audience expectations.


A second performance screen capture by the same artist, this time using pre-recorded midi sequences, Image by Flickr User Furibond

A techno set is expected to maintain something of a locked groove, indicative of a synchronized performance. A synchronized rig, either on the DJ side (Serato utilizing automated beat matching) or on the Ableton side (time stretching and automatic BPM detection synced to MIDI), can make this a non-factor for the physical performance, and so in listening to such a performance it is the variability of other factors which reveals the artist’s control. For the DJ, those factors would include the selection, transitions and effects use. For the Ableton user, they can include all of those things as well as control over the individual elements in tracks and potentially other sound sources.

On the unsynchronized end of the spectrum, a vinyl DJ could accomplish the same mix as the synchronized DJ set, but it would physically require more effort on their part to keep all of the selections in time. This would mean they might have to limit how much control they exert over the other variables. An unsynchronized Live set would utilize the software primarily as a sound source, without MIDI sync, placing the timing in the hands of the performer. With the human element added to the timing, it would be more difficult to produce the machine-like timing of the other sets. This doesn’t mean that it couldn’t be effective, but there would be an audible difference in this type of set compared to the others.

What we’ve established is that, through the modular nature of the electronic musician’s rig as an instrument, from synthesizer keyboards to Ableton Live, every variable consideration combines to produce infinite possibilities. Where the trumpet is limited in timbre, range and dynamics, the turntable has infinite timbres; its range is the full spectrum of human hearing; and its dynamics are directly proportional to the output. The limitations of the electronic musician’s instrument appear only in electrical current constraints, processor speed limits, the selection of components and the limitations of the human body.


Flying Lotus at Electric Zoo, 2010, Image by Flickr User TheMusic.FM

Within these constraints, however, we have only begun to touch the surface of possibilities. There are combinations that have never happened, variables that haven’t come close to their full potential, and a multitude of variables that have yet to be discovered. One thing that the electronic artist can learn from jazz toward maximizing that potential is the notion of play, as epitomized by jazz improvisation. For jazz, improvisation opened up the possibilities of the form, which impacted both performance and composition. I contend that the electronic artist can push the boundaries of variable interaction by incorporating the ability to play from the rig, both in their physical performance and by giving the machine its own sense of play. Within this play lie the variables which I believe can push electronic music into the jazz of tomorrow.

Featured Image by Flickr User Juha van ‘t Zelfde

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. In 2014 he will be releasing an expanded version of this series as a book entitled “Toward a Practical Language: Live Electronic Performance”. He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions Podcast series with his monthly show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.

REWIND! . . . If you liked this post, you may also dig:

Evoking the Object: Physicality in the Digital Age of Music–Primus Luta

Experiments in Agent-based Sonic Composition–Andreas Pape

Calling Out To (Anti)Liveness: Recording and the Question of Presence–Osvaldo Oyola

Musical Objects, Variability and Live Electronic Performance

This is part two of a three-part series on live electronic music. To review part one, click here.

In the first part of this series, “Toward a Practical Language for Live Electronic Performance,” I established the language of variability as an index for objectively measuring the quality of musical performances. From this we were able to rethink traditional musical instruments as musical objects with variables that could be used to evaluate performer proficiency, both as a universal for the instrument (proficiency on the trumpet) and within genre and style constraints (proficiency as a Shrill C trumpet player). I propose that this language can be used to describe the performance of electronic music, making it easier to draw parallels with traditional Western forms. Having proven useful with traditional instruments, the language will now be tested against the musical objects of electronic performance.

We’ll start with a DJ set. While not necessarily an instrument, a performed DJ set is a live musical object comprised of a number of variables. First would be the hardware.  A vinyl DJ rig consists of at least two turntables, a mixer and a selection of vinyl. A CDJ rig uses two CD decks and a mixer. Serato and other vinyl controller software require only one turntable, a mixer and a laptop. Laptop mixing can be done with or without a controller. One could also do a cassette mix, reel to reel mix, or other hardware format mixing. Critical is a means to combine audio from separate sources into one uniform mix.  Some of the other variables involved in this include selection, transitions and effects.

DJ Jenn Lush (Home Bass), FORWARD Winter Session 2012, Image by JHG Photography (c)

Because DJ sets are expected to be filled with pre-recorded sounds, the selection of sounds available is as broad as all of the sounds ever recorded. Specific styles of DJ sets, like an ambient DJ set, limit the selection down to a subset of recorded music. The choice of hardware can limit that even more. An all-vinyl DJ set of ambient music presents more of a challenge, in terms of selection, than a laptop set in the same style, because there are fewer ambient records pressed to vinyl than are available in a digital format.

Connected to selection are transitions, which could be said to define a DJ. When thinking of transitions there are two component factors: the playlist and the move from one song to the next. The playlist is obviously directly tied to the selection; however, even if you select the most popular songs for the style, unless they are put into a logical order, the transitions between them could make the set horrible.

One of the transitional keys to keeping a mix flowing is beat matching. In a turntable DJ set the degree of difficulty for beat matching is high, because all of the tempos have to be matched manually by adjusting the speed of the two selections on the spinning turntables. When the tempos are synchronized, transitioning from one to the other is accomplished via a simple crossfade. With digital hardware such as the laptop, Serato and even CDJ setups, there is commonly a way to automatically match beats between selections. This makes the degree of difficulty to beat match in these formats much lower.
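For readers who want the arithmetic behind the pitch fader, here is a minimal sketch, with made-up BPM values, of the calculation a turntablist performs by ear and a digital system performs automatically.

```python
def pitch_adjustment(source_bpm: float, target_bpm: float) -> float:
    """Percentage change in platter speed needed to bring source_bpm to target_bpm."""
    return (target_bpm / source_bpm - 1.0) * 100.0

# Hypothetical example: matching a 126 BPM record to a 128 BPM record.
print(f"pitch fader: {pitch_adjustment(126.0, 128.0):+.2f}%")   # about +1.59%

# Speeding up a vinyl record this way raises its pitch by the same ratio;
# digital systems that time stretch can match tempo without shifting pitch,
# which is one reason their degree of difficulty is lower.
```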

Effects, another variable, rely on what’s available through the hardware medium. With the turntable DJ set, the mixer is the primary source of effects, and those have, until recent years, been limited to disc manipulation (e.g. scratching), crossfader, and EQ effects. Many of the non-vinyl setups, and even some of the vinyl setups, now include a variety of digital effects like delay, reverb, samplers, granular effects and more.

With these variables so defined it becomes easier to objectively analyze the expressed variability of a live DJ set. But, while the variables themselves are objective, the value placed on them and even how they are evaluated are not. The language only provides the common ground for analysis and discussion. So the next time you’re at an event and the person next to you says, “this DJ is a hack!” you can say, “well, they’ve got a pretty diverse selection with rather seamless transitions, maybe you just don’t like the music,” to which they’ll reply, “yeah, I don’t think I like this music,” which is decent progress in the scheme of things. If we really want to talk about live electronic performance, however, we will need to move beyond the DJ set to exemplify how this variable language can work to accurately describe the other musical objects which appear at a live electronic performance.


Joe Nice at Reconstrvct in Brooklyn NY on 2-23-2013, Photo by Kyle Rober, Courtesy of Electrogenic

Take, for example, another electronic instrument: the keyboard. The keyboard itself is a challenging instrument to define; in fact, I could argue that the keyboard is itself not actually an instrument but a musical object. It is a component part of a group of instruments commonly referred to as keyboards, but the keyboard itself is not the instrument. What it is, rather, is one of the earliest examples of controllerism.

On a piano, fingers are typically used to press keys on the keyboard, which trigger the hammers to hit the strings and produce sound. The range of the instrument spans just over seven octaves, from A0 to C8, and can theoretically have 88-voice polyphony, though in practice that polyphony is limited to the ten fingers. It can play a wide range of dynamics and includes pedals which can be used to modify the sustain of pitches. With a pipe organ, the keyboard controls ranks of pipes with completely different timbre, range, and dynamics; the polyphony increases and the foot pedals can perform radically different functions. The differences from the piano grow even more once we enter the realm where the term “keyboard,” as instrument, is most commonly used: the synthesizer keyboard.

The first glaring difference is that, even if you have an encyclopedic knowledge of keyboard synthesizers, when you see a performer with one on stage you simply cannot know by sight what sounds it will produce. Pressing a key on a synthesizer keyboard can produce an infinite number of sounds, which can change not just from song to song, but from second to second and key to key. A performer’s left thumb can produce an entirely different sound than their left index finger. Using a keyboard equipped with a sequencer, the performer’s fingers may not press any keys at all but can still be active in the performance.


Minimoog Voyager Electric Blue, Image by Flickr User harald walker

When the keyboard synthesizer was first introduced, it was used by traditional piano players in standard band configurations, like a piano or organ, with timbres limited to one per song and the performance aspect limited to fingers pressing keys. Some keyboardists, however, used the instrument more as a bank of sound effects and textures. They may have been playing the same keys, but one wouldn’t necessarily expect to hear a I-IV-V chord progression. Rather than listening for the physical dexterity of the player’s fingers, the key to listening to a keyboard in this context was evaluating the sounds produced first, and then how they were played to fit into the surrounding musical context.

Could one of these performers be seen as more competent than the other? Possibly. The first performer could be said to be one of the most amazing keyboard players in the piano player sense, but because they aren’t really maximizing the variability potential of the instrument, it could be said they fall short as a keyboard synthesizer performer. The second performer, on the other hand, may not even know what a I-IV-V chord progression is and thus be considered incompetent on the keyboard in the piano player sense, but the ways in which they exploit the variable possibilities show their mastery of the keyboard synthesizer as an instrument.

Well, almost.

While generally speaking there isn’t a set of variables which defines the keyboard synthesizer as an instrument, if we think of the keyboard synthesizer as a group of musical instruments, each of the individual types of keyboard synthesizers comes with its own set of fixed variables which can be defined. Many of these variables are consistent across the various keyboards, but not always in a standard arrangement.

As such, while the umbrella term “keyboard” persists, it is perhaps more practical to define the instruments and their players individually. There are Juno-60 players, ARP Odyssey players, MiniMoog players, Crumar Spirit players and more. Naturally an individual player can be well versed in more than one of these instruments and thus be thought of as a keyboardist, but their ability as a keyboardist would have to be properly contextualized per instrument in their keyboard repertoire. Using the MiniMoog as an example, we can show how its variability as an instrument defines it and plays into how a performance on the instrument can be perceived.


Minimoog, Image by Flickr User Francesco Romito

The first variable worth considering when evaluating the MiniMoog is that it is a monophonic instrument. This is radically different from the piano: despite one’s ability to use ten fingers (or other extremities), only one note will sound at a time. The keyboard section of the instrument is only three and a half octaves long, though the range is itself variable. On the left-hand side there is a pitch wheel and a modulation wheel. The pitch wheel can vary the pitch of the currently playing note, while the modulation wheel can alter the actual sound design.

As a monophonic instrument, one does not need to have both hands on the keyboard, as only one note will ever sound at a time. This frees a hand to modify the sound being triggered by the keyboard, most obviously via the pitch and modulation wheels, but all of the exposed sound design controls are also available. This means that in performance every aspect of the sound design and the triggering can be variable. Of course these changes are limited to what one can do with one’s hands, but the MiniMoog also features a function common in analog synths: a Control Voltage input. This means that an external source can control aspects of the sound design, the triggering, or both.

Despite this obvious difference from the piano, playing the MiniMoog does not have to be any less of a physical act. A player using their right hand to play the keyboard while modulating the sound with their left plays with a different level of dexterity than the piano player. The right and left hand are performing different motions; while the right hand uses fingers to press keys as the arm moves up and down the keyboard, the left hand can be adjusting the pitch or modulation wheels with a pushing action, or alternately adjusting the knobs with a turning action. Like patting your head and rubbing your belly, controlling a well-timed filter sweep while simultaneously playing a melody is nowhere near as easy as it sounds.

At the same time, playing the MiniMoog doesn’t have to be very physical at all. A sequencer could be responsible for all of the note triggering, leaving both hands free to modulate the sound. Similarly, the performer may not touch the MiniMoog at all, instead playing the sequencer itself as an intermediary between them and the sound of the instrument. In this case the MiniMoog is not being used as a keyboard, yet it retains its instrument status, as all of the sounds are being generated from it, with the sequencer being used as the controller. Despite not having any physical contact with the instrument itself, the performer can still play it.
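A rough sketch of that arrangement, in Python rather than control voltage, may help. Everything here is hypothetical (the pattern, the clock rate, the print call standing in for a CV/gate output), but it shows how the triggering can be handed entirely to the machine while the sound design stays in the performer’s hands.

```python
import time
from dataclasses import dataclass

@dataclass
class Step:
    pitch_cv: float   # control voltage in volts (hypothetical values)
    gate: bool        # whether a note is triggered on this step

# A made-up eight-step pattern; in a hardware rig these values would leave
# the sequencer as control voltage and gate signals bound for the synth.
pattern = [
    Step(1.00, True), Step(1.25, True), Step(1.00, False), Step(1.50, True),
    Step(1.00, True), Step(1.75, True), Step(1.00, False), Step(2.00, True),
]

def run_sequencer(steps, bpm=120.0, send=print):
    """Step through the pattern at sixteenth-note resolution, handing each
    step to `send`, which stands in for the instrument's CV/gate inputs."""
    step_length = 60.0 / bpm / 4.0   # seconds per sixteenth note
    for step in steps:
        send(f"CV {step.pitch_cv:.2f} V, gate {'on' if step.gate else 'off'}")
        time.sleep(step_length)

# The performer presses play once; the machine handles the triggering,
# leaving both hands free to shape the sound itself.
run_sequencer(pattern)
```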


Minimoog in Live DJ Performance, Image by Flickr Users Huba and Silica

Taking it one step further: if a performer were only to touch a sequencer at the start of the performance to press play and never touch the instrument, could they still be said to be playing the MiniMoog live? There is little doubt that the MiniMoog is indeed still performing, because it does not have the mechanism to play by itself, but requires agency to elicit a sonic response. In this example that agency comes from the sequencer, but that does not eliminate the performer. The sequencer itself has to be programmed in order to provide the instrument with the proper control voltages, and the instrument itself has to be set up sonically, with a designed sound receptive to the sequencer’s control. If the performer is not physically manipulating either device, however, they are not performing live; the machines are.

From this we can establish the first dichotomy of electronic performance: the layers of variability in an electronic performance can be isolated into two specific categories, physical variability and sonic variability. While these two aspects are also present in traditional instrument performance, they are generally thought to be inseparable without additional devices. The vibrato of an acoustic guitar is only accomplished by physically modulating the strings to produce the effect. With an electronic instrument, however, vibrato can be performed by an LFO modulating the oscillator’s pitch. That LFO can be controlled physically, but there does not have to be a physical motion (such as a knob turn) associated with it in order for it to be a live modulation or performance. The benefit of it running without physical aid is that it frees up the body to control other things, increasing the variability of the performance.
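A minimal sketch of that idea, assuming nothing more than NumPy and some illustrative rate and depth values: a low-frequency oscillator nudges the pitch of an audio-rate oscillator up and down, and the vibrato is “performed” with no physical gesture at all.

```python
import numpy as np

SAMPLE_RATE = 44100
DURATION = 2.0                    # seconds of audio to render
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

base_freq = 220.0                 # the held note (A3)
lfo_rate = 5.0                    # vibrato speed in Hz
lfo_depth = 4.0                   # vibrato width in Hz

# The LFO is just another oscillator, running far below audible rates,
# whose output continuously varies the pitch of the main oscillator.
lfo = np.sin(2 * np.pi * lfo_rate * t)
instantaneous_freq = base_freq + lfo_depth * lfo

# Integrate the instantaneous frequency to get phase, then render the tone.
phase = 2 * np.pi * np.cumsum(instantaneous_freq) / SAMPLE_RATE
tone = np.sin(phase)

# No hand touches a wheel or knob: the modulation is performed by the LFO,
# freeing the body to control other variables.
print(f"rendered {tone.size} samples of {lfo_rate} Hz vibrato")
```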

In a situation where all of the aspects of the performance are being controlled by electronic functions, the agency in performance shifts from the artist performing live to the artist establishing the parameters by which the machines perform live. Is the artist calling this a live performance a hack? Absolutely not, but it’s important that the context of the performance is understood for it to be evaluated. Like evaluating the monophonic MiniMoog performer based on the criteria of the polyphonic pianist, evaluating a machine performance based on physical criteria is unfair.


Daniel Carter on horn during Overcast Radio’s set at EPICENTER: 02, Photo courtesy of Raymond Angelo (c)

In the evaluation of a machine performance, just as with a physical one, variability still plays an important role. At the most basic level the machine has to actually be performing, and this is best measured by the potential variability of the sound. This gets tricky with digital instruments, as, barring outside influences, it is completely possible to repeat the exact same performance in the digital domain, so that there is no variation between iterations. But even such cases, with a digital sequencer controlling a digital instrument and no physical interaction, are still machine performances; they just exhibit very little variability. The performance aspect of the machine only disappears when the possibility for variability is completely removed, at which point the machine is no longer a performance instrument but a playback device, as is the case with a CD player playing a backing track. The CD player, if not being manipulated physically or by an external control, is not a performance instrument, as all of the sound contained within it can only be heard as one fixed recorded performance, not live. It is only when these fixed performances are manipulated, either physically (i.e. a DJ set) or by other means, that they go from fixed performances to potentially live ones.
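The difference between a playback device and a low-variability machine performance can be made concrete with a small sketch. The pattern and jitter amounts below are invented for illustration: the first function returns the identical result every pass, while the second lets each iteration deviate slightly in timing and dynamics, which is one simple way a machine performance retains variability.

```python
import random

# A made-up bar of four hits: (position in beats, MIDI-style velocity).
base_pattern = [(0.00, 100), (0.50, 90), (1.00, 100), (1.50, 80)]

def fixed_playback(pattern):
    """A playback device: every iteration is identical, so variability is zero."""
    return list(pattern)

def variable_playback(pattern, timing_jitter=0.02, velocity_jitter=8):
    """A machine performance: each pass deviates slightly in time and dynamics."""
    return [
        (round(beat + random.uniform(-timing_jitter, timing_jitter), 3),
         max(1, min(127, velocity + random.randint(-velocity_jitter, velocity_jitter))))
        for beat, velocity in pattern
    ]

for _ in range(2):
    print("fixed   :", fixed_playback(base_pattern))
    print("variable:", variable_playback(base_pattern))
```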

Mala and Hatcha in Detroit, Image by Tom Selekta, catch him on Soundcloud


From all of this we arrive at four basic distinctions for live electronic performances:

• The electro/mechanical manipulation of fixed sonic performances

• The physical manipulation of electronic instruments

• The mechanized manipulation of electronic instruments

• A hybrid of physical and mechanized manipulation of electronic instruments

These help set up the context for evaluating electronic performances, as before we can determine the quality of a performance we must first be able to distinguish what type of performance we are observing. So far we’ve only dealt with a monophonic instrument, but even with its limitations we can see how the potential variability is quite high. As we get into the laptop as a performance instrument, that variability increases exponentially.

This is part two of a three-part series. In the next part we will begin to exemplify the laptop as performance instrument, using this language to show the breadth of variability available in electronic performance, and perhaps show that indeed, where that variability continues to be explored, there is merit to the potential of live electronic music as an extension of jazz.

Native Frequencies at the Trocadero 2013, Featured Image Courtesy of Raymond Angelo (C)

Primus Luta is a husband and father of three. He is a writer, technologist and an artist exploring the intersection of technology and art, and their philosophical implications. He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions Podcast series with his show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.

REWIND! . . . If you liked this post, you may also dig:

Experiments in Agent-based Sonic Composition–Andreas Pape

Evoking the Object: Physicality in the Digital Age of Music–Primus Luta

Sound as Art as Anti-environment–Steven Hammer

 

 
