This is part two of a three-part series on live electronic music. To review part one, click here.
In the first part of this series, “Toward a Practical Language for Live Electronic Performance,” I established the language of variability as an index for objectively measuring the quality of musical performances. From this we were able to rethink traditional musical instruments as musical objects with variables that could be used to evaluate performer proficiency, both as a universal for the instrument (proficiency on the trumpet) and within genre and style constraints (proficiency as a Shrill C trumpet player). I propose that this language can be used to describe the performance of electronic music, making it easier to parallel with traditional Western forms. Having proven useful with traditional instruments, the language will now be tested against the musical objects of electronic performance.
We’ll start with a DJ set. While not necessarily an instrument, a performed DJ set is a live musical object composed of a number of variables. First is the hardware. A vinyl DJ rig consists of at least two turntables, a mixer and a selection of vinyl. A CDJ rig uses two CD decks and a mixer. Serato and other vinyl controller software require only one turntable, a mixer and a laptop. Laptop mixing can be done with or without a controller. One could also mix cassettes, reel-to-reel tape or other hardware formats. Whatever the format, the critical requirement is a means of combining audio from separate sources into one uniform mix. The other variables involved include selection, transitions and effects.
Because DJ sets are expected to be filled with pre-recorded sounds, the selection of sounds available is as broad as all of the sounds ever recorded. Specific styles of DJ set, like an ambient set, narrow the selection to a subset of recorded music. The choice of hardware can narrow it even further. An all-vinyl DJ set of ambient music presents more of a challenge, in terms of selection, than a laptop set in the same style, because fewer ambient records have been pressed to vinyl than are available digitally.
Connected to selection are transitions, which could be said to define a DJ. There are two component factors to consider: the playlist and the move from one song to the next. The playlist is obviously tied directly to the selection; however, even if you select the most popular songs for the style, unless they are put into a logical order, the transitions between them can ruin the set.
One of the transitional keys to keeping a mix flowing is beat matching. In a turntable DJ set the beat matching degree of difficulty is high, because the tempos have to be matched manually by adjusting the speed of the two selections on the spinning turntables. Once the tempos are synchronized, transitioning from one to the other can be accomplished with a simple crossfade. With digital setups such as the laptop, Serato and even CDJs, there is commonly a way to match beats between selections automatically, which makes the degree of difficulty much lower.
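The arithmetic behind manual beat matching can be sketched in a few lines. This is an illustrative Python sketch, not anything from the article: the function name, the example tempos, and the assumed ±8% pitch-fader range (typical of many turntables, though models vary) are all mine.

```python
def pitch_adjust_percent(track_bpm: float, target_bpm: float) -> float:
    """Percentage speed adjustment needed to play material recorded at
    `track_bpm` so that it lands on `target_bpm` (positive = speed up)."""
    return (target_bpm / track_bpm - 1.0) * 100.0

# Matching a 122 BPM record to a 125 BPM record already playing:
adjustment = pitch_adjust_percent(122.0, 125.0)  # roughly +2.46%

# Many turntable pitch faders span about +/-8%, so part of the DJ's
# judgment is whether the match is physically reachable at all.
reachable = abs(adjustment) <= 8.0
```

The point of the sketch is that the digital "sync" button is computing exactly this ratio; the turntablist has to find it by ear, by nudging the platter and fader until the drift disappears.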
Effects, another variable, depend on what’s available through the hardware medium. In a turntable DJ set the mixer is the primary source of effects, and until recent years those were limited to disc manipulation (e.g. scratching), crossfader and EQ effects. Many of the non-vinyl setups, and even some of the vinyl setups, now include a variety of digital effects like delay, reverb, samplers and granular processing.
With these variables so defined, it becomes easier to objectively analyze the expressed variability of a live DJ set. But while the variables themselves are objective, the value placed on them, and even how they are evaluated, are not. The language only provides the common ground for analysis and discussion. So the next time you’re at an event and the person next to you says, “This DJ is a hack!” you can say, “Well, they’ve got a pretty diverse selection with rather seamless transitions; maybe you just don’t like the music,” to which they’ll reply, “Yeah, I don’t think I like this music,” which is decent progress in the scheme of things. If we really want to talk about live electronic performance, however, we will need to move beyond the DJ set and show how this language of variables can accurately describe the other musical objects that appear at a live electronic performance.
Take for example another electronic instrument: the keyboard. The keyboard itself is a challenging instrument to define; in fact, I would argue that the keyboard is not actually an instrument at all but a musical object. It is a component part of a group of instruments commonly referred to as keyboards, but the keyboard itself is not the instrument. What it is, rather, is one of the earliest examples of controllerism.
On a piano, fingers typically press keys on the keyboard, which trigger hammers that hit the strings and produce sound. The range of the instrument spans just over seven octaves, from A0 to C8, and it can theoretically sound with 88-voice polyphony, though in practice that polyphony is limited to the player’s ten fingers. It can play a wide range of dynamics and includes pedals which can be used to modify the sustain of pitches. With a pipe organ, the same keyboard controls ranks of wind-blown pipes with completely different timbre, range and dynamics; the polyphony increases and the foot pedals can perform radically different functions. The differences from the piano grow even greater once we enter the realm where the term “keyboard,” as instrument, is most commonly used: the synthesizer keyboard.
The first glaring difference is that, even with an encyclopedic knowledge of keyboard synthesizers, when you see a performer with one on stage you simply cannot know, by sight alone, what sounds it will produce. Pressing a key on a synthesizer keyboard can produce any of an effectively infinite number of sounds, which can change not just from song to song but from second to second and key to key. A performer’s left thumb can produce an entirely different sound than their left index finger. With a keyboard equipped with a sequencer, the performer’s fingers may not press any keys at all yet still be active in the performance.
When the keyboard synthesizer was first introduced, it was used by traditional piano players in standard band configurations, like a piano or organ, with timbres limited to one per song and the performance limited to fingers pressing keys. Some keyboardists, however, used the instrument more as a bank of sound effects and textures. They may have been playing the same keys, but one wouldn’t necessarily expect to hear a I-IV-V chord progression. Rather than listening for the physical dexterity of the player’s fingers, the key to listening to a keyboard in this context was evaluating the sounds produced first, and then how they were played to fit into the surrounding musical context.
Could one of these performers be seen as more competent than the other? Possibly. The first performer might be one of the most amazing keyboard players in the piano-player sense, but where they aren’t maximizing the variability potential of the instrument, it could be said they fall short as a keyboard synthesizer performer. The second performer, on the other hand, may not even know what a I-IV-V chord progression is, and would thus be considered incompetent on the keyboard in the piano-player sense, but the ways in which they exploit the variable possibilities show their mastery of the keyboard synthesizer as an instrument.
Generally speaking, there isn’t a single set of variables that defines the keyboard synthesizer as an instrument. But if we think of the keyboard synthesizer as a group of musical instruments, each individual type comes with its own set of fixed variables which can be defined. Many of these variables are consistent across the various keyboards, but not always in a standard arrangement.
As such, while the umbrella term “keyboard” persists, it is perhaps more practical to define the instruments and their players individually. There are Juno 60 players, ARP Odyssey players, MiniMoog players, Crumar Spirit players and more. Naturally an individual player can be well versed in more than one of these instruments and thus be thought of as a keyboardist, but their ability as a keyboardist would have to be properly contextualized per instrument in their repertoire. Using the MiniMoog as an example, we can show how its variability as an instrument defines it and shapes how a performance on it is perceived.
The first variable worth considering when evaluating the MiniMoog is that it is a monophonic instrument. This is radically different from the piano; despite one’s ability to use ten fingers (or another extremity), only one note will sound at a time. The keyboard section of the instrument spans only three and a half octaves, though the range itself is variable. On the left-hand side there are a pitch wheel and a modulation wheel. The pitch wheel can vary the pitch of the currently playing note, while the modulation wheel can alter the sound design itself.
As a monophonic instrument, one does not need both hands on the keyboard, as only one note will ever sound at a time. This frees a hand to modify the sound being triggered, most obviously via the pitch and modulation wheels, but every exposed sound-design control is also available. This means that in performance every aspect of the sound design and the triggering can be variable. Of course these changes are limited to what one can do with two hands, but the MiniMoog also features a function common to analog synths: a Control Voltage (CV) input. This means that an external source can control the sound design, the triggering, or both.
Despite this obvious difference from the piano, playing the MiniMoog does not have to be any less of a physical act. A player using their right hand to play the keyboard while modulating the sound with their left plays with a different kind of dexterity than the piano player. The two hands perform different motions: while the right hand uses fingers to press keys as the arm moves up and down the keyboard, the left hand can be adjusting the pitch or modulation wheels with a pushing action, or alternately turning knobs. Like patting your head and rubbing your belly, controlling a well-timed filter sweep while simultaneously playing a melody is nowhere near as easy as it sounds.
At the same time, playing the MiniMoog doesn’t have to be very physical at all. A sequencer could be responsible for all of the note triggering, leaving both hands free to modulate the sound. Similarly, the performer may not touch the MiniMoog at all, instead playing the sequencer itself as an intermediary between them and the sound of the instrument. In this case the MiniMoog is not being used as a keyboard, yet it retains its instrument status, as all of the sounds are generated from it, with the sequencer acting as the controller. Despite having no physical contact with the instrument itself, the performer can still play it.
Taking it one step further: if a performer were to touch a sequencer only at the start of the performance, to press play, and never touch the instrument, could they still be said to be playing the MiniMoog live? There is little doubt that the MiniMoog is indeed still performing, because it does not have the mechanism to play by itself; it requires agency to elicit a sonic response. In this example that agency comes from the sequencer, but that does not eliminate the performer. The sequencer has to be programmed to provide the instrument with the proper control voltages, and the instrument has to be set up sonically, with a designed sound receptive to the sequencer’s control. If the performer is not physically manipulating either device, however, they are not performing live; the machines are.
From this we can establish the first dichotomy of electronic performance: the layers of variability in an electronic performance can be isolated into two categories, physical variability and sonic variability. While both aspects are present in traditional instrument performance, there they are generally inseparable without additional devices. The vibrato of an acoustic guitar can only be accomplished by physically modulating the strings. With an electronic instrument, however, vibrato can be performed by an LFO modulating the pitch. That LFO can be controlled physically, but there does not have to be a physical motion (such as a knob turn) associated with it for it to be a live modulation or performance. The benefit of it running without physical aid is that it frees the body to control other things, increasing the variability of the performance.
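The LFO example can be made concrete. Below is an illustrative Python sketch of LFO-driven vibrato, modeled as periodic pitch modulation; the function name and the default rate and depth are my own assumptions, chosen as plausible vibrato settings, not values from the article.

```python
import math

def vibrato_pitch(base_hz: float, t: float,
                  lfo_rate_hz: float = 5.0,
                  depth_semitones: float = 0.3) -> float:
    """Instantaneous pitch in Hz at time t (seconds), with a sine LFO
    sweeping the pitch up and down -- the machine's stand-in for the
    guitarist's finger rocking on the string."""
    offset = depth_semitones * math.sin(2 * math.pi * lfo_rate_hz * t)
    # Convert the semitone offset to a frequency ratio (12-TET):
    return base_hz * 2 ** (offset / 12)

# At t = 0 the LFO contributes nothing, so A440 is unmodulated;
# a twentieth of a second later the sine has peaked and the pitch
# sits depth_semitones above A440. No hand ever touches a control.
unmodulated = vibrato_pitch(440.0, 0.0)
at_peak = vibrato_pitch(440.0, 0.05)
```

The guitarist's vibrato and this function produce comparable sonic results, but only the first costs a hand; the LFO version runs indefinitely while the performer's body is busy elsewhere, which is exactly the increase in available variability described above.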
In a situation where every aspect of the performance is controlled by electronic functions, the agency shifts from the artist performing live to the artist establishing the parameters by which the machines perform live. Is an artist who calls this a live performance a hack? Absolutely not, but the context of the performance must be understood for it to be evaluated. Just as it is unfair to evaluate the monophonic MiniMoog performer by the criteria of the polyphonic pianist, it is unfair to evaluate a machine performance by physical criteria.
In the evaluation of a machine performance, just as with a physical one, variability plays an important role. At the most basic level the machine has to actually be performing, and this is best measured by the potential variability of the sound. This gets tricky with digital instruments because, barring outside influences, it is entirely possible to repeat the exact same performance in the digital domain, with no variation between iterations. Even such cases, a digital sequencer controlling a digital instrument with no physical interaction, are still machine performances; they just exhibit very little variability. The performance aspect of the machine only disappears when the possibility for variability is completely removed, at which point the machine is no longer a performance instrument but a playback device, as with a CD player playing a backing track. A CD player that is not being manipulated physically or by an external control is not a performance instrument, as the sound contained within it can only be heard as one fixed recorded performance, not a live one. It is only when these fixed performances are manipulated, whether physically (as in a DJ set) or by other means, that they go from fixed performances to potentially live ones.
From all of this we arrive at four basic distinctions for live electronic performances:
• The electro/mechanical manipulation of fixed sonic performances
• The physical manipulation of electronic instruments
• The mechanized manipulation of electronic instruments
• A hybrid of physical and mechanized manipulation of electronic instruments
These help set up the context for evaluating electronic performances: before we can determine the quality of a performance we must first be able to distinguish what type of performance we are observing. So far we’ve only dealt with a monophonic instrument, but even with its limitations we can see how high the potential variability is. As we get into the laptop as a performance instrument, that variability increases exponentially.
This is part two of a three-part series. In the next part we will begin to examine the laptop as a performance instrument, using this language to show the breadth of variability available in electronic performance, and perhaps to show that, where that variability continues to be explored, there is indeed merit to the potential of live electronic music as an extension of jazz.
Native Frequencies at the Trocadero 2013, Featured Image Courtesy of Raymond Angelo (C)
Primus Luta is a husband and father of three. He is a writer, technologist and an artist exploring the intersection of technology and art, and their philosophical implications. He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions Podcast series with his show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.
In our current relationship with technology, we bring our bodies, but our minds rule–Linda Stone, “Conscious Computing”
I begin with an epigraph from Linda Stone, who coined the phrase ‘continuous partial attention’ to describe our mental state in the digital age. The passive cousin of multi-tasking, continuous partial attention is a reaction to our constantly connected lifestyles in which everything is happening right now and where value is increasingly equated with our ability to digest it all. Almost everything we do has the potential to be interrupted, be it by an email, a text or a tweet; often we will give only partial attention to any one thing in anticipation of the next thing that will require our attention. In this internal fight for mental attention, listening to music has been seriously impacted.
The digital era has seen more music released than ever before. Unfortunately, this massive influx of quantity is no measure of how we are engaging with that music. iPhones and similar devices, on which music players have become mere features, allow listening to become a thing of partial attention. From letting shuffle or random modes choose selections for you, or streaming algorithms calculate what you might like, to listening while playing Angry Birds or reading your Twitter stream, less commitment is made to the act of listening, and as such only a portion of our working memory is committed to the experience. Without working memory actively processing musical information, it is less likely to be stored for the long term, particularly if other information is continuously vying for space and attention.
These days video games sell better than music. Despite being a digital product, games are able to instill memories (even of their music) into one’s consciousness, because the game interface allows our sensory memories to work together in an active manner with the medium. Iconic memory stores visual cues from the game, echoic memory takes in the audible cues, and haptic memory is engaged in controlling game play. There is only so much else one can do while playing a video game; if something interrupts game play, the game is paused to address the new information rather than given partial attention. This is quite different from music, which plays a background role in so much of our lives; even when we actively put music on, we tend to engage it with only partial attention.
When I began thinking about turning Concrète Sound System into a record label, one of my main goals was to create works that could engage the audience in active musical experiences that could create long term memories. I felt that as important as the music would be, it would take something material to create these memories, a physical product more evocative of earlier moments in recording history than the CD, its most recent gasp. I wondered if, by creatively evoking the physical object, the listener could be engaged in an active manner that would enable the memory of music and its power to persist through the everyday waves of digital noise.
The first mass-duplicated audio medium was the Gold Moulded Edison Cylinder, at the turn of the twentieth century. Imagine two copies of one of these cylinder recordings today, as musical objects. Each would have over a hundred years of physical history. From the wear of the cases to the condition of the wax based on the temperatures at which they were stored, each of these cylinders would be a unique musical object, with a completely different history, despite having the same origin. It is reasonable to assume that if the cylinders were played today on the same playback device, despite the compositions and performances being exactly the same, the differences between the recordings would be audible.
Even without a century of history, there would likely be audible differences between the cylinders. If one cylinder was the first copy made and another the 150th (master cylinders of Gold Moulded Edison Cylinders could reliably produce only about 150 copies), the physical wear of the reproduction process would leave its own imprint, making each of those copies a distinct musical object. In the analog world, the differences between copies decreased substantially as the technology improved. Cassettes were manufactured in batches of tens to hundreds of thousands without audible differences. But even at circulations so high, over time each of those analog copies took on its own identity and collected its own memories.
The listener as an active agent contributed to the development of these unique musical objects. After a purchase, any number of variables played into the ritual of the first experience of the music. Was there a way to listen upon walking out of the store? Were there liner notes or lyric sheets inside? Would you read those prior to listening or as you listen? Where would you listen? Through headphones? The listening chair in front of the hi-fi stereo? Or on the boombox with some friends? All of these possibilities shaped memories as musical objects that defined the music consumption culture of the past.
For example, I bought 2Pac’s debut album 2Pacalypse Now on cassette the day it was released. I loved the album so much that I kept it in regular rotation in my Walkman for months, until finally the tape popped. Rather than go out and buy a new copy, I decided to perform surgery. It was in a screwless reel case, which meant I couldn’t just open it up to retrieve the ends of the tape trapped inside; I had to crack the case open and transplant the reels into a new body. So my copy of the 2Pacalypse Now cassette now sits inside a clear shell with no visual markings. It also has a piece of tape that was used to splice it back together, which makes an audible warp on playback. I can be fairly sure there is no other copy of 2Pacalypse Now that sounds exactly like mine. While this probably detracts from the resale value of the cassette (not that I’d sell it), it is imbued with a personal history that is priceless.
Cassettes in particular played a significant role in attaching physical memories to music beyond the recordings they held. They gave birth to the mixtape. The taper community was born from personal tape recorders that allowed concert-goers to record performances they attended, and, prior to the rise of peer-to-peer sharing online, these communities traded tapes internationally via regular postal mail. European jazz and rock concerts found their way back to the States, and South Bronx hip-hop performances traveled with the military in Asia. All of these instances required a physical commitment, and with that commitment came memories that inherently became their own musical objects.
Needless to say, the nature of musical exchange has changed with the rise of the digital age. This is not to say that memories as musical objects have gone away, but they are being taken for granted as the objects lose their physicality. I remember going to The Wiz on 96th Street with $10 to spend on music. I spent at least ten minutes trying to decide between Sid and B-Tonn and Arabian Prince. I ended up with Arabian Prince and have regretted it ever since I got home and listened that day, as I never found Sid and B-Tonn for sale again. Today I could download both in the time it takes to walk to the train station. After skimming the first few songs of Arabian Prince I could decide it was not for me and drag it into the trash, where the memory of it would disappear with the files. No matter how I felt about the music then, the memory of it is a permanent fixture in my mind because of the physical actions it took to listen.
The first release for Concrète Sound System, Schrödinger’s Cassette, tackled this issue head on by presenting the audience with its own paradox, an update of physicist Erwin Schrödinger’s famous thought experiment, in which the ultimate fate of the cassette inside is left up to the individual. Schrödinger’s Cassette sought to take listeners out of digital modes of consumption by using an analog medium to evoke the physical. The cassette-release trend has been growing over the last few years, almost in parallel with the rise of digital music, speaking both to a need to separate music from our digital lives and to a desire to work harder for it. At a minimum, listening to a cassette requires having a cassette player, and acquiring one these days takes commitment. Unlike digital media, listeners cannot instantly skip a song on a cassette or put a favorite on repeat. Moving through its songs takes physical manipulation of the medium, and doing so is a time investment. All these limitations make the cassette a medium best suited to linear listening, from beginning to end (unless you physically cut, rearrange and splice it yourself).
Schrödinger’s Cassette took the required commitment a step further by encasing the cassette itself in industrial-grade concrete (the French concrète, meaning ‘real,’ is where the label derives its name). The listener had to actively crack the concrete in order to hear the music. The paradox is that, depending on the listener’s method of cracking, harm could be done to the cassette that might render it ‘unlistenable.’ Upon receiving one of these pieces, the listener holds in their hands a musical object which they must physically act upon in order to create an unrepeatable musical event. Schrödinger’s Cassette has a look, a sound (if shaken, you can hear the cassette reels), a feel, a smell, and a taste as well (though I wouldn’t advise it). All of the senses can be actively focused on the object and, as such, the whole of one’s working memory is engaged in the discernment of the object’s musical contents.
For many, Schrödinger’s Cassette was taken as a work of art and left uncracked. The Wire magazine successfully cracked one edition open, revealing a portion of the musical contents on their regular radio program. For those who decided not to crack theirs, digital versions were made available, though only after the listener had spent some time with the physical object. In this way the music from the project, a compilation called Between the Cracks, was directly connected to physical memories spurred by a material presence.
Triggering active memory during the consumption of music through physical objects need not be this complex. Older media such as vinyl and cassette inherently have the required physical properties, without the concrete or much else. Perhaps for this reason they show new signs of life despite the rise of digital. No matter how much our reality is augmented by our digital lives, we still inhabit the bodies we bring with us, and, as far as the memories those bodies carry go, physicality rules.
Featured Image: Wax Cylinders in the Library of Congress, Image by Flickr User Photo Phiend