
Toward a Practical Language for Live Electronic Performance

Amongst friends I’ve been known to say, “electronic music is the new jazz.” They are friends, so they smile, scoff at the notion and then indulge me in the Socratic exercise I am begging for. They usually win. The onus, after all, is on me to prove electronic music worthy of such an accolade. I definitely hold my own, often getting them to acknowledge that there is potential, but it usually takes a die-hard electronic fan to accept my claim. Admittedly, the weakest link in my argument has been live performance. I can talk about redefinitions of structure, freedom of forms and timbral infinity for days, but measuring a laptop performance up to a Miles Davis set (even one of the ones where his back remained to the crowd) is a seemingly impossible hurdle.

Mind you, I come from a jazzist perspective, which means that I consider jazz the pinnacle of western music. My classicist interlocutors will naturally cite the numerous accomplishments of classical composers as being unmatched within jazz. That will bring us to long debates about the merits of Charles Mingus and Duke Ellington as composers, which leads, for a good many, to a concession on the part of Duke at least, but an inevitable assertion of the general inferiority of U.S. composers compared to the European canon. And then I will say, “Why are we limiting things to composition when jazz goes so much further than the page?” To which I will get the reply: “Orchestral performers were of the highest caliber.” Then I will rebut, “Well, why was Europe so impressed by Sidney Bechet?” But I digress.

Why talk about classical music in a piece on electronic music, you, my current interlocutor, may ask? Well, in placing electronic music in a historical context, its current stage of development keeps pace with the mental cleverness found in classical music but applies it to different theoretical principles. The electronic musician’s DAW (Digital Audio Workstation) file amounts to the classical composer’s score; the electronic musician’s DSP (Digital Signal Processor) parallels the classical composer’s orchestra. I could call electronic music “the new classical” and I’d have a few supporters. But. . .taking it to the level of jazz? Electronic music would have to include not only the mental cleverness, but the physical cleverness as well.

Electronic artist using Ableton 5 Live, Image by Flickr user Nofi


Let’s back up for a bit. A couple years back, I did a piece for Create Digital Music on Live Electronic performance. I talked to a diverse group of artists about their processes for live performance, and I wrote it up with some video examples. It ended up being one of the most discussed pieces on CDM that year, with commentary ranging from fascination at the presentation of techniques to dismissal of the videos as drug-addled email inbox management.

This was to be expected, because of the lack of a language for evaluating electronic music. It is impossible to defend an artist who has been called a hack without the language through which to express their proficiency. Using Miles Davis as an example–specifically a show where his back is to the audience–there are fans who could defend his actions by saying the music he produced was nonetheless some of the best live material of his career, citing the solos and band interactions as examples. To the lay person, however, it may just seem rude and unprofessional for Davis to have his back to the audience; as such, no matter what, it cannot qualify as a good performance. Any discussion of tone and lyrical fluidity often means little to the lay person.

The extent of this disconnect can be even greater with electronic performances. With his back turned to the audience, they can no longer see Miles’ fingers at work, or how he was cycling breath. Even when facing the crowd, though, an electronic musician’s regimen is largely composed of pad triggers, knob turns, and other such gestures, which simply do not have the same expected sonic correspondence that, for example, blowing and fingering have to the sound of a trumpet. Also, it is well known that the sound the trumpet produces cannot be made without human action. With electronic music, however, particularly with laptop performances, audiences know that the instrument (laptop) is capable of playing music without human aid other than telling it to play. The “checking their email” sentiment is a challenge to the notion that what one is seeing in a live electronic performance is indeed an “actual performance.”

In the time since writing the CDM piece, I’ve seen well over a hundred live sets, listened to days worth of live recordings, spoken in-depth with countless artists about their choices on stage, and gauged fan reactions many times over: from mind-blowing performances in barns to glorified electronic karaoke in sold-out venues, tempo locked beat matching to eight channel cassette tape loops, ten thousand dollar hardware to circuit bent baby toys. After all of that, I still don’t know that I can win the jazz vs. electronic music debate, but I will at least try.

*****

A while back, I was paging through the December 2011 edition of The Wire when I came upon a review of a Flying Lotus performance, the conclusion of which stood out:

On record, the music has the unruly liquidity of dream logic wandering from astral pathways down alphabet street, returning via back alleys on its own whims. Maybe the listening mind, presented with pretty straight analogues of those tracks, rebels, expecting something more mercurial, more improvised. The atmosphere in the venue reflected this upper-downer tension and constraint: the crowd noise was positive, but crowd movement was minimal – a strange sight in the midst of FlyLo’s headier jams. When the hall emptied there was a grumbling undercurrent as the tide of humanity was spilling slowly down the Roundhouse steps, whispers of it must have reached the upper levels. One casualty high above leaned over to berate them: “You don’t know, even understand, what you just FELT.” Sadly though, he didn’t stick around to enlighten anyone.

It should be noted that there are positive reviews of the show, and while not necessarily the best gauge, the videos from the event may seem to tell a different story.

What stood out for me from the review, however, was that in trying to write about what the writer felt was a less-than-stellar performance, there was only one critique that could be attributed directly to the music: that Flying Lotus performed “straight analogues” of his tracks. Beyond that, the writer was left describing the feelings of the audience.

Feelings are tricky things. We all have them and they are the fundamental point of connection we seek when experiencing music. The message conveyed through the medium of music is meant to be an emotional one. But measuring those emotions is a task which cannot escape subjectivity. In a case like this, when one writer is attempting to speak for the feelings of the whole audience, it becomes trickier still. Sure, the writer may consider their analysis to have been objective, but it was still based on their perception of the audience, not the audience’s perception. Moreover, this gauging of the audience dynamic does not tell us how the actual music performance was, regardless of the varied perspectives from within the audience. I contend that this gap occurs because the language for discussing electronic performance has not yet been established.

Around the time I read The Wire review I was also reading Adam Harper’s Infinite Music, which offers variability as a primary factor of analysis in music. Instead of building on traditional music theory, Harper takes cues from those on the fringes of western music. He builds a concept of ‘music space’ by expanding John Cage’s “sound space,” the limits of which are ear-determined. Furthermore, Harper’s non-musical variables, and how they play into creating individually unique musical events, strengthen Christopher Small’s notion of musicking as a verb. In this way, Harper creates a fluid language for discussing music which might prove practical for these purposes.

It is helpful to use one of the central concepts of Harper’s music space, musical objects, as a means of distinguishing electronic performance.

Systems of variables constitute musical objects – Adam Harper

Going back to Miles Davis, his instrument is a monophonic musical object with a limited pitch and dynamic range in the upper register of the brass timbre. His musical talent is evaluated based on how he is able to work within those limitations to create variable experiences. His band represents another musical object, comprised of the individual players as musical objects as well. The venue in which they are playing is a musical object, as is the audience and Davis’ decision to perform with his back to it. It is the coming together of all of these musical objects that creates the musical event (an alternate event includes the musical object which recorded the performance, and the complete setting of the listener as an individual musical object upon playing the live recording). In a musical event comprised of these musical objects–Davis performing live in front of an audience with his back turned so he can face the band–it is possible to imagine a similar reaction to the above commentary about Flying Lotus, including a guy berating the audience for not making the connection.

Miles Davis @ Montreux, 8.7.1984 Image by Flickr user Christophe Losberger


In this Davis example however, we could listen to the audio to determine whether or not it was a “good” performance by analyzing the musical objects which can be observed in the recording (note: this would be technical analysis of the performance, not the event or its reception). Does Davis’s tone falter? How strong are the solos? Is he staying in the pocket with the rest of the band? Evaluation of these variables would be a testament to his proficiency which could be compared to other performances to determine if it measures up.

Flying Lotus’s set, however, is a bit different. Yes, we could listen back to the audio (or watch the video) and determine if indeed it measures up to other sets he has performed, but unlike with Davis, we cannot translate what we hear directly to his agency. When we hear the trumpet on the Davis recording, we know that the sound is caused by him physically blowing into his instrument. When we hear a bass in a Flying Lotus set, there isn’t necessarily a physical act associated with the creation of the sound. With all of the visual cues removed in the Davis example, we can still speak about the performance aspect of the music; the same is not necessarily so of an electronic set, even with visual cues. In many electronic sets, it is only when something goes wrong that actual agency in the music being performed can be attributed.

Flying Lotus,@ SonarDome, Sonar 2012, Image by Flickr user Boolker


Where the advent of the laptop and advances in DSP have expanded music’s creative possibilities, they have only shrouded what the performers using them are actually doing in more mystery. It’s an esoteric language, or perhaps languages, as ultimately each artist’s live rig configuration amounts to a different set of musical objects, across which there may be no compatibilities.

However, in certain musical circles there are common musical objects. Perhaps the most common musical object for performance in electronic music right now is Ableton Live, which results in common component musical objects across performances by different artists. Further, an Ableton Live set can sound just like a Roland SP-404 set, which can sound just like a DJ set with a Kaoss pad, all of which can sound identical to a set not performed live but produced in the studio (or bedroom, as the case may be) for a podcast. The reason for this is that much of the music is already fixed. What changes is the sequencing of these fixed pieces of music over time, their transitions and the variety of effects employed. The goal for these types of sets is a continuous flow of pre-arranged music, which parallels that of a DJ set.

In the past few years, the line between a live electronic set and a DJ set has been blurred extensively. Fans have become fairly critical of artists, to the point that it has become standard practice for promoters to list whether performances will be live or DJ sets. Even on the DJ end of the spectrum there are a lot of questions, as artists have been called out for their DJ set being an iPod playlist. To qualify as a live set, however, an artist must be doing more than just playing songs. How much more is debatable, but should it be?

Flying Lotus - Sónar 2012 - Jueves 14/05/2012, Image by Flickr user scannerfm


Nobody in their right mind would call Miles Davis a hack. Even if they didn’t like specific performances, few would question his proficiency with the instrument. The reason for this is that his talent rises above the standard performance, beneath which someone could be qualified as a hack. If a trumpet player spent a whole night performing only shrill notes of a C major chord around middle C, without properly qualifying that their performance would be so constrained as a stylistic choice, one might consider calling that artist out as a hack (I apologize in advance to the serious musician who fits this description).

The rationale behind this assessment is based on knowing the potential variability of the instrument and realizing that the performer is not exploring any of that variability. Perhaps there could be other layers of variability (e.g. an effects chain) added to the trumpet to make it interesting musically, but it can be objectively said that such a player doesn’t measure up to the standard quality of a trumpet player. If we say that the trumpet has an extensive dynamic range, a tonality which can go from smooth to harsh and a pitch range of just over three octaves, we can see how the player in our example is exhibiting quite a low proficiency.
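To make this notion concrete, one could sketch the variability language in code: a musical object as a set of named variable ranges, and proficiency (in this narrow sense) as how much of each range a performance actually explores. The specific ranges and numbers below are illustrative assumptions, not measurements of any real trumpet or performer.

```python
def explored_fraction(instrument, performance):
    """Average fraction of each variable's range that a performance covers."""
    fractions = []
    for name, (lo, hi) in instrument.items():
        p_lo, p_hi = performance.get(name, (lo, lo))  # untouched variable: nothing explored
        overlap = max(0.0, min(p_hi, hi) - max(p_lo, lo))
        fractions.append(overlap / (hi - lo))
    return sum(fractions) / len(fractions)

# A trumpet as a system of variables: ~3 octaves of pitch (semitones),
# a wide dynamic range (dB), tonality from smooth (0.0) to harsh (1.0).
trumpet = {"pitch": (0, 38), "dynamics": (40, 110), "tonality": (0.0, 1.0)}

# The "Shrill C" performer: a few notes around middle C, loud and harsh.
shrill_c = {"pitch": (12, 19), "dynamics": (95, 110), "tonality": (0.8, 1.0)}

print(explored_fraction(trumpet, shrill_c))  # roughly 0.2: quite low, as argued above
```

Crude as it is, a score like this captures the intuition: the hack judgment is a claim about unexplored variability, not about taste.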

This goes across all styles of trumpet playing. Were a style to impose limitations on a player, it could be said that the style did not allow for the full expression of proficiency on the instrument. A player within that style could be considered proficient in that context, but would require a broader performance to be analyzed for general proficiency. So the player in our example could be a master of “Shrill C” trumpet, but in order to compare with a Miles Davis they would have to perform out of style. Conversely, Miles Davis may be one of the world’s greatest trumpet players, but possibly the worst “Shrill C” trumpet player ever.

From this we can see that the language of variability provides a unique way to objectively speak on the performance of musical objects, while fully taking into account the way styles can play into performance. Using this language we open the world of electronic performance up for analysis and comparison.

This is part one of a three part series. In my next installment, I will use some of the language here to analyze the instruments and techniques used in electronic performance today. Once we have a fluid language for describing what is being used, I believe we will be better equipped to speak about what happens on stage.

Featured Image by Flickr User Scanner FM, Flying Lotus – Sónar 2012 – Jueves 14/05/2012

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions Podcast series with his monthly show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.

Primus Luta will be playing “electronics” in a live jazz setting on Wed., May 1st, with Daniel Carter (Sun Ra, Matthew Shipp and others) at the Brecht Forum in NY. Facebook Event is here. And there’s a flyer here.

REWIND! . . . If you liked this post, you may also dig:

Experiments in Agent-based Sonic Composition–Andreas Pape

Evoking the Object: Physicality in the Digital Age of Music–Primus Luta

Sound as Art as Anti-environment–Steven Hammer

Sounding Out! Podcast #10: Interview with Theremin Master Eric Ross


In this podcast Sounding Out! interviews Ithaca, New York theremin master Eric Ross. Eric talks here about his background in avant-garde classical music but also waxes philosophical about performance, embodiment, emotion, technology, and play. Please listen in as Eric shares his experience as a pioneer in wrangling the interfaces of electronic music and as an explorer of the theremin’s wonderful contradictions.

Check out Eric on the internet here.

– AT

CLICK HERE TO DOWNLOAD: Interview with Theremin Guru Eric Ross.

SUBSCRIBE TO THE SERIES VIA ITUNES

Eric Ross (born in Carbondale, Pennsylvania, USA) received his B.A. and M.A. from the State University of New York at Oneonta. He premiered his Concerto for Orchestra at Lincoln Center in New York, and released his first solo album, Songs for Synthesized Soprano, in 1982. He has written symphonies, chamber pieces and many works for solo instruments. He’s performed concerts of his original music at the Newport, Berlin, Montreux, and North Sea Jazz Festivals, the Copenhagen New Music Festival, the Kennedy Center, and the Gilmore International Keyboard Festival, among others worldwide.

Eric performs on piano, guitars, synthesizers, and the theremin. For over twenty years, he has led his own ensemble that has featured jazz greats John Abercrombie, Larry Coryell, Andrew Cyrille, Oliver Lake, Leroy Jenkins, Byard Lancaster, new music virtuosos Robert Dick, Lydia Kavina, Youseff Yancy and many others. He has also played with Blues Legends Champion Jack Dupree, Lonnie Brooks, Sonny Terry, and Brownie McGhee and appeared with BB King on Danish RTV.

With his wife, Mary Ross, Eric presents multi-media concerts of video, film, computer art, dance and music. He began playing the theremin in 1975, and has performed on radio, film and television. He has written an Overture for 14 Theremins and performed on the 1997 World Premiere of Percy Grainger’s Free Music No.1 in New York City. In 2006, he was guest artist on the No.1 Best Selling CD in Japan, Aqi Fzono’s Cosmology.

Further Experiments in Agent-based Musical Composition

Photo by whistler1984 @Flickr.

Editor’s Note: WARNING: THE FOLLOWING POST IS INTERACTIVE!!! This week’s post is especially designed by one of our regulars, Andreas Duus Pape, to spark conversation and provoke debate on its comment page. I have directly solicited feedback and commentary from several top sound studies scholars, thinkers, artists, and musicians, who will be posting at various times throughout the day and the week–responding to Andreas, responding to each other, and responding to your comments. Look for commentary by Bill Bahng Boyer (NYU), Maile Colbert (Binaural/Nodar, Faculdade de Belas Artes da Universidade do Porto), Adriana Knouf (Cornell University), Primus Luta (AvantUrb, Concrète Sound System), Alejandro L. Madrid (University of Illinois at Chicago), Tara Rodgers (University of Maryland), Jonathan Skinner (ecopoetics), Jonathan Sterne (McGill University), Aaron Trammell (Rutgers University, Sounding Out!) and yours truly (Binghamton University, Sounding Out!). Full bios of our special respondents follow the post. We wholeheartedly wish to entice you this Monday to play. . .and listen. . .and then share your thoughts via the comment page. . .and play again. . .listen again. . .read the comments. . .and share more thoughts. . .yeah, just go ahead and loop that. –JSA, Editor-in-Chief

I’m a musician and an economist. Sometimes you will find me playing acoustic folk rock and blues on guitar, harmonica and voice. And at other times I will be at work, where I apply my expertise in game theory to the computer modeling of social phenomena. I create simulations of people interacting – such as how people decide which way to vote on an issue such as a tax levy, or how people learn to sort objects given to them in an experiment. In these simulations, the user can set up characteristics of the environment, such as the number of people and their individual goals. After things are set up, users watch these interactions unfold. The simulation is a little story, and one need only tweak the inputs to see how the story changes.

As a musician, I was curious if a program that generates social stories could be refashioned to generate musical pieces. I wanted to build a music-generation engine that the listener could tweak in order to get a different piece each time. But not just any tune – a piece with some flow, some story. I like that tension between randomness and structure. On one hand, I want every song to vary in unpredictable ways; on the other hand, I want to create music and not structureless noise.

I created a basic story of predators and prey, whimsically naming the prey “Peters,” represented by rabbits, and the predators “Wolves.” My simulation depicts a plain in the savannah with a green oasis. The prey seek the oasis and the predators seek the prey. Each character has its own goals and the closer they are to achieving them, the happier they are. Both predators and prey want to have stomachs full of food, so naturally they want to be close to their target (be it prey or oasis). As they travel through the savannah, they learn what choices (directions of movement) make them happier, and use this experience to guide them.
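The learning described above, agents remembering which directions of movement have made them happier, might be sketched roughly like this. The scoring rule, blend rate, and toy "savannah" reward are my assumptions for illustration, not the actual NetLogo code.

```python
import random

DIRECTIONS = ["north", "south", "east", "west"]

class Agent:
    def __init__(self):
        # a running score of how happy each direction has made this agent
        self.scores = {d: 0.0 for d in DIRECTIONS}

    def choose(self, explore=0.1):
        if random.random() < explore:          # occasionally try something new
            return random.choice(DIRECTIONS)
        return max(self.scores, key=self.scores.get)

    def learn(self, direction, happiness_change):
        # blend the new experience into the remembered score
        self.scores[direction] += 0.5 * (happiness_change - self.scores[direction])

agent = Agent()
for _ in range(100):
    d = agent.choose()
    # toy environment: moving east happens to lead toward the oasis
    agent.learn(d, 1.0 if d == "east" else -0.2)

print(agent.choose(explore=0.0))  # prints "east": the agent learned where food is
```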

Photo by bantam10 @Flickr

So how does this story become music? To this question there are two answers: a technical one and an intuitive one. The intuitive answer is that in real life the story of predators and prey plays out geographically on the savannah, but musically this is a story that plays out over a sonic landscape. To elaborate, I abstracted the movement of the prey and predator on the geography of the plain into the musical geometry of a sonic landscape. The farther north an agent travels, the higher the pitch. And, the farther east an agent travels, the longer the duration. In other words, as an agent travels to the northeast, she makes longer-lasting tones that are higher pitched. I also mapped happiness to volume, so that happy agents make louder tones. Finally, so that each agent would have a distinct voice as they traveled through this space, I chose different instruments for each agent.
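As a rough sketch of that mapping: position becomes pitch and duration, happiness becomes volume. The grid size, pitch range, and MIDI-style framing here are assumptions for illustration, not the model's actual numbers.

```python
def sonify(x, y, happiness, width=32, height=32):
    """Map a savannah position and mood to (midi_pitch, duration_s, volume)."""
    pitch = 36 + round((y / height) * 48)   # farther north -> higher pitch
    duration = 0.1 + (x / width) * 1.9      # farther east -> longer-lasting tone
    volume = round(happiness * 127)         # happier -> louder (MIDI velocity)
    return pitch, duration, volume

# A fairly happy prey agent in the high-pitched north, near the western edge:
print(sonify(x=2, y=30, happiness=0.8))
```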

In the video below I assigned the “church organ” sound to prey, and the “brass section” sound to predators.

Ultimately, there are some things that I like about this piece and others that I do not.

As a harmonica player, I improvise by creating and resolving tension. I think this piece does that well. The predator will pursue the prey into a quiet, low-pitch corner, creating a distant, rumbling sound – only to watch prey escape to the densely polyphonic northwest corner. There is an ebb and flow to this chase that I recognize from blues harmonica solos. In contrast to my experience as a harmonica player, however, I have found that some of the most compelling parts of the dynamics come from the layering of notes. The addition of notes yields a rich sonic texture, much like adding notes to a chord on an organ.

Unfortunately, for largely technical reasons, there is a lack of coherent rhythm and pacing. The programming platform (agent-based modeling software called NetLogo) is not designed to have the interface proceed in real time. Basically, the overall speed of the piece can change as the processing load increases or decreases. I found that as agents learnt more about their surroundings (and more system resources were allocated to this “memory”), they became slower and slower. To fix this, I capped the size of their memory banks so that they would forget their oldest memories. The closest I have come to a rhythmic structure is by ordering the way that the agents play. This technique gives the piece a call-and-response feel. If only the piece had a coherent rhythm, then I could imagine playing harmonica along with it.
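The memory cap described above amounts to a fixed-size buffer that silently discards its oldest entries, which Python expresses directly with a bounded deque. The limit of 50 is an arbitrary stand-in for whatever cap the model actually uses.

```python
from collections import deque

MEMORY_LIMIT = 50
memory = deque(maxlen=MEMORY_LIMIT)   # oldest entries fall off automatically

for step in range(200):               # 200 experiences arrive, only the last 50 survive
    memory.append({"step": step, "happiness_change": 0.1})

print(len(memory), memory[0]["step"])  # 50 150: steps 0-149 have been forgotten
```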

One last comment on pitch: while an earlier version of this piece mapped each step in space to a semitone, things sounded too mechanical. Even though this was the easiest and most intuitive decision from a technical standpoint, it was aesthetically lacking, so I have now integrated traditional musical scales. The minor scale, in my opinion, is the most interesting as it makes the predator/prey dynamic sound appropriately foreboding.
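Snapping grid steps to a scale instead of raw semitones can be sketched like this. The natural-minor interval pattern is standard; the choice of root note is my assumption.

```python
MINOR = [0, 2, 3, 5, 7, 8, 10]   # natural minor scale, as semitone offsets

def step_to_pitch(step, root=57):
    """Map a step in space to a MIDI pitch on the minor scale (root 57 = A3, assumed)."""
    octave, degree = divmod(step, len(MINOR))
    return root + 12 * octave + MINOR[degree]

print([step_to_pitch(s) for s in range(8)])  # [57, 59, 60, 62, 64, 65, 67, 69]
```

Compared with mapping each step to a semitone, adjacent positions now land on scale tones, so chance collisions of agents sound like harmony rather than chromatic clusters.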

Photo by deivorytower @Flickr.

You can play this piece yourself. Simply go to this link with Java enabled in your browser (recommended: Google Chrome). Pressing “Setup” then “Go” will create your own run of the piece. As it is running, you can adjust the slider above the graphic window to change the speed. Press “Go” again to stop the model, adjust any parameters you wish and press “Setup” and “Go” again to see how the piece changes. Here are some parameters to try: instA and instB to change the instruments associated with prey and predators; PlayEveryXSteps to change the pace of the piece (higher results in a slower paced piece); Num-PackAs and Num-PackBs changes the number of prey and predators; the vertical PeterVol and WolfVol adjust the overall volume of prey and predators.

With regard to my version of “Peter and the Wolf,” I have a number of things that I’m curious about.

First, how does this relate to what you think of as music? Do you like listening to it? Which elements do you like and which do you dislike? For example, what do you think about the tension and rhythm–do you agree that the first works and that the second could be improved? Would you listen to this for enjoyment’s sake, and what would it take for this to be more than a novelty? What do you think about the narrative that drives the piece? I chose the predator and prey narrative, admittedly, on a whim. Do you think there might be some other narrative or agent-specific goals that might better drive this piece? Is there any metaphor that might better describe this piece? As a listener, do you enjoy the experience of being able to customize and configure the piece? What would you like to have control over that is missing here? Would you like more interaction with the piece or less interaction?

Finally, and perhaps most importantly, what do you think of the premise? Can simple electronic agents (albeit ones which interact socially) aspire to create music? Is there something promising in this act of simulation? Is music-making necessarily a human activity and is this kind of work destined to be artificial and uncanny?

Thanks for listening. I look forward to your thoughts.

“The Birth of Electronic Man.” Photo by xdxd_vs_xdxd @Flickr.

– – –

Andreas Duus Pape is an economist and a musician. As an economist, he studies microeconomic theory and game theory–that is, the analysis of strategy and the construction of models to understand social phenomena–and the theory of individual choice, including how to forecast the behavior of agents who construct models of social phenomena. As a musician, he plays folk in the tradition of Dylan and Guthrie, blues in the tradition of Williamson and McTell, and country in the tradition of Nelson and Cash. Pape is an Assistant Professor in the Department of Economics at Binghamton University and is a faculty member of the Collective Dynamics of Complex Systems (CoCo) Research Group.

– – –

Guest Respondents on the Comment Page (in alphabetical order)

Bill Bahng Boyer is a doctoral candidate in music at New York University who is completing a dissertation on public listening in the New York City subway system.

Maile Colbert  is an intermedia artist with a concentration in sound and video, living and working between New York and Portugal. She is an associated artist at Binaural/Nodar.

N. Adriana Knouf is a Ph.D. candidate in information science at Cornell University.

Primus Luta is a writer and an artist exploring the intersection of technology and art; he maintains his own AvantUrb site and is a founding member of the live electronic music collective Concrète Sound System.

Alejandro L. Madrid is Associate Professor of Latin American and Latino Studies at the University of Illinois at Chicago and a cultural theorist and music scholar whose research focuses on the intersection of modernity, tradition, globalization, and ethnic identity in popular and art music, dance, and expressive culture from Mexico, the U.S.-Mexico border, and the circum-Caribbean.

Tara Rodgers is an Assistant Professor of Women’s Studies and a faculty fellow in the Digital Cultures & Creativity program at the University of Maryland. As Analog Tara, she has released electronic music on compilations such as the Le Tigre 12″ and Source Records/Germany, and exhibited sound art at venues including Eyebeam (NYC) and the Museum of Contemporary Canadian Art (Toronto).

Jonathan Skinner founded and edits the journal ecopoetics, which features creative-critical intersections between writing and ecology. Skinner also writes ecocriticism on contemporary poetry and poetics.

Jonathan Sterne teaches in the Department of Art History and Communication Studies and the History and Philosophy of Science Program at McGill University. His latest book, MP3: The Meaning of a Format, comes out this fall from Duke University Press.

Jennifer Stoever-Ackerman is co-founder, Editor-in-Chief and Guest Posts Editor for Sounding Out! She is also Assistant Professor of English at Binghamton University and a former Fellow at the Society for the Humanities at Cornell University (2011-2012).

Aaron Trammell is Multimedia Editor of Sounding Out! and a Ph.D. Candidate in Media and Communications at Rutgers University.
