Archive | April 2013

Toward a Practical Language for Live Electronic Performance

Amongst friends I’ve been known to say, “electronic music is the new jazz.” They are friends, so they smile, scoff at the notion, and then indulge me in the Socratic exercise I am begging for. They usually win. The onus, after all, is on me to prove electronic music worthy of such an accolade. I definitely hold my own, often getting them to acknowledge that there is potential, but it usually takes a die-hard electronic fan to accept my claim. Admittedly, the weakest link in my argument has been live performance. I can talk about redefinitions of structure, freedom of forms, and timbral infinity for days, but measuring a laptop performance up against a Miles Davis set (even one of the ones where his back remained to the crowd) is a seemingly impossible hurdle.

Mind you, I come from a jazzist perspective, which means that I consider jazz the pinnacle of western music. My classicist interlocutors will naturally cite the numerous accomplishments of classical composers as being unmatched within jazz. That will bring us to long debates about the merits of Charles Mingus and Duke Ellington as composers, which leads, for a good many, to a concession on the part of Duke at least, but an inevitable assertion of the general inferiority of U.S. composers compared to the European canon. And then I will say, “why are we limiting things to composition when jazz goes so much further than the page?” To which I will get the reply: “orchestral performers were of the highest caliber.” Then I will rebut, “well why was Europe so impressed by Sidney Bechet?” But I digress.

Why talk about classical music in a piece on electronic music, you, my current interlocutor, may ask? Well, in placing electronic music in a historical context, its current stage of development keeps pace with the mental cleverness found in classical music but applies it to different theoretical principles. The electronic musician’s DAW (Digital Audio Workstation) file amounts to the classical composer’s score; the electronic musician’s DSP (Digital Signal Processor) parallels the classical composer’s orchestra. I could call electronic music “the new classical” and I’d have a few supporters. But . . . taking it to the level of jazz? Electronic music would have to include not only the mental cleverness, but the physical cleverness as well.

Electronic artist using Ableton 5 Live, Image by Flickr user Nofi


Let’s back up for a bit. A couple of years back, I did a piece for Create Digital Music on live electronic performance. I talked to a diverse group of artists about their processes for live performance, and I wrote it up with some video examples. It ended up being one of the most discussed pieces on CDM that year, with commentary ranging from fascination at the presentation of techniques to dismissal of the videos as drug-addled email inbox management.

This was to be expected, because of the lack of a language for evaluating electronic music. It is impossible to defend an artist who has been called a hack without the language through which to express their proficiency. Using Miles Davis as an example–specifically a show where his back is to the audience–there are fans who could defend his actions by saying the music he produced was nonetheless some of the best live material of his career, citing the solos and band interactions as examples. To the lay person, however, it may just seem rude and unprofessional for Davis to have his back to the audience, and as such the performance cannot qualify as good no matter what. Any discussion of tone and lyrical fluidity often means little to the lay person.

The extent of this disconnect can be even greater with electronic performances. With his back turned to the audience, Miles’ fingers at work and his cycling of breath can no longer be seen. Even when facing the crowd, an electronic musician’s regimen is largely composed of pad triggers, knob turns, and other such gestures, which simply do not have the same expected sonic correspondence that, for example, blowing and fingering have to the sound of a trumpet. Also, it is well known that the sound the trumpet produces cannot be made without human action. With electronic music however, particularly with laptop performances, audiences know that the instrument (laptop) is capable of playing music without human aid other than telling it to play. The “checking their email” sentiment is a challenge to the notion that what one is seeing in a live electronic performance is indeed an “actual performance.”

In the time since writing the CDM piece, I’ve seen well over a hundred live sets, listened to days’ worth of live recordings, spoken in-depth with countless artists about their choices on stage, and gauged fan reactions many times over: from mind-blowing performances in barns to glorified electronic karaoke in sold-out venues, tempo-locked beat matching to eight-channel cassette tape loops, ten-thousand-dollar hardware to circuit-bent baby toys. After all of that, I still don’t know that I can win the jazz vs. electronic music debate, but I will at least try.

*****

A while back, I was paging through the December 2011 edition of The Wire when I came upon a review of a Flying Lotus performance, the conclusion of which stood out:

On record, the music has the unruly liquidity of dream logic wandering from astral pathways down alphabet street, returning via back alleys on its own whims. Maybe the listening mind, presented with pretty straight analogues of those tracks, rebels, expecting something more mercurial, more improvised. The atmosphere in the venue reflected this upper-downer tension and constraint: the crowd noise was positive, but crowd movement was minimal – a strange sight in the midst of FlyLo’s headier jams. When the hall emptied there was a grumbling undercurrent as the tide of humanity was spilling slowly down the Roundhouse steps, whispers of it must have reached the upper levels. One casualty high above leaned over to berate them: “You don’t know, even understand, what you just FELT.” Sadly though, he didn’t stick around to enlighten anyone.

It should be noted that there are positive reviews of the show, and, while not necessarily the best gauge, the videos from the event seem to tell a different story.

What stood out for me from the review, however, was that in trying to write about what the writer felt was a less than stellar performance, there was only one critique that could be attributed directly to the music: that Flying Lotus performed “straight analogues” of his tracks. Beyond that, the writer was left describing the feelings of the audience.

Feelings are tricky things. We all have them, and they are the fundamental point of connection we seek when experiencing music. The message conveyed through the medium of music is meant to be an emotional one. But measuring those emotions is a task which cannot escape subjectivity. In a case like this, when one writer attempts to speak for the feelings of the whole audience, it becomes trickier still. Sure, the writer may consider their analysis to have been objective, but it was still based on their perception of the audience, not the audience’s perception. Moreover, this gauging of the audience dynamic does not tell us how the actual musical performance was, regardless of the varied perspectives within the audience. I contend that this gap occurs because the language for discussing electronic performance has not yet been established.

Around the time I read The Wire review I was also reading Adam Harper’s Infinite Music, which offers variability as a primary factor of analysis in music. Instead of building on traditional music theory, Harper takes cues from those on the fringes of western music. He builds a concept of ‘music space’ by expanding John Cage’s “sound space,” the limits of which are ear-determined. Furthermore, Harper’s non-musical variables, and how they play into creating individually unique musical events, strengthen Christopher Small’s notion of musicking as a verb. In this way, Harper creates a fluid language for discussing music which might prove practical for these purposes.

It is helpful to use one of the central concepts of Harper’s music space, musical objects, as a means of distinguishing electronic performance.

Systems of variables constitute musical objects – Adam Harper

Going back to Miles Davis, his instrument is a monophonic musical object with a limited pitch and dynamic range in the upper register of the brass timbre. His musical talent is evaluated based on how he is able to work within those limitations to create variable experiences. His band represents another musical object, comprised of the individual players as musical objects as well. The venue in which they are playing is a musical object, as is the audience and Davis’ decision to perform with his back to it. It is the coming together of all of these musical objects that creates the musical event (an alternate event includes the musical object which recorded the performance, and the complete setting of the listener as an individual musical object upon playing the live recording). In a musical event comprised of these musical objects–Davis performing live in front of an audience with his back turned so he can face the band–it is possible to imagine a similar reaction to the above commentary about Flying Lotus, including a guy berating the audience for not making the connection.

Miles Davis @ Montreux, 8.7.1984 Image by Flickr user Christophe Losberger


In this Davis example however, we could listen to the audio to determine whether or not it was a “good” performance by analyzing the musical objects which can be observed in the recording (note: this would be technical analysis of the performance, not the event or its reception). Does Davis’s tone falter? How strong are the solos? Is he staying in the pocket with the rest of the band? Evaluation of these variables would be a testament to his proficiency which could be compared to other performances to determine if it measures up.

Flying Lotus’s set, however, is a bit different. Yes, we could listen back to the audio (or watch the video) and determine if indeed it measures up to other sets he has performed, but unlike with Davis, we cannot translate what we hear directly to his agency. When we hear the trumpet on the Davis recording, we know that the sound is caused by him physically blowing into his instrument. When we hear a bass in a Flying Lotus set, there isn’t necessarily a physical act associated with the creation of the sound. With all of the visual cues removed in the Davis example, we can still speak about the performance aspect of the music; the same is not necessarily so of an electronic set, even with visual cues. In many electronic sets, it is only when something goes wrong that actual agency in the music being performed can be attributed.

Flying Lotus,@ SonarDome, Sonar 2012, Image by Flickr user Boolker


While the advent of the laptop and advances in DSP have expanded music’s creative possibilities, they have also shrouded what the performers using them are actually doing in more mystery. It’s an esoteric language, or perhaps languages, as ultimately each artist’s live rig configuration amounts to different musical objects, across which there may not be compatibilities.

However, in certain musical circles there are common musical objects. Perhaps the most common musical object for performance in electronic music right now is Ableton Live, which results in common component musical objects across performances by different artists. Further, an Ableton Live set can sound just like a Roland SP-404 set, which can sound just like a DJ set with a Kaoss pad, all of which can sound identical to a set not performed live but produced in the studio (or bedroom, as the case may be) for a podcast. The reason for this is that much of the music is already fixed. What changes is the sequencing of these fixed pieces of music over time, their transitions, and the variety of effects employed. The goal for these types of sets is a continuous flow of pre-arranged music, which parallels that of a DJ set.

In the past few years, the line between a live electronic set and a DJ set has been blurred extensively. Fans have become fairly critical of artists, to the point that it has become standard practice for promoters to list whether performances will be live or a DJ set. Even on the DJ end of the spectrum there are a lot of questions, as artists have been called out for their DJ set being an iPod playlist. To qualify as a live set, however, an artist must be doing more than just playing songs. How much more is debatable, but should it be?

Flying Lotus - Sónar 2012 - Jueves 14/05/2012, Image by Flickr user scannerfm


Nobody in their right mind would call Miles Davis a hack. Even if they didn’t like specific performances, few would question his proficiency with the instrument. The reason for this is that his talent rises above the standard performance, beneath which someone could be qualified as a hack. If a trumpet player spent a whole night performing only shrill notes of a C major chord around middle C, without properly qualifying that their performance would be so constrained as a stylistic choice, one might consider calling that artist out as a hack (I apologize in advance to the serious musician who fits this description).

The rationale behind this assessment is based on knowing the potential variability of the instrument and realizing that the performer is not exploring any of that variability. Perhaps there could be other layers of variability (e.g. an effects chain) added to the trumpet to make it interesting musically, but it can be objectively said that such a performance doesn’t measure up to the standard quality expected of a trumpet player. If we say that the trumpet has an extensive dynamic range, a tonality which can go from smooth to harsh, and a pitch range of just over three octaves, we can see how the player in our example is exhibiting quite a low proficiency.

This goes across all styles of trumpet playing. Were a style to impose limitations on a player, it could be said that the style did not allow for the full expression of proficiency on the instrument. A player within that style could be considered proficient in that context, but would require a broader performance to be analyzed for general proficiency. So the player in our example could be a master of “Shrill C” trumpet, but in order to compare with a Miles Davis they would have to perform out of style. Conversely, Miles Davis may be one of the world’s greatest trumpet players, but possibly the worst “Shrill C” trumpet player ever.
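Harper’s “systems of variables” lend themselves to a toy formalization: treat each of the trumpet’s variables as a range, and score a performance by how much of each range it explores. The sketch below is my own illustration, not Harper’s; the specific ranges, the “Shrill C” numbers, and the averaging rule are all invented for the example, loosely matching the dynamic range, smooth-to-harsh tonality, and three-plus-octave pitch range described above.

```python
# A toy formalization of Harper-style "musical objects" as systems of
# variables. Each variable is a named (lo, hi) range; a performance
# "explores" some sub-range of each. Proficiency is naively scored as
# the average fraction of each variable's range actually explored.
# All numbers here are illustrative inventions, not measurements.

def explored_fraction(full, explored):
    """Fraction of a (lo, hi) range covered by an explored (lo, hi) sub-range."""
    full_span = full[1] - full[0]
    span = max(0.0, min(explored[1], full[1]) - max(explored[0], full[0]))
    return span / full_span

# The trumpet as a musical object: pitch in semitones above its lowest
# note (just over three octaves), dynamics in dB, and timbre on a
# 0-to-1 smooth-to-harsh axis.
trumpet = {
    "pitch":    (0.0, 38.0),
    "dynamics": (40.0, 100.0),
    "timbre":   (0.0, 1.0),
}

# The hypothetical "Shrill C" player: a few notes around middle C,
# always loud, always harsh.
shrill_c_set = {
    "pitch":    (15.0, 19.0),
    "dynamics": (90.0, 100.0),
    "timbre":   (0.8, 1.0),
}

coverage = {
    name: explored_fraction(trumpet[name], shrill_c_set[name])
    for name in trumpet
}
score = sum(coverage.values()) / len(coverage)
print(coverage)
print(round(score, 3))  # roughly 0.157
```

By this crude measure the “Shrill C” player explores under a sixth of the instrument’s variable space; a fuller model would add more variables (articulation, rhythm, an effects chain) and weight them by style, which is exactly the style-relative judgment discussed above.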

From this we can see that the language of variability provides a unique way to objectively speak on the performance of musical objects, while fully taking into account the way styles can play into performance. Using this language we open the world of electronic performance up for analysis and comparison.

This is part one of a three part series. In my next installment, I will use some of the language here to analyze the instruments and techniques used in electronic performance today. Once we have a fluid language for describing what is being used, I believe we will be better equipped to speak about what happens on stage.

Featured Image by Flickr User Scanner FM, Flying Lotus – Sónar 2012 – Jueves 14/05/2012

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions Podcast series with his monthly show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.

Primus Luta will be playing “electronics” in a live jazz setting on Wed., May 1st, with Daniel Carter (Sun Ra, Matthew Shipp and others) at the Brecht Forum in NY. Facebook Event is here. And there’s a flyer here.

REWIND! . . . If you liked this post, you may also dig:

Experiments in Agent-based Sonic Composition–Andreas Pape

Evoking the Object: Physicality in the Digital Age of Music–Primus Luta

Sound as Art as Anti-environment–Steven Hammer

The Noises of Finance

What does finance sound like? Is it the clanging of the opening and closing bells at the New York Stock Exchange? The shouting of offers to buy or sell? The beeps made by cash registers as a credit card is swiped? The whirring of fans working overtime to cool computers? What is this noise?

Noise, however, is not purely a sonic phenomenon. Since the late 1940s, noise has been intimately linked with theories of communication and information, as Aaron Trammell discusses in Sounding Out! posts such as “What Mixtapes Can Teach Us About Noise.” My research attempts to bring these two aspects of noise—the sonic and informatic—into conversation. I trace the interferences noise makes within a set of disparate disciplines: I listen to the history of the impact of information theory on experimental and electronic music; investigate the interferences of “fearless speech,” artistic robotics, and the public; and examine how noises digital and sonic have impacted the development of finance. Rather than creating my own definition of noise, I follow how other disciplines deal with their encounters with noise as both a material phenomenon—something that interferes with a signal, or a sound that is deemed unwanted—and as something to be theorized, asking questions such as what are the meanings of these noises? or should we be controlling noise at all?

In this post, I discuss three vignettes that outline the different ways in which noise (sonic and informatic) interferes with different aspects of finance: the shouts of open-outcry pits and the information they may or may not convey; new forms of electronic trading and the noises of server farms and trading behavior; and the Flash Crash of May 6th, 2010, which provoked noises from both traders and artists. Each reflects a particular conjunction of the sonic and informatic aspects of noise. When we attend to both components simultaneously, we discover that financial noises are complex entities that are neither inherently revolutionary nor regressive, but rather an elusive combination of both.

Noisy Trading: The Pits

My interest in the noises of finance comes in part from listening to open-outcry trading, following the work of Caitlin Zaloom’s Out of the Pits: Traders And Technology from Chicago to London and the documentary Floored (2008). An open-outcry pit, such as that found on the floor of the Chicago Board of Trade (CBOT), pairs buyers and sellers through a bodily practice of trading involving the extremities of behavior. Shouting, pushing, and shoving occur on the steps of the pit as buyers and sellers work to match their orders through nearly whatever means necessary.


Chicago Board of Trade Corn pit, 1993, Image by Jeremy Kemp

In the wonderfully titled “Is Sound Just Noise?”, one of the few academic articles related to the sounds of the pits, the business school professors Joshua Coval and Tyler Shumway ask whether or not the shouting might convey information that is not necessarily available on the computer screens that were then coming to dominate trading:

we ask whether there exists information that is regularly communicated across an open outcry pit but cannot be easily transmitted over a computer network. Any signals that convey information regarding the emotion of market participants—fear, excitement, uncertainty, eagerness, and so forth—are likely to be difficult to transmit across an electronic network (1890).

Coval and Shumway found that the ambient sound level of the pits did have predictive impact regarding various aspects of the market: in short, the louder the pits got, the higher the volatility in the prices of securities and the lower the likelihood of a trade being completed.

Noisy Trading, Redux: Datacenters

Yet changes in the structure of the market have not only shifted the location of activity to people behind computer screens, away from these types of sounds; they have also shifted the actual location of the exchanges themselves. No longer do most trades take place in the physical location of, for example, the NYSE; rather, they take place in buildings like this one, at 1700 MacArthur Boulevard in Mahwah, NJ.

Screen capture by author


This is the location of the NYSE’s new datacenter, a 400,000 square foot facility. (In the linked video, note the whirring of the fans, a new noise of finance beyond that of the pits.) The servers in these datacenters—run by highly-capitalized financial firms large and small alike—are able to respond much more quickly to market information the closer they are to the computers that run the exchange. And what can be closer than being co-located in the same datacenter as the exchange? This need for speed has led to all sorts of interesting situations, such as new fibre-optic lines being laid to shave off a millisecond or two in travel between New Jersey and Chicago, or the taking into account of special relativity effects in the location of future datacenters. The new High-Frequency Trading (HFT) algorithms run on these servers in these datacenters.
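The physics behind that need for speed is easy to sketch. The figures below are rough, illustrative assumptions, not measurements of any actual route: light in optical fiber travels at roughly two-thirds of its vacuum speed, and the straight-line distance from northern New Jersey to Chicago is on the order of 1,150 km.

```python
# Back-of-envelope propagation delays that make co-location attractive.
# Both the fiber speed and the distances are rough illustrative figures.

SPEED_IN_FIBER_KM_S = 200_000.0  # ~2/3 the speed of light in vacuum

def one_way_delay_ms(distance_km):
    """One-way signal propagation time in milliseconds over fiber."""
    return distance_km / SPEED_IN_FIBER_KM_S * 1000.0

nj_to_chicago_km = 1_150.0     # approximate straight-line distance
within_datacenter_km = 0.1     # ~100 meters of cabling

print(f"NJ -> Chicago: ~{one_way_delay_ms(nj_to_chicago_km):.2f} ms one way")
print(f"Same datacenter: ~{one_way_delay_ms(within_datacenter_km) * 1000:.1f} microseconds")
```

Even on these crude numbers, a co-located machine hears the market thousands of times sooner than one half a country away: a physical gap that no amount of software optimization can close, which is why the fiber routes themselves became the object of investment.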

Noisy Trades, Sonified: May 6th 2010

The voice on this recording, made on May 6th, 2010, belongs to Ben Lichenstein, an employee of a firm called Trader’s Audio. Now, Trader’s Audio provides live coverage of market movements from a person on the floor of an exchange in order for day traders and others to get an idea of the “sentiment” of a market. It’s kind of like a play-by-play of market activity, a running commentary of major market movements that can’t be discerned solely by the watching of numbers on a screen. What, then, could have been going on for Ben Lichenstein to be in such a frenzy, for his voice to be inflected in such a way? What are we to make of this noise?

Well, May 6th, 2010 was the day of what has infamously become known as the Flash Crash. The full details of this day are beyond the scope of this post, so I will outline it schematically, following the findings of the official US report produced by the Commodity Futures Trading Commission (CFTC) and the Securities and Exchange Commission (SEC). (For a different take on this, see the sociologist of finance Donald MacKenzie’s “How to Make Money in Microseconds”.) In short, between the hours of 2 and 3PM Eastern Time the New York Stock Exchange (NYSE) had both its largest single-day loss and its largest single-day gain, a swing of over 600 points. A series of trades made by algorithms that failed to take into account their impact on the market caused the prices of securities to swing to extremes, exacerbated by the activity of High-Frequency Trading (HFT) algorithms. While the market eventually recovered—in part due to the activity of the same algorithms that caused the problem in the first place—the event indicated the precariousness of the stock market, the potential for things to spiral quickly out of control, and the difficulty of forecasting the behavior of an ecosystem of opaque algorithms.

How do the HFT algorithms relate to the Flash Crash that took place on May 6th, 2010? While the report of the CFTC and the SEC regarding the Flash Crash does not lay blame on HFT in particular, it did indicate how these algorithms contributed to the large price swings, the immense number of shares traded, and the drying up of liquidity (that is, the ability to find buyers and sellers in the market). One of the reasons why the market swings were so severe on May 6th, 2010 was that HFT algorithms react immediately to small fluctuations of price, a quality of markets that financial economists call microstructure noise, a fascinating topic that is unfortunately beyond the scope of this particular post. In general, HFT and these datacenters go hand-in-hand, as it is a truism that it will take longer for data to travel between a machine in New Jersey and one in Chicago than it will to travel between two machines in the same datacenter in New Jersey. HFT works to take advantage of this shorter latency in order to exploit market movements on the timescale of milliseconds, accelerating trading far beyond the open-outcry pit.

Noisy Finance: The Sonic and the Informatic


Let’s conclude with a sonic artifact of the Flash Crash from the French collective rybn. Their work has explored the concept of “antidatamining,” that is, the use of the “data mining” techniques of computational capitalism in order to shed light on the intersection of data and society. Consider their piece FLASHCRASH SONIFICATION (one of the few artistic responses to the Flash Crash), where rybn took trading data from nine different exchanges on the afternoon of the Flash Crash and created an austere, digitally-sharp yet undulating soundscape that recalls the work of artists Ryoji Ikeda or Carsten Nicolai without the rhythmic precision. If you can, listen to their online-available, two-channel mix on headphones in order to appreciate the details of the piece.

The build towards the end of “FLASHCRASH SONIFICATION” was meant to “emphasize the moment of the crash, [by] adding an effect of resonance, which propagates slowly, making it more tense, as the krach goes on” (all quotes in this paragraph from the author’s personal interview with rybn). Thus instead of merely transparently translating the data into sound, rybn constructed the sonification in order to bring out this resonance: “resonance is pointed [to] as one of the major risk[s] of HFT by many economists and the feedback phenomenon was in the center of our discussions when we were preparing the piece.” Isolating the Flash Crash was important for rybn as it was perhaps the “moment when people started to understand financ[ial] orientations more clearly,” thereby highlighting the symptomatic nature of the “speculative short-term loop finance seems to be stuck in.”

In FLASHCRASH SONIFICATION, sonic noise becomes a translation of the data from the market—abstract yet eminently material—into a different abstract form that does not immediately signify. FLASHCRASH SONIFICATION suggests rather than indicates; listening to it cannot provide us with rational information regarding the dynamics of the Flash Crash. Instead it produces a dark foreboding of the mechanisms at work, the high-frequency pulses at first recalling heartbeats that soon speed up beyond any ability to distinguish them. In FLASHCRASH SONIFICATION, rybn comments on the inability of computation—and by extension, the market—to be the perfectly rational, ordered space it is ideally understood to be.
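For readers curious about what a parameter-mapping sonification involves mechanically, here is a minimal sketch. It is emphatically not rybn’s method: the price series is synthetic, the mappings (price to pitch, local movement to loudness) are the simplest imaginable, and the output filename is my own invention; only the standard library is used.

```python
# A minimal parameter-mapping sonification sketch: a synthetic
# random-walk "price" series is mapped to pitch, and its local movement
# to loudness, then written out as a mono 16-bit WAV file.
import math
import random
import struct
import wave

RATE = 44_100          # samples per second
SEG = RATE // 20       # 50 ms of audio per data point

random.seed(6)         # deterministic synthetic data
prices = [100.0]
for _ in range(199):   # 200-point synthetic random-walk price series
    prices.append(prices[-1] + random.gauss(0, 1))

lo, hi = min(prices), max(prices)
samples = []
for i, p in enumerate(prices):
    # Map price to pitch (220-880 Hz) and recent movement to loudness.
    freq = 220.0 + (p - lo) / (hi - lo) * 660.0
    vol = min(1.0, abs(prices[i] - prices[i - 1])) if i else 0.5
    for n in range(SEG):
        samples.append(vol * math.sin(2 * math.pi * freq * n / RATE))

with wave.open("flashcrash_sketch.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)   # 16-bit signed samples
    f.setframerate(RATE)
    f.writeframes(b"".join(
        struct.pack("<h", int(s * 32767 * 0.8)) for s in samples))

print(len(samples) / RATE, "seconds of audio written")
```

Played back, the pitch wanders with the synthetic price while sudden jumps get louder. rybn’s piece works on real data from nine exchanges and adds the slowly propagating resonance they describe, but the underlying move, translating market data into sonic parameters, is this same kind of operation.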

In Noise We Cannot Trust

If there is one thing clear about the examples of noises heard and encountered in this post—the shouting in the pits, the fluctuations of prices, the whirring of air conditioning, the sonification of the Flash Crash—it is that noise cannot be counted upon for positive or negative disruption. Noise cannot be counted upon as a political exploit in the market, as it can signify the potential of a trade, or be recuperated into profit through the activity of HFT algorithms. Yet noise can also provide an alternative experience of the Flash Crash beyond that of bureaucratic reports and figures. It is thus through the interferences noise causes within the dynamics of finance that we come into contact with the equivocality of noise as a phenomenon, and thus become attuned to a particular need to not confine noise to preconceived notions of positivity or negativity.

Adriana Knouf (she/her/hers, sie/hir/hirs) is an Assistant Professor of Art + Design in the College of Arts, Media, and Design at Northeastern University. She is a media artist and scholar researching noise, interferences, boundaries, and limits in media technologies and communication.

Her current research project, tentatively entitled The Xenology Notebooks, is a transmedia, transdisciplinary corpus expansively considering the “xeno”. Her first book, How Noise Matters to Finance (University of Minnesota Press, 2016), traced how the concept of “noise” in the sonic and informatic domains of finance mutated throughout the late 20th century into the 21st.

Her current artistic research explores queer and trans futurities on earth and in the cosmos. Projects include Enredos Sónicos/Sonic Plots, a collaborative sonic exchange between the US and Cuba; they transmitted continuously / but our times rarely aligned / and their signals dissipated in the æther (2018-present), a 20 channel sound art installation with speakers made from handmade abaca paper and piezo electric elements, with sounds collected by custom antennas from satellite transmissions; and PIECES FOR PERFORMER(S) AND EXTRATERRESTRIAL ENTITIES (2017-present), event scores laser etched into handmade translucent abaca paper.

REWIND! . . . If you liked this post, you may also dig:

Experiments in Agent-based Sonic Composition–Andreas Pape

Listening to Disaster: Our Relationship to Sound in Danger–Maile Colbert

SO! Reads: Jonathan Sterne’s MP3: The Meaning of a Format–Aaron Trammell