Live Electronic Performance: Theory And Practice
This is the third and final installment of a three-part series on live electronic music. To review part one, “Toward a Practical Language for Live Electronic Performance,” click here. To peep part two, “Musical Objects, Variability and Live Electronic Performance,” click here.
“So often these laptop + controller sets are absolutely boring but this was a real performance – none of that checking your emails on stage BS. Dude rocked some Busta, Madlib, Aphex Twin, Burial and so on…”
This quote, from a blogger about Flying Lotus’ 2008 Mutek set, speaks volumes about audience expectations of live laptop performances. First, the blogger acknowledges that laptop performances are generally perceived as boring, using the “checking your email” adage to drive home the point. He then goes on to express what he perceived to set Lotus’s performance apart from that standard. Oddly enough, it wasn’t the individualism of his sound but rather Lotus’s particular selection and configuration of other artists’ work into his mix – a trademark of the DJ.
Contrast this with the review of the 2011 Flying Lotus set that began this series: both reveal how important context and expectations are to the evaluation of live electronic performance. While the 2008 piece praises Lotus for a DJ-like approach to his live set, the 2011 critique was that the performance was more of a DJ set than a live electronic performance. What changed in the years between these two sets was familiarity with the style of performance (from Lotus and the various other artists on the scene with similar approaches), producing a shift in expectations. What both reviews lack, however, is a language to provide the musical context for their praise or critique; a language which this series has sought to elucidate.
To put live electronic performances into the proper musical context, one must determine what type of performance is being observed. In the last part of this series, I arrived at four helpful distinctions to compare and describe live electronic performance, continuing this project of developing a usable aesthetic language and enabling a critical conversation about the artform. The first of the four distinctions between different types of live electronic music performance concerns the manipulation of fixed pre-recorded sound sources into variable performances. The second covers the physical manipulation of electronic instruments into variable performances. The third demarcates the manipulation of electronic instruments into variable performances by the programming of machines. The last is an integrated category that can be expanded to include any and all combinations of the previous three.
Essential to all categories of live electronic music performance, however, is the performance’s variability, without which music (and its concomitant listening practices) transforms from a “live” event into a fixed musical object. The trick to any analysis of such performances is to remember that, while these distinctions are easy to maintain in theory, in performance they quickly blur one into the other, and often the intensity and pleasure of live electronic music performance comes from their complex combinations.
For example, an artist who performs a set using solely vinyl, with nothing but two turntables and a manual crossfading mixer, falls into the first distinction between live electronic music performances. Technically, the turntables and manual crossfading mixer are machines, but they are being controlled manually rather than performing on their own as machines. If the artist includes a drum machine in the set, however, the set becomes a hybrid (the fourth distinction), depending on whether the drum machine is being triggered by the performer (physical manipulation), playing sequences (machine manipulation), or both. Furthermore, if the drum machine triggers samples, it becomes machine manipulation (third distinction) of fixed pre-recorded sounds (first distinction). If the drum machine is used to play back sequences while the artist performs a turntablist routine, the turntable becomes the performance instrument while the drum machine holds as a fixed source. All of these relationships can be realized by a single performer over the course of a single performance, making the whole set of the hybrid variety.
While in practice the hybrid set is perhaps the most common, it’s important to understand the other three distinctions, as each comes with its own set of limitations which define its potential variability. Critical listening to a live performance includes identifying when these shifts happen and how they change the variability of the set. Through combination, their individual limitations can be overcome, increasing the overall variability of the performance. One can see a performer playing the drum machine with pads, correlate that physicality with the sound produced, and then, on seeing them shift to playing the turntable, know that the drum machine has shifted to a machine performance. In this example the visual cues would be clear indicators, but for one familiar with the distinctions, the shifts can be noticed from the audio alone.
This blending of physical and mechanical elements in live music performance exposes the modular nature of live electronic performance and its instruments, teaching us that the instruments themselves shouldn’t be treated as distinction qualifiers; rather, what matters is their combination in the live rig and the variability that it offers. Where we typically think of an instrument as singular, within live electronic music it is perhaps best to think of the individual components (e.g. turntables and drum machines) as the musical objects of the live rig as instrument.
Percussionists are a close acoustic parallel to the modular musical rig of electronic performers. While there are percussion players who use a single percussive instrument for their performances, others will have a rig of component elements to use at various points throughout a set. The electronic performer inherits such a configuration from keyboardists, who typically have a rig of keyboards, each with different sounds, to be used throughout a set. Availing themselves of a palette of sounds allows keyboardists to break out of the limitations of timbre and verge toward the realm of multi-instrumentalists. For electronic performers, these limitations in timbre only exist by choice in the way the individual artists configure their rigs.
From the perspective of users of traditional instruments, a multi-instrumentalist is one who goes beyond the standard of single-instrument musicianship, representing a musician well versed in performing on a number of different instruments, usually of different categories. In the context of electronic performance, the definition of instrument is so changed that it is more practical to think not of multi-instrumentalists but of multi-timbralists. The multi-timbralist can be understood as the standard in electronic performance. This is not to say there are no single-instrument electronic performers; however, it is practical to think about the live electronic musician’s instrument not as a singular musical object, but rather as a group of musical objects (timbres) organized into the live rig. Because these rigs can comprise a nearly infinite number of musical objects, the electronic performer has the opportunity to craft a live rig that is uniquely their own. The choices they make in the configuration of their rig will define not just the sound of their performance, but the degrees of variability they can control.
Because the electronic performer’s instrument is the live rig comprised of multiple musical objects, one of the primary factors involved in the configuration of the rig is how the various components interact with one another over the time-dependent course of a performance. In a live tape loop performance, the musician may use a series of cassette players with an array of effects units and a small mixer. In such a rig, the mixer is the primary means of communication between objects. In this type of rig, however, the communication isn’t direct. The objects cannot communicate with each other; rather, the artist is the mediator. It is the artist who determines when the sound from any particular tape loop is fed to an effect, or at what levels the effects return sound in relation to the loops. While watching a performance such as this, one would expect the performer to be very involved in physically manipulating the various musical objects. We can categorize this as an unsynchronized electronic performance, meaning that the musical objects employed are not locked into the same temporal relations.
The key difference between unsynchronized and synchronized performance rigs is the amount of control over the performance that can be left to the machines. The benefit of synchronized performance rigs is that they allow for greater complexity either in configuration or control. The value of unsynchronized performance rigs is that they have a more natural and less mechanized feel, as the timing flows from the performer’s physical body. Neither could be understood as better than the other, but in general they do make for separate kinds of listening experiences, which the listener should be aware of in evaluating a performance. Expectations should shift depending upon whether or not a performance rig is synchronized.
This notion of a synchronized performance rig should not only be understood as an inter-machine relationship. With the rise of digital technology, many manufacturers developed workstation-style hardware which could perform multiple functions on multiple sound sources with central synchronized control. The Roland SP-404 is a popular sampling workstation, used by many artists in a live setting. Within this modest box you get twelve voices of sample polyphony, which can be organized with the internal sequencer and processed with onboard effects. However, a performer may choose not to utilize the sequencer at all, in which case the unit can be performed unsynchronized, just by triggering the pads. In fact, in recent years there has been a rise of drum pad players, or finger drummers, who perform using hardware machines without synchronization. Going back to our distinctions, a performance such as this would be a hybrid of the physical manipulation of fixed sources with the physical manipulation of an electronic instrument. From this qualification, we know to look for extensive physical effort in such performances as an indicator of the artist’s agency over the variable performance.
Now that we’ve brought synchronization into the discussion it makes sense to talk about what is often the main means of communication in the live performance rig – the computer. The ultimate benefit of a computer is the ability to process a large number of calculations per computational cycle. Put another way, it allows users to act on a number of musical variables in single functions. Practically, this means the ability to store, organize, recall and even perform a number of complex variables. With the advent of digital synthesizers, computers were being used in workstations to control everything from sequencing to patch and sound-design data. In studios, computers quickly replaced mixing consoles and tape machines (even their digital equivalents like the ADAT), becoming the nerve center of the recording process. Eventually all of these functions and more were able to fit into the small and portable laptop computer, bringing that processing power in a practical way to the performance stage.
A laptop can be understood as a rig in and of itself, comprised of a combination of software and plugins as musical objects, which can be controlled internally or via external controllers. If there were only two software choices and ten plugins available for laptops, there would be over seven million permutations possible. While it is entirely possible (and for many artists practical) for the laptop to be the sole object of a live rig, the laptop is often paired with one or more controllers. The number of controllers available is nowhere near the volume of software on the market, but the possible combinations of hardware controllers take the laptop + controller + software combination possibilities toward infinity. With both hardware and software there is also the possibility of building custom musical objects that add to the potential uniqueness of a rig.
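The “over seven million” figure holds up if we read “permutations” literally as orderings of a ten-plugin effects chain under either of two hosts. A quick sanity check (the two-and-ten scenario is the article’s hypothetical; the rest is plain arithmetic):

```python
import math

# Two hypothetical host-software choices, each able to arrange
# a chain of ten plugins in any order: 2 * 10! orderings.
hosts = 2
plugins = 10
orderings = hosts * math.factorial(plugins)
print(orderings)  # 7257600, i.e. "over seven million"
```

Counting mere subsets instead of orderings would give far fewer combinations, so the ordering of the signal chain is doing most of the combinatorial work here.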
Unfortunately, quite often it is impossible to know exactly what range of tools is being utilized within a laptop strictly by looking at an artist on stage. This is what leads to probably the biggest misconception about the performing laptop musician. As common as the musical object may look on the stage, housed inside of it can be the most unique and intricate configurations music (yes, all of music) has ever seen. The reductionist thought that laptop performers aren’t “doing anything but checking email” is directly tied to the acousmatic nature of the objects as instruments. We can hear the sounds, but determining the sources and understanding the processes required to produce them is often shrouded in mystery. Technology has arrived at the point where what one performs live can precisely replicate what one hears in recorded form, making it easy to leap to the conclusion that all laptop musicians do is press play.
Indeed some of them do, but to varying degrees a large number of artists are actively doing more during their live sets. A major reason for this is that one of the leading Digital Audio Workstations (DAWs) of today also doubles as a performance environment. Designed with the intent of taking the DAW to the stage, Ableton Live allows artists to have an interface that facilitates the translation of electronic concepts from the studio to the stage. There is a world of things that are possible just by learning the Live basics, but there’s also a rabbit hole of advanced functions all the way to the modular Max for Live environment, which lies on the frontier of discovering new variables for sound manipulation. For many people, however, the software is powerful enough at the basic level of use to create effective live performances.
In its most basic use case, Ableton Live is set up much like a DJ rig, with a selection of pre-existing tracks queued up as clips which the performer blends into a uniform mix, with transitions and effects handled within the software. The possibilities expand out from there: multi-track parts of a song separated into different clips so the performer can take different parts in and out over the course of the performance; a plugin drum machine providing an additional sound source on top of the track(s); or, alternately, the drum machine holding a sequence while track elements are laid on top of it. With the multitude of plugins available, just the combination of multi-track Live clips with a single soft-synth plugin lends itself to near-infinite combinations. The variable possibilities of this type of set, even while not exploiting the breadth of variable possibilities presented by the gear, clearly point to the artist’s agency in performance.
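To make “near-infinite” concrete, here is a toy calculation with wholly assumed numbers (eight stems, four synth presets, a twelve-section set – none of these figures come from the article) showing how quickly the state space of even a basic clip-based set explodes:

```python
# Toy model of variability in a basic clip-based set.
# All quantities are illustrative assumptions.
stems = 8          # multi-track clips, each either playing or muted
synth_presets = 4  # soft-synth sounds that can sit on top

textures = (2 ** stems) * synth_presets  # distinct textures at any moment
print(textures)  # 1024

sections = 12      # sections in a set, each choosing one texture
trajectories = textures ** sections
print(trajectories)  # 2**120, roughly 1.3e36 possible set trajectories
```

Even this crude model, which ignores effects, transitions, and timing, yields more possible sets than could ever be performed, which is the point: the performer’s choices within that space are where agency lives.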
Within the context of both the DJ set and the Ableton Live set, synchronization plays a major role in contextualization. Both categories of performance can be either synchronized or unsynchronized. The tightest of unsynchronized sets can sound synchronized, while the loosest of synchronized sets can sound unsynchronized. This plays very much into audience perception of what they are hearing, and the performer’s choice of synchronization and tightness can be heavily influenced by those same audience expectations.
A techno set is expected to maintain somewhat of a locked groove, indicative of a synchronized performance. A synchronized rig, either on the DJ side (Serato utilizing automated beat matching) or on the Ableton side (time stretch and auto BPM detection synced via MIDI), can make this a non-factor for the physical performance, and so in listening to such a performance it would be the variability of other factors which reveals the artist’s control. For the DJ, those factors would include selection, transitions and effects use. For the Ableton user, they can include all of those things as well as control over the individual elements in tracks and potentially other sound sources.
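The machine-to-machine synchronization described above typically rests on MIDI clock, which by specification sends 24 pulses per quarter note. A small sketch of the timing involved shows why synced rigs hold a groove more tightly than human hands can:

```python
# MIDI clock runs at 24 pulses per quarter note (PPQN) by spec,
# so the pulse interval shrinks as the tempo rises.
PPQN = 24

def midi_clock_interval_ms(bpm: float) -> float:
    """Milliseconds between successive MIDI clock pulses at a tempo."""
    quarter_note_seconds = 60.0 / bpm
    return quarter_note_seconds / PPQN * 1000.0

# At a typical techno tempo, synced machines exchange a timing pulse
# roughly every 19 milliseconds, far finer than human motor timing.
print(round(midi_clock_interval_ms(130), 2))  # 19.23
```

The locked groove of a synchronized set, in other words, is maintained at a resolution well below the threshold of conscious human timing, which is exactly why the listener must look to other variables for evidence of the artist’s agency.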
On the unsynchronized end of the spectrum, a vinyl DJ could accomplish the same mix as the synchronized DJ set, but it would physically require more effort on their part to keep all of the selections in time. This would mean they might have to limit exerting control on the other variables. An unsynchronized Live set would utilize the software primarily as a sound source, without MIDI sync, placing the timing in the hands of the performer. With the human element added to the timing, it would be more difficult to produce the machine-like timing of the other sets. This doesn’t mean that it couldn’t be effective, but there would be an audible difference in this type of set compared to the others.
What we’ve established is that through the modular nature of the electronic musician’s rig as an instrument, from synthesizer keyboards to Ableton Live, every variable consideration combines to produce infinite possibilities. Where the trumpet is limited in timbre, range and dynamics, the turntable has infinite timbres; its range is the full spectrum of human hearing; and its dynamics are directly proportional to the output. The limitations of the electronic musician’s instrument appear only in electrical current constraints, processor speed limits, the selection of components and the limitations of the human body.
Within these constraints, however, we have only begun touching the surface of possibilities. There are combinations that have never happened, variables that haven’t come close to their full potential, and a multitude of variables that have yet to be discovered. One thing that the electronic artist can learn from jazz toward maximizing that potential is the notion of play, as epitomized by jazz improvisation. For jazz, improvisation opened up the possibilities of the form, which impacted both performance and composition. I contend that the electronic artist can push the boundaries of variable interaction by incorporating the ability to play from the rig, both in their physical performance and by giving the machine its own sense of play. Within this play lie the variables which I believe can push electronic music into the jazz of tomorrow.
Featured Image by Flickr User Juha van ‘t Zelfde
Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. In 2014 he will be releasing an expanded version of this series as a book entitled “Toward a Practical Language: Live Electronic Performance”. He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions Podcast series with his monthly show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.
REWIND! . . .If you liked this post, you may also dig:
Evoking the Object: Physicality in the Digital Age of Music–Primus Luta
Experiments in Agent-based Sonic Composition–Andreas Pape
Calling Out To (Anti)Liveness: Recording and the Question of Presence—Osvaldo Oyola
Toward a Practical Language for Live Electronic Performance
Amongst friends I’ve been known to say, “electronic music is the new jazz.” They are friends, so they smile, scoff at the notion and then indulge me in the Socratic exercise I am begging for. They usually win. The onus, after all, is on me to prove electronic music worthy of such an accolade. I definitely hold my own, often getting them to acknowledge that there is potential, but it usually takes a die-hard electronic fan to accept my claim. Admittedly, the weakest link in my argument has been live performance. I can talk about redefinitions of structure, freedom of forms and timbral infinity for days, but measuring a laptop performance up to a Miles Davis set (even one of the ones where his back remained to the crowd) is a seemingly impossible hurdle.
Mind you, I come from a jazzist perspective, which means that I consider jazz the pinnacle of western music. My classicist interlocutors will naturally cite the numerous accomplishments of classical composers as being unmatched within jazz. That will bring us to long debates about the merits of Charles Mingus and Duke Ellington as composers, which leads, for a good many, to a concession on the part of Duke at least, but an inevitable assertion of the general inferiority of U.S. composers compared to the European canon. And then I will say, “why are we limiting things to composition when jazz goes so much further than the page?” To which I will get the reply: “orchestral performers were of the highest caliber.” Then I will rebut, “well why was Europe so impressed by Sidney Bechet?” But I digress.
Why talk about classical music in a piece on electronic music, you, my current interlocutor, may ask? Well, in placing electronic music in a historical context, its current stage of development keeps pace with the mental cleverness found in classical but applies it to different theoretical principles. The electronic musician’s DAW (Digital Audio Workstation) file amounts to the classical composer’s score; the electronic musician’s DSP (Digital Signal Processor) parallels the classical composer’s orchestra. I could call electronic music “the new classical” and I’d have a few supporters. But. . .taking it to the level of jazz? Electronic music would have to include not only the mental cleverness, but the physical cleverness as well.
Let’s back up for a bit. A couple years back, I did a piece for Create Digital Music on Live Electronic performance. I talked to a diverse group of artists about their processes for live performance, and I wrote it up with some video examples. It ended up being one of the most discussed pieces on CDM that year, with commentary ranging from fascination at the presentation of techniques to dismissal of the videos as drug-addled email inbox management.
This was to be expected, because of the lack of a language for evaluating electronic music. It is impossible to defend an artist who has been called a hack without the language through which to express their proficiency. Using Miles Davis as an example – specifically a show where his back is to the audience – there are fans who could defend his actions by saying the music he produced was nonetheless some of the best live material of his career, citing the solos and band interactions as examples. To the lay person, however, it may just seem rude and unprofessional for Davis to have his back to the audience; as such, it cannot be qualitatively a good performance no matter what. Any discussion of tone and lyrical fluidity often means little to the lay person.
The extent of this disconnect can be even greater with electronic performances. With his back turned to the audience, they can no longer see Miles’ fingers at work, or how he was cycling breath. Even when facing the crowd, however, an electronic musician performs a regimen largely comprised of pad triggers, knob turns, and other such gestures which simply do not have the same expected sonic correspondence as, for example, blowing and fingering do to the sound of a trumpet. Also, it is well known that the sound the trumpet produces cannot be made without human action. With electronic music, however, particularly with laptop performances, audiences know that the instrument (laptop) is capable of playing music without human aid beyond telling it to play. The “checking their email” sentiment is a challenge to the notion that what one is seeing in a live electronic performance is indeed an “actual performance.”
In the time since writing the CDM piece, I’ve seen well over a hundred live sets, listened to days’ worth of live recordings, spoken in depth with countless artists about their choices on stage, and gauged fan reactions many times over: from mind-blowing performances in barns to glorified electronic karaoke in sold-out venues, tempo-locked beat matching to eight-channel cassette tape loops, ten-thousand-dollar hardware to circuit-bent baby toys. After all of that, I still don’t know that I can win the jazz vs. electronic music debate, but I will at least try.
A while back, I was paging through the December 2011 edition of The Wire when I came upon a review of a Flying Lotus performance, the conclusion of which stood out:
On record, the music has the unruly liquidity of dream logic wandering from astral pathways down alphabet street, returning via back alleys on its own whims. Maybe the listening mind, presented with pretty straight analogues of those tracks, rebels, expecting something more mercurial, more improvised. The atmosphere in the venue reflected this upper-downer tension and constraint: the crowd noise was positive, but crowd movement was minimal – a strange sight in the midst of FlyLo’s headier jams. When the hall emptied there was a grumbling undercurrent as the tide of humanity was spilling slowly down the Roundhouse steps, whispers of it must have reached the upper levels. One casualty high above leaned over to berate them: “You don’t know, even understand, what you just FELT.” Sadly though, he didn’t stick around to enlighten anyone.
It should be noted that there are positive reviews of the show, and while not necessarily the best gauge, the videos from the event may seem to tell a different story.
What stood out for me from the review, however, was that in trying to write about what the writer felt was a less than stellar performance, there was only one critique that could be attributed directly to the music: that Flying Lotus performed “straight analogues” of his tracks. Beyond that, the writer was left describing the feelings of the audience.
Feelings are tricky things. We all have them, and they are the fundamental point of connection we seek when experiencing music. The message conveyed through the medium of music is meant to be an emotional one. But measuring those emotions is a task which cannot escape subjectivity. In a case like this, when one writer is attempting to speak for the feelings of the whole audience, it becomes trickier still. Sure, the writer may consider their analysis to have been objective, but it was still based on their perception of the audience, not the audience’s perception. Even more, this gauging of the audience dynamic does not tell us how the actual musical performance was, regardless of the varied perspectives within the audience. I contend that this gap occurs because the language for discussing electronic performance has not yet been established.
Around the time I read The Wire review I was also reading Adam Harper’s Infinite Music, which offers variability as a primary factor of analysis in music. Instead of building on traditional music theory, Harper takes cues from those on the fringes of western music. He builds a concept of ‘music space’ by expanding John Cage’s “sound space,” the limits of which are ear-determined. Furthermore, Harper’s non-musical variables, and how they play into creating individually unique musical events, strengthen Christopher Small’s notion of musicking as a verb. In this way, Harper creates a fluid language for discussing music which might prove practical for these purposes.
It is helpful to use one of the central concepts of Harper’s music space, musical objects, as a means of distinguishing electronic performance.
Systems of variables constitute musical objects – Adam Harper
Going back to Miles Davis, his instrument is a monophonic musical object with a limited pitch and dynamic range in the upper register of the brass timbre. His musical talent is evaluated based on how he is able to work within those limitations to create variable experiences. His band represents another musical object, comprised of the individual players as musical objects as well. The venue in which they are playing is a musical object, as is the audience and Davis’ decision to perform with his back to it. It is the coming together of all of these musical objects that creates the musical event (an alternate event includes the musical object which recorded the performance, and the complete setting of the listener as an individual musical object upon playing the live recording). In a musical event comprised of these musical objects – Davis performing live in front of an audience with his back turned so he can face the band – it is possible to imagine a similar reaction to the above commentary about Flying Lotus, including a guy berating the audience for not making the connection.
In this Davis example however, we could listen to the audio to determine whether or not it was a “good” performance by analyzing the musical objects which can be observed in the recording (note: this would be technical analysis of the performance, not the event or its reception). Does Davis’s tone falter? How strong are the solos? Is he staying in the pocket with the rest of the band? Evaluation of these variables would be a testament to his proficiency which could be compared to other performances to determine if it measures up.
Flying Lotus’s set however is a bit different. Yes, we could listen back to the audio (or watch the video) and determine if indeed it measures up to other sets he has performed, but unlike with Davis, we cannot translate what we hear directly to his agency. When we hear the trumpet on the Davis recording we know that the sound is caused by him physically blowing into his instrument. When we hear a bass in a Flying Lotus set, there isn’t necessarily a physical act associated with the creation of the sound. With all of the visual cues removed in the Davis example, we can still speak about the performance aspect of the music; the same is not necessarily so about an electronic set, even with visual cues. In many electronic sets, it is only when something goes wrong that actual agency in the music being performed can be attributed.
Where the advent of the laptop and advances in DSP have expanded creative possibilities for music, they also shroud what the performers using them are actually doing in more mystery. It’s an esoteric language, or perhaps languages, as ultimately each artist’s live rig configuration amounts to different musical objects, across which there may not be compatibilities.
However, in certain musical circles there are common musical objects. Perhaps the most common musical object for performance in electronic music right now is Ableton Live, which results in common component musical objects across performances by different artists. Further, an Ableton Live set can sound just like a Roland 404 set, which can sound just like a DJ set with a Kaoss pad, all of which can sound identical to a set not performed live but produced in the studio (or bedroom as the case may be) for a podcast. The reason for this is that much of the music is already fixed. What changes is the sequencing of these fixed pieces of music over time, their transitions and the variety of effects employed. The goal for these types of sets is a continuous flow of pre-arranged music, which parallels that of a DJ set.
In the past few years, the line between a live electronic set and a DJ set has been blurred extensively. Fans have become fairly critical of artists, to the point that it has become standard practice for promoters to list whether performances will be live or a DJ set. Even on the DJ end of the spectrum there are a lot of questions, as artists have been called out for their DJ sets being iPod playlists. To qualify as a live set, however, an artist must be doing more than just playing songs. How much more is debatable, but should it be?
Nobody in their right mind would call Miles Davis a hack. Even those who disliked specific performances would rarely question his proficiency with the instrument. The reason is that his talent rises well above the standard of performance beneath which someone could qualify as a hack. If a trumpet player spent a whole night performing only shrill notes of a C major chord around middle C, without qualifying that their performance was so constrained as a stylistic choice, one might consider calling that artist out as a hack (I apologize in advance to any serious musician who fits this description).
The rationale behind this assessment comes from knowing the potential variability of the instrument and recognizing that the performer is not exploring any of it. Other layers of variability (e.g. an effects chain) could be added to the trumpet to make the performance musically interesting, but without them it can be said objectively that the player does not measure up to the standard expected of a trumpeter. If we say that the trumpet has an extensive dynamic range, a tonality that can go from smooth to harsh, and a pitch range of just over three octaves, we can see how the player in our example exhibits quite a low proficiency.
This holds across all styles of trumpet playing. Were a style to impose limitations on a player, it could be said that the style did not allow for the full expression of proficiency on the instrument. A player within that style could be considered proficient in that context, but a broader performance would be required to analyze their general proficiency. So the player in our example could be a master of “Shrill C” trumpet, but to compare with a Miles Davis they would have to perform outside that style. Conversely, Miles Davis may be one of the world’s greatest trumpet players, but possibly the worst “Shrill C” trumpet player ever.
From this we can see that the language of variability provides a unique way to speak objectively about the performance of musical objects, while fully taking into account how styles play into performance. Using this language, we open the world of electronic performance up to analysis and comparison.
This is part one of a three part series. In my next installment, I will use some of the language here to analyze the instruments and techniques used in electronic performance today. Once we have a fluid language for describing what is being used, I believe we will be better equipped to speak about what happens on stage.
Featured Image by Flickr User Scanner FM, Flying Lotus – Sónar 2012 – Jueves 14/05/2012
Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions Podcast series with his monthly show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.
REWIND! . . .If you liked this post, you may also dig:
Experiments in Agent-based Sonic Composition–Andreas Pape
Evoking the Object: Physicality in the Digital Age of Music–Primus Luta
Sound as Art as Anti-environment–Steven Hammer