
A Brief History of Auto-Tune

This is the final article in Sounding Out!‘s April Forum on “Sound and Technology.” Every Monday this month, you’ve heard new insights on this age-old pairing from the likes of Sounding Out! veteranos Aaron Trammell and Primus Luta along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks have shared their thinking about everything from Auto-Tune to techie manifestos. Today, Marshall helps us understand just why we want to shift pitch-time so darn bad. Wait, let me clean that up a little bit. . .so darn badly. . .no wait, run that back one more time. . .jjuuuuust a little bit more. . .so damn badly. Whew! There! Perfect!–JS, Editor-in-Chief

A recording engineer once told me a story about a time when he was tasked with “tuning” the lead vocals from a recording session (identifying details have been changed to protect the innocent). Polishing up vocals is an increasingly common job in the recording business, with some dedicated vocal producers even making it their specialty. Being able to comp, tune, and repair the timing of a vocal take is now a standard skill set among engineers, but in this case things were not going smoothly. Whereas singers usually tend towards being either consistently sharp or flat (“men go flat, women go sharp,” as another engineer explained), in this case the vocalist was all over the map, making it difficult to know exactly what note they were even trying to hit. Complicating matters further was the fact that this band had a decidedly lo-fi, garage-y reputation, making your standard-issue, Glee-grade tuning job wholly inappropriate.

Undaunted, our engineer pulled up the Auto-Tune plugin inside Pro-Tools and set to work tuning the vocal, to use his words, “artistically” – that is, not perfectly, but enough to keep it from being annoyingly off-key. When the band heard the result, however, they were incensed – “this sounds way too good! Do it again!” The engineer went back to work, this time tuning “even more artistically,” going so far as to pull the singer’s original performance out of tune here and there to compensate for necessary macro-level tuning changes elsewhere.

"Melodyne screencap" by Flickr user Ethan Hein, CC BY-NC-SA 2.0

“Melodyne screencap” by Flickr user Ethan Hein, CC BY-NC-SA 2.0

The product of the tortuous process of tuning and re-tuning apparently satisfied the band, but the story left me puzzled… Why tune the track at all? If the band was so committed to not sounding overproduced, why go to such great lengths to make it sound like you didn’t mess with it? This, I was told, simply wasn’t an option. The engineer couldn’t in good conscience let the performance go un-tuned. Digital pitch correction, it seems, has become the rule, not the exception, so much so that the accepted solution for too much pitch correction is more pitch correction.

Since 1997, recording engineers have used Auto-Tune (or, more accurately, the growing pantheon of digital pitch correction plugins for which Auto-Tune, Kleenex-like, has become the household name) to fix pitchy vocal takes, lend T-Pain his signature vocal sound, and reveal the hidden vocal talents of political pundits. It’s the technology that can make the tone-deaf sing in key, make skilled singers perform more consistently, and make MLK sound like Akon. And at 17 years of age, “The Gerbil,” as some like to call Auto-Tune, is getting a little long in the tooth (certainly by meme standards). The next U.S. presidential election will include a contingent of voters who have never drawn air that wasn’t once rippled by Cher’s electronically warbling voice in the pre-chorus of “Believe.” A couple of years after that, the Auto-Tune patent will expire and its proprietary status will dissolve into the collective ownership of the public domain.


Growing pains aside, digital vocal tuning doesn’t seem to be leaving any time soon. Exact numbers are hard to come by, but it’s safe to say that the vast majority of commercial music produced in the last decade or so has been digitally tuned. Future Music editor Daniel Griffiths has ballpark-estimated that, as early as 2010, pitch correction was used in about 99% of recorded music. Reports of its death are thus premature at best. If pitch correction seems banal, it doesn’t mean it’s on the decline; rather, it’s a sign that we are increasingly accepting its underlying assumptions and internalizing the habits of thought and listening that go along with them.

Headlines in tech journalism are typically reserved for the newest, most groundbreaking gadgets. Often, though, the really interesting stuff only happens once a technology begins to lose its novelty, recede into the background, and quietly incorporate itself into fundamental ways we think about, perceive, and act in the world. Think, for example, about all the ways your embodied perceptual being has been shaped by and tuned-in to, say, the very computer or mobile device you’re reading this on. Setting value judgments aside for a moment, then, it’s worth thinking about where pitch correction technology came from, what assumptions underlie the way it works and how we work with it, and what it means that it feels like “old news.”

"Anti-Tune symbol"

“Anti-Tune symbol”

As is often the case with new musical technologies, digital pitch correction has been the target of no small amount of controversy and even hate. The list of indictments typically includes the homogenization of music, the devaluation of “actual talent,” and the destruction of emotional authenticity. Suffice it to say, the technological possibility of producing ostensibly “pitch-perfect” performances has wreaked a fair amount of havoc on conventional ways of performing and evaluating music. As Primus Luta reminded us in his SO! piece on the powerful-yet-untranscribable “blue notes” that emerged from the idiosyncrasies of early hardware samplers, musical creativity is at least as much about digging into and interrogating the apparent limits of a technology as it is about the successful removal of all obstacles to total control of the end result.

Paradoxically, it’s exactly in this spirit that others have come to the technology’s defense: Brian Eno, ever open to the unexpected creative agency of perplexing objects, credits the quantized sound of an overtaxed pitch corrector with renewing his interest in vocal performances. SO!’s own Osvaldo Oyola, channeling Walter Benjamin, has similarly offered a defense of Auto-Tune as a democratizing technology, one that both destabilizes conventional ideas about musical ability and allows everyone to sing in-tune, free from the “tyranny of talent and its proscriptive aesthetics.”

"Audiodatenkompression: Manowar, The Power of Thy Sword" by Wikimedia user Moehre1992, CC BY-SA 3.0

“Audiodatenkompression: Manowar, The Power of Thy Sword” by Wikimedia user Moehre1992, CC BY-SA 3.0

Jonathan Sterne, in his book MP3, offers an alternative to normative accounts of media technology (in this case, narratives either of the decline or rise of expressive technological potential) in the form of “compression histories” – accounts of how media technologies and practices directed towards increasing their efficiency, economy, and mobility can take on unintended cultural lives that reshape the very realities they were supposed to capture in the first place. The algorithms behind the MP3 format, for example, were based in part on psychoacoustic research into the nature of human hearing, framed primarily around the question of how many human voices the telephone company could fit into a limited bandwidth electrical cable while preserving signal intelligibility. The way compressed music files sound to us today, along with the way in which we typically acquire (illegally) and listen to them (distractedly), is deeply conditioned by the practical problems of early telephony. The model listener extracted from psychoacoustic research was created in an effort to learn about the way people listen. Over time, however, through our use of media technologies that have a simulated psychoacoustic subject built-in, we’ve actually learned collectively to listen like a psychoacoustic subject.

Pitch-time manipulation runs largely in parallel to Sterne’s bandwidth compression story. The ability to change a recorded sound’s pitch independently of its playback rate had its origins not in the realm of music technology, but in efforts to time-compress signals for faster communication. Instead of reducing a signal’s bandwidth, pitch manipulation technologies were pioneered to reduce the time required to push the message through the listener’s ears and into their brain. As early as the 1920s, the mechanism of the rotating playback head was being used to manipulate pitch and time interchangeably. By spinning a continuous playback head relative to the motion of the magnetic tape, researchers in electrical engineering, educational psychology, and pedagogy of the blind found that they could increase the playback rate of recorded voices without turning the speakers into chipmunks. Alternatively, they could rotate the head against a static piece of tape and allow a single moment of recorded sound to unfold continuously in time – a phenomenon that influenced the development of a quantum theory of information.
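
To make the pitch/time coupling concrete, here is a minimal Python sketch (my own illustration, not a reconstruction of any historical device): naive resampling speeds a voice up and raises its pitch together, the chipmunk effect, while a simple overlap-add stretch, loosely analogous to a rotating head rescanning the tape, changes duration while roughly preserving pitch. NumPy is assumed, and the frame and hop sizes are arbitrary choices.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def resample_playback(signal, rate):
    """Vary playback rate: pitch and duration change together.
    rate=2.0 halves the duration AND raises the pitch an octave."""
    idx = np.arange(0, len(signal) - 1, rate)
    return np.interp(idx, np.arange(len(signal)), signal)

def ola_stretch(signal, stretch, frame=2048, hop=512):
    """Overlap-add time-stretch: duration changes, pitch (roughly) does not.
    Windowed grains are re-laid at a new hop, like a spinning head re-scanning tape."""
    window = np.hanning(frame)
    out_hop = int(hop * stretch)
    n_frames = 1 + (len(signal) - frame) // hop
    out = np.zeros((n_frames - 1) * out_hop + frame)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        grain = signal[i * hop : i * hop + frame] * window
        out[i * out_hop : i * out_hop + frame] += grain
        norm[i * out_hop : i * out_hop + frame] += window
    return out / np.maximum(norm, 1e-8)  # undo the window overlap weighting

t = np.arange(SR) / SR
voice = np.sin(2 * np.pi * 220 * t)        # stand-in for a recorded voice
chipmunk = resample_playback(voice, 2.0)   # twice as fast AND an octave up
compressed = ola_stretch(voice, 0.5)       # twice as fast, same pitch
print(len(voice), len(chipmunk), len(compressed))
```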

In the early days of recorded sound, some people had found a metaphor for human thought in the path of a phonograph’s needle. When the needle became a head and that head began to spin, ideas about how we think, listen, and communicate followed suit: in 1954 Grant Fairbanks, the director of the University of Illinois’ Speech Research Laboratory, put forth an influential model of the speech-hearing mechanism as a system where the speaker’s conscious intention of what to say next is analogized to a tape recorder full of instructions, its drive “alternately started and stopped, and when the tape is stationary a given unit of instruction is reproduced by a moving scanning head” (136). Pitch-time changing was more a model for thinking than it was for singing, and its imagined applications were thus primarily non-musical.

Take for example the Eltro Information Rate Changer. The first commercially available dedicated pitch-time changer, the Eltro advertised its uses as including “pitch correction of helium speech as found in deep sea; Dictation speed testing for typing and steno; Transcribing of material directly to typewriter by adjusting speed of speech to typing ability; medical teaching of heart sounds, breathing sounds etc. by slow playback of these rapid occurrences.” (It was also, incidentally, used by Kubrick to produce the eerily deliberate vocal pacing of HAL 9000.) In short, for the earliest “pitch-time correction” technologies, the pitch itself was largely a secondary concern, of interest primarily because it was desirable for the sake of intelligibility to pitch-change time-altered sounds into a more normal-sounding frequency range.


This coupling of time compression with pitch changing continued well into the era of digital processing. The Eventide Harmonizer, one of the first digital hardware pitch shifters, was initially used to pitch-correct episodes of “I Love Lucy” which had been time-compressed to free up broadcast time for advertising. Similar broadcast time compression techniques have proliferated and become common in radio and television (see, for example, David Foster Wallace’s account of the “cashbox” compressor in his essay on an LA talk radio station). Speed listening technology initially developed for the visually impaired has similarly become a way of producing the audio “fine print” at the end of radio advertisements.

"H910 Harmonizer" by Wikimedia user Nalzatron, CC BY-SA 3.0

“H910 Harmonizer” by Wikimedia user Nalzatron, CC BY-SA 3.0

Though the popular conversation about Auto-Tune often leaves this part out, it’s hardly a secret that pitch-time correction is as much about saving time as it is about hitting the right note. As Auto-Tune inventor Andy Hildebrand put it,

[Auto-Tune’s] largest effect in the community is it’s changed the economics of sound studios…Before Auto-Tune, sound studios would spend a lot of time with singers, getting them on pitch and getting a good emotional performance. Now they just do the emotional performance, they don’t worry about the pitch, the singer goes home, and they fix it in the mix.

Whereas early pitch-shifters aimed to speed-up our consumption of recorded voices, the ones now used in recording are meant to reduce the actual time spent tracking musicians in studio. One of the implications of this framing is that emotion, pitch, and the performer take on a very particular relationship, one we can find sketched out in the Auto-Tune patent language:

Voices or instruments are out of tune when their pitch is not sufficiently close to standard pitches expected by the listener, given the harmonic fabric and genre of the ensemble. When voices or instruments are out of tune, the emotional qualities of the performance are lost. Correcting intonation, that is, measuring the actual pitch of a note and changing the measured pitch to a standard, solves this problem and restores the performance. (Emphasis mine. Similar passages can be found in Auto-Tune’s technical documentation.)

In the world according to Auto-Tune, the engineer is in the business of getting emotional signals from place to place. Emotion is the message, and pitch is the medium. Incorrect (i.e. unexpected) pitch therefore causes the emotion to be “lost.” While this formulation may strike some people as strange (for example, does it mean that we are unable to register the emotional qualities of a performance from singers who can’t hit notes reliably? Is there no emotionally expressive role for pitched performances that defy their genre’s expectations?), it makes perfect sense within the current division of labor and affective economy of the recording studio. It’s a framing that makes it possible, intelligible, and at least somewhat compulsory to have singers “express emotion” as a quality distinct from the notes they hit and have vocal producers fix up the actual pitches after the fact. Both this emotional model of the voice and the model of the psychoacoustic subject are useful frameworks for the particular purposes they serve. The trick is to pay attention to the ways we might find ourselves bending to fit them.
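
To make the patent’s “measure the actual pitch and change it to a standard” move concrete, here is a deliberately naive sketch of the underlying arithmetic. It is emphatically not Antares’s algorithm: real correctors track pitch over time and glide toward the target at an adjustable retune speed, and the 12-tone equal temperament reference with A4 at 440 Hz is an assumption.

```python
import math

def nearest_standard_pitch(f_hz, a4=440.0):
    """Snap a measured frequency to the nearest 12-tone equal-tempered pitch."""
    semitones = 12 * math.log2(f_hz / a4)      # signed distance from A4 in semitones
    return a4 * 2 ** (round(semitones) / 12)   # quantize, then convert back to Hz

print(nearest_standard_pitch(447.3))  # a slightly sharp A4 -> 440.0
```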


Owen Marshall is a PhD candidate in Science and Technology Studies at Cornell University. His dissertation research focuses on the articulation of embodied perceptual skills, technological systems, and economies of affect in the recording studio. He is particularly interested in the history and politics of pitch-time correction, cybernetics, and ideas and practices about sensory-technological attunement in general. 

Featured image: “Epic iPhone Auto-Tune App” by Flickr user Photo Giddy, CC BY-NC 2.0

REWIND!…If you liked this post, you may also dig:

“From the Archive #1: It is art?”-Jennifer Stoever

“Garageland! Authenticity and Musical Taste”-Aaron Trammell

“Evoking the Object: Physicality in the Digital Age of Music”-Primus Luta

Tomahawk Chopped and Screwed: The Indeterminacy of Listening

I’m happy to introduce the final post in Guest Editor Justin Burton‘s three-part series for SO!, “The Wobble Continuum.” I’ll leave Justin to recap the series and reflect on it a little in his article below, but first I want to express our appreciation to him for his thoughtful curation of this exciting series, the first in the new Thursday stream at Sounding Out!. Thanks for getting the ball rolling!

Next month be sure to watch this space for a preview of sound at the upcoming Society for Cinema & Media Studies meeting in Seattle, and a new four part series on radio in Latin America by Guest Editor Tom McEnaney.

– Neil Verma, Special Editor for ASA/SCMS

I’m standing at a bus stop outside the Convention Center in downtown Indianapolis, whistling. The tune, “Braves,” is robust, a deep, oscillating comeuppance of the “Tomahawk Chop” melody familiar from my youth (the Braves were always on TBS). There’s a wobbly synthesizer down in the bass, a hi hat cymbal line pecking away at the Tomahawk Chop. This whistled remix of mine really sticks it to the original tune and the sports teams who capitalize on racist appropriations of indigenous cultures. All in all, it’s a sublime bit of musicality I’m bestowing upon the cold Indianapolis streets.

Until I become aware of the other person waiting for the bus. As I glance over at him, I can now hear my tune for what it is. The synthesizer and hi hat are all in my head, the bass nowhere to be heard. This isn’t the mix I intended, A Tribe Called Red’s attempt at defanging the Tomahawk Chop, at re-appropriating stereotypical sounds and spitting them back out on their own terms. Nope, this is just a guy on the street whistling those very stereotypes: it’s the Tomahawk Chop. I suddenly don’t feel like whistling anymore.

*****

As we conclude our Wobble Continuum guest series here at Sounding Out!, I want to think about the connective tissues binding together the previous posts from Mike D’Errico and Christina Giacona, joining A Tribe Called Red and the colonialist culture into which they release their music, and linking me to the guy at the bus stop who is not privy to the virtuosic sonic accompaniment in my head. In each case, I’ll pay attention to sound as material conjoining producers and consumers, and I’ll play with Karen Barad’s notion of performativity to hear the way these elements interact [Jason Stanyek and Ben Piekut also explore exciting possibilities from Barad in “Deadness” (TDR 54:1, 2010)].

"Sound Waves: Loud Volume" by Flickr user Tess Watson, CC BY 2.0

“Sound Waves: Loud Volume” by Flickr user Tess Watson, CC BY 2.0

Drawing from physicist Niels Bohr, Barad begins with the fact that matter is fundamentally indeterminate. This is formally laid out in the Heisenberg Uncertainty Principle, which notes that the more precisely we can determine (for instance) the position of a particle, the less we can say with certainty about its momentum (and vice versa). Barad points out that “‘position’ only has meaning when a rigid apparatus with fixed parts is used (eg, a ruler is nailed to a fixed table in the laboratory, thereby establishing a fixed frame of reference for establishing ‘position’)” (2003, 814).
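
For reference, the trade-off Barad is invoking is conventionally written

\[
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2},
\]

where \(\Delta x\) and \(\Delta p\) are the spreads in a particle’s position and momentum and \(\hbar\) is the reduced Planck constant: the more sharply position is fixed, the less determinate momentum becomes, and vice versa.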

This kind of indeterminacy is characteristic of sound, which vibrates along a cultural continuum, and which, in sliding back and forth along that continuum, allows us to tune into some information even as other information distorts or disappears. This can feel very limiting, but it can also be exhilarating, as what we are measuring is a variety of possibilities prepared to unfold before us as matter and sound become increasingly unpredictable and slippery. We can observe this continuum in the tissue connecting the previous posts in this series. In the first, Mike D’Errico tunes into the problematic hypermasculinity of brostep, pinpointing the ways music software interfaces can rehash tropes of control and dominance (Robin James has responded with productive expansions of these ideas), dropping some areas of music production right back into systems of patriarchy. In the second post, Giacona, in highlighting the anti-racist and anti-colonial work of A Tribe Called Red, speaks of the “impotence” visited upon the Tomahawk Chop by ATCR’s sonic interventions. Here, hypermasculinity is employed as a means of colonial reprimand for a hypermasculine, patriarchal culture. In sliding from one post to the other, we’ve tuned into different frequencies along a continuum, hearing the possibilities (both terrorizing and ameliorative) of patriarchal production methods unfolding before us.

"Skrillex at Forum, Copenhagen" by Flickr user Jacob Wang, CC-BY-SA-2.0

“Skrillex at Forum, Copenhagen” by Flickr user Jacob Wang, CC-BY-SA-2.0

Barad locates the performative upshot of this kind of indeterminacy in the fact that the scientist, the particle, and the ruler nailed to the table in the lab are all three bound together as part of a single phenomenon—they become one entity. To observe something is to become entangled with it, so that all of the unfolding possibilities of that particle become entwined with the unfolding possibilities of the scientist and the ruler, too. The entire phenomenon becomes indeterminate as the boundaries separating each entity bleed together, and these entities only detangle by performing—by acting out—boundaries among themselves.

Returning to Giacona’s discussion of “Braves,” it’s possible to mix and remix our components to perform them—to act them out—in more than one way. Giacona arranges it so that ATCR is the scientist, observing a particle that is a colonizing culture drunk on its own stereotypes. Here, “Braves” is the ruler that allows listeners to measure something about that culture. Is that something location? Direction? Even if we can hear clearly what Giacona leads us to—an uncovering of stereotypes so pernicious as to pervade, unchallenged, everyday activities—there’s an optimism available in indeterminacy. As we slide along the continuum to the present position of this colonialist culture, the certainty with which we can say anything about its trajectory lessens, opening the very possibility that motivates ATCR, namely the hope of something better.

"ATCR 1" by Flickr user MadameChoCho, CC BY-NC-SA 2.0

“ATCR 1″ by Flickr user MadameChoCho, CC BY-NC-SA 2.0

But listening and sounding are tricky things. As I think about my whistling of “Braves” in Indianapolis, it occurs to me that Giacona’s account is easily subverted. It could be that ATCR is the particle, members of a group of many different nations reduced to a single voice in a colonial present populated by scientists (continuing the analogy) who believe in Manifest Destiny and Johnny Depp. Now the ruler is not “Braves” but the Tomahawk Chop melody ATCR attempts to critique, and the group is measured by the same lousy standard colonizers always use. In this scenario, people attend ATCR shows in redface and headdresses, and I stand on the street whistling a war chant. We came to the right place, but we heard—or in my case, re-sounded—the wrong thing.

"Knob Twiddler" by Flickr user Jes, CC BY-SA 2.0

“Knob Twiddler” by Flickr user Jes, CC BY-SA 2.0

Jennifer Stoever-Ackerman’s “listening ear” is instructive here. Cultures as steeped in indigenous stereotypes as the United States and Canada have conditioned their ears to hear ATCR through whiteness, through colonialism, making it difficult to perceive the subversive nature of “Braves.” ATCR plays a dangerous game in which they are vulnerable to being heard as a war chant rather than a critique; their material must be handled with care. There’s a simple enough lesson for me and my whistling: some sounds should stay in my head. But Barad offers something more fundamental to what we do as listeners. By recognizing that (1) there are connective tissues deeply entangling the materiality of our selves, musicians, and music, and (2) listening is a continuum revealing only some knowledge at any given moment, we can begin to imagine and perform the many possibilities that open up to us in the indeterminacy of listening.

If everything sounds certain to us when we listen, we’re doing it wrong. Instead, for music to function productively, we as listeners must find our places in a wobbly continuum whose tissues connect us to the varied appendages of music and culture. Once so entangled, we’ll ride those synth waves down to the low end as hi hats all the while tap out the infinite possibilities opening in front of us. 

Featured image: “a tribe called red_hall4_mozpics (2)_GF” by Flickr user Trans Musicales, CC BY-NC-ND 2.0

Justin Burton is a musicologist specializing in US popular music and culture. He is especially interested in hip hop and the ways it is sounded across regions, locating itself in specific places even as it expresses transnational and diasporic ideas. He is Assistant Professor of Music at Rider University, where he teaches in the school’s Popular Music and Culture program. He helped design the degree, which launched in the fall of 2012, and he is proud to be able to work in such a unique program. His book-length project, Posthuman Pop, blends his interests in hip hop and technology by engaging contemporary popular music through the lens of posthuman theory. Recent and forthcoming publications include an exploration of the Mozart myth as it is presented in Peter Shaffer’s Amadeus and then parodied in an episode of The Simpsons (Journal of Popular Culture 46:3, 2013), an examination of the earliest iPod silhouette commercials and the notions of freedom they are meant to convey (Oxford Handbook of Mobile Music Studies), and a long comparative review of Kanye and Jay Z’s Watch the Throne and the Roots’ Undun (Journal for the Society of American Music). He is also co-editing with Ali Colleen Neff a special issue of the Journal of Popular Music Studies titled “Sounding Global Southernness.” He currently serves on the executive committee of the International Association for the Study of Popular Music-US Branch and is working on an oral history project of the organization. From June 2011 through May 2013, he served as Editor of the IASPM-US website, expanding the site’s offerings with the cutting-edge work of popular music scholars from around the world. You can contact him at justindburton [at] gmail [dot] com.

REWIND!…If you liked this post, you may also dig:

Musical Encounters and Acts of Audiencing: Listening Cultures in the American Antebellum-Daniel Cavicchi

Musical Objects, Variability and Live Electronic Performance-Primus Luta

“Further Experiments in Agent-based Musical Composition”-Andreas Duus Pape

Going Hard: Bassweight, Sonic Warfare, & the “Brostep” Aesthetic

[Editor's Note 01/24/14 10:00 am: this post has been corrected. In response to a critique from DJ Rupture, the author has apologized for an initial misquoting of an article by Julianne Escobedo Shepherd, and edited the phrase in question. Please see Comments section for discussion]

Time to ring the bell: this year, Sounding Out! is opening a brand-new stream of content to run on Thursdays. Every few weeks, we’ll be bringing in a new Guest Editor to curate a series of posts on a particular theme that opens up new ground in areas of thought and practice where sound meets media. Most of our writers and editors will be new to the site, and many will be joining us from the ranks of the Sound Studies and Radio Studies Scholarly Interest Groups at the Society for Cinema and Media Studies, as well as from the Sound Studies Caucus of the American Studies Association. I’m overjoyed to come on board as SCMS/ASA Editor to help curate this material, working with my good friends here at SO!

For our first Guest series, let me welcome Justin Burton, Assistant Professor of Music at Rider University, where he teaches in the Popular Music and Culture program. Justin also serves on the executive committee of the International Association for the Study of Popular Music-US Branch. We’re honored to have Justin help us launch this new stream.

His series? He calls it The Wobble Continuum. Let’s follow him down into the low frequencies to learn more …

Neil Verma

Things have gotten wobbly. The cross-rhythms of low-frequency oscillations (LFO) pulsate through dance and pop music, bubbling up and dropping low across the radio dial. At its most extreme, the wobble both rends and sutures, tearing at the rhythmic and melodic fabric of a song at the same time that it holds it together on a structural level. In this three-part series, Mike D’Errico, Christina Giacona, and Justin D Burton listen to the wobble from a number of vantage points, from the user plugged into the Virtual Studio Technology (VST) of a Digital Audio Workstation (DAW) to the sounds of the songs themselves to the listeners awash in bass tremolos. In remixing these components—musician, music, audience—we trace the unlikely material activities of sounds and sounders.

In our first post, Mike will consider the ways a producer working with a VST is not simply inputting commands but is collaborating with an entire culture of maximalism, teasing out an ethics of brostep production outside the usual urge for transcendence. In the second post, Christina will listen to the song “Braves” by A Tribe Called Red (ATCR), which, through its play with racist signifiers, remixes performer and audience, placing ATCR and its listeners in an uncanny relationship. In the final post, Justin will work with Karen Barad’s theory of posthuman performativity to consider how the kind of hypermasculinist and racist signifiers discussed in Mike’s and Christina’s pieces embed themselves in listening bodies that become sounding bodies. In each instance, we wade into the wobble listening for the flow of activity among the entanglement of producer, sound, and listener while also keeping our ears peeled for the cross-rhythms of (hyper)masculinist and racist materials that course through and around the musical phenomena.

So hold on tight. It’s about to drop.

Justin Burton

As an electronic dance music DJ and producer, an avid video gamer, a cage fighting connoisseur, and a die-hard Dwayne “The Rock” Johnson fan, I’m no stranger to fist pumps, headshots, and what has become a general cultural sensibility of “hardness” associated with “bro” culture. But what broader affect lies behind this culture? Speaking specifically to recent trends in popular music, Simon Reynolds describes a “digital maximalism,” in which cultural practice involves “a hell of a lot of inputs, in terms of influences and sources, and a hell of a lot of outputs, in terms of density, scale, structural convolution, and sheer majesty” (“Maximal Nation”). We could broaden this concept of maximalism, both (1) to describe a wider variety of contemporary media (from film to video games and mobile media), and (2) to theorize it as a tool for transducing affect between various media, and among various industries within global capitalism. The goal of this essay is to tease out the ways in which maximalist techniques of one kind of digital media production—dubstep—become codified as broader social and political practices. Indeed, the proliferation of maximalism suggests that hypermediation and hypermasculinity have already become dominant aesthetic forms of digital entertainment.

"DJ Pauly D" by Flickr user Eva Rinaldi, CC-BY-SA-2.0

“DJ Pauly D” by Flickr user Eva Rinaldi, CC-BY-SA-2.0

More than any other electronic dance music (EDM) genre, dubstep—and the various hypermasculine cultures in which it has bound itself—has wholeheartedly embraced “digital maximalism” as its core aesthetic form. In recent years, the musical style has emerged as both the dominant idiom within EDM culture, as well as the soundtrack to various hypermasculine forms of entertainment, from sports such as football and professional wrestling to action movies and first-person shooter video games. As a result of the music’s widespread popularity within the specific cultural space of a post-Jersey Shore “bro” culture, the term “brostep” has emerged as an accepted title for the ultra-macho, adrenaline-pumping performances of masculinity that have defined contemporary forms of digital entertainment. This essay posits digital audio production practices in “brostep” as hypermediated forms of masculinity that exist as part of a broader cultural and aesthetic web of media convergence in the digital age.

CONVERGENCE CULTURES

Media theorist Henry Jenkins defines “convergence culture” as “the flow of content across multiple media platforms, the cooperation between multiple media industries, and the migratory behavior of media audiences who will go almost anywhere in search of the kinds of entertainment experiences they want” (Convergence Culture, 2). The most prominent use of “brostep” as a transmedial form comes from video game and movie trailers. From the fast-paced, neo-cyborg and alien action thrillers such as Transformers (2007-present), Cowboys & Aliens (2011), and G.I. Joe (2012), to dystopian first-person shooter video games such as Borderlands (2012), Far Cry 3 (2012), and Call of Duty: Black Ops 2 (2012), modulated oscillator wobbles and bass portamento drops consistently serve as sonic amplifiers of the male action hero at the edge.

Assault rifle barrages are echoed by quick rhythmic bass and percussion chops, while the visceral contact of pistol whips and lobbed grenades marks ruptures in time and space as slow motion frame rates mirror bass “drops” in sonic texture and rhythmic pacing. “Hardness” is the overriding affect here; compressed, gated kick and snare drum samples combine with coagulated, “overproduced” basslines made up of multiple oscillators vibrating at broad frequency ranges, colonizing the soundscape by filling every chasm of the frequency spectrum. The music—and the media forms with which it has become entwined—has served as the affective catalyst and effective backdrop for the emergence of an unabashedly assertive, physically domineering, and adrenaline-addicted “bro” culture.

Film theorist Lorrie Palmer argues for a relational link among gender, technology, and modes of production through hypermasculinity in these types of films and video games. Some definitive features of this convergence of hypermediation and hypermasculinity include an emphasis on “excess and spectacle, the centrality of surface over substance… ADHD cinema… transitory kinetic sensations that decenter spatial legibility… an impact aesthetic, [and] an ear-splitting, frenetic style” (“Cranked Masculinity,” 7). Both Robin James and Steven Shaviro have defined the overall aesthetic of these practices as “post-cinematic”: a regime “centered on computer games” and emphasizing “the logic of control and gamespace, which is the dominant logic of entertainment programming today.” On a sonic level, “brostep” aligns itself with many of these cinematic descriptions. Julianne Escobedo Shepherd describes the style of Borgore, one particular dubstep DJ and producer, as “misogy blow-job beats.” Other commenters have made more obvious semiotic connections between filmic imagery and the music, as Nitsuh Abebe describes brostep basslines as conjuring “obviously cool images like being inside the gleaming metal torso of a planet-sized robot while it punches an even bigger robot.”

“Ultra Music Festival 2013” by Wikimedia user Vinch, CC-BY-SA-3.0

MASCULINITY AND DIGITAL AUDIO PRODUCTION

While the sound has developed gradually over at least the past decade, the ubiquity of the distinctive mid-range “brostep” wobble bass can fundamentally be attributed to a single instrument. Massive, a software synthesizer developed by the Berlin and Los Angeles-based Native Instruments, combines the precise timbral shaping capabilities of modular synthesizers with the real-time automation capabilities of digital waveform editors. As a VST (Virtual Studio Technology) plug-in, the device exemplifies the inherently transmedial nature of many digital tools, bridging studio techniques between digital audio workstations and analog synthesis, and acting as just one of many control signals within the multi-windowed world of digital audio production. In this way, Massive may be characterized as an intersonic control network in which sounds are controlled and modulated by other sounds through constantly shifting software algorithms. Through analysis of the intersubjective control network of a program such as Massive we are able to hear the convergence of hypermediation and hypermasculinity as aesthetic forms.

“Massive:Electronica” by Flickr user matt.searles, CC-BY-NC-SA-2.0

Media theorist Mara Mills details the notion of technical “scripts” embedded both within technological devices as well as user experiences. According to Mills, scripts are best defined as “the representation of users embedded within technology… Designers do not simply ‘project’ users into [technological devices]; these devices are inscribed with the competencies, tolerances, desires, and psychoacoustics of users” (“Do Signals Have Politics?” 338). In short, electroacoustic objects have politics, and in the case of Massive, the politics of the script are quite conventional and historically familiar. The rhythmic and timbral control network of the software aligns itself with what Tara Rodgers describes as a long history of violent masculinist control logics in electronic music, from DJs “battling” to producers “triggering” a sample with a “controller” or “executing” a programming “command” or typing a “bang” to send a signal (“Towards a Feminist Historiography of Electronic Music,” 476).

In Massive, the primary control mechanism is the LFO (low frequency oscillator), an infrasonic electronic signal whose primary purpose is to modulate various parameters of a synthesizer tone. Dubstep artists most frequently apply the LFO to a low-pass filter, generating a control algorithm in which an LFO filters and masks specific frequencies at a periodic rate (thus creating a “wobbling” frequency effect), which, in turn, modulates the cutoff frequency of up to three oscillating frequencies at a time (maximizing the “wobble”). When this process is applied to multiple oscillators simultaneously—each operating at disparate levels of the frequency spectrum—the effect is akin to a spectral and spatial form of what Julian Henriques calls “sonic dominance.” Massive allows the user to record “automations” on the rhythm, tempo, and quantization level of the bass wobble, effectively turning the physical gestures initially required to create and modulate synthesizer sounds—such as knob-turning and fader-sliding—into digitally-inscribed algorithms.
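
As a rough sketch of that signal chain (not Massive’s actual filter design; the rates and ranges here are illustrative assumptions, and only NumPy is assumed), an infrasonic sine LFO can sweep the cutoff of a simple one-pole low-pass filter over a sawtooth bass to produce the basic wobble:

```python
import numpy as np

SR = 44100
t = np.arange(int(SR * 2.0)) / SR            # two seconds of audio

bass = 2 * ((55 * t) % 1.0) - 1              # 55 Hz sawtooth oscillator
lfo = 0.5 * (1 + np.sin(2 * np.pi * 2 * t))  # 2 Hz infrasonic LFO, swept 0..1
cutoff = 100 + lfo * 2000                    # cutoff wobbles between 100 and 2100 Hz

# One-pole low-pass whose coefficient tracks the moving cutoff frequency.
alpha = 1 - np.exp(-2 * np.pi * cutoff / SR)
wobble = np.zeros_like(bass)
y = 0.0
for i in range(len(bass)):
    y += alpha[i] * (bass[i] - y)            # y[n] = y[n-1] + a[n] * (x[n] - y[n-1])
    wobble[i] = y
```

Running several detuned copies of the oscillator at different octaves through the same modulated filter, as the paragraph above describes, is what scales the effect up from a single wobble toward full-spectrum maximalism.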

SONIC WARFARE AND THE ETHICS OF VIRTUALITY

By positing the logic of digital audio production within a broader network of control mechanisms in digital culture, I am not simply presenting a hermeneutic metaphor. Convergence media has not only shaped the content of various multimedia but has redefined digital form, allowing us to witness a clear—and potentially dangerous—virtual politics of viral capitalism. The emergence of a Military Entertainment Complex (MEC) is the most recent instance of this virtual politics of convergence, as it encompasses broad phenomena including the use of music as torture, the design of video games for military training (and increasing collaboration between military personnel and video game designers in general), and drone warfare. The defining characteristic of this political and virtual space is a desire to simultaneously redefine the limits of the physical body and overcome those very limitations. The MEC, as well as broader digital convergence cultures, has molded this desire into a coherent hegemonic aesthetic form.

Following videogame theorist Jane McGonigal, virtual environments push the individual to “work at the very limits of their ability” in a state of infinite self-transition (Reality is Broken, 24). Yet, automation and modular control networks in the virtual environments of digital audio production continue to encourage the historical masculinist trope of “mastery,” thus further solidifying the connection between music and military technologies sounded in the examples above. In detailing hypermediation and hypermasculinity as dominant aesthetic forms of digital entertainment, it is not my goal to simply reiterate the Adornian nightmare of “rhythm as coercion,” or the more recent Congressional fears over the potential for video games and other media to cause violence. The fact that music and video games in the MEC are simultaneously being used to reinscribe the systemic violence of the Military Industrial Complex, as well as to create virtual and actual communities (DJ culture and the proliferation of online music and gaming communities), pinpoints precisely its hegemonic capabilities.

“Gear porn” by Flickr user Matthew Trentacoste, CC-BY-NC-ND-2.0

In the face of the perennial “mastery” trope, I propose that we must develop a relational ethics of virtuality. While it seems to offer the virtue of a limitless infinity for the autonomous (often male) individual, technological interfaces form the skin of the ethical subject, establishing the boundaries of a body both corporeal and virtual. In the context of digital audio production, then, the producer is not struggling against the technical limitations of the material interface, but rather emerging from the multiple relationships forming at the interface between one’s actual and virtual self and embracing a contingent and liminal identity; to quote philosopher Adriana Cavarero, “a fragile and unmasterable self” (Relating Narratives, 84).

Featured Image:  Skrillex – Hovefestivalen 2012 by Flickr User NRK P3

Mike D’Errico is a PhD student in the UCLA Department of Musicology and a researcher at the Center for Digital Humanities. His research interests and performance activities include hip-hop, electronic dance music, and sound design for software applications. He is currently working on a dissertation that deals with digital audio production across media, from electronic dance music to video games and mobile media. Mike is the web editor and social media manager for the US branch of the International Association for the Study of Popular Music, as well as two UCLA music journals, Echo: a music-centered journal and Ethnomusicology Review.

REWIND! . . .If you liked this post, you may also dig:

Toward a Practical Language for Live Electronic Performance-Primus Luta

Music Meant to Make You Move: Considering the Aural Kinesthetic-Dr. Imani Kai Johnson

Listening to Robots Sing: GarageBand on the iPad-Aaron Trammell

Live Electronic Performance: Theory And Practice

This is the third and final installment of a three-part series on live electronic music. To review part one, “Toward a Practical Language for Live Electronic Performance,” click here. To peep part two, “Musical Objects, Variability and Live Electronic Performance,” click here.

“So often these laptop + controller sets are absolutely boring but this was a real performance – none of that checking your emails on stage BS. Dude rocked some Busta, Madlib, Aphex Twin, Burial and so on…”

This quote, from a blogger about Flying Lotus’ 2008 Mutek set, speaks volumes about audience expectations of live laptop performances. First, this blogger acknowledges that the perception of laptop performances is that they are generally boring, using the “checking your email” adage to drive home the point. He then goes on to express what he perceived to set Lotus’s performance apart from that standard. Oddly enough, it wasn’t the individualism of Lotus’s sound; rather, it was his particular selection and configuration of other artists’ work into his mix – a trademark of the DJ.

Contrasting this with the review of the 2011 Flying Lotus set that began this series, both reveal how context and expectations are very important to the evaluation of live electronic performance. While the 2008 piece praises Lotus for a DJ-like approach to his live set, the 2011 critique was that the performance was more of a DJ set than a live electronic performance. What changed in the years between these two sets was the familiarity with the style of performance (from Lotus and the various other artists on the scene with similar approaches), providing a shift in expectations. What both reviews lack, however, is a language to provide the musical context for their praise or critique – a language which this series has sought to elucidate.

To put live electronic performances into the proper musical context, one must determine what type of performance is being observed. In this final part of the series, I arrive at four helpful distinctions to compare and describe live electronic performance, continuing this project of developing a usable aesthetic language and enabling a critical conversation about the artform. The first of the four distinctions between different types of live electronic music performance concerns the manipulation of fixed pre-recorded sound sources into variable performances. The second concerns the physical manipulation of electronic instruments into variable performances. The third demarcates the manipulation of electronic instruments into variable performances by the programming of machines. The last is an integrated category that can be expanded to include any and all combinations of the previous three.

Essential to all categories of live electronic music performance, however, is the performance’s variability, without which music – and its concomitant listening practices – transforms from a “live” event to a fixed musical object. The trick to any analysis of such performances is to remember that, while these distinctions are easy to maintain in theory, in performance they quickly blur one into the other, and often the intensity and pleasure of live electronic music performance comes from their complex combinations.

Flying Lotus at Treasure Island, San Francisco on 10-15-2011, image by Flickr User laviddichterman

For example, an artist who performs a set using solely vinyl with nothing but two turntables and a manual crossfading mixer falls under the first distinction of live electronic music performance. Technically, the turntables and manual crossfading mixer are machines, but they are being controlled manually rather than performing on their own as machines. If the artist includes a drum machine in the set, however, it becomes a hybrid (the fourth distinction), depending on whether the drum machine is being triggered by the performer (physical manipulation) or playing sequences (machine manipulation) or both. Furthermore, if the drum machine triggers samples, it becomes machine manipulation (third distinction) of fixed pre-recorded sounds (first distinction). If the drum machine is used to play back sequences while the artist performs a turntablist routine, the turntable becomes the performance instrument while the drum machine holds as a fixed source. All of these relationships can be realized by a single performer over the course of a single performance, making the whole set of the hybrid variety.

While in practice the hybrid set is perhaps the most common, it’s important to understand the other three distinctions, as each of them comes with its own set of limitations which define its potential variability. Critical listening to a live performance includes identifying when these shifts happen and how they change the variability of the set. Through combination, their individual limitations can be overcome, increasing the overall variability of the performance. One can see a performer playing the drum machine with pads and correlate that physicality with the sound produced, and then see them shift to playing the turntable and know that the drum machine has shifted to a machine performance. In this example the visual cues would be clear indicators, but if one is familiar with the distinctions the shifts can be noticed just from the audio.

This blending of physical and mechanical elements in live music performance exposes the modular nature of live electronic performance and its instruments, teaching us that the instruments themselves shouldn’t be looked at as distinction qualifiers; what matters is their combination in the live rig, and the variability that it offers. Where we typically think of an instrument as singular, within live electronic music it is perhaps best to think of the individual components (e.g., turntables and drum machines) as the musical objects of the live rig as instrument.

Flying Lotus at the Echoplex, Los Angeles, Image by Flickr User sunny_J

Percussionists are a close acoustic parallel to the modular musical rig of electronic performers. While there are percussion players who use a single percussive instrument for their performances, others will have a rig of component elements to use at various points throughout a set. The electronic performer inherits such a configuration from keyboardists, who typically have a rig of keyboards, each with different sounds, to be used throughout a set. Availing themselves of a palette of sounds allows keyboardists to break out of the limitations of timbre and verge toward the realm of multi-instrumentalists.  For electronic performers, these limitations in timbre only exist by choice in the way the individual artists configure their rigs.

From the perspective of users of traditional instruments, a multi-instrumentalist is one who goes beyond the standard of single-instrument musicianship, representing a musician well versed at performing on a number of different instruments, usually of different categories. In the context of electronic performance, the definition of instrument is so changed that it is more practical to think not of multi-instrumentalists but multi-timbralists. The multi-timbralist can be understood as the standard in electronic performance. This is not to say there are not single-instrument electronic performers; however, it is practical to think about the live electronic musician’s instrument not as a singular musical object, but rather a group of musical objects (timbres) organized into the live rig. Because these rigs can be comprised of a nearly infinite number of musical objects, the electronic performer has the opportunity to craft a live rig that is uniquely their own. The choices they make in the configuration of their rig will define not just the sound of their performance, but the degrees of variability they can control.

Because the electronic performer’s instrument is the live rig comprised of multiple musical objects, one of the primary factors involved in the configuration of the rig is how the various components interact with one another over the time-dependent course of a performance. In a live tape loop performance, the musician may use a series of cassette players with an array of effects units and a small mixer. In such a rig, the mixer is the primary means of communication between objects. In this type of rig, however, the communication isn’t direct. The objects cannot directly communicate with each other; rather, the artist is the mediator. It is the artist who determines when the sound from any particular tape loop is fed to an effect, or at what levels the effects return sound in relation to the loops. While watching a performance such as this, one would expect the performer to be very involved in physically manipulating the various musical objects. We can categorize this as an unsynchronized electronic performance, meaning that the musical objects employed are not locked into the same temporal relations.

Big Tape Loops, Image by Flickr User choffee

The key difference between unsynchronized and synchronized performance rigs is the amount of control over the performance that can be left to the machines. The benefit of synchronized performance rigs is that they allow for greater complexity either in configuration or control. The value of unsynchronized performance rigs is that they have a more natural and less mechanized feel, as the timing flows from the performer’s physical body. Neither could be understood as better than the other, but in general they do make for separate kinds of listening experiences, which the listener should be aware of in evaluating a performance. Expectations should shift depending upon whether or not a performance rig is synchronized.

This notion of a synchronized performance rig should not only be understood as an inter-machine relationship. With the rise of digital technology, many manufacturers developed workstation-style hardware which could perform multiple functions on multiple sound sources with a central synchronized control. The Roland SP-404 is a popular sampling workstation, used by many artists in a live setting. Within this modest box you get twelve voices of sample polyphony, which can be organized with the internal sequencer and processed with onboard effects. However, a performer may choose not to utilize the sequencer at all, and as such the device can be performed unsynchronized, just by triggering the pads. In fact, in recent years there has been a rise of drum pad players or finger drummers who perform using hardware machines without synchronization. Going back to our distinctions, a performance such as this would be a hybrid of the physical manipulation of fixed sources with the physical manipulation of an electronic instrument. From this qualification, we know to look for extensive physical effort in such performances as an indicator of the artist’s agency over the variable performance.

Now that we’ve brought synchronization into the discussion, it makes sense to talk about what is often the main means of communication in the live performance rig – the computer. The ultimate benefit of a computer is the ability to process a large number of calculations per computational cycle. Put another way, it allows users to act on a number of musical variables in single functions. Practically, this means the ability to store, organize, recall, and even perform a number of complex variables. With the advent of digital synthesizers, computers were being used in workstations to control everything from sequencing to patch sound-design data. In studios, computers quickly replaced mixing consoles and tape machines (even their digital equivalents like the ADAT), becoming the nerve center of the recording process. Eventually all of these functions and more were able to fit into the small and portable laptop computer, bringing that processing power in a practical way to the performance stage.

Flying Lotus and his Computer, All Tomorrow’s Parties 2011, Image by Flickr User jaswooduk

A laptop can be understood as a rig in and of itself, comprised of a combination of software and plugins as musical objects, which can be controlled internally or via external controllers. If there were only two software choices and ten plugins available for laptops, there would be over seven million permutations possible. While it is entirely possible (and for many artists practical) for the laptop to be the sole object of a live rig, the laptop is often paired with one or more controllers. The number of controllers available is nowhere near the volume of software on the market, but the possible combinations of hardware controllers take the laptop + controller + software combination possibilities toward infinity. With both hardware and software there is also the possibility of building custom musical objects that add to the potential uniqueness of a rig.
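
For what it’s worth, the “over seven million” figure works out if one counts ordered chains: with two choices of software and all ten plugins arranged in some order,

\[
2 \times 10! = 2 \times 3{,}628{,}800 = 7{,}257{,}600
\]

distinct configurations, before even counting partial subsets of the plugins.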

Unfortunately, quite often it is impossible to know exactly what range of tools is being utilized within a laptop strictly by looking at an artist on stage. This is what leads to probably the biggest misconception about the performing laptop musician. As common as the musical object may look on the stage, housed inside of it can be the most unique and intricate configurations music (yes, all of music) has ever seen. The reductionist thought that laptop performers aren’t “doing anything but checking email” is directly tied to the acousmatic nature of the objects as instruments. We can hear the sounds, but determining the sources and understanding the processes required to produce them is often shrouded in mystery. Technology has arrived at the point where what one performs live can precisely replicate what one hears in recorded form, making it easy to leap to the conclusion that all laptop musicians do is press play.

Indeed some of them do, but to varying degrees a large number of artists are actively doing more during their live sets. A major reason for this is that one of the leading Digital Audio Workstations (DAWs) of today also doubles as a performance environment. Designed with the intent of taking the DAW to the stage, Ableton Live allows artists to have an interface that facilitates the translation of electronic concepts from the studio to the stage. There is a world of things possible just by learning the Live basics, but there’s also a rabbit hole of advanced functions, all the way to the modular Max for Live environment, which lies on the frontier of discovering new variables for sound manipulation. For many people, however, the software is powerful enough at the basic level of use to create effective live performances.

Sample Screenshot from a performer’s Ableton Live set up for an “experimental and noisy performance” with no prerecorded material, Image by Flickr User Furibond

In its most basic use case, Ableton Live is set up much like a DJ rig, with a selection of pre-existing tracks queued up as clips which the performer blends into a uniform mix, with transitions and effects handled within the software. The possibilities expand out from that: multi-track parts of a song separated into different clips so the performer can take different parts in and out over the course of the performance; a plugin drum machine providing an additional sound source on top of the track(s); or, alternately, the drum machine holding a sequence while track elements are laid on top of it. With the multitude of plugins available, just the combination of multi-track Live clips with a single soft-synth plugin lends itself to near-infinite combinations. The variable possibilities of this type of set, even while not exploiting the breadth of variable possibilities presented by the gear, clearly point to the artist’s agency in performance.

Within the context of both the DJ set and the Ableton Live set, synchronization plays a major role in contextualization. Both categories of performance can be either synchronized or unsynchronized. The tightest of unsynchronized sets will sound synchronized, while the loosest of synchronized sets will sound unsynchronized. This plays very much into audience perception of what they are hearing and the performers’ choice of synchronization and tightness can be heavily influenced by those same audience expectations.

A second performance screen capture by the same artist, this time using pre-recorded midi sequences, Image by Flickr User Furibond

A techno set is expected to maintain somewhat of a locked groove, indicative of a synchronized performance. A synchronized rig, either on the DJ side (Serato utilizing automated beat matching) or on the Ableton side (time stretch and auto BPM detection synced to MIDI), can make this a non-factor for the physical performance, and so in listening to such a performance it is the variability of other factors which reveals the artist’s control. For the DJ, those factors would include selection, transitions, and effects use. For the Ableton user, they can include all of those things as well as control over the individual elements in tracks and potentially other sound sources.
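
The arithmetic such synchronization automates is simple to sketch (the BPM values below are invented, and Serato and Live each implement this in their own ways): matching tempos means scaling playback rate by the ratio of the tempos, and a plain varispeed change of that size also shifts pitch by 12 × log2(ratio) semitones, which is exactly why pitch-preserving time-stretch matters for keeping a locked groove in key.

```python
import math

def beatmatch(track_bpm, target_bpm):
    """Playback-rate ratio needed to sync tempos, and the pitch shift
    (in semitones) that a plain varispeed rate change would cause."""
    ratio = target_bpm / track_bpm
    semitones = 12 * math.log2(ratio)
    return ratio, semitones

ratio, shift = beatmatch(track_bpm=126.0, target_bpm=128.0)
print(f"rate x{ratio:.4f}, varispeed pitch shift {shift:+.2f} semitones")
```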

On the unsynchronized end of the spectrum, a vinyl DJ could accomplish the same mix as the synchronized DJ set, but it would physically require more effort on their part to keep all of the selections in time. This would mean they might have to limit exerting control over the other variables. An unsynchronized Live set would utilize the software primarily as a sound source, without MIDI, placing the timing in the hands of the performer. With the human element added to the timing, it would be more difficult to produce the machine-like timing of the other sets. This doesn’t mean that it couldn’t be effective, but there would be an audible difference in this type of set compared to the others.

What we’ve established is that through the modular nature of the electronic musician’s rig as an instrument, from synthesizer keyboards to Ableton Live, every variable consideration combines to produce infinite possibilities. Where the trumpet is limited in timbre, range, and dynamics, the turntable has infinite timbres; its range is the full spectrum of human hearing; and its dynamics are directly proportional to the output. The limitations of the electronic musician’s instrument appear only in electrical current constraints, processor speed limits, the selection of components, and the limitations of the human body.

Flying Lotus at Electric Zoo, 2010, Image by Flickr User TheMusic.FM

Within these constraints, however, we have only begun touching the surface of possibilities. There are combinations that have never happened, variables that haven’t come close to their full potential, and a multitude of variables that have yet to be discovered. One thing that the electronic artist can learn from jazz toward maximizing that potential is the notion of play, as epitomized by jazz improvisation. For jazz, improvisation opened up the possibilities of the form, which impacted performance and composition. I contend that the electronic artist can push the boundaries of variable interaction by incorporating the ability to play from the rig, both in their physical performance and by giving the machine its own sense of play. Within this play lie the variables which I believe can push electronic music into the jazz of tomorrow.

Featured Image by Flickr User Juha van ‘t Zelfde

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. In 2014 he will be releasing an expanded version of this series as a book entitled “Toward a Practical Language: Live Electronic Performance”. He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions Podcast series with his monthly show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.

REWIND! . . .If you liked this post, you may also dig:

Evoking the Object: Physicality in the Digital Age of Music–Primus Luta

Experiments in Agent-based Sonic Composition–Andreas Pape

Calling Out To (Anti)Liveness: Recording and the Question of Presence–Osvaldo Oyola
