SO! Reads: David Novak’s Japanoise: Music at the Edge of Circulation

We are living in strange times. Our experiences, especially our musical experiences, have become fragmented and odd. The album has been declared dead, live concerts are now silent, listened to on headphones, and some of our favorite performers exist in a faux-holographic space between life and death. The fragmentation of our musical experiences is indicative of a larger set of changes that encourage sound studies to pay attention to fragmented, outlying, and diffuse sonic phenomena. In his new book Japanoise: Music at the Edge of Circulation, part of Jonathan Sterne and Lisa Gitelman’s “Sign, Storage, Transmission” series at Duke University Press, David Novak turns to one such fragmented and outlying realm, Noise music. Novak’s contribution to sound studies is to encourage us to deal with the fragmented complexity of sonic environments and contexts, especially those where noise plays a crucial part.

The past decade has seen growing public attention to noise as pollution, as problem, and as poison. Examples of noise as a social issue needing immediate response abound, but one letter to the editor from the Wisconsin Rapids Tribune epitomizes the way that noise is sometimes read as a problem to be overcome. Novak’s book is one of a few recent books, including Eldritch Priest’s Boring Formless Nonsense, Greg Hainge’s Noise Matters, and Joseph Nechvatal’s Immersion Into Noise, that complicate the idea of noise as problem. What sets Novak’s book apart from these is how his ethnographic approach allows him to approach Noise music from both the macro-perspective of its historical context and the micro-lens of his personal relationship to it.

For Novak, Noise music is a trans-cultural, transnational interaction that is both material and abstract. His analysis works to blur the boundary between large-scale networks of exchange and highly individuated experience. Novak relates the story of Noise music as originating in Japan in the 1980s. Noise musicians working separately caught the ears of American fans. Some of these fans were well-known musicians themselves, who brought Noise recordings, and eventually the performers themselves, to a wider U.S. audience. At the time, Noise was generally understood as taking one of two binary positions: either Noise music was a uniquely Japanese cultural expression, or it was a product of the Western imaginary motivated by the production of Japan as the anti-subject within modernity (24). Novak wisely recognizes the limited nature of these two positions and seeks a more sophisticated method of understanding the circulation that creates Noise music, contributing, ultimately, a theory of feedback. Here the transnational circulation of materials, ideas, and expressions constitutes a culture itself, one that is not distinct from either the Japanese or the U.S. manifestations of Noise music (17). This is a welcome contribution to compositional and intersectional perspectives on cultural exchange.

If Noise music is a circulation, a set of experiences and contexts, flows and scapes, ecologies and environments, then genre boundaries cannot adequately describe the contextual and historical exchange of sound. Though genre must be considered, Japanoise does not find Novak searching particularly rigorously here. He chooses two key Noise acts, the Nihilist Spasm Band from Canada and Merzbow from Japan, and describes their historical context, reception, and influence. But other than those descriptive basics, he is unsuccessful in finding anything new to say about Noise as genre. Concluding the chapter, he casually states that the existence of Noise threatens the boundaries of other musical genres. Though this fascinating statement would have been worthy of a chapter, and is certainly foundational to his central idea (that Noise music is diffuse), Novak misses an opportunity to better support these connections in his chapter on genre.

Most interesting, however, is Novak’s focus on the material conditions of the production of Noise music. In describing the diffuse flows and scapes of Noise music, he addresses a plurality of experience: from the technological to the spatial to the private dimensions of listening. These concerns put him in conversation with Louise Meintjes’ Sound of Africa and Julian Henriques’ Sonic Bodies. Like these scholars, Novak refuses to locate the material conditions of production as solely economic, technological, or cultural. Instead, Noise music results from an assemblage of conditions and possibilities. This is best exemplified by how Novak distinguishes the live music experience from the recorded. Here, Novak resists the neat distinction, long established in musicology, that hears the live experience as collective and interactive, and recorded music as individuated and passive. Instead, Novak suggests “liveness” and “deadness.” Liveness and deadness are not bound to the dichotomy of live performance and recording, but are rather two qualities that float through and with both experiences. “Liveness is about the connection between performance and embodiment… deadness, in turn, helps remote listeners recognize their affective experiences…” The experience of live Noise music, according to Novak, challenges the boundaries of what is expected when hearing live music. I have seen this in my own experiences standing in an audience surrounded by shrieking, booming, droning noises.

Truth be known, I’m as taken with Noise music as Novak. If his book confesses to being written by a critical but vehement fan, then I ought to confess the same of the music which I love so dearly. I had the chance to see Merzbow perform in Raleigh in August. He strummed obviously homemade instruments, turned fader pots, and concentrated intently on his laptop. A fan was crawling through the crowd on their hands and knees; occasionally they stood to sway, then returned to crawling. I thought that this behavior might seem odd, but in the context of Merzbow’s performance, it was as legitimate as any other. Through these odd behaviors, the fan demonstrated Novak’s conception of individuation within Noise music. The material conditions of the performance, the screeching Noise, made it impossible for me to ask the person what they were doing or feeling. I experienced the fan and the Noise in a tension from which there is no resolution. We were both uncomfortably located in a multiplicity of experiences. These experiences don’t resolve to a whole, but rather pulsate and echo and feed back into each other, intertwining with expectations of behavior, material conditions, and embodiment(s).

Japanoise raises many important questions. What social processes lead us to foreground the sonic experiences in our lives? And further, how does a critical understanding of these processes help to advance the work of understanding the power and politics of sound? But, for me, Novak’s work serves best to remind me of how much value is found in fragmented, diffuse, outlier experiences, like Noise music. Because sound occupies a crucial role in our social and political lives, Novak encourages us not to resolve tensions but rather to exist amongst them and hear them as lively and productive.

For those readers who might be unfamiliar with the music Novak describes, the book’s website has a fantastic collection of supplemental media for you to enjoy: http://www.japanoise.com/media/.

-

Seth Mulliken is a Ph.D. candidate in the Communication, Rhetoric, and Digital Media program at NC State. He does ethnographic research about the co-constitutive relationship between sound and race in public space. Concerned with ubiquitous forms of sonic control, he seeks to locate the variety of interactions, negotiations, and resistances through individual behavior, community, and technology that allow for a wide swath of racial identity productions. He is convinced ginger is an audible spice, but only above 15 kHz.

REWIND! . . . If you liked this post, you may also dig:

Tofu, Steak, and a Smoke Alarm: The Food Network’s Chopped & the Sonic Art of Cooking–Seth Mulliken

SO! Reads: Jonathan Sterne’s MP3: The Meaning of a Format–Aaron Trammell

The Noise of SB 1070: or Do I Sound Illegal to You?–Jennifer Stoever-Ackerman

Going Hard: Bassweight, Sonic Warfare, & the “Brostep” Aesthetic

[Editor's Note 01/24/14 10:00 am: this post has been corrected. In response to a critique from DJ Rupture, the author has apologized for an initial misquoting of an article by Julianne Escobedo Shepherd, and edited the phrase in question. Please see Comments section for discussion]

Time to ring the bell: this year, Sounding Out! is opening a brand-new stream of content to run on Thursdays. Every few weeks, we’ll be bringing in a new Guest Editor to curate a series of posts on a particular theme that opens up new ground in areas of thought and practice where sound meets media. Most of our writers and editors will be new to the site, and many will be joining us from the ranks of the Sound Studies and Radio Studies Scholarly Interest Groups at the Society for Cinema and Media Studies, as well as from the Sound Studies Caucus from the American Studies Association. I’m overjoyed to come on board as SCMS/ASA Editor to help curate this material, working with my good friends here at SO!

For our first Guest series, let me welcome Justin Burton, Assistant Professor of Music at Rider University, where he teaches in the Popular Music and Culture program. Justin also serves on the executive committee of the International Association for the Study of Popular Music-US Branch. We’re honored to have Justin help us launch this new stream.

His series? He calls it The Wobble Continuum. Let’s follow him down into the low frequencies to learn more …

Neil Verma

Things have gotten wobbly. The cross-rhythms of low-frequency oscillations (LFO) pulsate through dance and pop music, bubbling up and dropping low across the radio dial. At its most extreme, the wobble both rends and sutures, tearing at the rhythmic and melodic fabric of a song at the same time that it holds it together on a structural level. In this three-part series, Mike D’Errico, Christina Giacona, and Justin D Burton listen to the wobble from a number of vantage points, from the user plugged into the Virtual Studio Technology (VST) of a Digital Audio Workstation (DAW) to the sounds of the songs themselves to the listeners awash in bass tremolos. In remixing these components—musician, music, audience—we trace the unlikely material activities of sounds and sounders.

In our first post, Mike will consider the ways a producer working with a VST is not simply inputting commands but is collaborating with an entire culture of maximalism, teasing out an ethics of brostep production outside the usual urge for transcendence. In the second post, Christina will listen to the song “Braves” by A Tribe Called Red (ATCR), which, through its play with racist signifiers, remixes performer and audience, placing ATCR and its listeners in an uncanny relationship. In the final post, Justin will work with Karen Barad’s theory of posthuman performativity to consider how the kind of hypermasculinist and racist signifiers discussed in Mike’s and Christina’s pieces embed themselves in listening bodies that become sounding bodies. In each instance, we wade into the wobble listening for the flow of activity among the entanglement of producer, sound, and listener while also keeping our ears peeled for the cross-rhythms of (hyper)masculinist and racist materials that course through and around the musical phenomena.

So hold on tight. It’s about to drop.

Justin Burton

As an electronic dance music DJ and producer, an avid video gamer, a cage fighting connoisseur, and a die-hard Dwayne “The Rock” Johnson fan, I’m no stranger to fist pumps, headshots, and what has become a general cultural sensibility of “hardness” associated with “bro” culture. But what broader affect lies behind this culture? Speaking specifically to recent trends in popular music, Simon Reynolds describes a “digital maximalism,” in which cultural practice involves “a hell of a lot of inputs, in terms of influences and sources, and a hell of a lot of outputs, in terms of density, scale, structural convolution, and sheer majesty” (“Maximal Nation”). We could broaden this concept of maximalism, both (1) to describe a wider variety of contemporary media (from film to video games and mobile media), and (2) to theorize it as a tool for transducing affect between various media, and among various industries within global capitalism. The goal of this essay is to tease out the ways in which maximalist techniques of one kind of digital media production—dubstep—become codified as broader social and political practices. Indeed, the proliferation of maximalism suggests that hypermediation and hypermasculinity have already become dominant aesthetic forms of digital entertainment.

“DJ Pauly D” by Flickr user Eva Rinaldi, CC-BY-SA-2.0

More than any other electronic dance music (EDM) genre, dubstep—and the various hypermasculine cultures to which it has bound itself—has wholeheartedly embraced “digital maximalism” as its core aesthetic form. In recent years, the musical style has emerged as both the dominant idiom within EDM culture and the soundtrack to various hypermasculine forms of entertainment, from sports such as football and professional wrestling to action movies and first-person shooter video games. As a result of the music’s widespread popularity within the specific cultural space of a post-Jersey Shore “bro” culture, the term “brostep” has emerged as an accepted title for the ultra-macho, adrenaline-pumping performances of masculinity that have defined contemporary forms of digital entertainment. This essay posits digital audio production practices in “brostep” as hypermediated forms of masculinity that exist as part of a broader cultural and aesthetic web of media convergence in the digital age.

CONVERGENCE CULTURES

Media theorist Henry Jenkins defines “convergence culture” as “the flow of content across multiple media platforms, the cooperation between multiple media industries, and the migratory behavior of media audiences who will go almost anywhere in search of the kinds of entertainment experiences they want” (Convergence Culture, 2). The most prominent use of “brostep” as a transmedial form comes from video game and movie trailers. From the fast-paced, neo-cyborg and alien action thrillers such as Transformers (2007-present), Cowboys & Aliens (2011), and G.I. Joe (2012), to dystopian first-person shooter video games such as Borderlands (2012), Far Cry 3 (2012), and Call of Duty: Black Ops 2 (2012), modulated oscillator wobbles and bass portamento drops consistently serve as sonic amplifiers of the male action hero at the edge.

Assault rifle barrages are echoed by quick rhythmic bass and percussion chops, while the visceral contact of pistol whips and lobbed grenades marks ruptures in time and space as slow motion frame rates mirror bass “drops” in sonic texture and rhythmic pacing. “Hardness” is the overriding affect here; compressed, gated kick and snare drum samples combine with coagulated, “overproduced” basslines made up of multiple oscillators vibrating at broad frequency ranges, colonizing the soundscape by filling every chasm of the frequency spectrum. The music—and the media forms with which it has become entwined—has served as the affective catalyst and effective backdrop for the emergence of an unabashedly assertive, physically domineering, and adrenaline-addicted “bro” culture.

Film theorist Lorrie Palmer argues for a relational link among gender, technology, and modes of production through hypermasculinity in these types of films and video games. Some definitive features of this convergence of hypermediation and hypermasculinity include an emphasis on “excess and spectacle, the centrality of surface over substance… ADHD cinema… transitory kinetic sensations that decenter spatial legibility… an impact aesthetic, [and] an ear-splitting, frenetic style” (“Cranked Masculinity,” 7). Both Robin James and Steven Shaviro have defined the overall aesthetic of these practices as “post-cinematic”: a regime “centered on computer games” and emphasizing “the logic of control and gamespace, which is the dominant logic of entertainment programming today.” On a sonic level, “brostep” aligns itself with many of these cinematic descriptions. Julianne Escobedo Shepherd describes the style of Borgore, one particular dubstep DJ and producer, as “misogy blow-job beats.” Other commenters have made more obvious semiotic connections between filmic imagery and the music, as Nitsuh Abebe describes brostep basslines as conjuring “obviously cool images like being inside the gleaming metal torso of a planet-sized robot while it punches an even bigger robot.”

“Ultra Music Festival 2013″ by Wikimedia user Vinch, CC-BY-SA-3.0

MASCULINITY AND DIGITAL AUDIO PRODUCTION

While the sound has developed gradually over at least the past decade, the ubiquity of the distinctive mid-range “brostep” wobble bass can fundamentally be attributed to a single instrument. Massive, a software synthesizer developed by the Berlin and Los Angeles-based Native Instruments, combines the precise timbral shaping capabilities of modular synthesizers with the real-time automation capabilities of digital waveform editors. As a VST (Virtual Studio Technology) plug-in, the device exemplifies the inherently transmedial nature of many digital tools, bridging studio techniques between digital audio workstations and analog synthesis, and acting as just one of many control signals within the multi-windowed world of digital audio production. In this way, Massive may be characterized as an intersonic control network in which sounds are controlled and modulated by other sounds through constantly shifting software algorithms. Through analysis of the intersubjective control network of a program such as Massive we are able to hear the convergence of hypermediation and hypermasculinity as aesthetic forms.

“Massive:Electronica” by Flickr user matt.searles, CC-BY-NC-SA-2.0

Media theorist Mara Mills details the notion of technical “scripts” embedded both within technological devices and within user experiences. According to Mills, scripts are best defined as “the representation of users embedded within technology… Designers do not simply ‘project’ users into [technological devices]; these devices are inscribed with the competencies, tolerances, desires, and psychoacoustics of users” (“Do Signals Have Politics?” 338). In short, electroacoustic objects have politics, and in the case of Massive, the politics of the script are quite conventional and historically familiar. The rhythmic and timbral control network of the software aligns itself with what Tara Rodgers describes as a long history of violent masculinist control logics in electronic music, from DJs “battling” to producers “triggering” a sample with a “controller” or “executing” a programming “command” or typing a “bang” to send a signal (“Towards a Feminist Historiography of Electronic Music,” 476).

In Massive, the primary control mechanism is the LFO (low frequency oscillator), an infrasonic electronic signal whose primary purpose is to modulate various parameters of a synthesizer tone. Dubstep artists most frequently apply the LFO to a low-pass filter, generating a control algorithm in which an LFO filters and masks specific frequencies at a periodic rate (thus creating a “wobbling” frequency effect), which, in turn, modulates the cutoff frequency of up to three oscillating frequencies at a time (maximizing the “wobble”). When this process is applied to multiple oscillators simultaneously—each operating at disparate levels of the frequency spectrum—the effect is akin to a spectral and spatial form of what Julian Henriques calls “sonic dominance.” Massive allows the user to record “automations” on the rhythm, tempo, and quantization level of the bass wobble, effectively turning the physical gestures initially required to create and modulate synthesizer sounds—such as knob-turning and fader-sliding—into digitally-inscribed algorithms.
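The signal chain just described, in which an infrasonic LFO sweeps a low-pass filter's cutoff over a bass oscillator, can be sketched in a few lines of code. The sketch below is illustrative only: it is not Massive's actual algorithm, and the parameter values (a 55 Hz sawtooth, a 2 Hz LFO, a cutoff swept between 100 Hz and 2000 Hz) are hypothetical choices rather than settings from any particular patch.

```python
import math

def wobble_bass(freq=55.0, lfo_rate=2.0, seconds=1.0, sr=44100):
    """Illustrative wobble-bass sketch: a sawtooth oscillator fed through
    a one-pole low-pass filter whose cutoff is swept by a low-frequency
    oscillator (LFO). All parameter values are hypothetical."""
    samples = []
    y = 0.0  # low-pass filter state
    for i in range(int(seconds * sr)):
        t = i / sr
        # Naive sawtooth at the bass frequency, in the range (-1, 1)
        saw = 2.0 * ((t * freq) % 1.0) - 1.0
        # Infrasonic LFO, scaled to [0, 1], sets the "wobble" rate
        lfo = 0.5 * (1.0 + math.sin(2.0 * math.pi * lfo_rate * t))
        # The LFO sweeps the cutoff between 100 Hz and 2000 Hz
        cutoff = 100.0 + 1900.0 * lfo
        # One-pole low-pass: smoothing coefficient derived from the cutoff
        a = 1.0 - math.exp(-2.0 * math.pi * cutoff / sr)
        y += a * (saw - y)
        samples.append(y)
    return samples
```

In these terms, recording an “automation” amounts to replacing the fixed lfo_rate with a time-varying sequence of values: the digitally inscribed stand-in for the knob-turning and fader-sliding gestures described above.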

SONIC WARFARE AND THE ETHICS OF VIRTUALITY

By positing the logic of digital audio production within a broader network of control mechanisms in digital culture, I am not simply presenting a hermeneutic metaphor. Convergence media has not only shaped the content of various multimedia but has redefined digital form, allowing us to witness a clear—and potentially dangerous—virtual politics of viral capitalism. The emergence of a Military Entertainment Complex (MEC) is the most recent instance of this virtual politics of convergence, as it encompasses broad phenomena including the use of music as torture, the design of video games for military training (and increasing collaboration between military personnel and video game designers in general), and drone warfare. The defining characteristic of this political and virtual space is a desire to simultaneously redefine the limits of the physical body and overcome those very limitations. The MEC, as well as broader digital convergence cultures, has molded this desire into a coherent hegemonic aesthetic form.

Following videogame theorist Jane McGonigal, we can say that virtual environments push the individual to “work at the very limits of their ability” in a state of infinite self-transition (Reality is Broken, 24). Yet, automation and modular control networks in the virtual environments of digital audio production continue to encourage the historical masculinist trope of “mastery,” thus further solidifying the connection between music and military technologies sounded in the examples above. In detailing hypermediation and hypermasculinity as dominant aesthetic forms of digital entertainment, it is not my goal to simply reiterate the Adornian nightmare of “rhythm as coercion,” or the more recent Congressional fears over the potential for video games and other media to cause violence. The fact that music and video games in the MEC are simultaneously being used to reinscribe the systemic violence of the Military Industrial Complex, as well as to create virtual and actual communities (DJ culture and the proliferation of online music and gaming communities), pinpoints precisely its hegemonic capabilities.

“Gear porn” by Flickr user Matthew Trentacoste, CC-BY-NC-ND-2.0

In the face of the perennial “mastery” trope, I propose that we must develop a relational ethics of virtuality. While virtuality seems to offer a limitless infinity for the autonomous (often male) individual, technological interfaces form the skin of the ethical subject, establishing the boundaries of a body both corporeal and virtual. In the context of digital audio production, then, the producer is not struggling against the technical limitations of the material interface, but rather emerging from the multiple relationships forming at the interface between one’s actual and virtual self and embracing a contingent and liminal identity; to quote philosopher Adriana Cavarero, “a fragile and unmasterable self” (Relating Narratives, 84).

Featured Image:  Skrillex – Hovefestivalen 2012 by Flickr User NRK P3

Mike D’Errico is a PhD student in the UCLA Department of Musicology and a researcher at the Center for Digital Humanities. His research interests and performance activities include hip-hop, electronic dance music, and sound design for software applications. He is currently working on a dissertation that deals with digital audio production across media, from electronic dance music to video games and mobile media. Mike is the web editor and social media manager for the US branch of the International Association for the Study of Popular Music, as well as two UCLA music journals, Echo: a music-centered journal and Ethnomusicology Review.

REWIND! . . . If you liked this post, you may also dig:

Toward a Practical Language for Live Electronic Performance-Primus Luta

Music Meant to Make You Move: Considering the Aural Kinesthetic-Dr. Imani Kai Johnson

Listening to Robots Sing: GarageBand on the iPad-Aaron Trammell

Musical Encounters and Acts of Audiencing: Listening Cultures in the American Antebellum

Editor’s Note: Sound Studies is often accused of being a presentist enterprise, too fascinated with digital technologies and altogether too wed to the history of sound recording. Sounding Out!‘s last forum of 2013, “Sound in the Nineteenth Century,” addresses this critique by showcasing the cutting edge work of three scholars whose diverse, interdisciplinary research is located soundly in the era just before the advent of sound recording: Mary Caton Lingold (Duke), Caitlin Marshall (Berkeley), and Daniel Cavicchi (Rhode Island School of Design). In examining nineteenth century America’s musical practices, listening habits, and auditory desires through SO!‘s digital platform, Lingold, Marshall, and Cavicchi perform the rare task of showcasing how history’s sonics had a striking resonance long past their contemporary vibrations while performing the power of the digital medium as a tool through which to, as Early Modern scholar Bruce R. Smith dubs it, “unair” past auditory phenomena –all the while sharing unique methodologies that neither rely on recording nor bemoan their lack. The series began with Mary Caton Lingold‘s exploration of the materialities of Solomon Northup’s fiddling as self-represented in 12 Years a Slave. Last week, Caitlin Marshall treated us to a fascinating new take on Harriet Beecher Stowe’s listening practice and dubious rhetorical remixing of black sonic resistance with white conceptions of revolutionary independence. Daniel Cavicchi closes out “Sound in the Nineteenth Century” and 2013 with an excellent meditation on listening as vibrant and shifting historical entity. Enjoy! –Jennifer Stoever-Ackerman, Editor-in-Chief

 —

“To listen” is a straightforward enough verb, signifying a kind of hearing that is directed or attentive. Add an “er” suffix, however, and “listen” moves into a whole new realm: it is no longer something one does, an attentive response to stimuli, but rather something one is, a sustained role or occupation, even an identity. Everybody listens from time to time, but only some people adopt the distinct social category of “listener.”

And yet listeners have emerged in diverse historical and social contexts. Arnold Hunt, in his recent book The Art of Hearing, for example, points to the congregants of the Church of England in the late sixteenth and early seventeenth centuries, whose sermon-gadding and intense repetitive listening to preachers became a form of popular culture. Shane White and Graham White, in The Sounds of Slavery, argue that early nineteenth-century black slaves adopted listening, or “acting soundly,” as a way of being that gave everyday sounds—conversation, cries of exertion, hymns—multiple layers of meaning and a power unknown to white overseers. Jonathan Sterne, in The Audible Past, describes the post-Civil War culture of sound telegraphy, in which young working class men trained themselves to employ “audile technique” for bureaucratic purposes, rendering their hearing objective, standardized, and networked.

Physical manifestations of the growing standardization of listening, Dodge’s Institute of Telegraphy, circa 1910 – Valparaiso, Indiana, Image by Flickr User Mr. Shook

We might add our own contemporary iPod era to these examples. We live in a time, after all, when it is entirely acceptable to appear alone in public, ears connected to an iPod, head bobbing to the grooves of a vast archive of recorded music. Sampling, playlists, streaming–thanks to playback technologies, the U.S. has become a nation of obsessive listeners, and the power to “capture” a sound and re-hear it, something that began with the phonograph, remains a time-bending drama that can awaken people to their own aurality. Technologized listening, in fact, has spawned many of the icons of music discourse in the past 100 years: Edison’s tone testers in the 1910s, record-collecting jitterbugs in the 1930s, audiophiles of the Hi-Fidelity era in the 1950s, Beatles fans with their bedroom record players in the 1960s, the “chair guy” in Memorex’s famous ad campaign of 1980, dancing listeners silhouetted in iPod posters since 2003.

But I think also that phonograph-centric narratives have obscured earlier, equally powerful cultures of listeners. The focus of my recent research, for example, has been the world of antebellum concert audiences. Between 1830 and 1860, the United States developed concentrated population centers filled with boosters and recent migrants eager to embrace a life based on new kinds of economic opportunity. Shaping much of the urban experience was a growing commercialization of culture that generated new and multiple means of musical performance, including parades, museum exhibitions, pleasure gardens, band performances, and concerts. Together, these performances significantly enhanced the act of listening: for people used to having to make music for themselves in order to hear it, a condition common to most Americans before 1830, access to public performances by others provided an opportunity for working and middle-class whites (women, African Americans, and the poor were another matter) to stop worrying about making music and, with the purchase of a ticket, to solely, and at length, assume an audience role.

A young George Templeton Strong, Image from CUNY Baruch

The odd circumstance of purchasing the experience of listening provided class-striving urbanites with new possibilities for self-transformation. For many young, rural, white men, for example, arriving in the city for the first time to take clerking jobs in burgeoning merchant houses, being able to hear diverse performances of music was associated with a cosmopolitanism that brimmed with social possibility. Thus, for instance, Nathan Beekley, a young clerk, recently arrived in Philadelphia in 1849, found himself attending multiple performances of music several nights a week, including more and more appearances at the opera as a way to avoid “rowdies.” In New York City during the 1840s, George Templeton Strong, a young lawyer in Manhattan, derided his own musical abilities and instead attended every public musical event he could find, carefully chronicling his listening experiences and analyzing his reactions in a multi-volume journal. Walt Whitman, a young man on the make in Brooklyn and New York between 1838 and 1853, regularly attended every sound amusement he could, including the Bowery Theatre, dime museums, temperance lectures, political rallies, and opera, writing in Leaves of Grass, “I think I will do nothing for a long time but listen/And accrue what I hear into myself.”

This culture of listening was, in many ways, very much unlike ours. Despite an expanded access to performance, for instance, professional concerts before the mid-1850s were often understood as part of a wider ecology of sound. Very few listened to music in ways that we might expect today–focused on a “work,” in a concert hall, without distraction. Listening, in fact, was as much a matter of local happenstance as personal selection—a passing marching band, echoes of evening choir practice at a nearby church, an impromptu singing performance at a party. Such experiences were marked by the momentary thrill of spontaneity and discovery rather than the studied appreciation of familiarity; in any moment of hearing, it was difficult to know how long the encounter might be, or even what sounds, exactly, were being heard. Cities like Boston and New York were especially rich with such surprise encounters.

Thomas Benecke’s lithograph “Sleighing in New York” from 1855, which, among many other sounds, depicts musicians performing on the balcony of Barnum’s Museum on the corner of Broadway and Ann Street.

Francis Bennett, a young arrival to Boston in 1854, for example, encountered, in his first night in the city, a band concert and the “cries” from a “Negro meeting house,” and within weeks became enamored of fife and drum bands, often leaving work to follow one and then another as far as he dared. Young writer J. T. Trowbridge was more stationary but equally enthusiastic about what he heard from his New York rooming house in 1847: “The throngs of pedestrians mingled below, moving (marvelous to conceive) each to his or her ‘separate business and desire;’ the omnibuses and carriages rumbled and rattled past; while, over all, those strains of sonorous brass built their bridge of music, from the high café balcony to my still higher window ledge, spanning joy and woe, sin and sorrow, past and future….”

Music listeners were also often listeners of other forms of commercial sound, especially theater, oratory, and church services, which, together, comprised a complex sonic culture. This was especially reinforced by the physical spaces in which they shared such diverse aural experiences. In a rapidly growing society, there was often neither the time nor the immediate resources to construct buildings dedicated to specific uses; instead, existing structures–typically a “hall” or “opera house”–served mixed uses.

Metropolitan Hall in New York City, where concert singer Elizabeth Taylor Greenfield debuted in 1853. It also hosted abolitionist meetings, talks on women’s rights, and various other activities.

As historian Jeanne Kilde noted in When Church Became Theatre, evangelists in the Second Great Awakening often rented urban theaters for services; and congregations, in turn, rented churches to drama troupes, ventriloquists, and musicians to raise money. This “mixed-use” of buildings was reinforced by hearers, who often engaged in their own “mixed-use” understandings of what they heard. They evaluated sermons as they would a theatrical performance or found church choirs thrillingly entertaining rather than piously inspirational. Conversely, they listened to symphonic concerts with a religious solemnity.

This culture of antebellum, middle-class urban listeners didn’t last long, succumbing to the class sorting of post-Civil War social reformers, who mocked the indiscriminate over-exuberance of antebellum listeners as a kind of “mania” and a form of social disorder. As Lawrence Levine explains in Highbrow Lowbrow, over the course of the nineteenth century, developing a “musical ear” became increasingly paramount, reverence for great works of art shaped audience response, and listening became a specific skill to be learned. Music became something to appreciate, not simply hear. By the 1890s, a true listener was someone who, in the words of critic Henry Edward Krehbiel (in his enormously popular How to Listen to Music, from 1897), “will bring his fancy into union with that of the composer” (51).

“Man With the Musical Ear.” Arthur’s Home Magazine (September 1853): 167.

In many ways, the controlled silent listening favored by reformers directly paved the way for music technologies, like the phonograph, that similarly sought to control and manipulate listening. But it was the urban music listeners of the 1840s and 1850s who were responsible, in the first place, for identifying and accentuating the joys and possibilities of “just listening.”

Featured Image: Etching of Jenny Lind Singing at Castle Garden in New York City, 1851

Daniel Cavicchi is Dean of Liberal Arts and Professor of History, Philosophy, and the Social Sciences at Rhode Island School of Design. He is author of Listening and Longing: Music Lovers in the Age of Barnum and Tramps Like Us: Music and Meaning Among Springsteen Fans, and co-editor of My Music: Explorations of Music in Daily Life. His public work has included Songs of Conscience, Sounds of Freedom, an inaugural exhibit for the Grammy Museum in Los Angeles; the curriculum accompanying Martin Scorsese’s The Blues film series; and other projects with the Public Broadcasting System and the National Park Service. He is currently the editor of the Music/Interview series from Wesleyan University Press and serves on the editorial boards of American Music and Participations: the Journal of Audience Research.

tape reelREWIND! . . .If you liked this post, you may also dig:

“Como Now?: Marketing ‘Authentic’ Black Music,” –J. Stoever-Ackerman

Hearing the Tenor of the Vendler/Dove Conversation: Race, Listening, and the “Noise” of Texts –Christina Sharpe

How Svengali Lost His Jewish Accent–Gayle Wald

Live Electronic Performance: Theory And Practice

This is the third and final installment of a three-part series on live electronic music. To review part one, “Toward a Practical Language for Live Electronic Performance,” click here. To peep part two, “Musical Objects, Variability and Live Electronic Performance,” click here.

“So often these laptop + controller sets are absolutely boring but this was a real performance – none of that checking your emails on stage BS. Dude rocked some Busta, Madlib, Aphex Twin, Burial and so on…”

This quote, from a blogger describing Flying Lotus’ 2008 Mutek set, speaks volumes about audience expectations of live laptop performances. First, the blogger acknowledges that laptop performances are generally perceived as boring, using the “checking your email” adage to drive home the point. He then goes on to express what he perceived set Lotus’s performance apart from that standard. Oddly enough, it wasn’t the individualism of his sound, but rather Lotus’s particular selection and configuration of other artists’ work into his mix – a trademark of the DJ.

Contrasting this with the review of the 2011 Flying Lotus set that began this series reveals how important context and expectations are to the evaluation of live electronic performance. While the 2008 piece praises Lotus for a DJ-like approach to his live set, the 2011 critique was that the performance was more of a DJ set than a live electronic performance. What changed in the years between these two sets was audiences’ familiarity with the style of performance (from Lotus and the various other artists on the scene with similar approaches), which shifted expectations. What both reviews lack, however, is a language to provide the musical context for their praise or critique – a language this series has sought to elucidate.

To put live electronic performances into the proper musical context, one must determine what type of performance is being observed. In the last part of this series, I arrive at four helpful distinctions to compare and describe live electronic performance, continuing this project of developing a usable aesthetic language and enabling a critical conversation about the artform.  The first of the four distinctions between different types of live electronic music performance concerns the manipulation of fixed pre-recorded sound sources into variable performances. The second distinction cites the physical manipulation of electronic instruments into variable performances. The third distinction demarcates the manipulation of electronic instruments into variable performances by the programming of machines. The last one is an integrated category that can be expanded to include any and all combinations of the previous three.
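For readers who think in code, these four distinctions can be sketched as a small taxonomy. This is only my shorthand for the categories named above; the type and function names are mine, not the author's:

```python
from enum import Enum, auto

class Distinction(Enum):
    """The four distinctions between types of live electronic performance."""
    FIXED_SOURCE = auto()  # manipulation of fixed pre-recorded sound sources
    PHYSICAL = auto()      # physical manipulation of electronic instruments
    MACHINE = auto()       # machine-programmed manipulation of instruments
    HYBRID = auto()        # any combination of the previous three

def classify(observed: set) -> Distinction:
    """Collapse the distinctions observed over a whole set into one label."""
    if len(observed) > 1:
        return Distinction.HYBRID
    return next(iter(observed))
```

Under this scheme, a set that mixes, say, physical turntablism with machine-sequenced drums collapses into the hybrid category, which is exactly the blurring described below.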

Essential to all categories of live electronic music performance, however, is the performance’s variability, without which music – and its concomitant listening practices – transforms from a “live” event into a fixed musical object. The trick to any analysis of such performance, however, is to remember that, while these distinctions are easy to maintain in theory, in performance they quickly blur into one another, and often the intensity and pleasure of live electronic music performance comes from their complex combinations.

Flying Lotus at Treasure Island, San Francisco on 10-15-2011, image by Flickr User laviddichterman

For example, an artist who performs a set using solely vinyl, with nothing but two turntables and a manual crossfading mixer, falls under the first distinction of live electronic music performance. Technically, the turntables and manual crossfading mixer are machines, but they are being controlled manually rather than performing on their own as machines. If the artist includes a drum machine in the set, however, it becomes a hybrid (the fourth distinction), depending on whether the drum machine is being triggered by the performer (physical manipulation), playing sequences (machine manipulation), or both. Furthermore, if the drum machine triggers samples, it becomes machine manipulation (third distinction) of fixed pre-recorded sounds (first distinction). If the drum machine is used to play back sequences while the artist performs a turntablist routine, the turntable becomes the performance instrument while the drum machine holds as a fixed source. All of these relationships can be realized by a single performer over the course of a single performance, making the whole set of the hybrid variety.

While in practice the hybrid set is perhaps the most common, it’s important to understand the other three distinctions, as each comes with its own set of limitations which define its potential variability. Critical listening to a live performance includes identifying when these shifts happen and how they change the variability of the set. Through combination, their individual limitations can be overcome, increasing the overall variability of the performance. One can see a performer playing the drum machine with pads and correlate that physicality with the sound produced, then see them shift to playing the turntable and know that the drum machine has shifted to a machine performance. In this example the visual cues would be clear indicators, but for one familiar with the distinctions, the shifts can be noticed from the audio alone.

This blending of physical and mechanical elements in live music performance exposes the modular nature of live electronic performance and its instruments, teaching us that the instruments themselves shouldn’t be looked at as distinction qualifiers, but rather their combination in the live rig and the variability that it offers. Where we typically think of an instrument as singular, within live electronic music it is perhaps best to think of the individual components (e.g., turntables and drum machines) as the musical objects of the live rig as instrument.

Flying Lotus at the Echoplex, Los Angeles, Image by Flickr User sunny_J

Percussionists are a close acoustic parallel to the modular musical rig of electronic performers. While there are percussion players who use a single percussive instrument for their performances, others will have a rig of component elements to use at various points throughout a set. The electronic performer inherits such a configuration from keyboardists, who typically have a rig of keyboards, each with different sounds, to be used throughout a set. Availing themselves of a palette of sounds allows keyboardists to break out of the limitations of timbre and verge toward the realm of multi-instrumentalists.  For electronic performers, these limitations in timbre only exist by choice in the way the individual artists configure their rigs.

From the perspective of users of traditional instruments, a multi-instrumentalist is one who goes beyond the standard of single-instrument musicianship, representing a musician well versed in performing on a number of different instruments, usually of different categories. In the context of electronic performance, the definition of instrument is so changed that it is more practical to think not of multi-instrumentalists but of multi-timbralists. The multi-timbralist can be understood as the standard in electronic performance. This is not to say there are no single-instrument electronic performers; however, it is practical to think about the live electronic musician’s instrument not as a singular musical object, but rather as a group of musical objects (timbres) organized into the live rig. Because these rigs can comprise a nearly infinite number of musical objects, the electronic performer has the opportunity to craft a live rig that is uniquely their own. The choices they make in the configuration of their rig will define not just the sound of their performance, but the degrees of variability they can control.

Because the electronic performer’s instrument is the live rig comprised of multiple musical objects, one of the primary factors in the configuration of the rig is how the various components interact with one another over the time-dependent course of a performance. In a live tape-loop performance, the musician may use a series of cassette players with an array of effects units and a small mixer. In such a rig, the mixer is the primary means of communication between objects. In this type of rig, however, the communication isn’t direct. The objects cannot communicate with each other directly; rather, the artist is the mediator. It is the artist who determines when the sound from any particular tape loop is fed to an effect, or at what levels the effects return sound in relation to the loops. While watching a performance such as this, one would expect the performer to be very involved in physically manipulating the various musical objects. We can categorize this as an unsynchronized electronic performance, meaning that the musical objects employed are not locked into the same temporal relations.

Big Tape Loops, Image by Flickr User choffee

The key difference between unsynchronized and synchronized performance rigs is the amount of control over the performance that can be left to the machines. The benefit of synchronized performance rigs is that they allow for greater complexity, either in configuration or control. The value of unsynchronized performance rigs is that they have a more natural and less mechanized feel, as the timing flows from the performer’s physical body. Neither can be understood as better than the other, but in general they make for separate kinds of listening experiences, of which the listener should be aware in evaluating a performance. Expectations should shift depending upon whether or not a performance rig is synchronized.

This notion of a synchronized performance rig should not only be understood as an inter-machine relationship. With the rise of digital technology, many manufacturers developed workstation-style hardware which could perform multiple functions on multiple sound sources with a central synchronized control. The Roland SP-404 is a popular sampling workstation, used by many artists in a live setting. Within this modest box you get twelve voices of sample polyphony, which can be organized with the internal sequencer and processed with onboard effects. However, a performer may choose not to use the sequencer at all, performing unsynchronized by just triggering the pads. In fact, in recent years there has been a rise of drum-pad players, or finger drummers, who perform using hardware machines without synchronization. Going back to our distinctions, a performance such as this would be a hybrid of physical manipulation of fixed sources with the physical manipulation of an electronic instrument. From this qualification, we know to look for extensive physical effort in such performances as an indicator of the artist’s agency over the variable performance.

Now that we’ve brought synchronization into the discussion, it makes sense to talk about what is often the main means of communication in the live performance rig – the computer. The ultimate benefit of a computer is its ability to process a large number of calculations per computational cycle. Put another way, it allows users to act on a number of musical variables in single functions. Practically, this means the ability to store, organize, recall, and even perform a number of complex variables. With the advent of digital synthesizers, computers were used in workstations to control everything from sequencing to patch sound-design data. In studios, computers quickly replaced mixing consoles and tape machines (even their digital equivalents, like the ADAT), becoming the nerve center of the recording process. Eventually all of these functions and more were able to fit into the small and portable laptop computer, bringing that processing power to the performance stage in a practical way.

Flying Lotus and his Computer, All Tomorrow’s Parties 2011, Image by Flickr User jaswooduk

A laptop can be understood as a rig in and of itself, comprised of a combination of software and plugins as musical objects, which can be controlled internally or via external controllers. If there were only two software choices and ten plugins available for laptops, there would be over seven million permutations possible. While it is entirely possible (and for many artists practical) for the laptop to be the sole object of a live rig, the laptop is often paired with one or more controllers. The number of controllers available is nowhere near the volume of software on the market, but the possible combinations of hardware controllers take the laptop + controller + software combination possibilities toward infinity. With both hardware and software there is also the possibility of building custom musical objects that add to the potential uniqueness of a rig.
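The “over seven million” figure is reproducible if we read it as one choice of software host multiplied by every possible ordering of the ten plugins in a chain. That reading is my assumption, since the counting method isn’t stated, but the arithmetic lines up:

```python
from math import factorial

software_choices = 2
plugins = 10

# One software host times every possible ordering of the ten plugins,
# treated as a signal chain: 2 * 10! = 7,257,600 rig permutations.
permutations = software_choices * factorial(plugins)
print(permutations)  # 7257600
```

Even under this modest reading, the count clears seven million; add the real-world universe of software and controllers and the combinations do, as the paragraph above suggests, tend toward infinity.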

Unfortunately, it is often impossible to know exactly what range of tools is being utilized within a laptop strictly by looking at an artist on stage. This leads to probably the biggest misconception about the performing laptop musician. As common as the musical object may look on the stage, housed inside of it can be some of the most unique and intricate configurations music (yes, all of music) has ever seen. The reductionist thought that laptop performers aren’t “doing anything but checking email” is directly tied to the acousmatic nature of the objects as instruments. We can hear the sounds, but determining the sources and understanding the processes required to produce them is often shrouded in mystery. Technology has arrived at the point where what one performs live can precisely replicate what one hears in recorded form, making it easy to leap to the conclusion that all laptop musicians do is press play.

Indeed some of them do, but to varying degrees a large number of artists are actively doing more during their live sets. A major reason for this is that one of the leading Digital Audio Workstations (DAWs) of today also doubles as a performance environment. Designed with the intent of taking the DAW to the stage, Ableton Live gives artists an interface that facilitates the translation of electronic concepts from the studio to the stage. A world of things is possible just by learning the Live basics, but there is also a rabbit hole of advanced functions, all the way to the modular Max for Live environment, which lies on the frontier of discovering new variables for sound manipulation. For many people, however, the software is powerful enough at the basic level of use to create effective live performances.

Sample Screenshot from a performer’s Ableton Live set up for an “experimental and noisy performance” with no prerecorded material, Image by Flickr User Furibond

In its most basic use case, Ableton Live is set up much like a DJ rig, with a selection of pre-existing tracks queued up as clips which the performer blends into a uniform mix, with transitions and effects handled within the software. The possibilities expand out from there: multi-track parts of a song separated into different clips so the performer can take different parts in and out over the course of the performance; a plugin drum machine providing an additional sound source on top of the track(s); or, alternately, the drum machine holding a sequence while track elements are laid on top of it. With the multitude of plugins available, just the combination of multi-track Live clips with a single soft-synth plugin lends itself to nearly infinite combinations. The variable possibilities of this type of set, even while not exploiting the full breadth of variability the gear presents, clearly point to the artist’s agency in performance.

Within the context of both the DJ set and the Ableton Live set, synchronization plays a major role in contextualization. Both categories of performance can be either synchronized or unsynchronized. The tightest of unsynchronized sets will sound synchronized, while the loosest of synchronized sets will sound unsynchronized. This plays very much into audience perception of what they are hearing and the performers’ choice of synchronization and tightness can be heavily influenced by those same audience expectations.

A second performance screen capture by the same artist, this time using pre-recorded midi sequences, Image by Flickr User Furibond

A techno set is expected to maintain somewhat of a locked groove, indicative of a synchronized performance. A synchronized rig, either on the DJ side (Serato utilizing automated beat matching) or on the Ableton side (time stretch and auto BPM detection synced to MIDI), can make this a non-factor for the physical performance, so in listening to such a performance it is the variability of other factors that reveals the artist’s control. For the DJ, those factors would include selection, transitions, and effects use. For the Ableton user, they can include all of those things as well as control over the individual elements in tracks and, potentially, other sound sources.
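The math behind such automated tempo matching is a simple ratio: the playback rate of a clip is scaled by the master tempo divided by the clip's native tempo. A minimal sketch, simplified to ignore warp markers, transient preservation, and everything else a real time-stretch engine handles:

```python
def stretch_ratio(source_bpm: float, master_bpm: float) -> float:
    """Playback-rate multiplier that brings a clip's native tempo
    into line with the set's master tempo."""
    return master_bpm / source_bpm

# A 120 BPM clip in a 128 BPM techno set must play roughly 6.7% faster.
print(round(stretch_ratio(120.0, 128.0), 4))  # 1.0667
```

Because the machine handles this ratio continuously, beat matching becomes, as noted above, a non-factor for the physical performance.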

On the unsynchronized end of the spectrum, a vinyl DJ could accomplish the same mix as the synchronized DJ set, but it would require more physical effort to keep all of the selections in time, which might mean limiting the control exerted on other variables. An unsynchronized Live set would utilize the software primarily as a sound source, without MIDI synchronization, placing the timing in the hands of the performer. With the human element added to the timing, it would be more difficult to produce the machine-like timing of the other sets. This doesn’t mean it couldn’t be effective, but there would be an audible difference between this type of set and the others.

What we’ve established is that through the modular nature of the electronic musician’s rig-as-instrument, from synthesizer keyboards to Ableton Live, every variable consideration combines to produce infinite possibilities. Where the trumpet is limited in timbre, range, and dynamics, the turntable has infinite timbres; its range is the full spectrum of human hearing; and its dynamics are directly proportional to the output. The limitations of the electronic musician’s instrument appear only in electrical current constraints, processor speed limits, the selection of components, and the limitations of the human body.

Flying Lotus at Electric Zoo, 2010, Image by Flickr User TheMusic.FM

Within these constraints, however, we have only begun to touch the surface of possibilities. There are combinations that have never happened, variables that haven’t come close to their full potential, and a multitude of variables that have yet to be discovered. One thing the electronic artist can learn from jazz toward maximizing that potential is the notion of play, as epitomized by jazz improvisation. For jazz, improvisation opened up the possibilities of the form, impacting both performance and composition. I contend that the electronic artist can push the boundaries of variable interaction by incorporating the ability to play from the rig, both in their physical performance and by giving the machine its own sense of play. Within this play lie the variables which I believe can push electronic music into the jazz of tomorrow.

Featured Image by Flickr User Juha van ‘t Zelfde

Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. In 2014 he will be releasing an expanded version of this series as a book entitled “Toward a Practical Language: Live Electronic Performance”. He is a regular guest contributor to the Create Digital Music website, and maintains his own AvantUrb site. Luta is a regular presenter for the Rhythm Incursions podcast series with his monthly show RIPL. As an artist, he is a founding member of the live electronic music collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012.

tape reelREWIND! . . .If you liked this post, you may also dig:

Evoking the Object: Physicality in the Digital Age of Music–Primus Luta

Experiments in Agent-based Sonic Composition–Andreas Pape

Calling Out To (Anti)Liveness: Recording and the Question of Presence–Osvaldo Oyola
