Author’s note: In line with the ethics of listening considered below, I’ve chosen not to embed the videos of police violence that I discuss. But I’ve linked to them when available for readers who’d like to see/hear their content.–Alex Werth
“I’m scared to death of these police.” Dave Chappelle’s voice—pitched down, but nonetheless recognizable—calls from the speakers, cutting through the darkness of Oakland, CA’s Starline Social Club. It’s closing night of the 2016 Matatu Festival of Stories, an annual celebration of Black diasporic narratives, technologies, and futures routed through the San Francisco Bay Area. King Britt—an eclectic electronic pioneer and producer, and former DJ for Digable Planets—has landed with the third version of “To Unprotect and Subserve: A Sonic Response.” (It was first performed after a march for Mike Brown in Ferguson in 2014.) I can barely see Britt, his solemn look bathed in the dim glow of electronic consoles and the red-and-blue pulse of police lights. “First money I got,” Chappelle continues, “I went out and bought me a police scanner. I just listen to these mothafuckas before I go out, just to make sure everything’s cool. ‘Cause you hear shit on there: ‘Calling all cars, calling all cars. Be on the lookout for a Black male between 4’7” and 6’8”.’” With this double invocation, Britt invites us to listen. Specifically, à la Chappelle, he invites us to listen back—to attune to the agents of a racialized security state that, from ShotSpotter to CIA surveillance, profile and police the world’s sonic landscapes.
This essay considers the ethical effects/affects in King Britt’s work of sampling what I call the sonic archive of police violence. From Oakland to Ferguson, the Movement for Black Lives has raised critical questions about the mass surveillance of Black and Brown communities, the undemocratic control of data in cases of police misconduct, and the use of smart phones and other recording devices as means to hold the state accountable. But the failure to indict or even discipline cops in police killings where audio/video evidence was not only available but overwhelming, from Eric Garner to Tamir Rice, casts doubt upon the emancipatory power of simply recording our race-based system of criminal (in)justice. And when re-presented ad nauseum on the news and social media, these recordings can retraumatize those most vulnerable to racist state violence. Indeed, at a discussion among Black artists at Matatu, each panelist admitted to limiting their exposure to what poet Amir Sulaiman called “e-lynching.”
What, then, can we learn from Britt about the praxis and politics of listening back when the circulation of what KRS One dubbed the “sound of da police” is now daily, digital, and ubiquitous? How can we make sense of audio recording when it’s come to signal repression, resistance, and painful reprisal all at the same time?
Back in the darkness of the club, Chappelle’s voice dissolves into a conversation between Darren Wilson and a dispatcher from the Ferguson Police, who sends him to find the body of Mike Brown—a “Black male in a white t-shirt,” reportedly “running toward QuikTrip” with a stolen box of Swishers. The optimistic waves of sound that open the piece resolve into a throbbing pulse of 1/32nd notes that sounds like a helicopter. Britt begins to loop in other elements: a low bass tone, a syncopated stab. With kicks and reverb-heavy snares, he builds a slow, head-nodding beat (60 bpm) that coalesces around the vocal sample—swaddling, softening, and ultimately subsuming it with high-pitched legato tones. The synths are sorrowful. But the mesmerizing beat embraces listeners in their mourning.
This act of listening to the state differs from the one parodied at the start. Chappelle attends to the police scanner as a form of precaution, checking whether it’s safe for him to enter a realm where he can be marked as criminal (“Staying in the crib tonight! Fuck that!” he concludes). But Britt’s sonic bricolage is more therapeutic than protective. He uses repetition, reverb, and improvised melody to score a sonic altar—to open space, rather than control time—where we can meditate on the archive of police violence with the intention to heal. “Sometimes to push through the trauma we need to experience it in a different context,” he tells me over email. “There is room for healing within the chords and sounds that are carefully curated.” Britt thus reactivates the pathos buried inside this archive—reclaiming what Susan Sontag, in “On Photography,” recognizes as an “ethical content” of representational form that can fade from careless repetition (21).
After removing the loops one-by-one, until the helicopter sound is all that remains, Britt releases a new sample into the mix. It’s audio from a cell-phone video taken in 2013 by two Black men as they’re harassed by White cops during a stop-and-frisk in Philadelphia (Britt’s hometown). He scores the somber scene with dissonant organs and an offbeat percussive note that reminds me of stress-induced arrhythmia—a heartbeat out-of-place, aggravated, precarious. Vibrating with anxiety, the soundscape temporarily snatches listeners from mourning, demanding that we listen in witness, instead.
The video reveals that the police tear the two men apart, pinning them to the cruiser. But the violence of the encounter is verbal as much as physical. The cops’ language and tone become increasingly abusive as the men contest their treatment in a sounding of agency that Regina Bradley, writing about Black women, calls “sonic disrespectability.” Philip Nace, the more audible of the officers, embodies a double bind built into what Jennifer Lynn Stoever calls the “sonic color line.” He threatens one of the men when he speaks out (“You’re gonna be in violation if you keep running your mouth when I split your wig open.”). But he turns around and ridicules him when, instead, the man refuses to speak (“You don’t know what we know…Right? Right?! What, you don’t hear now?”). As Stoever notes, the demand that African Americans speak when spoken to, but in a way that sounds their submission to Whites, is a feature of anti-Black oppression stemming from the “racial etiquette” of slavery (30-32).
Britt’s manipulation of vocals speaks to the centrality of sampling in hip-hop. According to Tricia Rose, hip-hop artists have long prioritized the sample as a way to recognize and renovate a communal repertoire of songs and sounds (79). And given the realities of anti-Black oppression in the U.S., this repertoire has often entailed the “sound(s) of da police.” From sirens to skits to verses, rappers and producers have remixed the sounds of the state to characterize, caricature, and critique the country’s criminal justice system. But Britt’s trespass on the state’s sonic sovereignty differs from classics like “Fuck tha Police,” in which N.W.A. conducts a mock trial of “the system.” Whereas N.W.A. reappropriates the rituals of legal testimony and judgment to condemn the police (“The jury has found you guilty of being a redneck, white-bread, chicken-shit mothafucka.”), Britt’s musical re-mediation of police violence favors grief over moralizing, dirge over indictment.
In this vein, the musical/ethical demand to witness waxes but then wanes. The soundscape becomes more and more dissonant until the vocals are consumed by a thunderous sound. Suddenly, the storm clears. Britt hits a pre-loaded drum track (136 bpm) with driving double-time congas and chimes over a steady sway of half-time kicks. He starts to improvise on the synth in an angelic register, revealing the impact of his early encounters with Sun Ra on his aesthetic. The catharsis of the scene is accentuated by the sporadic sound of exhalation. This sense of freedom dissolves when the beat runs out of gas…or is pulled over. In its stead, Britt introduces audio from the dashboard camera of Brian Encinia, the Texas State Trooper who arrested Sandra Bland. Encinia and Bland’s voices are pitched down and filtered through an echo delay, lending an intense sense of dread to his enraged orders (“Get out of the car! I will light you up!”).
Here, I sense the affective resonance of dub. Like the musicians on rotation in Michael Veal’s Dub, Britt manipulates the timbre and texture of voices in a way that demands a different sort of attention from listeners who, like me, may be desensitized to the sonic violence of the racialized security state as it’s vocalized and circulated in and between Ferguson, Philly, and Prairie View. Britt reworks the character and context of the vocals into a looping soundscape, and that soundscape sends me into a meditative space—one in which the vibes of humiliation and malice “speak” to me more than Encinia’s individual utterances as an agent of the state. According to Veal, the pioneers of dub developed a sound that, while reverberating with the severity of the Jamaican postcolony, “transport[ed] their listeners to dancefloor nirvana” and “the far reaches of the cultural and political imagination” (13). Now, conducting our Matatu, Britt is both an engineer and a medicine man. Rather than simply diagnose the state of anti-Black police violence in the American (post)colony, he summons a space where we can reconnect with the voices (and lives) lost to the archives of police violence amid what Veal refers to as dub’s Afro-sonic repertoire of “reverb, remembrance, and reverie” (198).
What Sontag once wrote about war photography no doubt holds for viral videos (and the less-recognized soundscapes that animate them). Namely, when used carelessly or even for gain, the documentary-style reproduction of the sonic archive of police violence can work to inure or even injure listeners. But in Britt’s care-full bricolage, sampling serves to literally re-mediate the violence of racialized policing and its reverberations throughout our everyday landscapes of listening. It’s not the fact of repetition, then, but the modality, that matters. And Britt draws upon deep traditions of scoring, hip-hop, and dub to sonically construct what he calls a “space to breathe.”
Featured image of King Britt’s performance courtesy of Eli Jacobs-Fantauzzi for the Matatu Festival of Stories.
Alex Werth is a doctoral candidate in the Department of Geography at UC Berkeley. His research looks at the routine regulation of expressive culture, especially music and dance, within the apparatuses of public nuisance and safety as a driver of cultural foreclosure in Oakland, CA. It also considers how some of those same cultural practices enable forms of coordination and collectivity that run counter to the notions of “the public” written into law, plan, and property. In 2016, he was a member of the curatorial cohort for the Matatu Festival of Stories and is currently a Public Imagination Fellow at Yerba Buena Center for the Arts in San Francisco. He lives in Oakland, where he dances samba and DJs as Wild Man.
REWIND!…If you liked this post, you may also dig:
Music to Grieve and Music to Celebrate: A Dirge for Muñoz— Johannes Brandis
I’m fortunate to have quite a few friends with eclectic musical tastes, who continually expose me to some of the best, albeit often obscure, sources for inspiration. They arrive as random selections sent with a simple “you’d appreciate this” note attached. Good friends that they are, they rarely miss the mark. Most intriguing is when a cluster of things from different people carries a similar theme, converging into a need on my part for some sort of musical action.
A few years back I received a huge dump of gigabytes of audio and video. Within it was concert footage of performances this friend and I had been discussing; I consumed those quickly in an effort to keep that conversation going. Tucked amidst that dump, however, was a copy of the movie Liquid Sky. I asked the friend about it because the description of the plot–“heroin-pushing aliens invade 80’s New York”–led me to believe it wasn’t really my thing (not a big fan of needles). Although my friend insisted I’d enjoy it, it took me several months if not a whole year before I finally pressed play.
Even though Liquid Sky was not my favorite movie by any measure, it was immediately apparent to my ears why my friend insisted I check it out. The film’s score was performed completely on a Fairlight CMI, capturing the synthesized undercurrent of the early 80’s New York music scene, more popularly seen in the cult classic Downtown 81, starring Jean Michel Basquiat. While the performances in that movie are perhaps closer to my tastes, none of them compare to one scene from Liquid Sky that I fell in love with, instantly:
The song grabbed me so much, I quickly churned out a cover version.

Primus Luta, “Me & My Rhythm Box (V1)”
While it felt good to make, there remained something less than satisfying about it. The cover had captured my sound, but at a moment of transition. More specifically, the means by which I was trying to achieve my sound at the time had shifted from a DAW-in-the-box aesthetic to a live performance feel, one that I had already begun writing about here on Sounding Out! in 2013. Interestingly, the inspiration to cover the song pushed me back to my in-the-box comfort zone.
It was good, but I knew I could do more.
As I said, these inspirations tend to group around a theme. Prior to receiving the Liquid Sky dump, I had received an email out of the blue from Hank Shocklee, producer and member of the Bomb Squad. I’ve been a longtime fan, and we had the opportunity to meet a few years prior. Since then he’s played a bit of a mentoring role for me. In the email he asked if I wanted to join an experimental electronic jazz project he was pulling together as the drummer.
I was taken aback. Hank Shocklee asking me to be his drummer. Honestly, I was shook.
Not that I didn’t know why he might think to ask me, but immediately I started to question whether I was good enough. Rather than dwell on those feelings, though, I started stepping up my game. While the project itself never came to fruition, Shocklee’s email led me to build my drmcrshr set of digital instruments.
A year or so later, I ran into Shocklee again when he was in Philadelphia for King Britt’s Afrofuturism event with mutual friend artist HPrizm. By this time I had already recorded the “Me and My Rhythm Box” cover. Serendipitously, HPrizm ended up dropping a sample from it in the midst of his set that night. A month or so later, HPrizm and I met up in the studio with longtime collaborator Takuma Kanaiwa to record a live set on which I played my drmcrshr instruments.

Primus Luta x HPrizm x Takuma Kanaiwa, “Excerpt”
Not too long after, I received an email from NYC-based electronic musician Elucid, saying he was digging for samples on this awesome soundtrack…Liquid Sky.
The final convergence point had been hanging over my head for a while. Having finished the first part of my “Toward a Practical Language” series on live performance, I knew I wanted the next part to focus on electronic instruments, but wasn’t yet sure how to approach it. I had an inkling about a practicum on the actual design and development of an electronic instrument, but I didn’t yet have a project in mind.
As all of these things, people, and sounds came together–Liquid Sky, Shocklee, HPrizm, Elucid–it became clear that I needed to build a rhythm box.
What stands out in Paula Sheppard’s performance from Liquid Sky is the visual itself. She stands in the warehouse performance space surrounded by 80’s scenesters, posing with one hand in the air and the mic in the other, while strapped to her side is her rhythm box, the Roland CR-78, wires dangling from it to connect to the venue’s sound system. She hits play to start the beat, launching into an ode to the rhythm machine.
Contextually, it’s far more performance art than music performance. There isn’t much evidence from the clip that the CR-78 is any more than a prop, as the synthesizer lines indicate the use of a backing track. The commentary in the lyrics, however, homes in on an intent to present the rhythm box as the perfect musical companion, reminiscent of comments Raymond Scott often made about his desire to make a machine to replace musicians.
My rhythm box is sweet
Never forgets a beat
It does its rule
Do you want to know why?
It is pre-programmed
Rhythm machines such as the CR-78 were originally designed as accompaniment machines, specifically for organ players. They came pre-programmed with a number of traditional rhythm patterns–the standards being rock, swing, waltz, and samba–though the CR-78 had many more variations. Such machines were not designed to be instruments themselves; rather, musicians would play other instruments along with them.
In 1978 when the CR-78 was introduced, rhythm machines were becoming quite sophisticated. The CR-78 included automatic fills that could be set to play at set intervals, providing natural breaks for songs. As with a few other machines, selecting multiple rhythms could combine patterns into new rhythms. The CR-78 also had mute buttons and a small mixer, which allowed slight customization of patterns, but what truly set the CR-78 apart was the fact that users could program their own patterns and even save them.
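That pattern-combining trick is easy to model in code. Here is a minimal Python sketch of the idea, a hypothetical model for illustration rather than Roland’s actual circuitry, and the patterns themselves are invented: each preset holds a 16-step trigger list per voice, and selecting several presets simply OR-merges their triggers step by step.

```python
# Hypothetical model of CR-78-style preset combination: each preset maps
# a voice name to a 16-step trigger pattern (1 = hit, 0 = rest), and
# selecting several presets OR-merges their triggers at every step.
# The patterns below are invented for illustration.

ROCK = {
    "kick":  [1,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0],
    "snare": [0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0],
}
BOSSA = {
    "kick":  [1,0,0,1, 0,0,1,0, 0,0,1,0, 0,1,0,0],
    "snare": [0,0,1,0, 0,1,0,0, 1,0,0,0, 0,0,1,0],
}

def combine(*presets):
    """Merge any number of presets into one new rhythm."""
    voices = {voice for preset in presets for voice in preset}
    return {
        voice: [
            int(any(preset.get(voice, [0] * 16)[step] for preset in presets))
            for step in range(16)
        ]
        for voice in voices
    }

merged = combine(ROCK, BOSSA)  # a new pattern neither preset plays alone
```

Muting a voice, as the CR-78’s mute buttons allowed, would just mean dropping its key from the merged dictionary before playback.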
By the time it appeared in Liquid Sky, the CR-78 had already been succeeded by other CR lines, culminating in the CR-8000. Roland had also launched the TR series with the TR-808 in 1980; the TR-909 followed in 1983, the year after Liquid Sky premiered.
In 1980, however, Roger Linn’s LM-1 premiered. What distinguished the LM-1 from other drum machines was that it used drum samples–rather than analog sounds–giving it more “real” sounding drum rhythms (for the time). The LM-1 and its successor, the LinnDrum, both had individual drum triggers for their sounds that could be programmed into user sequences or played live. These features in particular marked the shift from rhythm machines to drum machines.
In the post-MIDI decades since, we’ve come to think less and less about rhythm machines. With the rise of in-the-box virtual instruments, the idea of drum programming limitations (such as those found on most rhythm machines) seems absurd or arcane to modern tastes. People love the sounds of these older machines, evidenced by the tons of analog drum samples and virtual and hardware clones/remakes on the market, but they want the level of control modern technologies have accustomed them to.

Controlling the Roland CR-5000 from an Akai MPC-1000 using a custom-built converter
The general assumption is that rhythm machines aren’t traditionally playable and, considering how outdated their rhythms tend to seem, lacking in modern sensibility. My challenge thus became clear: I set out to build a rhythm machine that would challenge this notion, while retaining the spirit of the traditional rhythm box.
Challenges and Limitations
At the outset, I wanted to base my rhythm machine on analog circuitry. I had previously built a number of digital drum machines–both sample and synthesis-based–for my Heads collection. Working in the analog arena allowed me to approach the design of my instrument in a way that respected the limitations my rhythm machine predecessors worked with and around.
By this time I had spent a couple of years mentoring with Jeff Blenkinsopp at The Analog Lab in New York, a place devoted to helping people from all over the world gain “further understanding the inner workings of their musical equipment.” I had already designed a rather complex analog signal processor, so I felt comfortable in the format. However, I hadn’t truly honed my skills around instrument design. In many ways, I wanted this project to be the testing ground for my own ability to create instruments, but prior experience taught me that going into such a complex project without the proper skills would be self-defeating. Even more, my true goal was centered on functionality rather than details like circuit board designs for individual sounds.
To avoid those rabbit holes–at least temporarily (I’ve since gone full circuit design on my analog sound projects)–I chose to use DIY designs from the modular synth community as the basis for my rhythm box. That said, I limited myself to designs that featured analog sound sources, and only allowed myself to use designs that were available as PCB only. I would source all my own parts, solder all of my boards, and configure them into the rhythm machine of my dreams.
The wonderful thing about the modular synth community is that there is a lot of stuff out there. The difficult thing about the modular synth community is that there’s a lot of stuff out there. If you’ve got enough rack space, you can pretty much put together a modular that will perform whatever functionality you want. How modules patch together fundamentally defines your instrument, making module selection the most essential process. I was aiming to build a more semi-modular configuration, forgoing the patch cables, but that didn’t make my selection any easier. I wanted to have three sound sources (nominally: kick, snare and hi-hat), a sequencer and some sort of filter, which would all flow into a simple monophonic mixer design of my own.
For the sounds I chose a simple kick module from Barton, and the Jupiter Storm unit from Hex Inverter. The sound of the kick module was rooted enough in the classic analog sound while offering enough modulation points to make it mutable. The triple square wave design of the Jupiter Storm really excited me, as it had the range to pull off hi-hat and snare sounds in addition to other percussive and drone sounds, plus it featured two outputs, giving me all three of my voices in two PCB sets.
Filters are often considered the heart of a modular setup, as the way they shape the sound tends to define its character. In choosing one for my rhythm machine, the main thing I wanted was control over multiple frequency bands. Because there would be three different sound sources, I needed to be able to tailor the filter for a wide spectrum of sounds. As such I chose the AM2140 Resonant Filter.
The AM2140 PCB layout, based on the classic E-mu filter
I had no plans to include triggers for the sounds on my rhythm machine, so the sequencer was going to be the heart of the performance, as it would be responsible for any and all triggering of sounds. Needing to control three sounds simultaneously without any stored memory was quite a tall order, but fortunately I found the perfect solution in the amazing Turing Machine modules. With its expansion board, the Turing Machine can put out four different patterns based on its main pattern generator, which can create fully random patterns or patterns that mutate as they progress.
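That looping-with-mutation behavior can be sketched as a rotating shift register whose recycled bit is sometimes flipped. The snippet below is a conceptual model only (the class and parameter names are mine, not the module’s firmware): at a flip probability of 0 the pattern locks into a repeating loop, while at 1 it never settles.

```python
import random

class TuringSequencer:
    """Conceptual model of a Turing Machine-style shift-register sequencer."""

    def __init__(self, length=8, flip_probability=0.5, seed=None):
        self.rng = random.Random(seed)       # seeded for reproducible demos
        self.flip_probability = flip_probability
        self.register = [self.rng.randint(0, 1) for _ in range(length)]

    def step(self):
        bit = self.register.pop(0)           # bit leaving the register...
        if self.rng.random() < self.flip_probability:
            bit ^= 1                         # ...sometimes flipped...
        self.register.append(bit)            # ...then fed back in
        return self.register[0]              # gate/trigger output this step

# With the flip probability at 0, an 8-step register repeats exactly.
locked = TuringSequencer(length=8, flip_probability=0.0, seed=7)
first_pass = [locked.step() for _ in range(8)]
second_pass = [locked.step() for _ in range(8)]
assert first_pass == second_pass
```

As I understand the hardware, the expander taps several stages of the register at once, which is how one pattern generator can drive multiple related trigger streams, enough to clock three voices without storing a single pattern in memory.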
I spent a couple of weeks after getting all the PCBs, parts, and hardware together, wiring and rewiring connections until I got comfortable with how all of these parts were interacting with each other. I was fortunate to happen upon a vintage White Instruments box, which formerly housed an attenuation meter, that was perfect for my machine. After testing with cardboard, I laid out my own faceplates and put everything in the box. As soon as I plugged it in and started playing, I knew I had succeeded.

Early test of RIDM before it went in the Box
I call it the RIDM Box (Rhythmically Intelligent Drum Machine Box). I’ve been playing it now for over two years, to the point where today I would say it is my primary instrument. Almost immediately afterward I built a companion piece called the Snare Bender, which works both as a standalone and as a controller for the RIDM Box. That one I built from scratch, hand-wired with no layouts.
My current live rig with the RIDM Box and the Snare Bender (on the right)
While this is by no means a standard approach to modern electronic instrument design (if a standard approach even exists), what I learned through the process is really the value of looking back. With so much of modern technology being future-forward in its approach, the assumption is that we’re at better starting positions for innovation than our predecessors. While we have so many more resources at our disposal, I think the limitations of the past were often more conducive to truly innovative approaches. By exploring those limitations with modern eyes, a doorway opened up for me, the result of which is an instrument like no other, past or present.
I will probably continue playing these two instruments together for a while, but ultimately I’m leaning toward a new original design that takes the learnings from these projects and fully fleshes out the performing-instrument aspect of analog design. In the meantime, my process would not be complete if I did not return to the original inspiration. So I’ll leave you with the RIDM Box version of “Me & My Rhythm Box”—available on my library sessions release for the instrument.
Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications.
REWIND!…If you liked this post, you may also dig:
Heads: Reblurring The Lines–Primus Luta
“Playing the Medieval Lyric”: Remixing, Sampling and Remediating “Head Like a Hole” and “Call Me Maybe”
Each of the essays in this month’s “Medieval Sound” forum focuses on sound as it, according to Steve Goodman’s essay “The Ontology of Vibrational Force” in The Sound Studies Reader, “comes to the rescue of thought rather than the inverse, forcing it to vibrate, loosening up its organized or petrified body” (70). These investigations into medieval sound lend themselves to a variety of presentation methods, loosening up the “petrified body” of academic presentation. Each essay challenges concepts of how to hear the Middle Ages and how the sounds of the Middle Ages continue to echo in our own soundscapes.
The posts in this series begin an ongoing conversation about medieval sound in Sounding Out!. Our opening gambit in April 2016, “Multimodality and Lyric Sound,” reframes how we consider the lyric from England to Spain, from the twelfth through the sixteenth centuries, pushing ideas of openness, flexibility, and productive creativity. We will post several follow-ups throughout the rest of 2016 focusing on “Remediating Medieval Sound.” And, HEAR YE!, in April 2017, look for a second series on Aural Ecologies of noise! –Guest Editors Dorothy Kim and Christopher Roman
In 2013, a user named pomDeter posted a sound file on the social news and entertainment site Reddit that went viral in the form of YouTube videos, Facebook posts, and tweets: a mashup remediation of Nine Inch Nails’s “Head Like a Hole” with Carly Rae Jepsen’s “Call Me Maybe.” Reactions ranged from outrage on the part of Nine Inch Nails’s drummer to declarations that it is a work of “genius” by the Los Angeles Times.
To understand the significance of this surprising piece of pop culture, we should recall that the iconic industrial rock anthem with sadomasochistic overtones “Head Like a Hole” is a track from NIN’s debut Pretty Hate Machine (1989), while “Call Me Maybe”–Jepsen’s breakout single–is an anthem about the exhilaration of a crush. Simply put, these two musical universes are not usually mentioned in the same breath, much less remixed into the same track. The result? “Call Me a Hole.”
The commentary about this musical clash of cultures has been vigorous and multi-sided. Listeners have unabashedly loved it, absolutely hated it, been very disturbed that they loved it, and been deeply distressed by what they see as the diluting of pure rock rage.
“Call Me a Hole” is a good example of both the theoretical underpinnings and the experimental possibilities of the issues at stake in my discussion of the medieval lyric. My post not only allows you to play the remix, but also to visualize the remixed song and read the second-by-second stream of commentary from a wide range of listeners. In this way, this remediated, multimodal, multimedia moment perfectly encapsulates the possibilities of experimentation, the participatory culture of multimodal productions, and the simultaneous discomfort and seduction these experimental remixings may engender. The commentary on Soundcloud is a perfect record of all these things: this cultural production is “awesome, creepy, bittersweet, disturbing, strange, brilliant, genius.”
“Playing the Medieval English Lyric” briefly examines the emergence of the lyric form in 13th-century miscellanies—an emergence that, in many ways, mirrors the development of mashups like “Call Me a Hole.” This work dovetails with larger critical issues apparent in deeply examining the material culture of miscellanies—medieval anthologies—how they were made, their quire formations, their marginalia, their scribes, their audiences. But I juxtapose this investigation with the insights of more recent theoretical ideas about multimodality, remediation, and mashup. Both recent digital rhetoric and medieval rhetorical theory can help us think through the place of music in the emergence of new literary genres and contextualize the creation of new technologies of sound.
The mashup as a new musical genre especially dependent on the affordances of a digital platform transforms the place of the audience from mere “consumer” to “producer,” according to Ragnhild Brøvig-Hanssen’s entry “Justin Bieber Featuring Slipknot: Consumption as Mode of Production,” in The Oxford Handbook of Music and Virtuality (268). Mashups also switch the relation of the producers/composers of music into the role of consumers/listeners. Brøvig-Hanssen goes on to argue that “it is often [mashup’s] experiential doubling of the music as simultaneously congruent (sonically, it sounds like a band performing together) and incongruent (it periodically subverts socially constructed conceptions of identities) that produces the richness in meaning and paradoxical effects of successful mashups” (270).
These incongruent, yet congruent juxtapositions that produce rich meaning also form the pattern I see in the emergence of medieval English lyric. In essence, I hope to show how a discussion about digital mashups in today’s musical ecosystem can help us reframe the emergence of the medieval lyric in 13th-century medieval Britain. How do different media platforms—manuscript and digital—spur on certain parallel forms of sonic media play and creativity?
In particular, I am interested in how the sampling, mixing, and palimpsestic juxtaposition of mixed-language manuscripts (usually including Latin, Anglo-Norman French, and Middle English) have created a space for new linguistic and sonic remixes and new genres to play and form. In this article, I reconsider the (really) “old” media of the manuscript page as a recording and playing interface existing at a particularly dynamic juncture when new experimental forms abound for the emergence, revision, and recombination of literary oeuvres, genres, and technologies of sound.
Multilingualism and the Medieval English Lyric Scene
Just as a screen shot of the “Call Me a Hole” website would preserve in static form such external references as links to articles discussing the mashup (and would allow future readers to add new information), I similarly argue that a medieval manuscript page has creative, annotated possibilities that stop it from being a fixed, “literate” page. Readers throughout the centuries may add marginal notes, make annotations, cross out sections, or add new sections. More visually-oriented readers may include marginal drawings, even going so far as to animate narratives by drawing a series of connected images or to add interactive features like flaps, or rotae. If the manuscript page in question includes lyrics, notes, or both, it may have served as the inspiration for a wide variety of dramatic or musical performances, whether public or private. Finally, the physical book itself may have been broken apart, recombined with other books, or reused as endpapers for other books. In this way, I advocate for an understanding of the manuscript medium as a dynamic media zone like the digital screen. The manuscript page is thus a mise-en-système: a dynamic reading/recording interface.
Much of the vernacular English literary production in the 13th and into the first half of the 14th century is preserved in multilingual manuscripts. I think Tim Machan said it best at the 2008 Multilingualism in the Middle Ages conference when, situating Middle English in England, he spoke of “the ordinariness of multilingualism” and called it “the background noise” of the Middle Ages in Britain. The thirteenth-century multilingual matrix included verbal and written forms of the following languages: Old English, Middle English, Latin, Greek, Anglo-Norman French, Continental French, Irish, Welsh, Cornish, Hebrew, Flemish, and Arabic. If multilingualism is the background noise, then it’s a background concert in which all those linguistic sounds perform simultaneously. The medieval manuscript’s ocularcentrism has given readers visual cues, but those cues ask us to remix, reinterpret, and reinvent the materials. They do not ask us to see these signs—whether music, art, or text—as separate, hermetically sealed universes working as solo acts.
Intersecting this active ferment were the creative flux and reinvention of English musical notation and distinctly regional styles. These forces, I believe, helped create an experimental dynamism that may explain what the manuscript record reveals about the emergence of the Middle English lyric. The notation of Western music as we now understand it, which began to emerge in the 9th century, focused heavily on religious music, on chant. And Latin chant can be syllabically texted to any music. Likewise, Anglo-Norman French poetry focused on a form built on syllable counts—octosyllabic couplets. This poetic form was also easily translated into Western musical notation. What, however, do you do with English poetry that is alliterative or uses another kind of stressed poetic meter? These problems are, in fact, probably why it took some time for Middle English to produce lyrics texted to music.
Another reason for this phenomenon lies with the state of musical notation itself. In the entry on “musical notation” in the New Grove Dictionary of Music (http://www.oxfordmusiconline.com/public/book/omo_gmo), an entire history of the creation of musical notation in the Middle Ages—and specifically its experimental and regional varieties—is mapped out. In addition, Carl Parrish’s classic text, The Notation of Medieval Music, a standard in all musical paleography classes, shows the minute shifts in the construction of medieval musical notation from century to century and from region to region. And finally, the development of the stave line as a “new technology” would spur further writing of music “without additional aural support.” What these histories of medieval musical notation have in common is their emphasis on the constantly shifting paradigms that this new technology of writing to record sound displayed across different regions and different centuries.
From the second quarter of the 13th century to the mid-14th century, a number of multilingual miscellanies have survived, preserving an astonishing breadth of poetic work in Latin, Middle English, and Anglo-Norman French. These books circulated in relation to, and in conversation with, each other as they were collected and compiled. My list here is not exhaustive, but I would like to point to a few (pulled from Laing and Deeming’s work) that we will discuss: Oxford, Jesus College MS 29; Oxford, Bodleian Library MS Digby 86; London, British Library MS Arundel 248; London, British Library MS Egerton 613; London, British Library MS Harley 2253; London, British Library MS Harley 978; Kent, Maidstone Museum A.13; Cambridge, St. John’s College MS E.8; London, British Library MS Royal 12 E.i; Oxford, Bodleian Library MS Rawlinson G18.
With the exception of criticism on Harley 978—the manuscript containing the famous “summer canon,” which holds only one piece of Middle English poetry—the scholarly discussion of these miscellanies’ lyrics rarely touches on music (with the notable exception of Helen Deeming’s assiduous work). This critical silence is particularly disconcerting because several of these manuscripts either contain notes or signs of music, or their lyrical texts have music attached to them in other manuscripts. What I would like to propose, then, is a narrow sampling that will give us a wider picture of what the lyrical record may reveal.
Poetic and Musical Samplings of the Lyric Page
The manuscript layouts of a sample of these medieval multilingual musical miscellanies reveal how musical notes and letters were, at times, considered in the same category. The mise-en-système of these manuscripts also reveals the fluidity, the creativity, and the cues that allow the audience/listener/reader (including the scribal compiler) to mash up the multilingual musical matrix. In the manuscript Arundel 248, music is attached to the lyrics, but it is laid out in quite an unusual way. Arundel 248 contains mostly Latin religious texts, including several tracts on sin. However, near the end, there are several lyrical texts that also appear in Digby 86, Jesus 29, and Rawlinson G. 18. On f. 154r, the entire page has Latin, Anglo-Norman, and English verse texted to music. What is interesting about this page, especially in comparison to other musical pages in the MS and the standard layout of thirteenth-century English music, is the folio’s mise-en-page. The scribe has literally laid out a series of ruled lines (particularly in the top half) on which he places English, Anglo-Norman French, and Latin poetry with English musical notation. The lack of specific stave lines (though they appear in other parts of this manuscript and at the bottom of this folio) means that the technology of this particular form of sound recording has allowed all these different things—English musical notes, English vernacular notation, Latin notation, and Anglo-Norman vernacular notation—equal space and play. They all have been squeezed onto these black ruled lines (at the top). The mise-en-page, then, allows linguistic differences to be on par with differences in sonic styles. What this manuscript ultimately creates is a miscellany of sound.
The page’s layout cues a sonic palimpsest—or, in contemporary terms, a sonic mashup—and suggests potentially simultaneous performance. In fact, this vocal performance becomes even more complex on the next page, f. 154v, because the texted English lyric is a polyphonic piece that also, in many of its manuscripts, has Latin lyrics. This is “Jesu Cristes Milde Moder” (DIMEV 2831, http://www.dimev.net/record.php?recID=2831), which comes with English lyrics and texted music. It appears to be a version of “Stabat Iuxta Christi Crucem,” though the standard version with music appears in St. John’s College MS E.8 with the standard English lyrics underneath the melody (DIMEV 5030, http://www.dimev.net/record.php?recID=5030). The English lyric connected to “Stabat Iuxta” is “Stand wel moder,” and there are several versions of the lyric without music—Digby 86, Harley 2253, Royal 8 F.ii, and Trinity College Dublin MS 301. Another manuscript, Royal MS 12 E.i, contains the same music for “Stand wel moder.” Deeming has noted that the lyrics of this version, “Jesu Cristes Milde Moder,” correspond closely to “Stabat Iuxta” and could be sung with the standard melody (as seen in Royal 8 F.ii) as a contrafactum. However, you can also mash up “Jesu Cristes Milde Moder” and its accompanying music with the text and music of “Stabat Iuxta Christi Crucem.” As an experiment, I had Camerata, the early music group at Vassar College, record “Jesu Cristes Milde Moder” from Arundel 248 alongside the musical version of “Stabat Iuxta” from St. John’s College MS E.8 (found in Deeming’s Songs in British Sources, 196, 201, 210-211). Other than direction on how to pronounce Middle English (as well as the accompanying contemporary editions of each lyric), I left the performance details for the group to work out themselves. This is what they recorded: this is the medieval mashup.
The performance shows the creative possibilities of the page, and how music functions as a very distinct kind of sound player. What song—and particularly multi-part song—has done is to generate sonic harmony out of linguistic babel. There is a pattern of circulation that intertwines music (both monophony and polyphony) with multilingual lyrics, and this manuscript especially demonstrates those sonic possibilities. These pages record and imagine a diverse soundscape with an interesting multimedia and multilingual voice at play. In essence, Arundel 248 displays the different possibilities a reader could have in switching between, or layering, different modes of sound.
Featured image “staff” by Arko Sen @Flickr CC BY-NC-ND
Dorothy Kim is an Assistant Professor of English at Vassar College. She is a medievalist, digital humanist, and feminist. She has been a Fulbright Fellow, a Ford Foundation Fellow, and a Frankel Fellow at the University of Michigan. She has been awarded grants from the National Endowment for the Humanities, the Social Sciences and Humanities Research Council of Canada, and the Mellon Foundation. She is a Korean American who grew up in Los Angeles in and around Koreatown.
REWIND! . . .If you liked this post, you may also dig:
The Blue Notes of Sampling–Primus Luta
Remixing Girl Talk: The Poetics and Aesthetics of Mashups–Aram Sinnreich
A Tribe Called Red Remixes Sonic Stereotypes–Christina Giacona
Guest Editors’ Note: Welcome to Sounding Out!‘s December forum entitled “Sound, Improvisation and New Media Art.” This series explores the nature of improvisation and its relationship to appropriative play cultures within new media art and contemporary sound practice. Here, we engage directly with practitioners, who either deploy or facilitate play and improvisation through their work in sonic new media cultures.
For our second piece in the series, we have interviewed the New York City-based performance duo foci + loci (Chris Burke and Tamara Yadao). Treating the map editors in video games as virtual sound stages, foci + loci design immersive electroacoustic spaces that can be “played” as instruments. Chris and Tamara bring an interdisciplinary lens to their work, having participated in various sonic and game-related cultures, including popular, electroacoustic, and new music; chiptune; machinima (filmmaking using video game engines); and more.
As curators, we have worked with foci + loci several times over the past few years, and have been fascinated with their treatment of popular video game environments as tools for visual and sonic exploration. Their work is highly referential, drawing on the artistic legacies of the Futurists, the Surrealists, and the Situationists, among others. In this interview, we discuss the nature of their practice(s) and its relationship to play, improvisation, and the co-constitutive nature of their work in relation to capital and proprietary technologies.
— Guest Editors Skot Deeming and Martin Zeilinger
1. Can you take a moment to describe your practice to our readers? What kind of work do you produce, what kind of technologies are involved, and what is your creative process?
foci + loci mostly produce sonic and visual video game environments that are played in live performance. We have been using Little Big Planet (LBP) on the Playstation 3 for about 6 years.
When we perform, we normally have two PS3s running the game with a different map in each. We have experimented with other platforms such as Minecraft and we sometimes incorporate spoken word, guitars, effects pedals, multiple game controllers (more than 1 each) and Game Boys.
Our creative process proceeds from discussions about the ontological differences between digital space and cinematic space, as well as the freeform or experimental creation of music and sound art that uses game spaces as its medium. When we are in “Create Mode” in LBP, these concepts guide our construction of virtual machines, instruments and performance systems.
[Editor’s Note: Little Big Planet has several game modes. Create Mode is the space within the game where users can create their own LBP levels and environments. As players progress through LBP’s Story Mode, they unlock an increasing number of game assets, which can be used in Create Mode.]
2. Tell us about your background in music? Can you situate your current work in relation to the musical traditions and communities that you were previously a part of?
CB: I have composed for film, TV, video games, and several albums (sample-based, collage, and electronic). Since 2001 I’ve been active in the chipmusic scene under the name glomag. Around the same time I discovered machinima, and you could say that my part in foci + loci is the marriage of these two interests—music and visuals. Chipmusic tends to be high energy, and the draw centers on exciting live performances. It’s immensely fun and rewarding, but I felt a need to step back and make work that drew from more cerebral pursuits. foci + loci is more about those pursuits for me: both my love of media theory and working with space and time.
TY: I’m an interdisciplinary artist and composer. I studied classical piano and percussion during my childhood years. I went on to study photography, film, video, sound, digital media and guitar in college and after. I’ve primarily been involved with the electroacoustic improv and chipmusic scenes, both in NYC. I’ve been improvising since 2005, and I’ve been writing chipmusic since 2011 under the moniker Corset Lore.
My work in foci + loci evolved out of the performance experience I garnered in the electroacoustic improv scene. My PS3 replaced my laptop. LBP replaced Ableton Live and VDMX. I think I felt LBP had more potential as a sonic medium because an interface could be created from scratch. Eventually, the game’s plasticity and setting helped to underscore its audiovisual aspect by revealing different relationships between sound and image.
3. Would you describe your work as a musical practice or an audio-visual performance practice?
FL: We have always felt that in game space, it is more interesting to show the mechanism that makes the sound as well as the image. These aspects are programmed, of course, but we try to avoid things happening “magically,” and instead like to give our process some transparency. So, while it is often musical, sound and image are inextricably linked. And, in certain cases, the use of a musical score (including game controller mappings) has been important to how our performance unfolds either through improvisation or timed audiovisual events. The environment is the musical instrument, so using the game controller is like playing a piano and wielding a construction tool at the same time. It has also been important in some contexts to perform in ‘Create Mode’ in order to simply give the audience visual access to LBP‘s programming backend. In this way, causal relationships between play and sound may be more firmly demonstrated.
4. There are many communities of practice that have adopted obsolete or contemporary technologies to create new, appropriative works and forms. Often, these communities recontextualize our/their relationships to the technologies they employ. To what extent do you see your work in relation to communities of appropriation-based creative expression?
CB: In the 80s-90s I was an active “culture jammer,” making politically motivated sound montage works for radio and performance and even dabbling in billboard alterations. Our corporate targets were selling chemical weapons and funding foreign wars while our media targets were apologists for state-sanctioned murder. Appropriating their communications (sound bites, video clips, broadcasting, billboards) was an effort to use their own tools against them. In the case of video game publishers and console manufacturers, there is much to criticize: sexist tropes in game narratives, skewed geo-political subtexts, anti-competitive policies, and more. Despite these troubling themes, the publishers (usually encouraged by the game developers) have occasionally supported the “pro-sumer” by opening up their game environments to modding and other creative uses. This is a very positive shift from, say, the position of the RIAA or the MPAA, where derivative works are much more frequently shut down. My previous game-related series, This Spartan Life, was more suited to tackling these issues. As for foci + loci, it’s hard to position work that uses extensively developed in-game tools as being “appropriative,” but I do think using a game engine to explore situationist ideas or the ontology of game space, as we do in our work, is a somewhat radical stance on art. We hope that it encourages more players to creatively express their ideas in similar ways.
TY: Currently, the ‘us vs. them’ attitude that characterized the 80s and 90s is no longer as relevant as it once was, because corporations are now giving artists technology for their own creative use. However, they undermine this sense of benevolence by claiming in their marketing that consumers could be the next Picasso if they buy said piece of technology—as if the tool were more important than the artist/artwork. Little Big Planet is marketed this way. On the whole, I think these issues complicate artists’ relationships with their media.
Often our work tends to be included in hacker community events, most recently the ‘Music Games Hackathon’ at Spotify (NYC), because, while we don’t necessarily hack the hardware or software, our approach is a conceptual hack or subversion. At this event, there were a variety of conceptual connections made between music, hacks and games; Double Dutch, John Zorn’s Game Pieces, Fluxus, Xenakis and Stockhausen were all compared to one another. I gave a talk at the Hackers on Planet Earth Conference in 2011 about John Cage, Marcel Duchamp, Richard Stallman and the free software movement. In Stallman’s essay ‘On Hacking,’ he cited John Cage’s ‘4’33″‘ as an early example of a music hack. In my discussion, I pointed to Marcel Duchamp, a big influence on Cage, whose readymades were essentially hacked objects through their appropriation and re-contextualization. I think this conceptual approach informs foci + loci’s current work.
[Editors’ note: Recently celebrating its 10th anniversary, This Spartan Life is a machinima talk show that takes place within the multiplayer game space of the first-person shooter Halo. This Spartan Life was created by Chris Burke in 2005. The show has featured luminaries including Malcolm McLaren, Peggy Ahwesh, and many more.]
5. You mention the ontological differences between game spaces and cinematic spaces. Can you clarify what you mean by this? Why is this such an important distinction, and how does it drive the work?
CB: We feel that there is a fundamental difference between the way space is represented in cinema through montage and the way it’s simulated in a video game engine. To use Eisenstein’s terms, film shots are “cells” which collide to synthesize an image in the viewer’s mind. Montage builds the filmic space shot by shot. Video game space, being a simulation, is coded mathematically and so has a certain facticity. We like the way the mechanized navigation of this continuous space can create a real-time composition. It’s what we call a “knowable” space.
6. Your practice is sound-based but relies heavily on the visual interface that you program in the gamespace. How do you view this relationship between the sonic and the visual in your work?
TY: LBP has more potential as a creative medium because it is audiovisual. The sound and image are inextricably linked in some cases, where one responds to the other. These aspects of interface function like the system of instruments we (or the game console) are driving. Since a camera movement can shape a sound within the space, the performance of an instrument can be codified to yield a certain effect. This goes back to our interest in the ontology of game space.
7. Sony (and other game developers) have been criticized for commodifying play as work – players produce and upload levels for free, and this free labour populates the Little Big Planet ecology. How would you position the way you use LBP in this power dynamic between player and IP owner?
CB: We are certainly more on the side of the makers than the publishers, but personally I think the “precarious labor” argument is a stretch with regard to LBP. Are jobs being replaced (the International Labor Rights definition of precarious work)? Has a single modder or machinima maker suggested they should be compensated by the game developer or publisher for their work? Compensation actually does happen occasionally. This Spartan Life was, for a short time, employed by Microsoft to make episodes of the show for the developer’s Halo Waypoint portal. I have known a number of creators from the machinima community who were hired by Bioware, Blizzard, Bungie, 343 Industries, and other developers. Then there’s the famous example of Minh Le and Jess Cliffe, who were hired by Valve to finish their Half-Life mod, Counter-Strike. However, compensating every modder and level maker would clearly not be a supportable model for developers or publishers.
Having said all that, I think our work does not exactly fit into Sony’s idea of what LBP users should be creating. We are resisting, in a sense, by providing a more art historical example of what gamers can do with this engine beyond making endless game remakes, side-scrollers and other overrepresented forms. We want players to open our levels and say “WTF is this? How do I play it?” Then we want them to go into create mode and author LBP levels that contain more of their own unique perspectives and less of the game.
[Corset Lore is Tamara Yadao’s chiptune project.]
8. What does it mean to improvise with new interfaces? Has anything ever gone horribly wrong during a moment of improvisation? Is there a tension between improvisation and culture jamming, or do the two fit naturally together?
CB: It’s clear that improvising with new interfaces is freer and sometimes this means our works in progress lack context and have to be honed to speak more clearly. This freedom encourages a spontaneous reaction to the systems we build that often provokes the exploitation of weaknesses and failure. Working within a paradigm of exploitation seems appropriate to us, considering our chosen medium. In play, there is always the possibility of failure, or in a sense, losing to the console. When we design interfaces within console and game parameters we build in fail-safes while also embracing mechanisms that encourage failure during our performance/play.
In an elemental way, culture jamming is a more targeted approach, whereas improvisation seems to operate with a looser agenda. Improvisation is already a critical approach to the structures of game narrative. Improvising with a video game opens up the definition of what a game space is, or can be.
All images used with permission by foci + loci.
foci + loci are Chris Burke and Tamara Yadao.
Chris Burke came to his interest in game art via his work as a composer, sound designer and filmmaker. As a sound designer and composer he has worked with, among others, William Pope L., Jeremy Blake, Don Was, Tom Morello and Björk. In 2005 he created This Spartan Life which transformed the video game Halo into a talk show. Within the virtual space of the game, he has interviewed McKenzie Wark, Katie Salen, Malcolm McLaren, the rock band OK Go and others. This and other work in game art began his interest in the unique treatment of space and time in video games. In 2012, he contributed the essay “Beyond Bullet Time” to the “Understanding Machinima” compendium (2013, Continuum).
Tamara Yadao is an interdisciplinary artist and composer who works with gaming technology, movement, sound, and video. In Fall 2009, at Diapason Gallery, she presented a lecture on “the glitch” called “Post-Digital Music: The Expansion of Artifacts in Microsound and the Aesthetics of Failure in Improvisation.” Current explorations include electro-acoustic composition in virtual space, 8-bit sound in antiquated game technologies (under the moniker Corset Lore), movement and radio transmission as a live performance tool and the spoken word. Her work has been performed and exhibited in Europe and North America, and in 2014, Tamara was the recipient of a commissioning grant by the Jerome Fund for New Music through the American Composers Forum.
REWIND! . . .If you liked this post, you may also dig:
Improvisation and Play in New Media, Games, and Experimental Sound Practices — Skot Deeming and Martin Zeilinger
Sounding Out! Podcast #41: Sound Art as Public Art — Salomé Voegelin
Sounding Boards and Sonic Styles — Josh Ottum