Education is never politically neutral. Many of us advocate for social justice when we’re outside of the classroom but struggle to continue that work inside as well, especially with issues that appear on the surface largely unrelated to our disciplines. This inaction maintains the centering of the white experience, continuing to normalize and prioritize it at the expense of all others. Marginalized voices remain marginalized. We don’t need our own students to be directly impacted by policies to advocate on behalf of those who are. This is work we all must do.
While social issues have made important inroads within musicology and ethnomusicology, they rarely make an appearance in music theory or composition, especially in a classroom setting. To begin these conversations, we must expand the scope beyond the purely technical and examine the ways in which music is a social and cultural phenomenon. Understanding how a triad functions, for example, is only part of the story. We must also recognize that any musical activity involves a network of people who might be engaged in any combination of producing, performing, buying, selling, listening, analyzing, teaching, institutionalizing, and so on. Discussing these networks means discussing their persistent systemic inequalities and power differentials, and understanding that these are social and not just musical issues. Cultivating this awareness is crucial in the development of our students as critical thinkers who can question the society in which they live, who can locate injustice and fight to advance social good. Abstract music theory is important, but music theory combined with a social awareness is vital.
Georgetown University hosts an annual Let Freedom Ring! initiative, a recurring project to honor the legacy of Dr. Martin Luther King. “Teach the Speech,” in particular, is a cross-campus curriculum project where interested faculty and staff incorporate that year’s selected work by Dr. King in our courses and workshops, sparking campus-wide conversations rooted in themes of social justice. The first time I joined the “Teach the Speech” efforts, I redesigned my basic theory class to include guiding principles from King’s entire body of work. In addition to covering the expected chords, scales, and other technical material, we discussed the disparity in representation faced by women and POC within music, viable modes of protest in music, and the possible roles of government sponsorship and censorship of artists. We rooted these issues in the real-life examples of the Grammys, the Women’s March, and the threats by the Trump administration to cut funding to the NEA and the NEH. Final projects based on these bigger-picture topics provided students a further opportunity to reflect on the ways in which these and similar topics manifest in their own lives, transcending a preoccupation with “notes on a page.”
My second time participating in the “Teach the Speech” initiative, I used a recording of Dr. King delivering “I’ve Been to the Mountaintop” as part of a module on sampling for my DJing and production class. Students had to create short tracks using this recording as the only permissible sound source. Anything resembling a kick, snare, hi-hat, melody, or harmony had to be constructed from a sample. Using something we don’t typically consider to be music as the sound source for creating music demonstrates the power of the studio and illustrates just how far creative slicing, dicing, and processing can take us. Beyond these important practical applications, though, the use of speech provides us with a framework for discussing why context matters. Do context and history always travel alongside the immediate acoustic phenomenon of sound? Can we identify something as “the music itself”? Through wrestling with these and related questions, students begin to understand sample-based composition as both a musical and a moral undertaking.
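The practical core of the assignment, turning speech into drums, comes down to slicing and enveloping. Here is a minimal, illustrative sketch of that idea in Python; the function, the envelope shape, and every parameter are arbitrary choices for demonstration, not the tools we actually use in class:

```python
import math

def one_shot(source, start, length, decay=40.0):
    """Carve a percussive one-shot out of an arbitrary sound source.

    source: a sequence of float samples (e.g. a decoded speech recording).
    Takes `length` samples beginning at `start` and imposes a fast
    exponential decay, so even a vowel or a breath reads as a drum hit.
    """
    fragment = source[start:start + length]
    n = max(len(fragment), 1)
    return [s * math.exp(-decay * i / n) for i, s in enumerate(fragment)]
```

Applied to a slice of a few dozen milliseconds, the steep decay turns a recognizable syllable into something closer to a kick or a snare, which is exactly the ambiguity the assignment asks students to confront.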
The process of sampling is largely a process of curation, involving a responsibility not just for the product but also for the source. If a student chooses to sample a large-enough portion of Dr. King’s speech, so that one can recognize words, phrases, even full sentences, then her choice includes the layers of extra-musical meaning attached to those words in addition to their musical qualities. “Violence,” for example, has a particular sonic profile and meaning that most listeners understand. How we actually interpret this word depends on many factors, including the context in which it is used in the original source, the identity of the speaker, and any audio processing that students might apply. The addition of distortion, for example, will influence both the word’s impact on the listener and its reception. The sampled word might be a fragment of a larger word, “violence” snipped from “nonviolence,” and never appear in its own right in the source. These and other complex issues involved in the process of sampling exist whether or not the student chooses to engage with them.
If the student samples an extremely small fragment of the Dr. King speech, obscuring the source and working with sound on an almost molecular level, then perhaps these questions go away. Can we still discuss the attendant connotations and denotations of indecipherable fractions of words or slices of the ambient hiss between the words? In this situation, is the origin of the sample still relevant for the work being done? When the ties connecting a heavily processed source to the finished product are untraceable, does it matter where we sampled from? Is white noise simply white noise?
Arriving at these kinds of questions is largely the point of the exercise. With a little deliberation, students realize that there is a very clear distinction between sampling the word “violence” from a speech by Trump and from a speech by MLK. There is a context, a lineage, and a history to samples that lives outside the phenomenon of pure sound, and this holds true even at the molecular level. This is crucial for students to understand, and its implications extend far beyond a music class.
We can, for example, ask students to consider the related question of whether it’s possible to separate art from the artist. Can we ever listen to pre-MAGA Kanye with the same ears? How do we interpret a post-MAGA Kanye song about uplift and resilience? What does it mean to watch a film that Harvey Weinstein had a major role in producing? A minor role? Moral dilemmas form a part of every media interaction we have, and similar questions run through other aspects of our lives. Can we continue to allow the misappropriation of Dr. King’s “I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin, but by the content of their character” without acknowledging the “radical” Dr. King? Can we reconcile a country built on expropriation, slavery, and genocide with one whose propaganda extols the principles of equality and freedom? These are indeed crucial lines of moral inquiry, and our pretending otherwise enables current systems to remain in place. Sampling King’s speech enables my students to engage with those lines of inquiry from an angle they have not considered before: at the level of sound.
This is work we all must do. Within academia, we need to combat injustice inside the classroom as well as outside to bend the arc of the moral universe toward justice. One way we can engage is through careful attention both to the examples we choose and the way we contextualize them. Students and educators alike need to understand the political nature of education that is too often a means of upholding the power structures within society that position whites at the top, and white males at the very top. These largely invisible systems have very real impacts on our lives, and the only way we can evolve to a more just society is by questioning their seeming inevitability. We must foster dialogue that transcends the classroom. We must engage with social problems. We must look beyond the accumulation of knowledge as an end in itself. We must, in short, do good. This is work we all must do.
Featured image: “Martin Luther King, Jr. Memorial” by Flickr user Cocoabiscuit, CC BY-NC-ND 2.0
Dave Molk teaches composition and theory at Georgetown University. He’s close friends with producer Olde Dirty Beathoven, a founding member of District New Music Coalition, and a board member of New Works for Percussion Project. Outside of music, Dave is a leader of CCON, an organization devoted to supporting undocumented communities in higher ed in the DMV. Find him online at https://www.molkmusic.com/ and @DaveMolkMusic.
REWIND!…If you liked this post, you may also dig:
A Listening Mind: Sound Learning in a Literature Classroom–Nicole Furlonge
Guest Editors’ Note: Welcome to Sounding Out!‘s December forum entitled “Sound, Improvisation and New Media Art.” This series explores the nature of improvisation and its relationship to appropriative play cultures within new media art and contemporary sound practice. Here, we engage directly with practitioners who either deploy or facilitate play and improvisation through their work in sonic new media cultures.
For our second piece in the series, we have interviewed New York City-based performance duo foci + loci (Chris Burke and Tamara Yadao). Treating the map editors in video games as virtual sound stages, foci + loci design immersive electroacoustic spaces that can be “played” as instruments. Chris and Tamara bring an interdisciplinary lens to their work, having worked in various sonic and game-related cultures including popular, electroacoustic, and new music, chiptune, machinima (filmmaking using video game engines), and more.
As curators, we have worked with foci + loci several times over the past few years, and have been fascinated with their treatment of popular video game environments as tools for visual and sonic exploration. Their work is highly referential, drawing on artistic legacies of the Futurists, the Surrealists, and the Situationists, among others. In this interview, we discuss the nature of their practice(s), and its relationship to play, improvisation, and the co-constitutive nature of their work in relation to capital and proprietary technologies.
— Guest Editors Skot Deeming and Martin Zeilinger
1. Can you take a moment to describe your practice to our readers? What kind of work do you produce, what kind of technologies are involved, and what is your creative process?
foci + loci mostly produce sonic and visual video game environments that are played in live performance. We have been using Little Big Planet (LBP) on the Playstation 3 for about 6 years.
When we perform, we normally have two PS3s running the game with a different map in each. We have experimented with other platforms such as Minecraft and we sometimes incorporate spoken word, guitars, effects pedals, multiple game controllers (more than 1 each) and Game Boys.
Our creative process proceeds from discussions about the ontological differences between digital space and cinematic space, as well as the freeform or experimental creation of music and sound art that uses game spaces as its medium. When we are in “Create Mode” in LBP, these concepts guide our construction of virtual machines, instruments and performance systems.
[Editor’s Note: Little Big Planet has several game modes. Create Mode is the space within the game where users can create their own LBP levels and environments. As players progress through LBP’s Story Mode, they unlock an increasing number of game assets, which can be used in Create Mode.]
2. Tell us about your background in music? Can you situate your current work in relation to the musical traditions and communities that you were previously a part of?
CB: I have composed for film, TV, video games and several albums (sample based, collage and electronic). Since 2001 I’ve been active in the chipmusic scene, under the name glomag. Around the same time I discovered machinima and you could say that my part in foci + loci is the marriage of these two interests, music and the visual. Chipmusic tends to be high energy and the draw centers around exciting live performances. It’s immensely fun and rewarding but I felt a need to step back and make work that drew from more cerebral pursuits. foci + loci is more about those pursuits for me: both my love of media theory and working with space and time.
TY: I’m an interdisciplinary artist and composer. I studied classical piano and percussion during my childhood years. I went on to study photography, film, video, sound, digital media and guitar in college and after. I’ve primarily been involved with the electroacoustic improv and chipmusic scenes, both in NYC. I’ve been improvising since 2005, and I’ve been writing chipmusic since 2011 under the moniker Corset Lore.
My work in foci + loci evolved out of the performance experience I garnered in the electroacoustic improv scene. My PS3 replaced my laptop. LBP replaced Ableton Live and VDMX. I think I felt LBP had more potential as a sonic medium because an interface could be created from scratch. Eventually, the game’s plasticity and setting helped to underscore its audiovisual aspect by revealing different relationships between sound and image.
3. Would you describe your work as a musical practice or an audio-visual performance practice?
FL: We have always felt that in game space, it is more interesting to show the mechanism that makes the sound as well as the image. These aspects are programmed, of course, but we try to avoid things happening “magically,” and instead like to give our process some transparency. So, while it is often musical, sound and image are inextricably linked. And, in certain cases, the use of a musical score (including game controller mappings) has been important to how our performance unfolds either through improvisation or timed audiovisual events. The environment is the musical instrument, so using the game controller is like playing a piano and wielding a construction tool at the same time. It has also been important in some contexts to perform in ‘Create Mode’ in order to simply give the audience visual access to LBP‘s programming backend. In this way, causal relationships between play and sound may be more firmly demonstrated.
4. There are many communities of practice that have adopted obsolete or contemporary technologies to create new, appropriative works and forms. Often, these communities recontextualize our/their relationships to the technologies they employ. To what extent do you see your work in relation to communities of appropriation-based creative expression?
CB: In the 80s-90s I was an active “culture jammer,” making politically motivated sound montage works for radio and performance and even dabbling in billboard alterations. Our corporate targets were selling chemical weapons and funding foreign wars while our media targets were apologists for state-sanctioned murder. Appropriating their communications (sound bites, video clips, broadcasting, billboards) was an effort to use their own tools against them. In the case of video game publishers and console manufacturers, there is much to criticize: sexist tropes in game narratives, skewed geo-political subtexts, anti-competitive policies, and more. Despite these troubling themes, the publishers (usually encouraged by the game developers) have occasionally supported the “pro-sumer” by opening up their game environments to modding and other creative uses. This is a very positive shift from, say, the position of the RIAA or the MPAA, where derivative works are much more frequently shut down. My previous game-related series, This Spartan Life, was more suited to tackling these issues. As for foci + loci, it’s hard to position work that uses extensively developed in-game tools as being “appropriative,” but I do think using a game engine to explore situationist ideas or the ontology of game space, as we do in our work, is a somewhat radical stance on art. We hope that it encourages more players to creatively express their ideas in similar ways.
TY: Currently, the ‘us vs. them’ attitude that characterized the 80s and 90s is no longer as relevant as it once was because corporations are now giving artists technology for their own creative use. However, they undermine this sense of benevolence by claiming in their marketing that consumers could be the next Picasso if they buy said piece of technology, as if the tool were more important than the artist/artwork. Little Big Planet is marketed this way. On the whole, I think these issues complicate artists’ relationships with their media.
Often our work tends to be included in hacker community events, most recently the “Music Games Hackathon” at Spotify (NYC), because, while we don’t necessarily hack the hardware or software, our approach is a conceptual hack or subversion. At this event, there were a variety of conceptual connections made between music, hacks and games; Double Dutch, John Zorn’s Game Pieces, Fluxus, Xenakis and Stockhausen were all compared to one another. I gave a talk at the Hackers on Planet Earth Conference in 2011 about John Cage, Marcel Duchamp, Richard Stallman and the free software movement. In Stallman’s essay “On Hacking,” he cited John Cage’s “4′33″” as an early example of a music hack. In my discussion, I pointed to Marcel Duchamp, a big influence on Cage, whose readymades were essentially hacked objects through their appropriation and re-contextualization. I think this conceptual approach informs foci + loci’s current work.
[Editors’ note: Recently celebrating its 10th anniversary, This Spartan Life is a machinima talk show that takes place within the multiplayer game space of the First Person Shooter game Halo. This Spartan Life was created by Chris Burke in 2005. The show has featured luminaries including Malcolm McLaren, Peggy Ahwesh, and many more.]
5. You mention the ontological differences between game spaces and cinematic spaces. Can you clarify what you mean by this? Why is this such an important distinction, and how does it drive the work?
CB: We feel that there is a fundamental difference in the way space is represented in cinema through montage and the way it’s simulated in a video game engine. To use Eisenstein’s terms, film shots are “cells” which collide to synthesize an image in the viewer’s mind. Montage builds the filmic space shot by shot. Video game space, being a simulation, is coded mathematically and so has a certain facticity. We like the way the mechanized navigation of this continuous space can create a real time composition. It’s what we call a “knowable” space.
6. Your practice is sound-based but relies heavily on the visual interface that you program in the gamespace. How do you view this relationship between the sonic and the visual in your work?
TY: LBP has more potential as a creative medium because it is audiovisual. The sound and image are inextricably linked in some cases, where one responds to the other. These aspects of interface function like the system of instruments we (or the game console) are driving. Since a camera movement can shape a sound within the space, the performance of an instrument can be codified to yield a certain effect. This goes back to our interest in the ontology of game space.
7. Sony (and other game developers) have been criticized for commodifying play as work – players produce and upload levels for free, and this free labour populates the Little Big Planet ecology. How would you position the way you use LBP in this power dynamic between player and IP owner?
CB: We are certainly more on the side of the makers than the publishers, but personally I think the “precarious labor” argument is a stretch with regard to LBP. Are jobs being replaced (the International Labor Rights definition of precarious work)? Has a single modder or machinima maker suggested they should be compensated by the game developer or publisher for their work? Compensation actually does happen occasionally. This Spartan Life was, for a short time, employed by Microsoft to make episodes of the show for the developer’s Halo Waypoint portal. I have known a number of creators from the machinima community who were hired by Bioware, Blizzard, Bungie, 343 Industries and other developers. Then there’s the famous example of Minh Le and Jess Cliffe, who were hired by Valve to finish their Half-Life mod, Counter-Strike. However, compensating every modder and level maker would clearly not be a supportable model for developers or publishers.
Having said all that, I think our work does not exactly fit into Sony’s idea of what LBP users should be creating. We are resisting, in a sense, by providing a more art historical example of what gamers can do with this engine beyond making endless game remakes, side-scrollers and other overrepresented forms. We want players to open our levels and say “WTF is this? How do I play it?” Then we want them to go into create mode and author LBP levels that contain more of their own unique perspectives and less of the game.
[Corset Lore is Tamara Yadao’s chiptune project.]
8. What does it mean to improvise with new interfaces? Has anything ever gone horribly wrong during a moment of improvisation? Is there a tension between improvisation and culture jamming, or do the two fit naturally together?
CB: It’s clear that improvising with new interfaces is freer and sometimes this means our works in progress lack context and have to be honed to speak more clearly. This freedom encourages a spontaneous reaction to the systems we build that often provokes the exploitation of weaknesses and failure. Working within a paradigm of exploitation seems appropriate to us, considering our chosen medium. In play, there is always the possibility of failure, or in a sense, losing to the console. When we design interfaces within console and game parameters we build in fail-safes while also embracing mechanisms that encourage failure during our performance/play.
In an elemental way, culture jamming is a more targeted approach, whereas improvisation seems to operate with a looser agenda. Improvisation is already a critical approach to the structures of game narrative. Improvising with a video game opens up the definition of what a game space is, or can be.
All images used with permission by foci + loci.
foci + loci are Chris Burke and Tamara Yadao.
Chris Burke came to his interest in game art via his work as a composer, sound designer and filmmaker. As a sound designer and composer he has worked with, among others, William Pope L., Jeremy Blake, Don Was, Tom Morello and Björk. In 2005 he created This Spartan Life which transformed the video game Halo into a talk show. Within the virtual space of the game, he has interviewed McKenzie Wark, Katie Salen, Malcolm McLaren, the rock band OK Go and others. This and other work in game art began his interest in the unique treatment of space and time in video games. In 2012, he contributed the essay “Beyond Bullet Time” to the “Understanding Machinima” compendium (2013, Continuum).
Tamara Yadao is an interdisciplinary artist and composer who works with gaming technology, movement, sound, and video. In Fall 2009, at Diapason Gallery, she presented a lecture on “the glitch” called “Post-Digital Music: The Expansion of Artifacts in Microsound and the Aesthetics of Failure in Improvisation.” Current explorations include electro-acoustic composition in virtual space, 8-bit sound in antiquated game technologies (under the moniker Corset Lore), movement and radio transmission as a live performance tool and the spoken word. Her work has been performed and exhibited in Europe and North America, and in 2014, Tamara was the recipient of a commissioning grant by the Jerome Fund for New Music through the American Composers Forum.
REWIND! . . .If you liked this post, you may also dig:
Improvisation and Play in New Media, Games, and Experimental Sound Practices — Skot Deeming and Martin Zeilinger
Sounding Out! Podcast #41: Sound Art as Public Art — Salomé Voegelin
Sounding Boards and Sonic Styles — Josh Ottum
John Cage’s “Music of Changes,” which was composed using chance operations drawn from the I Ching.
I perform and write music, normally acoustic, and usually for a single guitar, harmonica, and voice. I am traditional in my choice of instruments; they are basically “old” technology. On the other hand, I am also fascinated by the idea of robotics in music: the idea of artificial, autonomous music creators that work alongside human musicians. John Cage used the I Ching to make choices about musical form in some of his compositions, including “Music of Changes” above, which has some of that flavor. It is music that is composed, not just performed, by partially artificial means: by a non-human actor, the I Ching.
In my work as an economist, I develop autonomous software programs that simulate economic actors in a process called agent-based modeling: the construction of independent pieces of software, which simulate real agents in the world, that interact and form patterns that transcend any single agent’s behavior. Recently I realized that agent-based modeling might be applied to the construction of music: creating individual artificial decision makers that might together construct a piece of music that transcends what any one of them can do.
Think of a swarm of bees or a school of fish. Once biologists thought that schools of fish had a “leader fish,” a single fish that would direct how the school would move. Biologists also once thought that the queen bee was the “leader” of the hive, that she directed the behavior of the bees in the hive. Both of these beliefs have been shown to be false. There is no leader in a school of fish. On the contrary, each fish responds to local information, and the coordination that arises at the school level emerges from this system of individual choices. The same goes for bees: the queen plays a part in the hive, like all the bees play parts, but there is no sense in which she directs the others. There is no bee that is in charge.
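Those local rules are simple enough to sketch in code. The toy model below is illustrative only (cohesion-based steering with made-up parameters, not a reproduction of any published swarm system): each agent steers toward the center of only its nearby neighbors, yet the group as a whole tightens into a coherent cluster with no leader anywhere.

```python
import math
import random

def step(agents, radius=15.0, cohesion=0.1, speed=1.0):
    """Advance a leaderless swarm by one tick.

    Each agent looks only at neighbors within `radius` and bends its
    heading toward their local center of mass. No agent sees the whole
    group, and no agent is in charge.
    """
    updated = []
    for a in agents:
        neighbors = [b for b in agents if b is not a and
                     math.hypot(b['x'] - a['x'], b['y'] - a['y']) < radius]
        vx, vy = a['vx'], a['vy']
        if neighbors:
            cx = sum(b['x'] for b in neighbors) / len(neighbors)
            cy = sum(b['y'] for b in neighbors) / len(neighbors)
            vx += cohesion * (cx - a['x'])  # pull toward the local center
            vy += cohesion * (cy - a['y'])
        norm = math.hypot(vx, vy) or 1.0    # keep a constant cruising speed
        vx, vy = speed * vx / norm, speed * vy / norm
        updated.append({'x': a['x'] + vx, 'y': a['y'] + vy, 'vx': vx, 'vy': vy})
    return updated

def mean_spread(agents):
    """Average distance of agents from the group's center of mass."""
    cx = sum(a['x'] for a in agents) / len(agents)
    cy = sum(a['y'] for a in agents) / len(agents)
    return sum(math.hypot(a['x'] - cx, a['y'] - cy) for a in agents) / len(agents)

random.seed(0)
swarm = [{'x': random.uniform(0, 20), 'y': random.uniform(0, 20),
          'vx': random.uniform(-1, 1), 'vy': random.uniform(-1, 1)}
         for _ in range(30)]
before = mean_spread(swarm)
for _ in range(100):
    swarm = step(swarm)
after = mean_spread(swarm)
```

In runs of this sketch, the swarm’s spread shrinks over time even though no individual agent ever sees or steers the whole group, which is the point: the school-level order is emergent, not directed.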
Here is a video of my colleague Hiroki Sayama’s “Swarm Chemistry” in action. The specks you see on the screen are individual agents, dumb agents, who react to their environment, which is other local agents. There are no leaders here; there is only group behavior.
In this clip, you can see the swarms that emerge. The music is incidental in this clip, not a result of the swarm behavior.
I have begun an experiment in agent-based sonic composition with the idea of emergent behavior and agent-based modeling in mind. In this video I show my initial foray into this world:
The agents in this video are small triangles that seek a well, and eventually learn (sometimes more effectively, sometimes less effectively) where that well is. What I have done to add a sonic component is to assign each agent an instrument, and assign the agent’s proximity to the well to the pitch of the note they create.
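That mapping, one instrument per agent with proximity to the well driving pitch, can be written as a single function. The sketch below is illustrative only, not the actual code behind the video; the MIDI range and the choice that closer means higher are arbitrary assumptions:

```python
def distance_to_midi(distance, max_distance=100.0, low=36, high=84):
    """Map an agent's distance from the well to a MIDI note number.

    Distance 0 (at the well) yields `high`; anything at or beyond
    `max_distance` yields `low`. All of these bounds are illustrative.
    """
    d = min(max(distance, 0.0), max_distance)
    closeness = 1.0 - d / max_distance  # 1.0 at the well, 0.0 far away
    return round(low + closeness * (high - low))
```

As an agent homes in on the well, its note climbs; an agent that searches badly wanders in pitch, so the sound traces each agent’s effectiveness in pursuing its goal.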
“Random” sounds created by a computer are nothing new. And, frankly, I find them uninteresting. No depth, no humanity. But I think agent-based sonic composition might be something different. These agents are not simply random (although, indeed, their behavior has a random, or seemingly random, component). They are goal-seeking, they are purposeful, and the sound they generate is a function of their effectiveness and path in pursuing that goal. I think this purposefulness can be heard in the sound they create. There certainly isn’t a melody, but there is a story being told, some kind of struggle being documented.
Swarms, too, are not simply random. Though swarms may be composed of elements that have randomness in them, they are also structured. If music is sound with structure, and complex systems science is the study of emergent structure, then a genuinely interesting music might emerge from a well-constructed agent-based approach to sonic composition.
I’m not convinced what I have is there yet. There are no interesting interactions between these agents, and there is not a structure to their sound that has depth – yet. Perhaps the next step is to tie the goals of the agents more explicitly to music making. Perhaps there can be a melodic agent that moves on a predetermined path, while the other agents try to follow that agent, so that the sound that comes out documents their struggle. Maybe the agents’ notes should be restricted to scales, so that the result sounds less chromatic. Or, perhaps, as I suggest in the video, there can be some agents that control rhythm and others that control pitch.
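The scale-restriction idea in particular is easy to prototype: snap each agent’s raw pitch to the nearest tone of a chosen scale before it sounds. Here is a quick illustrative sketch (C major by default, with a simple nearest-tone rule that resolves ties downward; these are arbitrary choices):

```python
def quantize_to_scale(midi_note, scale=(0, 2, 4, 5, 7, 9, 11), root=0):
    """Snap a MIDI note to the nearest pitch in a scale.

    `scale` lists scale degrees as semitone offsets from `root`
    (the default is C major). Ties between two equally near scale
    tones resolve to the lower one.
    """
    best = None
    # A window of +/- 6 semitones always contains at least one scale tone.
    for candidate in range(midi_note - 6, midi_note + 7):
        if (candidate - root) % 12 in scale:
            if best is None or abs(candidate - midi_note) < abs(best - midi_note):
                best = candidate
    return best
```

Routing each agent’s output through a filter like this before playback would keep the agents’ trajectories audible while making the ensemble sound less chromatic.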
To be clear: I wouldn’t just listen to this. I don’t know if I would call it “music” yet. But I think it may get there some day.
Andreas Duus Pape is an economist and a musician. As an economist, he studies microeconomic theory and game theory—that is, the analysis of strategy and the construction of models to understand social phenomena—and the theory of individual choice, including how to forecast the behavior of agents who construct models of social phenomena. As a musician, he plays folk in the tradition of Dylan and Guthrie, blues in the tradition of Williamson and McTell, and country in the tradition of Nelson and Cash. He plays acoustic guitar, harmonica, and voice: although the technology of his musical production is a hundred years old, his ideas are often quite modern, and he covers songs as old as early last century and as recent as this one. Pape is also an Assistant Professor in the Department of Economics at Binghamton University, where he teaches microeconomic theory at the undergraduate and graduate level. He is a faculty member of the Collective Dynamics of Complex Systems (CoCo) Research Group: http://coco.binghamton.edu, and considers complex systems and agent-based modeling to be central to his research.
Three weeks ago I got to meet one of my musical heroes. I went to an 8-bit game design workshop at NYU focused on programming games for developing nations. It was organized into a series of tutorials, each focusing on a different element of the game design process. The tutorial on music design was hosted by 8bitpeoples’ Nullsleep, Jeremiah Johnson, one of my two favorite chiptune artists! As he instructed the room on the finer points of using the Famitracker software to script authentic 8-bit music, I was struck by some of the nuance in his process. Creativity is a messy and fluid endeavor where mistakes and successes remain ambiguous until they can be contextualized within a final draft.
When Jeremiah programmed the Famitracker, his instrument, I watched as he pushed notes around, made arbitrary decisions, and deliberately turned his attention from some tasks which became too arduous. His demo was still awesome, but I was struck by how unstructured his creative process seemed. Famitracker is a music scripting instrument; the notes are organized and prearranged, yet despite this formal quality there remains a good deal of negotiation between the artist and its interface. I had forever stereotyped music composition as a fairly sterile and surgical art, far away from the authentic feedback between an artist and their instrument. I had always imagined live music as the moment of the authentic, and pigeonholed studio compositions as somehow stale. Watching Jeremiah work helped me to see that all artists hold a unique relationship to their instrument no matter how mechanical, electronic, or mundane that instrument might seem. Even static compositions bring with them history, negotiation, and risk. These were liberating ideas; when it came time for me to compose a song on Famitracker, I was able to rip in and rapidly sift ideas from my mind to the canvas.
Eventually, I tried to program in a portamento effect (think: the keyboard intro to The Cars’ “My Best Friend’s Girl”), and needed some help. Jeremiah came over and started to fiddle with the options, but he was having trouble getting it to work as well. It took about five minutes of trial and error before we figured out how to get the effect just right. These mistakes, bad notes, even misspelled words are all part of the creative process, and they inscribe themselves into the larger work, even if they only remain in spirit. Understanding these hiccups and nuances let me view composition from a new perspective where I could recognize all of the skirmishes and textures which have been made invisible in the final product. Live music is often constructed as a space of possibility, where these odd textures and negotiations are given the opportunity to appear. How is this presumption challenged if studio compositions can be read as a series of mistakes leading to an arbitrary but coherent whole?
My big song is called Clever Fishies (click to hear it!); it will be the soundtrack to a game called Math Shark.
Check out Nullsleep’s Her Lazer Light Eyes to hear why I’m so psyched!