Guest Editors’ Note: Welcome to Sounding Out!‘s December forum entitled “Sound, Improvisation and New Media Art.” This series explores the nature of improvisation and its relationship to appropriative play cultures within new media art and contemporary sound practice. Here, we engage directly with practitioners, who either deploy or facilitate play and improvisation through their work in sonic new media cultures.
For our second piece in the series, we have interviewed the New York City-based performance duo foci + loci (Chris Burke and Tamara Yadao). Treating the map editors in video games as virtual sound stages, foci + loci design immersive electroacoustic spaces that can be “played” as instruments. Chris and Tamara bring an interdisciplinary lens to their work, having worked in various sonic and game-related cultures including popular, electroacoustic and new music, chiptune, machinima (filmmaking using video game engines), and more.
As curators, we have worked with foci + loci several times over the past few years, and have been fascinated by their treatment of popular video game environments as tools for visual and sonic exploration. Their work is highly referential, drawing on the artistic legacies of the Futurists, the Surrealists, and the Situationists, among others. In this interview, we discuss the nature of their practice(s) and its relationship to play, improvisation, and the co-constitutive nature of their work in relation to capital and proprietary technologies.
— Guest Editors Skot Deeming and Martin Zeilinger
1. Can you take a moment to describe your practice to our readers? What kind of work do you produce, what kind of technologies are involved, and what is your creative process?
foci + loci mostly produce sonic and visual video game environments that are played in live performance. We have been using Little Big Planet (LBP) on the Playstation 3 for about 6 years.
When we perform, we normally have two PS3s running the game with a different map in each. We have experimented with other platforms such as Minecraft and we sometimes incorporate spoken word, guitars, effects pedals, multiple game controllers (more than 1 each) and Game Boys.
Our creative process proceeds from discussions about the ontological differences between digital space and cinematic space, as well as the freeform or experimental creation of music and sound art that uses game spaces as its medium. When we are in “Create Mode” in LBP, these concepts guide our construction of virtual machines, instruments and performance systems.
[Editors’ Note: Little Big Planet has several game modes. Create Mode is the space within the game where users can create their own LBP levels and environments. As players progress through LBP’s Story Mode, they unlock an increasing number of game assets, which can be used in Create Mode.]
2. Tell us about your background in music? Can you situate your current work in relation to the musical traditions and communities that you were previously a part of?
CB: I have composed for film, TV, video games and several albums (sample-based, collage and electronic). Since 2001 I’ve been active in the chipmusic scene, under the name glomag. Around the same time I discovered machinima, and you could say that my part in foci + loci is the marriage of these two interests: music and the visual. Chipmusic tends to be high energy, and the draw centers on exciting live performances. It’s immensely fun and rewarding, but I felt a need to step back and make work that drew from more cerebral pursuits. foci + loci is more about those pursuits for me: both my love of media theory and working with space and time.
TY: I’m an interdisciplinary artist and composer. I studied classical piano and percussion during my childhood years. I went on to study photography, film, video, sound, digital media and guitar in college and after. I’ve primarily been involved with the electroacoustic improv and chipmusic scenes, both in NYC. I’ve been improvising since 2005, and I’ve been writing chipmusic since 2011 under the moniker Corset Lore.
My work in foci + loci evolved out of the performance experience I garnered in the electroacoustic improv scene. My PS3 replaced my laptop. LBP replaced Ableton Live and VDMX. I think I felt LBP had more potential as a sonic medium because an interface could be created from scratch. Eventually, the game’s plasticity and setting helped to underscore its audiovisual aspect by revealing different relationships between sound and image.
3. Would you describe your work as a musical practice or an audio-visual performance practice?
FL: We have always felt that in game space, it is more interesting to show the mechanism that makes the sound as well as the image. These aspects are programmed, of course, but we try to avoid things happening “magically,” and instead like to give our process some transparency. So, while it is often musical, sound and image are inextricably linked. And, in certain cases, the use of a musical score (including game controller mappings) has been important to how our performance unfolds either through improvisation or timed audiovisual events. The environment is the musical instrument, so using the game controller is like playing a piano and wielding a construction tool at the same time. It has also been important in some contexts to perform in ‘Create Mode’ in order to simply give the audience visual access to LBP‘s programming backend. In this way, causal relationships between play and sound may be more firmly demonstrated.
4. There are many communities of practice that have adopted obsolete or contemporary technologies to create new, appropriative works and forms. Often, these communities recontextualize our/their relationships to the technologies they employ. To what extent do you see your work in relation to communities of appropriation-based creative expression?
CB: In the 80s-90s I was an active “culture jammer,” making politically motivated sound montage works for radio and performance and even dabbling in billboard alterations. Our corporate targets were selling chemical weapons and funding foreign wars while our media targets were apologists for state-sanctioned murder. Appropriating their communications (sound bites, video clips, broadcasting, billboards) was an effort to use their own tools against them. In the case of video game publishers and console manufacturers, there is much to criticize: sexist tropes in game narratives, skewed geo-political subtexts, anti-competitive policies, and more. Despite these troubling themes, the publishers (usually encouraged by the game developers) have occasionally supported the “pro-sumer” by opening up their game environments to modding and other creative uses. This is a very positive shift from, say, the position of the RIAA or the MPAA, where derivative works are much more frequently shut down. My previous game-related series, This Spartan Life, was more suited to tackling these issues. As for foci + loci, it’s hard to position work that uses extensively developed in-game tools as being “appropriative,” but I do think using a game engine to explore situationist ideas or the ontology of game space, as we do in our work, is a somewhat radical stance on art. We hope that it encourages more players to creatively express their ideas in similar ways.
TY: Currently, the ‘us vs. them’ attitude that characterized the 80s and 90s is no longer as relevant as it once was, because corporations now give artists technology for their own creative use. However, they undermine this sense of benevolence by claiming in their marketing that consumers could be the next Picasso if they buy said piece of technology—as if the tool were more important than the artist or artwork. Little Big Planet is marketed this way. On the whole, I think these issues complicate artists’ relationships with their media.
Often our work tends to be included in hacker community events, most recently the ‘Music Games Hackathon’ at Spotify (NYC), because, while we don’t necessarily hack the hardware or software, our approach is a conceptual hack or subversion. At this event, there were a variety of conceptual connections made between music, hacks and games; Double Dutch, John Zorn’s Game Pieces, Fluxus, Xenakis and Stockhausen were all compared to one another. I gave a talk at the Hackers on Planet Earth Conference in 2011 about John Cage, Marcel Duchamp, Richard Stallman and the free software movement. In Stallman’s essay ‘On Hacking,’ he cited John Cage’s ‘4’33″‘ as an early example of a music hack. In my discussion, I pointed to Marcel Duchamp, a big influence on Cage, whose readymades were essentially hacked objects through their appropriation and re-contextualization. I think this conceptual approach informs foci + loci’s current work.
[Editors’ note: Recently celebrating its 10th anniversary, This Spartan Life is a machinima talk show that takes place within the multiplayer game space of the first-person shooter Halo. This Spartan Life was created by Chris Burke in 2005. The show has featured luminaries including Malcolm McLaren, Peggy Ahwesh, and many more.]
5. You mention the ontological differences between game spaces and cinematic spaces. Can you clarify what you mean by this? Why is this such an important distinction, and how does it drive the work?
CB: We feel that there is a fundamental difference between the way space is represented in cinema through montage and the way it’s simulated in a video game engine. To use Eisenstein’s terms, film shots are “cells” which collide to synthesize an image in the viewer’s mind. Montage builds the filmic space shot by shot. Video game space, being a simulation, is coded mathematically and so has a certain facticity. We like the way the mechanized navigation of this continuous space can create a real time composition. It’s what we call a “knowable” space.
6. Your practice is sound-based but relies heavily on the visual interface that you program in the gamespace. How do you view this relationship between the sonic and the visual in your work?
TY: LBP has more potential as a creative medium because it is audiovisual. The sound and image are inextricably linked in some cases, where one responds to the other. These aspects of interface function like the system of instruments we (or the game console) are driving. Since a camera movement can shape a sound within the space, the performance of an instrument can be codified to yield a certain effect. This goes back to our interest in the ontology of game space.
7. Sony (and other game developers) have been criticized for commodifying play as work – players produce and upload levels for free, and this free labour populates the Little Big Planet ecology. How would you position the way you use LBP in this power dynamic between player and IP owner?
CB: We are certainly more on the side of the makers than the publishers, but personally I think the “precarious labor” argument is a stretch with regard to LBP. Are jobs being replaced (the International Labor Rights definition of precarious work)? Has a single modder or machinima maker suggested they should be compensated by the game developer or publisher for their work? Compensation actually does happen occasionally. This Spartan Life was, for a short time, employed by Microsoft to make episodes of the show for the developer’s Halo Waypoint portal. I have known a number of creators from the machinima community who were hired by Bioware, Blizzard, Bungie, 343 Industries and other developers. Then there’s the famous example of Minh Le and Jess Cliffe, who were hired by Valve to finish their Half-Life mod, Counter-Strike. However, compensating every modder and level maker would clearly not be a supportable model for developers or publishers.
Having said all that, I think our work does not exactly fit into Sony’s idea of what LBP users should be creating. We are resisting, in a sense, by providing a more art historical example of what gamers can do with this engine beyond making endless game remakes, side-scrollers and other overrepresented forms. We want players to open our levels and say “WTF is this? How do I play it?” Then we want them to go into create mode and author LBP levels that contain more of their own unique perspectives and less of the game.
[Corset Lore is Tamara Yadao’s chiptune project.]
8. What does it mean to improvise with new interfaces? Has anything ever gone horribly wrong during a moment of improvisation? Is there a tension between improvisation and culture jamming, or do the two fit naturally together?
CB: It’s clear that improvising with new interfaces is freer and sometimes this means our works in progress lack context and have to be honed to speak more clearly. This freedom encourages a spontaneous reaction to the systems we build that often provokes the exploitation of weaknesses and failure. Working within a paradigm of exploitation seems appropriate to us, considering our chosen medium. In play, there is always the possibility of failure, or in a sense, losing to the console. When we design interfaces within console and game parameters we build in fail-safes while also embracing mechanisms that encourage failure during our performance/play.
In an elemental way, culture jamming is a more targeted approach, whereas improvisation seems to operate with a looser agenda. Improvisation is already a critical approach to the structures of game narrative. Improvising with a video game opens up the definition of what a game space is, or can be.
All images used with permission by foci + loci.
foci + loci are Chris Burke and Tamara Yadao.
Chris Burke came to his interest in game art via his work as a composer, sound designer and filmmaker. As a sound designer and composer he has worked with, among others, William Pope L., Jeremy Blake, Don Was, Tom Morello and Björk. In 2005 he created This Spartan Life which transformed the video game Halo into a talk show. Within the virtual space of the game, he has interviewed McKenzie Wark, Katie Salen, Malcolm McLaren, the rock band OK Go and others. This and other work in game art began his interest in the unique treatment of space and time in video games. In 2012, he contributed the essay “Beyond Bullet Time” to the “Understanding Machinima” compendium (2013, Continuum).
Tamara Yadao is an interdisciplinary artist and composer who works with gaming technology, movement, sound, and video. In Fall 2009, at Diapason Gallery, she presented a lecture on “the glitch” called “Post-Digital Music: The Expansion of Artifacts in Microsound and the Aesthetics of Failure in Improvisation.” Current explorations include electro-acoustic composition in virtual space, 8-bit sound in antiquated game technologies (under the moniker Corset Lore), movement and radio transmission as a live performance tool and the spoken word. Her work has been performed and exhibited in Europe and North America, and in 2014, Tamara was the recipient of a commissioning grant by the Jerome Fund for New Music through the American Composers Forum.
REWIND! . . .If you liked this post, you may also dig:
Improvisation and Play in New Media, Games, and Experimental Sound Practices — Skot Deeming and Martin Zeilinger
Sounding Out! Podcast #41: Sound Art as Public Art — Salomé Voegelin
Sounding Boards and Sonic Styles — Josh Ottum
This is article 2.0 in Sounding Out!‘s April Forum on “Sound and Technology.” Every Monday this month, you’ll be hearing new insights on this age-old pairing from the likes of Sounding Out! veterano Aaron Trammell along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks will share their thinking about everything from Auto-tune to techie manifestos. So, turn on your quantizing for Sounding Out! and enjoy today’s supersonic in-depth look at sampling from SO! Regular Writer Primus Luta. –JS, Editor-in-Chief
My favorite sample-based composition? No question about it: “Stroke of Death” by Ghostface and produced by The RZA.
As the story supposedly goes, RZA was playing records in the studio when he put on the Harlem Underground Band’s album. It is a go-to album in a sample-based composer’s collection because of its open drum breaks. One such break appears in the band’s cover of Bill Withers’s “Ain’t No Sunshine”, notably used by A Tribe Called Quest on “Everything is Fair.”
RZA, a known break beat head, listened as the song approached the open drums, when the unthinkable happened: a scratch in his copy of the record. Suddenly, right before the open drums dropped, the vinyl created its own loop, one that caught RZA’s ear. He recorded it right there and started crafting the beat.
This sample is the only source material for the track. RZA throws a slight turntable backspin in for emphasis, adding to the jarring feel that drives the beat. That backspin provides a pitch shift for the horn that dominates the sample, changing it from a single sound into a three-note melody. RZA also captures some of the open drums so that the track can breathe a bit before coming back to the jarring loop. As accidental as the discovery may have been, it is a very precisely arranged track, tailor-made for the attacking vocals of Ghostface, Solomon Childs, and the RZA himself.
“Stroke of Death” exemplifies how transformative sample-based composition can be. Short of knowing the source material, the sample is hard to identify. You cannot figure out that the original composition is Withers’s “Ain’t No Sunshine” from the one note RZA sampled, especially considering the note has been manipulated into a three-note melody that appears nowhere in either rendition of the composition. It is sample-based, yes, but also completely original.
Classifying a composition like this as a ‘happy accident’ downplays just how important the ear is in sample-based composition, particularly on the transformative end of the spectrum. J Dilla once said finding the mistakes in a record excited him and that it was often those mistakes he would try to capture in his production style. Working with vinyl as a source went a long way in that regard, as each piece of vinyl had the potential to have its own physical characteristics that affected what one heard. It’s hard to imagine “Stroke of Death” being inspired from a digital source. While digital files can have their own glitches, one that would create an internal loop on playback would be rare.
There has been a change in the sound of sampling over the past few decades. It is subtle but still perceptible; one can hear it even without knowing exactly what one is hearing. It is akin to the difference between hearing a blues man play and hearing a music student play the blues. Technically both are still the blues, but the music student misses all of the blue notes.
The ‘blue notes’ of the blues were those aspects of the music that could not be transcribed yet were directly related to how the song conveyed emotion. It might be the fact that the instrument was not fully in tune, or the way certain notes were bent but others were not, it could even be the way a finger hit the body of a guitar right after the string was strummed. It goes back farther than the blues and ultimately is not exclusive to the African American tradition from which the phrase derives; most folk music traditions around the world have parallels. “The Rite of Spring” can be understood as Stravinsky ‘sampling’ the blue notes of Transylvanian folk music. In many regards sample-based composing is a modern folk tradition, so it should come as no surprise that it has its own blue notes.
The sample-based composition work of today is still sampling, but much of it lacks the blue notes that helped define the golden era of the art. I attribute this discrepancy to the evolution of technology over the last two decades. Many of the things that could be understood as the blue notes of sampling were merely ways around the limits of the technology. In the same way, the blue notes of most folk music happened when the emotion went beyond the standards of the instrument (or, alternatively, the limits imposed upon it by the literal analysis of western theory). By looking at how the technology has evolved we can see how the blue notes of sampling are being lost as key limitations are overcome by “advances.”
First, let’s consider the E-Mu SP-1200, which is still thought to be the most definitive sounding sampler for hip-hop styled sample-based compositions, particularly related to drums. The primary reason for this is its low-resolution sampling and conversion rates. For the SP-1200 the Analog to Digital (A/D) and Digital to Analog (D/A) converters were 12-bit at a sample rate of 26.04 kHz (CD quality is 16-bit 44.1 kHz). No matter what quality the source material, there would be a loss in quality once it was sampled into and played out of the SP-1200. This loss proved desirable for drum sounds particularly when combined with the analog filtering available in the unit, giving them a grit that reflected the environments from which the music was emerging.
On top of this, individual samples could only be 2.5 seconds long, with a total available sample time of only 10 seconds. While the sample and conversion rates directly affected the sound of the samples, the time limits drove the way that composers sampled. Instead of finding loops, beatmakers focused on individual sounds or phrases, using the sequencer to arrange those elements into loops. There were workarounds for the sample time constraints; for example, playing a 33-rpm record at 45 rpm to sample, then pitching it back down post sampling was a quick way to increase the sample time. Doing this would further reduce the sample rate, but again, that could be sonically appealing.
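These two limitations can be sketched in code. The function below is a naive emulation of the SP-1200’s character (sample-and-hold downsampling plus 12-bit quantization, ignoring the real hardware’s analog filtering and converter quirks), and the arithmetic after it shows how much extra material the 45-rpm trick squeezed into the 2.5-second buffer. The function name and the crude resampling method are illustrative assumptions, not the machine’s actual signal path.

```python
import numpy as np

def sp1200_crush(signal, src_rate=44100, dst_rate=26040, bits=12):
    """Rough sketch of the SP-1200's lo-fi character: crude sample-rate
    reduction followed by 12-bit amplitude quantization."""
    # Sample-and-hold downsampling with no anti-alias filter,
    # which keeps some of the aliasing grit the unit was loved for
    n = len(signal) * dst_rate // src_rate
    idx = np.arange(n) * src_rate // dst_rate
    down = signal[idx]
    # Quantize amplitudes to 12-bit (2**11 levels per polarity)
    levels = 2 ** (bits - 1)
    return np.round(down * levels) / levels

# The 45-rpm workaround: recording a 33 1/3-rpm record at 45 rpm speeds it
# up by 45 / (100 / 3) = 1.35, so the SP-1200's 2.5-second buffer
# effectively holds 2.5 * 1.35 = 3.375 seconds of music
speedup = 45 / (100 / 3)
effective_seconds = 2.5 * speedup
```

Pitching the sample back down inside the machine restores the original tempo, at the cost of a further drop in effective sample rate, which, as noted above, could itself be sonically appealing.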
An underappreciated limitation of the SP-1200, however, was the visual feedback for editing samples. The display was completely alphanumeric; there were no visual representations of the sample other than numbers controlled by the faders on the interface. The composer had to find the start and end points of the sample solely by ear. Two producers might edit the exact same kick drum with start times 100 samples (roughly four milliseconds at the SP-1200’s sample rate) apart. Were one of the composers to have recorded the kick at 45 rpm and pitched it down, the actual resolution of the start and end times would differ as well. When played in a sequence, those 100 samples affect the groove, contributing directly to the feel of the composition. The timing of a sample’s playback combines the edit point with the sequencer’s quantization setting and swing percentage, so a 100-sample difference in the edit further offsets the trigger times, which, even with quantization turned off, fall onto the machine’s 24-parts-per-quarter grid.
Akai’s MPC-60 was the next evolution in sampling technology. It raised the sample and conversion rates to 16-bit and 40 kHz. Sample time increased to a total of 13.1 seconds (upgradable to 26.2). Sequencing resolution increased to 96 parts per quarter. Gone was the crunch of the SP-1200, but the precision went up, both in sampling and in sequencing. The main trademark of the MPC series was the swing and groove that came to Akai from Roger Linn’s Linn Drum. For years it was shrouded in mystery and considered a myth by many, but in truth there was a timing difference, which Linn says was achieved by delaying certain notes by a small number of samples. Combined with the greater PPQ resolution in unquantized mode, the MPC, even with more precision than the SP-1200, lent itself to capturing user variation.
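The swing idea itself is simple enough to sketch: at 96 PPQ a sixteenth note is 24 ticks, and swing delays every other sixteenth by a fraction of the two-sixteenth pair. This is a hypothetical illustration of the concept under a common percentage convention (50% is straight time), not Roger Linn’s actual implementation; the function name is invented.

```python
def swing_offset_ticks(step, ppq=96, swing_pct=58):
    """Return the tick position of the given 16th-note step, delaying
    offbeat 16ths to create swing. 50% swing is straight time; higher
    percentages push the offbeat later within each pair of 16ths."""
    sixteenth = ppq // 4            # 24 ticks per 16th note at 96 PPQ
    base = step * sixteenth
    if step % 2 == 1:               # only offbeat 16ths get delayed
        base += int(round((swing_pct - 50) / 100 * 2 * sixteenth))
    return base
```

At 58% swing the offbeat lands 4 ticks late; on the SP-1200’s coarser 24-PPQ grid the same percentage would have to round to a single tick, which is one reason the two machines groove differently.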
Despite these technological advances, sample time and editing limitations, combined with the fact that the higher resolution sampling lacked the character of the SP-1200, kept the MPC from being the complete package sample composers desired. For this reason it was often paired with Akai’s S-950 rack sampler. The S-950 was a 12-bit sampler but had a variable sample rate between 7.5 kHz and 40 kHz. The stock memory could hold 750 KB of samples which at the lowest sample rate could garner upwards of 60 seconds of sampling and at the higher sample rates around 10 seconds. This was expandable to up to 2.5 MB of sample memory.
The editing capabilities made the S-950 such a powerful sampler. Being able to create internal sample loops, key map samples to a keyboard, modify envelopes for playback, and take advantage of early time stretching (which would come of age with the S-1000)—not to mention the filter on the unit—helped take sampling deeper into the sound design territory. This again increased the variable possibilities from composer to composer even when working from the same source material. Often combined with the MPC for sequencing, composers had the ultimate sample-based composition workstation.
Today, there are practically no limitations on sampling. Perhaps the subtlest advances have come in the precision with which samples can be edited. With these advances, the biggest shift has been a reduced reliance on the ears. Recycle was an early software program that started to replace the ears in the editing process. Load an audio file into Recycle and the software would chop the sample into component parts by searching for the transients. Using Recycle on the same source, two different composers were far more likely to arrive at a kick sample that was truncated identically.
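The transient-chopping principle can be illustrated with a toy energy-based detector: wherever one frame’s energy jumps well above the previous frame’s, mark a slice point. Recycle’s real analysis is more sophisticated; the function name, frame size, and threshold here are invented for illustration.

```python
import numpy as np

def chop_at_transients(signal, frame=512, ratio=4.0):
    """Toy transient slicer: walk the signal frame by frame and mark a
    slice wherever the frame energy jumps to more than `ratio` times
    the previous frame's energy."""
    slices = [0]                    # the sample always starts at zero
    prev = None
    for start in range(0, len(signal) - frame, frame):
        energy = float(np.mean(signal[start:start + frame] ** 2))
        if prev is not None and prev > 0 and energy / prev > ratio:
            slices.append(start)    # sudden energy jump = likely attack
        prev = energy
    return slices
```

Because the slice points fall out of the analysis rather than out of listening, two composers running the same file through the same settings get the same chops, which is exactly the convergence described above.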
Another factor has been the waveform visualization of samples for editing. Some earlier hardware samplers featured a waveform display for truncating samples, but the graphic resolution within the computer made this even more precise. Looking at the waveform, you can edit a sample at the point where the signal crosses from its negative side to its positive side, known as the zero-crossing. The advantage of editing at a zero-crossing is that it prevents the click that occurs when playback jumps discontinuously across the cut point; that break in the waveform makes the edit audible. The end result of zero-crossing edited samples is a seamlessness that makes samples sound like they naturally fit into a sequence without audible errors. In many audio applications, snap-to settings mean that edits automatically snap to zero-crossings—no ears needed to get a “perfect” sounding sample.
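The snap itself is only a few lines of code: find every point where the signal changes sign and move the edit to the nearest one. This is a minimal sketch of what a DAW’s snap-to-zero-crossing setting does, with an invented function name.

```python
import numpy as np

def snap_to_zero_crossing(signal, pos):
    """Move an edit point to the nearest zero-crossing so the cut does
    not produce an audible click."""
    # A sign change between consecutive samples marks a zero-crossing
    signs = np.sign(signal)
    crossings = np.where(np.diff(signs) != 0)[0]
    if len(crossings) == 0:
        return pos                  # no crossings: leave the edit alone
    return int(crossings[np.argmin(np.abs(crossings - pos))])
```

Wherever the user drops the edit, the software quietly relocates it to a silent point in the wave, which is precisely why no ear is needed.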
It is interesting to note that with digital files it’s not about recording the sample, but editing it out of the original file. It is much different from having to put the turntable on 45 rpm to fit a sample into 2.5 seconds. Another differentiation between digital sample sources is the quality of the files, whether original digital files (CD quality or higher), lossless compression (FLAC), lossy compressed (MP3, AAC) or the least desirable though most accessible, transcoded (lossy compression recompressed such as YouTube rips). These all result in a different degradation of quality than the SP-1200. Where the SP-1200’s downsampling often led to fatter sounds, these forms of compression trend toward thinner-sounding samples.
Some producers have created their own sound using thinned-out samples with the same level of sonic intent as The RZA’s on “Stroke of Death.” The lo-fi aesthetic is often an attempt to capture a sound paralleling the golden era of hardware-based sampling. Some software-based samplers, for example, have an SP-1200 emulation button that reduces the bit rate to 12-bit. Most software sequencers have groove templates that allow them to emulate grooves like the MPC timing.
Perhaps the most important part of the sample-based composition process, however, cannot be emulated: the ear. The ear in this case is not so much about the identification of the hot sample. Decades of history should tell us that the hot sample is truly a dime a dozen. It takes a keen composer’s ear to hear how to manipulate those sounds into something uniquely theirs. Being able to listen for that and then create that unique sound—utilizing whatever tools—that is the blue note of sampling. And there is simply no way to automate that process.
Featured image: “Blue note inverted” by Flickr user Tim, CC BY-ND 2.0
Primus Luta is a husband and father of three. He is a writer and an artist exploring the intersection of technology and art, and their philosophical implications. He maintains his own AvantUrb site. Luta was a regular presenter for Rhythm Incursions. As an artist, he is a founding member of the collective Concrète Sound System, which spun off into a record label for the exploratory realms of sound in 2012. Recently Concrète released the second part of their Ultimate Break Beats series for Shocklee.
REWIND!…If you liked this post, you may also dig:
“SO! Reads: Jonathan Sterne’s MP3: The Meaning of a Format”-Aaron Trammell
“Remixing Girl Talk: The Poetics and Aesthetics of Mashups”-Aram Sinnreich
“Sound as Art as Anti-environment”-Steven Hammer