Some of the most popular early 21st century feminist approaches to pop culture are rooted in a collapse of visual and aural representations. For example, though Disney princesses have become visibly more diverse and realistic, linguists Carmen Fought and Karen Eisenhauer have compiled data showing that women characters in Disney princess films released between 1989 and 1999 speak less than they did in films released in the 1930s-1950s. Writing in Noisey in 2015, Emma Garland wonders whether we “have created an environment in which female artists are being judged only on their feminism.” Both in her own analysis and in the thinkpieces she references, that judgment addresses the verbal content of song lyrics or artists’ public statements and the visual content of music videos. Noting that “a lengthy Google search will drag up hundreds of editorial pieces about [Rihanna’s] ‘BBHMM’ video” (The Guardian alone hosts six), but barely any reviews of the actual song, Garland illustrates just how much feminist analysis of pop music skews to the visual and away from sound and music. Popular post-feminist analysis focuses on the visual and verbal because of the influence of law and legal theory on 20th century American feminism. However, in post-feminist pop, the sound lets in the very same problems the lyrics and visuals claim to have solved.
In her article “Liberal Feminism From Law To Art,” L. Ryan Musgrave argues, “many early feminist accounts of how art is political depend largely on a distinctly liberal version of politics” (214). This is a classical contractarian liberalism where “equality meant equality before the law…[and] democratic representation in a state” (214). Legally, these principles inform foundational liberal values like freedom of speech (e.g., the First Amendment to the US Constitution) and equality of opportunity (e.g., the Fourteenth Amendment to the US Constitution). 20th century feminist artists and art scholars translated these legal principles into ideas about how art should be made and interpreted. According to Musgrave, the political principle of free speech is translated into an aesthetic principle of “free expression…we should celebrate in order to be inclusive, straightforward expressions of either one’s individual experience or one’s identity as a member of an historically disenfranchised group” (223). This type of feminist aesthetics sought to counter the tendency to silence, ignore, trivialize, and censor women’s art.
In pop music, liberal feminism informs discussions of the marginalization of women artists in a particular genre, or celebrations of women’s self-expression, like ‘90s “girl power” and “revolution girl style now” aesthetics. As an aesthetic principle, equality of opportunity manifests as a two-pronged commitment to non-exploitative modes of production and to representational accuracy: women should not be objectified by or excluded from artistic practice, and they should be truthfully, realistically depicted in art (Musgrave 220). Pop and hip hop feminisms often appeal to this type of feminism, generally in discussions of women’s bodies: were women objectified in the video-making process? Do they appear on screen only as objects? Do the images of women accurately depict “real” women, or are they unrealistic images of too-thin, too-blonde ideals?
With its commitments to free speech and equality of opportunity, mainstream Anglo-American feminist aesthetics translates liberalism’s concept of political representation into a concept of aesthetic representation. The outcome of this translation is a “realism focused on the content of artworks” (Musgrave 223; emphasis mine) and “the conviction that it is the job of art or creative work to get it right, to show how it ‘really’ is, to come clean of previously incorrect and ideologically weighted images” (226). A feminist aesthetics focused primarily on the representational content of artworks and the subjectivity (or objectification) of artists translates classical liberalism’s ideas of what politics, injustice, and equality are into artistic terms.
Meghan Trainor’s infamous “All About That Bass” and “Dear Future Husband,” Lily Allen’s “Hard Out Here,” and even Usher’s “I Don’t Mind” are recent examples of songs that trade in equality of opportunity-style liberal feminism. For example, “All About That Bass” and “Hard Out Here” address a disconnect between how women are portrayed in the media and how they “really” look. “Dear Future Husband” and “I Don’t Mind” are about (partnered, heterosexual) women’s entitlement to work, even sex work. Both of these approaches share the underlying assumption, drawn from liberalism, that art re-presents reality in the same way a vote re-presents a citizen’s will or an elected representative re-presents the will of their constituents: accurately and truthfully, in the sense of truth as correspondence between statement and fact, signifier and signified.
Within a liberal feminist framework, sound can only be political if it has a representational content that depicts or expresses a subject’s voice or identity. This is why post-feminist approaches to pop music overlook sound and music: requiring attention to things like formal relationships, pattern repetition and development, the interaction among voices and timbres, and, well, structure, they don’t fit into liberalism’s understanding of what politics is and how it works. But because the music part of Anglo-American pop music always does more than this, it has political effects that this liberal feminist framework can’t perceive as political, or as having to do with gender (or race).
This can be both a good thing and a bad thing. On the one hand, this emphasis on the visual to the exclusion of sound opens a space for radical and subcultural politics within the mainstream. For example, as Regina Bradley has argued here at Sounding Out!, Beyoncé uses sound to move outside the politics of respectability that her visual image often reinforces. On the other hand, it creates a back door through which white supremacist patriarchy can sneak in. This is the same back door that all liberalisms have, the back door that lets substantive inequality pass as equality before the law and/or the market (Falguni Sheth, Charles Mills, and Carole Pateman talk extensively about this).
Popular post-feminist pop refers back to liberal aesthetics in order to establish its “post-”ness, that is, to show that the problems liberal feminism identified are things in its and our past. For example, inaccurate representation and objectification or silencing are precisely the things that the contemporary pop examples I cited claim to have fixed: Trainor and Allen accurately represent “real” women, and Usher’s song talks about a stripper as an empowered (near) equal rather than an object. However, the sounds in these songs tell different stories; they make white supremacist patriarchy entirely present.
Usher’s “I Don’t Mind” uses sound to straighten out some of the ratchetness in hip hop sexuality. L.H. Stallings’s “Hip Hop & the Black Ratchet Imagination” argues that
the strip club genre and the hip hop strip club also develop as a result of the unacknowledged presence of black women with various gender performances and sexual identities within the club, on stage and off, whose bodies and actions elicit new performances of black masculinity. Moreover, when woman is undone in this way, we note the potential for such undoing to temporarily queer men. (138)
Though it’s conventional to see women strippers and their male rapper audience in terms of heterosexual desire and normativity, the dancers’ use of black dance performance traditions and aesthetics displaces scripts of femininity and puts their bodily gender performance in transition. And because “this is what rappers get caught up in–the fantasy of woman whose origin is in the female dancers’ undoing of woman” (138), this fantasy also undoes them as “men.” The dancers’ performances are a type of “corporeal orature” (138) that puts outwardly heteropatriarchal gender and sexuality in transition, bending it away from respectability (the reproduction and transmission of wealth, property, and non-deviance qua whiteness) and toward ratchet.
The booty clap synth patch is one way this corporeal orature gets translated into sounds. It’s a particular variation on the hand-clap drum machine sound, and it translates the “booty clap” dance move and the rhythm of twerking into music. Following the dancers’ rhythm, the patch is usually used in a four-on-the-floor pattern, as for example in Juicy J’s “Bandz A Make Her Dance.” Featuring Juicy J as the misogynist foil to Usher’s progressive nice guy, “I Don’t Mind” is easy to hear as a direct response to “Bandz.” Like “Bandz,” “I Don’t Mind” is a song about men’s desire for strippers. However, “I Don’t Mind” straightens that desire out and classes it up by rewriting–indeed, erasing–the corporeal orature translated into the 4/4 booty-clap synth rhythm. Throughout “I Don’t Mind” that same synth patch is used on beats 2 and 4; it takes a “ratchet” sound and translates it into very respectable, traditional R&B rhythmic terms. Sound is the back door that lets very traditional gender and sexual politics sneak in to undermine some nominally progressive, feminist lyrics.
Sound plays a similar role in much of Meghan Trainor’s work. In “Dear Future Husband,” she sings apparently feminist lyrics about economic and sexual empowerment over doo-wop, a backing track that sounds straight out of an episode of Happy Days (a 1970s TV program steeped in 1950s nostalgia). Similarly, “All About That Bass” puts lyrics about positive body image over a very retro bassline that has more in common with the bassline in the theme song to David Simon’s New Orleans series Treme than it does with the bass in either Iggy Azalea’s “Black Widow” or Jessie J’s “Bang Bang”—two of the other singles consistently in the top five slots during “All About That Bass’s” weeks-long dominance of the Billboard Hot 100 in fall 2014.
Especially after the success of 2014’s “Uptown Funk” by Mark Ronson and Bruno Mars, this retromania isn’t unusual. But few pop songs look back as far as these post-feminist songs do, to the 1950s and even earlier. Appealing to pre-Civil Rights era sounds, these songs double down on the racialized sexual normalcy of white women’s performances of post-feminist empowerment. “Dear Future Husband,” “All About That Bass,” and “Marvin Gaye” (Trainor’s collaboration with Charlie Puth) all take rhythms, timbres, and genre conventions appropriated from black pop music, but which have, over half a century, been assimilated to bourgeois respectability. They recall Grease more so than Little Richard.
For example, James Shotwell describes “Marvin Gaye” as taking “an innocent approach to talking about sex, with accompaniment that is straight out of your grandma’s favorite sock hop memories…Just like how Mark Ronson and Bruno Mars have made a mint in recent years with a revitalization of funk ethos, Meghan Trainor and Charlie Puth are now doing the same for pop, only with less risk.” “Marvin Gaye” sounds less sexually risky because it recalls what, for whites, was a more racially “innocent” time, a pre-Civil Rights era when white ears could more easily avoid the sounds of black radical politics in either James Brown’s funk or Gaye’s soul. In “Marvin Gaye,” old-school sounds evoke a time when society was organized by the same sort of comparatively simple racial politics that organize the song itself. For example, its bridge follows the trap convention of using a male-chorus “Hey!” on the 2 and 4 of every measure. Its verses, however, put that same “Hey!” patch only on 4. Sounds evoke racial non-whiteness to generate tension, but then resolve that tension sonically. Definitive sonic resolution shuts down the transitional effect ratchet sounds, like those heard in the bridge, can have on sexuality and gender.
In “Marvin Gaye” and the other retromanical post-feminist pop songs, sounds do the white supremacist patriarchal work the lyrics and videos claim to have progressed past. Even though these women’s speech and appearance are outside the bounds of traditional femininity, the sounds reassure us that this newfangled gender performance isn’t racially and sexually deviant, that it isn’t “ratchet” in Stallings’ sense. Using liberalism to define the parameters of political (in)justice, contemporary post-feminist aesthetics focus our attention and effort on verbal content and visual mimesis; this creates an opening for sound and music to either destabilize or double down on normative gender, sexual, and racial performance. As the “Marvin Gaye” example shows, this opening is an essential component of neoliberal post-feminism: sound recodes white women’s transgressions of traditional femininity as racially and sexually normal.
Featured image: “mannequin head on concrete with headphones” from Flickr user J E Theriot, (CC BY 2.0)
Robin James is Associate Professor of Philosophy at UNC Charlotte. She is author of two books: Resilience & Melancholy: pop music, feminism, and neoliberalism, published by Zer0 books last year, and The Conjectural Body: gender, race and the philosophy of music, published by Lexington Books in 2010. Her work on feminism, race, contemporary continental philosophy, pop music, and sound studies has appeared in The New Inquiry, Hypatia, differences, Contemporary Aesthetics, and the Journal of Popular Music Studies. She is also a digital sound artist and musician. She blogs at its-her-factory.com and is a regular contributor to Cyborgology.
REWIND! . . .If you liked this post, you may also dig:
CLICK HERE TO DOWNLOAD: Sound and Sexuality in Video Games
SUBSCRIBE TO THE SERIES VIA ITUNES
ADD OUR PODCASTS TO YOUR STITCHER FAVORITES PLAYLIST
This week’s podcast questions how identity is coded into the battle cries shouted by characters in video games. By exploring the tools that sound studies provides to understand the various dynamics of identity, this podcast aims to provoke a conversation about how identity is encoded within the design of games. The all too invisible intersection between sound, identity, and code reveals the ways that sound can help explain the interior logic of the games and other digital systems. Here, Milena Droumeva and Aaron Trammell discuss how femininity and sexuality have been coded within game sounds and consider the degree to which these repetitive and objectifying tropes can be resisted by players and designers alike.
Milena Droumeva is an Assistant Professor of Communication at Simon Fraser University specializing in mobile technologies, sound studies and multimodal ethnography, with a long-standing interest in game cultures. She has worked extensively in educational research on game-based learning, as well as in interaction design for responsive environments. Milena is a sound studies scholar, a multimodal ethnographer, and a soundwalking enthusiast who has published widely in the areas of acoustic ecology, media and game studies, design and technology. You can find her musings on sound and other material goodies at http://natuaural.com.
Aaron Trammell is a Provost Postdoctoral Scholar for Faculty Diversity in Informatics and Digital Knowledge at the Annenberg School for Communication and Journalism at the University of Southern California. He earned his doctorate from the Rutgers University School of Communication and Information in 2015. Aaron’s research is focused on revealing historical connections between games, play, and the United States military-industrial complex. He is interested in how military ideologies become integrated into game design and how these perspectives are negotiated within the imaginations of players. He is the Co-Editor-in-Chief of the journal Analog Game Studies and the Multimedia Editor of Sounding Out!
Featured image borrowed from Geralt @Pixabay CC BY.
REWIND! . . .If you liked this post, you may also dig:
Video Gaming and the Sonic Feedback of Surveillance – Aaron Trammell
This post continues our summer Sound and Pleasure series, as the third and final podcast in a three part series by Leonard J. Paul. What is the connection between sound and enjoyment, and how are pleasing sounds designed? Pleasure is, after all, what brings y’all back to Sounding Out! weekly, is it not?
Part of the goal of this series of podcasts has been to reveal the interesting and invisible labor practices which are involved in sound design. In this final entry Leonard J. Paul breaks down his process in designing living sounds for the game Vessel. How does one design empathetic or aggressive sounds? If you need to catch up, read Leonard’s last entry where he breaks down the vintage sounds of Retro City Rampage. Also, be sure to check out last week’s edition where Leonard breaks down his process in designing sound for Sim Cell. But first, listen to this! -AT, Multimedia Editor
CLICK HERE TO DOWNLOAD: Game Audio Notes III: The Nature of Sound in Vessel
Game Audio Notes III: The Nature of Sound in Vessel
Strange Loop Game’s Vessel is set in an alternate world history where a servant class of liquid automatons (called fluros) has gone out of control. The player explores the world and solves puzzles in an effort to restore order. While working on Vessel, I personally recorded all of the sounds so that I could have full control over the soundscape. I recorded all of the game’s samples with a Zoom H4n portable recorder. My emphasis on real sounds was intended to deepen the player’s sense of immersion in the game.
This realistic soundscape was supplemented with a variety of techniques that produced sounds that dynamically responded to changes in the physics engine. Water and other fluids in the game were difficult to model with both the physics engine and the audio engine (FMOD Designer). Because fluids are fundamentally connected to the game’s physics engine, they take on a variety of different dynamic forms as players interact with them in different ways. To address this, Kieran Lord, the audio coder, and I considered factors like the amount of liquid involved in a collision, the hardness of the surface it was colliding with, the type of liquid in motion, whether the player is experiencing an extreme form of that sound because the liquid is colliding with their head, and, of course, how fast the liquid is travelling.
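As a rough illustration of that kind of parameter mapping, here is a minimal Python sketch. The actual work was done with FMOD Designer event parameters, not Python, and every name and threshold below (the sample names, the `intensity` formula, the cutoff values) is an assumption invented for illustration:

```python
# Hypothetical sketch of mapping physics-engine collision data to a sound.
# The real implementation lived in FMOD Designer; all names and numbers
# here are illustrative assumptions, not taken from the actual game.

def choose_fluid_sound(amount, surface_hardness, speed, hits_head=False):
    """Pick a sample name and a 0..1 volume for a fluid collision.

    amount:           litres of liquid involved in the collision
    surface_hardness: 0.0 (soft) .. 1.0 (hard)
    speed:            impact speed in m/s
    hits_head:        True if the liquid collides with the player's head
    """
    # Bigger, faster collisions on harder surfaces read as "splashes";
    # small slow ones read as "drips".
    intensity = amount * speed * (0.5 + 0.5 * surface_hardness)
    if hits_head:
        sample = "splash_extreme"   # exaggerated close-up variant
    elif intensity > 10.0:
        sample = "splash_large"
    elif intensity > 1.0:
        sample = "splash_small"
    else:
        sample = "drip"
    # Louder with intensity, clamped to the 0..1 range a mixer expects.
    volume = min(1.0, intensity / 20.0)
    return sample, volume
```

The design point is that one continuous physics quantity drives both the discrete choice of sample layer and the continuous volume, so the sound scales smoothly with how the player actually sloshes the liquid around.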
Although there was a musical score, I designed the effects to be played without music. Each element of the game, for instance a lava fluro’s (one of the game’s rebellious automatons) footsteps, required layers of sound. The footsteps were composed of water sizzling on a hot pan, a gloopy slap of oatmeal and a wet rag hitting the ground. Finding the correct emotional balance to support the game’s story was fundamental to my work as a sound designer. The game’s sound effects were constantly competing with the adaptive music (which is also contingent on player action) that plays throughout the game, so it was important to provide an informative quality to them. The sound effects inform you about the environment while the music sets the emotional underscore of the gameplay and helps guide you in the puzzles.
Defining the character of the fluros was difficult because I wanted players to have empathy for them. This was important to me because there is often no way to avoid destroying them when solving the game’s puzzles. While recording sounds in the back of an antique shop, I came across a vintage Dick Tracy gun that made a fantastic clanking sound as it produced its siren. Since the gun allowed me to control how quickly the siren rose and fell, it was a great way to produce vocalizations for the fluros. I simply recorded the gun’s siren, chopped the recording into smaller pieces, and then played back different segments randomly. The metal clanking gave a mechanical feel and the siren’s tone gave a vocal quality to the resulting sound that was perfect for the fluros. I could make the fluros sound excited by choosing a higher pitch range from the sample grains, and so inform the player when they approached their goal.
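The chop-and-replay idea can be sketched in a few lines of Python. This is only a schematic of the technique, not the actual tooling: the recording is stood in for by a list of samples, and the per-grain pitch values are assumed to come from some prior pitch analysis:

```python
import random

# Illustrative sketch of grain playback: split a recording into short
# segments ("grains") and replay randomly chosen ones. The names and the
# list-of-samples stand-in for audio are assumptions for illustration.

def make_grains(recording, grain_len):
    """Split a recording (a list of samples) into fixed-length grains."""
    return [recording[i:i + grain_len]
            for i in range(0, len(recording) - grain_len + 1, grain_len)]

def fluro_voice(grains, grain_pitches, n, excited=False, rng=None):
    """Pick n grains at random; excited fluros draw only from the
    higher-pitched half of the grain pool."""
    rng = rng or random.Random(0)
    order = sorted(range(len(grains)), key=lambda i: grain_pitches[i])
    pool = order[len(order) // 2:] if excited else order
    return [grains[rng.choice(pool)] for _ in range(n)]
```

Restricting the pool to the top half of the pitch range is one simple way to model the "excited" register described above: the randomness stays, but the average pitch of what the player hears rises.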
I wanted a fluid-based scream to announce a fluro’s death. I tried screaming underwater, screaming into a glass of water, and a few other things, but nothing worked. Eventually, when recording a rubber ear syringe, I found squeezing the water out quickly lent a real shriek while it spit out the last of the water. Not only did this sound really cut through the din of the gears clanking in the mix, but it also bonded a watery yell with the sense of being crushed and running out of breath.
For the final boss, I tried many combinations of glurpy sounds to signify its lava form. Eventually I recorded a nail in a board being dragged across a large rusty metal sheet. Though it was quite excruciating to listen to, I pitched down the recording and combined it with a pitched-down and granulated recording of myself growling into a cup of water. This sound perfectly captured the emotion I wanted players to feel when encountering a final boss. Although it can take a long time to arrive at the “obvious” sound, simplicity is often the key.
Anticipation is fundamental to a player’s sense of immersion. It carves a larger space for tension to build: a small crescendo of a creaking sound, for instance, can develop a tension that builds to a sudden and large impact. A whoosh before a punch lands adds extra weight to the force of the punch. These cues are often naturally present in real-world sounds, such as a rush of air sweeping in before a door slams. A small pause might be included just for added suspense, which helps to intensify the effect of the door slamming. Dreading the impact is half of the emotion of a large hit.
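That crescendo-pause-impact shape can be sketched as a simple amplitude envelope. This is a toy model of the shape described above, assuming per-sample amplitudes in the 0..1 range rather than real audio:

```python
# Minimal sketch of the anticipation shape: a rising "whoosh" crescendo,
# a brief silent pause for suspense, then the impact and its decay.
# Values are abstract per-step amplitudes (0..1), not real audio samples.

def anticipation_envelope(whoosh_len, pause_len, impact_len):
    whoosh = [i / whoosh_len for i in range(1, whoosh_len + 1)]   # crescendo
    pause = [0.0] * pause_len                                     # suspense gap
    impact = [1.0] + [1.0 - i / impact_len
                      for i in range(1, impact_len)]              # hit, then decay
    return whoosh + pause + impact
```

The silent gap is the interesting part: the envelope drops to zero right before the loudest moment, which is exactly the "small pause for added suspense" the paragraph describes.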
Recording all of the sounds for Vessel was a large undertaking, but since I viewed each recording as a performance, I was able to make the feeling of the world very cohesive. Each sound was designed to immerse the player in the soundscape, but also to allow players enough time to solve puzzles without becoming annoyed with the audio. All sounds have a life of their own and a resonance of memory and time that stays with them during each playthrough of a game. In Retro City Rampage I left a sonic space for the player to wax nostalgic. In Sim Cell, I worked to breathe life into a set of sterile and synthesized sounds. Each recorded sound in Vessel is alive in comparison, telling its own stories of time, place, and recording.
The common theme of my audio work on Retro City Rampage, Sim Cell and Vessel is that I enjoy putting constraints on myself to inspire my creativity. I focus on what works and remove non-essential elements. Exploring the limits of constraints often provokes interesting and unpredictable results. I like “sculpting” sounds and will often proceed from a rough sketch, polishing and reducing elements until I like what I hear. Typically I remove layers that don’t add an emotive aspect to the sound design. In games there are often many sounds that can play at once, so clarity and focus are necessary to prevent sounds from getting lost in a sonic goo.
In this post I have shown how play and experimentation are fundamental to my creative process. For an aspiring sound artist, spending time with Pure Data, FMOD Studio or Wwise and a personal recorder is a great way to improve their skill with game audio. This series of articles has aimed to reveal the tacit decisions behind the production of game audio that get obscured by the fun of the creative process. Plus, I hope they offer a bit of inspiration to those creating their own sounds in the future.
- My GDC 2012 talk – My site includes an MP3 recording of the talk and the full PowerPoint presentation. A full video of the talk is in the GDC Vault but requires membership.
- See the Jon Hopkins soundtrack in the video game Vessel – a soundtrack video by me on YouTube that shows the use of Lua with music in the custom level editor: https://www.youtube.com/watch?v=KOyjMPPvaY4
Leonard J. Paul attained his Honours degree in Computer Science at Simon Fraser University in BC, Canada with an Extended Minor in Music concentrating in Electroacoustics. He began his work in video games on the Sega Genesis and Super Nintendo Entertainment System and has a twenty-year history in composing, sound design and coding for games. He has worked on over twenty major game titles totalling over 6.4 million units sold since 1994, including award-winning AAA titles such as EA’s NBA Jam 2010, NHL11, Need for Speed: Hot Pursuit 2, NBA Live ’95 as well as the indie award-winning title Retro City Rampage.
He is the co-founder of the School of Video Game Audio and has taught game audio students from over thirty different countries online since 2012. His new media works have been exhibited in cities including Surrey, Banff, Victoria, São Paulo, Zürich and San Jose. As a documentary film composer, he had the good fortune of scoring the original music for the multi-award-winning documentary The Corporation, which remains the highest-grossing Canadian documentary in history to date. He has performed live electronic music in cities such as Osaka, Berlin, San Francisco, Brooklyn and Amsterdam under the name Freaky DNA.
He is an internationally renowned speaker on the topic of video game audio and has been invited to speak in Vancouver, Lyon, Berlin, Bogotá, London, Banff, San Francisco, San Jose, Porto, Angoulême and other locations around the world.
His writings and presentations are available at http://VideoGameAudio.com
Featured image: Courtesy of Vblank Entertainment (c)2014 – Artwork by Maxime Trépanier.
REWIND! . . .If you liked this post, you may also dig:
Sounding Out! Podcast #31: Hand Made Music in Retro City Rampage– Leonard J. Paul
Papa Sangre and the Construction of Immersion in Audio Games- Enongo Lumumba-Kasongo
Editor’s Note: WARNING: THE FOLLOWING POST IS INTERACTIVE!!! This week’s post is especially designed by one of our regulars, Andreas Duus Pape, to spark conversation and provoke debate on its comment page. I have directly solicited feedback and commentary from several top sound studies scholars, thinkers, artists, and musicians, who will be posting at various times throughout the day and the week–responding to Andreas, responding to each other, and responding to your comments. Look for commentary by Bill Bahng Boyer (NYU), Maile Colbert (Binaural/Nodar, Faculdade de Belas Artes da Universidade do Porto), Adriana Knouf (Cornell University), Primus Luta (AvantUrb, Concrète Sound System), Alejandro L. Madrid (University of Illinois at Chicago), Tara Rodgers (University of Maryland), Jonathan Skinner (ecopoetics), Jonathan Sterne (McGill University), Aaron Trammell (Rutgers University, Sounding Out!) and yours truly (Binghamton University, Sounding Out!). Full bios of our special respondents follow the post. We wholeheartedly wish to entice you this Monday to play. . .and listen. . .and then share your thoughts via the comment page. . .and play again. . .listen again. . .read the comments. . .and share more thoughts. . .yeah, just go ahead and loop that. –JSA, Editor-in-Chief
I’m a musician and an economist. Sometimes you will find me playing acoustic folk rock and blues on guitar, harmonica and voice. And at other times I will be at work, where I apply my expertise in game theory to the computer modeling of social phenomena. I create simulations of people interacting – such as how people decide which way to vote on an issue such as a tax levy, or how people learn to sort objects given to them in an experiment. In these simulations, the user can set up characteristics of the environment, such as the number of people and their individual goals. After things are set up, users watch these interactions unfold. The simulation is a little story, and one need only tweak the inputs to see how the story changes.
As a musician, I was curious if a program that generates social stories could be refashioned to generate musical pieces. I wanted to build a music-generation engine that the listener could tweak in order to get a different piece each time. But not just any tune – a piece with some flow, some story. I like that tension between randomness and structure. On one hand, I want every song to vary in unpredictable ways; on the other hand, I want to create music and not structureless noise.
I created a basic story of predators and prey, whimsically naming the prey “Peters,” represented by rabbits, and the predators “Wolves.” My simulation depicts a plain in the savannah with a green oasis. The prey seek the oasis and the predators seek the prey. Each character has its own goals and the closer they are to achieving them, the happier they are. Both predators and prey want to have stomachs full of food, so naturally they want to be close to their target (be it prey or oasis). As they travel through the savannah, they learn what choices (directions of movement) make them happier, and use this experience to guide them.
So how does this story become music? To this question there are two answers: a technical one and an intuitive one. The intuitive answer is that in real life the story of predators and prey plays out geographically on the savannah, but musically this is a story that plays out over a sonic landscape. To elaborate, I abstracted the movement of the prey and predator on the geography of the plain into the musical geometry of a sonic landscape. The farther north an agent travels, the higher the pitch. And the farther west an agent travels, the longer the duration. In other words, as an agent travels to the northwest, she makes longer-lasting tones that are higher pitched. I also mapped happiness to volume, so that happy agents make louder tones. Finally, so that each agent would have a distinct voice as they traveled through this space, I chose different instruments for each agent.
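The mapping can be sketched concretely. The original runs in NetLogo, so this Python version is only an illustration; the exact pitch range, duration range, and scaling constants below are assumptions, and it follows the northwest-means-longer-and-higher illustration above:

```python
# Hypothetical Python sketch of the position-to-sound mapping described
# above (the real piece is implemented in NetLogo). Ranges and constants
# are illustrative assumptions.

def agent_tone(x, y, happiness, world_size=32):
    """Map an agent's state to (midi_pitch, duration_beats, velocity).

    x, y are grid coordinates in 0..world_size; happiness is 0..1.
    """
    midi_pitch = 40 + round(48 * y / world_size)            # farther north -> higher
    duration = 0.25 + 1.75 * (world_size - x) / world_size  # farther west -> longer
    velocity = round(127 * happiness)                       # happier -> louder
    return midi_pitch, duration, velocity
```

So a happy agent in the far northwest corner produces a loud, high, two-beat tone, while an unhappy agent in the southeast produces a quiet, low, quarter-beat blip: the chase across the savannah becomes a chase across register, duration, and dynamics.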
In the video below I assigned the “church organ” sound to prey, and the “brass section” sound to predators.
Ultimately, there are some things that I like about this piece and others that I do not.
As a harmonica player, I improvise by creating and resolving tension. I think this piece does that well. The predator will pursue the prey into a quiet, low-pitch corner, creating a distant, rumbling sound – only to watch prey escape to the densely polyphonic northwest corner. There is an ebb and flow to this chase that I recognize from blues harmonica solos. In contrast to my experience as a harmonica player, however, I have found that some of the most compelling parts of the dynamics come from the layering of notes. The addition of notes yields a rich sonic texture, much like adding notes to a chord on an organ.
Unfortunately, for largely technical reasons, there is a lack of coherent rhythm and pacing. The programming platform (agent-based modeling software called NetLogo) is not designed to have the interface proceed in real-time. Basically, the overall speed of the piece can change as the processing load increases or decreases. I found that as agents learnt more about their surroundings (and more system resources were allocated to this “memory”), they became slower and slower. To fix this, I capped the size of their memory banks so that they would forget their oldest memories. The closest I have come to a rhythmic structure is by ordering the way that the agents play. This technique makes the piece have a call-and-response feel. If only the piece had a coherent rhythm, then I could imagine playing harmonica along with it.
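The memory cap is a standard bounded-buffer trick, and a Python sketch makes the idea concrete. This is not the NetLogo code; the class and method names are invented for illustration, and the "learning" here is just a simple average over remembered outcomes:

```python
from collections import deque

# Sketch of the memory cap described above: a fixed-size buffer that
# silently drops the oldest experience once full, so per-step work stays
# bounded no matter how long the agent lives. Names are illustrative.

class AgentMemory:
    def __init__(self, capacity):
        self.experiences = deque(maxlen=capacity)  # oldest entries fall off

    def remember(self, choice, happiness_change):
        self.experiences.append((choice, happiness_change))

    def best_choice(self):
        """Pick the remembered move with the best average outcome."""
        totals = {}
        for choice, delta in self.experiences:
            n, s = totals.get(choice, (0, 0.0))
            totals[choice] = (n + 1, s + delta)
        if not totals:
            return None
        return max(totals, key=lambda c: totals[c][1] / totals[c][0])
```

Because `deque(maxlen=...)` discards from the opposite end on append, the agent literally "forgets its oldest memories" without any explicit cleanup pass.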
One last comment on pitch: while an earlier version of this piece mapped each step in space to a semitone, things sounded too mechanical. Even though this was the easiest and most intuitive decision from a technical standpoint, it was aesthetically lacking, so I have now integrated traditional musical scales. The minor scale, in my opinion, is the most interesting as it makes the predator/prey dynamic sound appropriately foreboding.
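Quantizing grid steps to a scale rather than to semitones is a small amount of code. This sketch assumes a natural-minor scale and an arbitrary root note (neither is specified in the post beyond "the minor scale"):

```python
# Sketch of the scale quantization described above: instead of one
# semitone per grid step, map each step onto the next degree of a
# natural-minor scale. The root note (A3) is an assumed choice.

MINOR_STEPS = [0, 2, 3, 5, 7, 8, 10]  # natural minor, semitones from the root

def step_to_pitch(step, root=57):
    """Convert a grid-step count into a MIDI note on the minor scale.

    root=57 is MIDI A3; seven steps span exactly one octave."""
    octave, degree = divmod(step, len(MINOR_STEPS))
    return root + 12 * octave + MINOR_STEPS[degree]
```

The effect is exactly the one described: adjacent grid positions no longer differ by a mechanical semitone but by a scale degree, so every pitch the agents can produce belongs to the same foreboding minor collection.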
You can play this piece yourself. Simply go to this link with Java enabled in your browser (recommended: Google Chrome). Pressing “Setup” then “Go” will create your own run of the piece. As it is running, you can adjust the slider above the graphic window to change the speed. Press “Go” again to stop the model, adjust any parameters you wish, and press “Setup” and “Go” again to hear how the piece changes. Here are some parameters to try: instA and instB change the instruments associated with prey and predators; PlayEveryXSteps changes the pace of the piece (higher values result in a slower-paced piece); Num-PackAs and Num-PackBs change the number of prey and predators; and the vertical PeterVol and WolfVol sliders adjust the overall volume of prey and predators.
With regard to my version of “Peter and the Wolf,” there are a number of things I’m curious about.
First, how does this relate to what you think of as music? Do you like listening to it? Which elements do you like, and which do you dislike? For example, what do you think about the tension and rhythm – do you agree that the first works and that the second could be improved? Would you listen to this for enjoyment’s sake, and what would it take for this to be more than a novelty? What do you think about the narrative that drives the piece? I chose the predator-and-prey narrative, admittedly, on a whim. Do you think some other narrative, or agent-specific goals, might better drive this piece? Is there a metaphor that might better describe this piece? As a listener, do you enjoy being able to customize and configure the piece? What would you like to control that is missing here? Would you like more interaction with the piece, or less?
Finally, and perhaps most importantly, what do you think of the premise? Can simple electronic agents (albeit ones which interact socially) aspire to create music? Is there something promising in this act of simulation? Is music-making necessarily a human activity and is this kind of work destined to be artificial and uncanny?
Thanks for listening. I look forward to your thoughts.
– – –
Andreas Duus Pape is an economist and a musician. As an economist, he studies microeconomic theory and game theory–that is, the analysis of strategy and the construction of models to understand social phenomena–and the theory of individual choice, including how to forecast the behavior of agents who construct models of social phenomena. As a musician, he plays folk in the tradition of Dylan and Guthrie, blues in the tradition of Williamson and McTell, and country in the tradition of Nelson and Cash. Pape is an Assistant Professor in the Department of Economics at Binghamton University and is a faculty member of the Collective Dynamics of Complex Systems (CoCo) Research Group.
– – –
Guest Respondents on the Comment Page (in alphabetical order)
Bill Bahng Boyer is a doctoral candidate in music at New York University who is completing a dissertation on public listening in the New York City subway system.
Maile Colbert is an intermedia artist with a concentration in sound and video, living and working between New York and Portugal. She is an associated artist at Binaural/Nodar.
N. Adriana Knouf is a Ph.D. candidate in information science at Cornell University.
Primus Luta is a writer and an artist exploring the intersection of technology and art; he maintains his own AvantUrb site and is a founding member of the live electronic music collective Concrète Sound System.
Alejandro L. Madrid is Associate Professor of Latin American and Latino Studies at the University of Illinois at Chicago and a cultural theorist and music scholar whose research focuses on the intersection of modernity, tradition, globalization, and ethnic identity in popular and art music, dance, and expressive culture from Mexico, the U.S.-Mexico border, and the circum-Caribbean.
Tara Rodgers is an Assistant Professor of Women’s Studies and a faculty fellow in the Digital Cultures & Creativity program at the University of Maryland. As Analog Tara, she has released electronic music on compilations such as the Le Tigre 12″ and Source Records/Germany, and exhibited sound art at venues including Eyebeam (NYC) and the Museum of Contemporary Canadian Art (Toronto).
Jonathan Skinner founded and edits the journal ecopoetics, which features creative-critical intersections between writing and ecology. Skinner also writes ecocriticism on contemporary poetry and poetics.
Jonathan Sterne teaches in the Department of Art History and Communication Studies and the History and Philosophy of Science Program at McGill University. His latest book, MP3: The Meaning of a Format, comes out this fall from Duke University Press.
Jennifer Stoever-Ackerman is co-founder, Editor-in-Chief and Guest Posts Editor for Sounding Out! She is also Assistant Professor of English at Binghamton University and a former Fellow at the Society for the Humanities at Cornell University (2011-2012).
Aaron Trammell is Multimedia Editor of Sounding Out! and a Ph.D. Candidate in Media and Communications at Rutgers University.