
Further Experiments in Agent-based Musical Composition

Photo by whistler1984 @Flickr.

Editor’s Note: WARNING: THE FOLLOWING POST IS INTERACTIVE!!! This week’s post is especially designed by one of our regulars, Andreas Duus Pape, to spark conversation and provoke debate on its comment page. I have directly solicited feedback and commentary from several top sound studies scholars, thinkers, artists, and musicians, who will be posting at various times throughout the day and the week–responding to Andreas, responding to each other, and responding to your comments. Look for commentary by Bill Bahng Boyer (NYU), Maile Colbert (Binaural/Nodar, Faculdade de Belas Artes da Universidade do Porto), Adriana Knouf (Cornell University), Primus Luta (AvantUrb, Concrète Sound System), Alejandro L. Madrid (University of Illinois at Chicago), Tara Rodgers (University of Maryland), Jonathan Skinner (ecopoetics), Jonathan Sterne (McGill University), Aaron Trammell (Rutgers University, Sounding Out!) and yours truly (Binghamton University, Sounding Out!). Full bios of our special respondents follow the post. We wholeheartedly wish to entice you this Monday to play. . .and listen. . .and then share your thoughts via the comment page. . .and play again. . .listen again. . .read the comments. . .and share more thoughts. . .yeah, just go ahead and loop that. –JSA, Editor-in-Chief

I’m a musician and an economist. Sometimes you will find me playing acoustic folk rock and blues on guitar, harmonica and voice. And at other times I will be at work, where I apply my expertise in game theory to the computer modeling of social phenomena. I create simulations of people interacting – such as how people decide which way to vote on an issue such as a tax levy, or how people learn to sort objects given to them in an experiment. In these simulations, the user can set up characteristics of the environment, such as the number of people and their individual goals. After things are set up, users watch these interactions unfold. The simulation is a little story, and one need only tweak the inputs to see how the story changes.

As a musician, I was curious if a program that generates social stories could be refashioned to generate musical pieces. I wanted to build a music-generation engine that the listener could tweak in order to get a different piece each time. But not just any tune – a piece with some flow, some story. I like that tension between randomness and structure. On one hand, I want every song to vary in unpredictable ways; on the other hand, I want to create music and not structureless noise.

I created a basic story of predators and prey, whimsically naming the prey “Peters,” represented by rabbits, and the predators “Wolves.” My simulation depicts a plain in the savannah with a green oasis. The prey seek the oasis and the predators seek the prey. Each character has its own goals and the closer they are to achieving them, the happier they are. Both predators and prey want to have stomachs full of food, so naturally they want to be close to their target (be it prey or oasis). As they travel through the savannah, they learn what choices (directions of movement) make them happier, and use this experience to guide them.
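
For readers who like to see the machinery, here is a minimal sketch of this kind of agent. It is written in Python purely for illustration (the actual piece is built in NetLogo, as discussed below), and the names, numbers, and learning rule are my own simplifications rather than the model's real code.

```python
import math
import random

# Hypothetical sketch of one agent: it wanders a grid, gets happier as it
# nears its target, and reinforces whichever moves made it happier.
DIRECTIONS = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

class Agent:
    def __init__(self, x, y, target):
        self.x, self.y = x, y
        self.target = target                       # (x, y) of the oasis, or of a prey animal
        self.worth = {d: 0.0 for d in DIRECTIONS}  # learned worth of each direction

    def happiness(self):
        tx, ty = self.target
        return -math.hypot(self.x - tx, self.y - ty)  # closer to the target = happier

    def step(self, explore=0.2, rate=0.5):
        before = self.happiness()
        # Mostly repeat what has worked so far; occasionally try a random direction.
        if random.random() < explore:
            direction = random.choice(list(DIRECTIONS))
        else:
            direction = max(self.worth, key=self.worth.get)
        dx, dy = DIRECTIONS[direction]
        self.x, self.y = self.x + dx, self.y + dy
        # Reinforce the chosen direction by how much happier the move made the agent.
        self.worth[direction] += rate * (self.happiness() - before)

peter = Agent(x=0, y=0, target=(10, 10))  # a "Peter" heading for the oasis
for _ in range(200):
    peter.step()
print(peter.x, peter.y, peter.happiness())  # how far it has wandered toward the oasis
```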

Photo by bantam10 @Flickr

So how does this story become music? To this question there are two answers: a technical one and an intuitive one. The intuitive answer is that in real life the story of predators and prey plays out geographically on the savannah, but musically this is a story that plays out over a sonic landscape. To elaborate, I abstracted the movement of the prey and predator on the geography of the plain into the musical geometry of a sonic landscape. The farther north an agent travels, the higher the pitch; the farther east an agent travels, the longer the duration. In other words, as an agent travels to the northeast, she makes longer-lasting tones that are higher pitched. I also mapped happiness to volume, so that happy agents make louder tones. Finally, so that each agent would have a distinct voice as it traveled through this space, I chose a different instrument for each agent.
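
As a rough sketch of that mapping: the particular ranges below (MIDI pitches starting at 36, note lengths of 0.1 to 2 seconds, MIDI velocities up to 127) are illustrative guesses, not the values the piece actually uses.

```python
# Illustrative mapping from an agent's state to a tone. The ranges are assumptions.
def tone_for(x, y, happiness, world_width=50, world_height=50):
    pitch = 36 + round(48 * y / world_height)                  # farther north -> higher pitch
    duration = 0.1 + 1.9 * (x / world_width)                   # farther east -> longer note
    velocity = max(20, min(127, round(127 + 4 * happiness)))   # happier -> louder
    return pitch, duration, velocity

print(tone_for(x=40, y=45, happiness=-3))  # an agent in the happy northeast: high, long, loud
```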

In the video below I assigned the “church organ” sound to prey, and the “brass section” sound to predators.

Ultimately, there are some things that I like about this piece and others that I do not.

As a harmonica player, I improvise by creating and resolving tension. I think this piece does that well. The predator will pursue the prey into a quiet, low-pitch corner, creating a distant, rumbling sound – only to watch prey escape to the densely polyphonic northwest corner. There is an ebb and flow to this chase that I recognize from blues harmonica solos. In contrast to my experience as a harmonica player, however, I have found that some of the most compelling parts of the dynamics come from the layering of notes. The addition of notes yields a rich sonic texture, much like adding notes to a chord on an organ.

Unfortunately, for largely technical reasons, there is a lack of coherent rhythm and pacing. The programming platform (agent-based modeling software called NetLogo) is not designed to have the interface proceed in real time, so the overall speed of the piece can change as the processing load increases or decreases. I found that as agents learned more about their surroundings (and more system resources were allocated to this “memory”), they became slower and slower. To fix this, I capped the size of their memory banks so that they would forget their oldest memories. The closest I have come to a rhythmic structure is ordering the way that the agents play, which gives the piece a call-and-response feel. If only the piece had a coherent rhythm, I could imagine playing harmonica along with it.
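
The memory cap amounts to a fixed-length queue that forgets its oldest entries as new ones arrive. The snippet below is only a Python sketch of that idea, not the NetLogo code itself, and the cap of 200 is an arbitrary example.

```python
from collections import deque

MEMORY_CAP = 200                   # arbitrary example cap, not the model's actual limit
memory = deque(maxlen=MEMORY_CAP)  # a full deque silently discards its oldest entry

for step in range(1000):
    memory.append(("observation", step))

assert len(memory) == MEMORY_CAP   # only the 200 most recent experiences remain
```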

One last comment on pitch: while an earlier version of this piece mapped each step in space to a semitone, things sounded too mechanical. Even though this was the easiest and most intuitive decision from a technical standpoint, it was aesthetically lacking, so I have now integrated traditional musical scales. The minor scale, in my opinion, is the most interesting as it makes the predator/prey dynamic sound appropriately foreboding.
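
The switch from semitones to scales can be pictured like this: each step north advances to the next degree of a natural minor scale rather than to the next semitone. The root note and octave handling below are illustrative choices, not the piece's actual settings.

```python
NATURAL_MINOR = [0, 2, 3, 5, 7, 8, 10]  # semitone offsets of the scale degrees in one octave

def scale_pitch(steps_north, root=57):  # root 57 = MIDI A3, an illustrative choice
    octave, degree = divmod(steps_north, len(NATURAL_MINOR))
    return root + 12 * octave + NATURAL_MINOR[degree]

# Steps 0, 1, 2, ... now walk A, B, C, D, E, F, G, A, ... instead of chromatic half steps.
print([scale_pitch(n) for n in range(8)])  # [57, 59, 60, 62, 64, 65, 67, 69]
```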

Photo by deivorytower @Flickr.

You can play this piece yourself. Simply go to this link with Java enabled in your browser (recommended: Google Chrome). Pressing “Setup” and then “Go” will create your own run of the piece. As it is running, you can adjust the slider above the graphic window to change the speed. Press “Go” again to stop the model, adjust any parameters you wish, and press “Setup” and “Go” again to see how the piece changes. Here are some parameters to try: instA and instB change the instruments assigned to prey and predators; PlayEveryXSteps changes the pace of the piece (higher values make for a slower-paced piece); Num-PackAs and Num-PackBs change the number of prey and predators; and the vertical PeterVol and WolfVol sliders adjust the overall volume of prey and predators.

Regarding my version of “Peter and the Wolf,” I have a number of things that I’m curious about.

First, how does this relate to what you think of as music? Do you like listening to it? Which elements do you like and which do you dislike? For example, what do you think about the tension and rhythm – do you agree that the first works and that the second could be improved? Would you listen to this for enjoyment’s sake, and what would it take for this to be more than a novelty? What do you think about the narrative that drives the piece? I chose the predator and prey narrative, admittedly, on a whim. Do you think there might be some other narrative or agent-specific goals that might better drive this piece? Is there any metaphor that might better describe this piece? As a listener, do you enjoy the experience of being able to customize and configure the piece? What would you like to have control over that is missing here? Would you like more interaction with the piece, or less?

Finally, and perhaps most importantly, what do you think of the premise? Can simple electronic agents (albeit ones which interact socially) aspire to create music? Is there something promising in this act of simulation? Is music-making necessarily a human activity and is this kind of work destined to be artificial and uncanny?

Thanks for listening. I look forward to your thoughts.

“The Birth of Electronic Man.” Photo by xdxd_vs_xdxd @Flickr.

– – –

Andreas Duus Pape is an economist and a musician. As an economist, he studies microeconomic theory and game theory–that is, the analysis of strategy and the construction of models to understand social phenomena–and the theory of individual choice, including how to forecast the behavior of agents who construct models of social phenomena. As a musician, he plays folk in the tradition of Dylan and Guthrie, blues in the tradition of Williamson and McTell, and country in the tradition of Nelson and Cash. Pape is an Assistant Professor in the Department of Economics at Binghamton University and a faculty member of the Collective Dynamics of Complex Systems (CoCo) Research Group.

– – –

Guest Respondents on the Comment Page (in alphabetical order)

Bill Bahng Boyer is a doctoral candidate in music at New York University who is completing a dissertation on public listening in the New York City subway system.

Maile Colbert is an intermedia artist with a concentration in sound and video, living and working between New York and Portugal. She is an associated artist at Binaural/Nodar.

N. Adriana Knouf is a Ph.D. candidate in information science at Cornell University.

Primus Luta is a writer and an artist exploring the intersection of technology and art; he maintains his own AvantUrb site and is a founding member of the live electronic music collective Concrète Sound System.

Alejandro L. Madrid is Associate Professor of Latin American and Latino Studies at the University of Illinois at Chicago and a cultural theorist and music scholar whose research focuses on the intersection of modernity, tradition, globalization, and ethnic identity in popular and art music, dance, and expressive culture from Mexico, the U.S.-Mexico border, and the circum-Caribbean.

Tara Rodgers is an Assistant Professor of Women’s Studies and a faculty fellow in the Digital Cultures & Creativity program at the University of Maryland. As Analog Tara, she has released electronic music on compilations such as the Le Tigre 12″ and Source Records/Germany, and exhibited sound art at venues including Eyebeam (NYC) and the Museum of Contemporary Canadian Art (Toronto).

Jonathan Skinner founded and edits the journal ecopoetics, which features creative-critical intersections between writing and ecology. Skinner also writes ecocriticism on contemporary poetry and poetics.

Jonathan Sterne teaches in the Department of Art History and Communication Studies and the History and Philosophy of Science Program at McGill University. His latest book, MP3: The Meaning of a Format, comes out this fall from Duke University Press.

Jennifer Stoever-Ackerman is co-founder, Editor-in-Chief and Guest Posts Editor for Sounding Out! She is also Assistant Professor of English at Binghamton University and a former Fellow at the Society for the Humanities at Cornell University (2011-2012).

Aaron Trammell is Multimedia Editor of Sounding Out! and a Ph.D. Candidate in Media and Communications at Rutgers University.

In Defense of Auto-Tune

Lil Wayne, I Am Still Music Tour, Photo by Matthew Eisman

I am here today to defend auto-tune. I may be late to the party, but if you watched Lil Wayne’s recent schizophrenic performance on MTV’s VMAs you know that auto-tune isn’t going anywhere.   The thoughtful and melodic opening song “How to Love” clashed harshly with the expletive-laden guitar-rocking “John” Weezy followed with. Regardless of how you judge that disjunction, what strikes me about the performance is that auto-tune made Weezy’s range possible. The studio magic transposed onto the live moment dared auto-tune’s many haters to revise their criticisms about the relationship between the live and the recorded. It suggested that this technology actually opens up possibilities, rather than marking a limitation.

Auto-tune is mostly synonymous with the intentionally mechanized vocal distortion effect of singers like T-Pain, but it has actually been used for clandestine pitch correction in the studio for over 15 years. Cher’s voice on 1998’s “Believe” is probably the earliest well-known use of the device to distort rather than correct, though at the time her producers claimed to have used a vocoder pedal, probably in an attempt to hide what was then a trade secret—the Antares Auto-Tune machine is widely used to correct imperfections in studio singing. The corrective function of auto-tune is more difficult to notice than the obvious distortive effect because, when used as intended, auto-tuning is an inaudible process. It pulls flubbed or off-key notes to the nearest true semitone to create the effect of perfect singing every time. The more off-key a singer is, the harder it is to hide the use of the technology. Furthermore, to make melody out of talking or rapping, the sound has to be pushed to the point of sounding robotic.
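
The corrective principle can be illustrated with a toy calculation that snaps a detected frequency to the nearest equal-tempered semitone. This is only a cartoon of the idea, not Antares' algorithm, which also tracks pitch over time and controls how quickly it retunes.

```python
import math

A4 = 440.0  # reference pitch in Hz

def snap_to_semitone(freq_hz):
    # How many equal-tempered semitones above or below A4 is this frequency?
    semitones = 12 * math.log2(freq_hz / A4)
    # Round to the nearest whole semitone and convert back to a frequency.
    return A4 * 2 ** (round(semitones) / 12)

print(snap_to_semitone(452.0))  # a slightly sharp A is pulled back to 440.0 Hz
```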

Antares Auto-Tune 7 Interface

The dismissal of auto-tuned acts is usually made in terms of a comparison between the modified recording and what is possible in live performance, like indie folk singer Neko Case’s extended tongue-lashing in Stereogum. Auto-tune makes it so that anyone can sing whether they have talent or not, or so the criticism goes, putting the determination of talent before the evaluation of the outcome. This simple critique conveniently ignores how recording technology has long shaped our expectations in popular music and for live performance. Do we consider how many takes were required for Patti LaBelle to record “Lady Marmalade” when we listen? Do we speculate on whether spliced tape made up for the effects of a fatiguing day of recording? Chances are that even your favorite and most gifted singer has benefited from some form of technology in recording their work. When someone argues that auto-tune allows anyone to sing, what they are really complaining about is that an illusion of authenticity has been dispelled. My question in response is: So what? Why would it be so bad if anyone could be a singer through auto-tuning technology? What is really so threatening about its use?

As Walter Benjamin writes in “The Work of Art in the Age of Mechanical Reproduction,” the threat to art presented by mechanical reproduction emerges from the inability for its authenticity  to be reproduced—but authenticity is a shibboleth.  He explains that what is really threatened is the authority of the original; but how do we determine what is original in a field where the influences of live performance and record artifact are so interwoven?  Auto-tune represents just another step forward in undoing the illusion of art’s aura. It is not the quality of art that is endangered by mass access to its creation, but rather the authority of cultural arbiters and the ideological ends they serve.

Auto-tune supposedly obfuscates one of the indicators of authenticity: imperfections in the work of art. However, recording technology had already made error less notable as a sign of authenticity, to the point where the near perfection of recorded music became the sign of authentic talent and the standard to which live performance is compared. We expect the artist to perform the song as we have heard it in countless replays of the single, ignoring that the corrective technologies of recording shaped the contours of our understanding of the song.

In this way, we can think of the audible auto-tune effect as actually re-establishing authenticity by making itself transparent. An auto-tuned song establishes its authority by casting into doubt the ability of any art to be truly authoritative and owning up to that lack. Listen to the auto-tuned hit “Blame It” by Jamie Foxx, featuring T-Pain, and note how their voices are made nearly indistinguishable by the auto-tune effect.

It might be the case that anyone is singing that song, but that doesn’t make it less bumping and less catchy—in fact, I’d argue the slippage makes it catchier.   The auto-tuned voice is the sound of a democratic voice.  There isn’t much precedent for actors becoming successful singers, but “Blame It” provides evidence of the transcendent power of auto-tune  allowing anyone to participate in art and culture making.   As Benjamin reminds us, “The fact that the new mode of participation first appeared in a disreputable form must not confuse the spectator.”  The fact that “anyone” can do it increases possibilities and casts all-encompassing dismissal of auto-tune as reactionary and elitist.

Mechanical reproduction may “pry an object from its shell” and destroy its aura and authority–demonstrating the democratic possibilities in art as it is repurposed–but I contend that auto-tune goes one step further. It pries singing free from the tyranny of talent and its proscriptive aesthetics. It undermines the authority of the arbiters of talent and lets anyone potentially take part in public musical vocal expression. Even someone like Antoine Dodson can take part: his rant on the local news ended up a catchy internet hit thanks to the Songify project.

Auto-tune represents a democratic impulse in music. It is another step in widening access to cultural production, moving beyond the special classes of people whose social or economic position lets them determine what is worthy. Sure, not everyone can afford the Antares Auto-Tune machine, but recent history has demonstrated that such technologies become increasingly affordable and more widely available. Rather than cold and soulless, the mechanized voice can give direct access to the pathos of melody when used by those whose natural talent is not for singing. Listen to Kanye West’s 808s & Heartbreak, or (again) Lil Wayne’s “How To Love.” These artists aren’t trying to get one over on their listeners; just the opposite: they want to evoke an earnestness that they feel can only be expressed through the singing voice. Why would you want to resist a world where anyone could sing their hearts out?

 

Osvaldo Oyola is a regular contributor to Sounding Out! He is also an English PhD student at Binghamton University.