Further Experiments in Agent-based Musical Composition

Photo by whistler1984 @Flickr.

Editor’s Note: WARNING: THE FOLLOWING POST IS INTERACTIVE!!! This week’s post is especially designed by one of our regulars, Andreas Duus Pape, to spark conversation and provoke debate on its comment page. I have directly solicited feedback and commentary from several top sound studies scholars, thinkers, artists, and musicians, who will be posting at various times throughout the day and the week–responding to Andreas, responding to each other, and responding to your comments. Look for commentary by Bill Bahng Boyer (NYU), Maile Colbert (Binaural/Nodar, Faculdade de Belas Artes da Universidade do Porto), Adriana Knouf (Cornell University), Primus Luta (AvantUrb, Concrète Sound System), Alejandro L. Madrid (University of Illinois at Chicago), Tara Rodgers (University of Maryland), Jonathan Skinner (ecopoetics), Jonathan Sterne (McGill University), Aaron Trammell (Rutgers University, Sounding Out!) and yours truly (Binghamton University, Sounding Out!). Full bios of our special respondents follow the post. We wholeheartedly wish to entice you this Monday to play. . .and listen. . .and then share your thoughts via the comment page. . .and play again. . .listen again. . .read the comments. . .and share more thoughts. . .yeah, just go ahead and loop that.  –JSA, Editor-in-Chief

I’m a musician and an economist. Sometimes you will find me playing acoustic folk rock and blues on guitar, harmonica and voice. And at other times I will be at work, where I apply my expertise in game theory to the computer modeling of social phenomena. I create simulations of people interacting – such as how people decide which way to vote on an issue such as a tax levy, or how people learn to sort objects given to them in an experiment. In these simulations, the user can set up characteristics of the environment, such as the number of people and their individual goals. After things are set up, users watch these interactions unfold. The simulation is a little story, and one need only tweak the inputs to see how the story changes.

As a musician, I was curious if a program that generates social stories could be refashioned to generate musical pieces. I wanted to build a music-generation engine that the listener could tweak in order to get a different piece each time. But not just any tune – a piece with some flow, some story. I like that tension between randomness and structure. On one hand, I want every song to vary in unpredictable ways; on the other hand, I want to create music and not structureless noise.

I created a basic story of predators and prey, whimsically naming the prey “Peters,” represented by rabbits, and the predators “Wolves.” My simulation depicts a plain in the savannah with a green oasis. The prey seek the oasis and the predators seek the prey. Each character has its own goals and the closer they are to achieving them, the happier they are. Both predators and prey want to have stomachs full of food, so naturally they want to be close to their target (be it prey or oasis). As they travel through the savannah, they learn what choices (directions of movement) make them happier, and use this experience to guide them.

Photo by bantam10 @Flickr

So how does this story become music? To this question there are two answers: a technical one and an intuitive one. The intuitive answer is that in real life the story of predators and prey plays out geographically on the savannah, but musically this is a story that plays out over a sonic landscape. To elaborate, I abstracted the movement of the prey and predator on the geography of the plain into the musical geometry of a sonic landscape. The farther north an agent travels, the higher the pitch. And the farther east an agent travels, the longer the duration. In other words, as an agent travels to the northeast, she makes longer-lasting tones that are higher pitched. I also mapped happiness to volume, so that happy agents make louder tones. Finally, so that each agent would have a distinct voice as they traveled through this space, I chose different instruments for each agent.
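
As a rough illustration of the mapping (a Python sketch rather than the actual NetLogo code; the grid size, pitch range, and duration range are assumptions made only for this example):

```python
GRID_SIZE = 32                         # assumed width/height of the savannah grid
LOW_PITCH, HIGH_PITCH = 40, 88         # assumed MIDI note range (south -> north)
MIN_DUR, MAX_DUR = 0.1, 1.0            # assumed note length in seconds (west -> east)

def tone_for(agent_x, agent_y, happiness):
    """Return (pitch, duration, volume) for an agent at (x, y), happiness in [0, 1]."""
    pitch = LOW_PITCH + round((agent_y / GRID_SIZE) * (HIGH_PITCH - LOW_PITCH))  # farther north -> higher pitch
    duration = MIN_DUR + (agent_x / GRID_SIZE) * (MAX_DUR - MIN_DUR)             # farther east -> longer tone
    volume = round(happiness * 127)                                              # happier -> louder
    return pitch, duration, volume

print(tone_for(30, 30, 0.8))   # an agent near the northeast corner: high, long, loud
```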

In the video below I assigned the “church organ” sound to prey, and the “brass section” sound to predators.

Ultimately, there are some things that I like about this piece and others that I do not.

As a harmonica player, I improvise by creating and resolving tension. I think this piece does that well. The predator will pursue the prey into a quiet, low-pitch corner, creating a distant, rumbling sound – only to watch prey escape to the densely polyphonic northwest corner. There is an ebb and flow to this chase that I recognize from blues harmonica solos. In contrast to my experience as a harmonica player, however, I have found that some of the most compelling parts of the dynamics come from the layering of notes. The addition of notes yields a rich sonic texture, much like adding notes to a chord on an organ.

Unfortunately, for largely technical reasons, there is a lack of coherent rhythm and pacing. The programming platform (agent-based modeling software called NetLogo) is not designed to have the interface proceed in real-time. Basically, the overall speed of the piece can change as the processing load increases or decreases. I found that as agents learnt more about their surroundings (and more system resources are allocated to this “memory”), they became slower and slower. To fix this, I capped the size of their memory banks so that they would forget their oldest memories. The closest I have come to a rhythmic structure is by ordering the way that the agents play. This technique makes the piece have a call-and-response feel. If only the piece had a coherent rhythm, then I could imagine playing harmonica along with it.
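
A minimal sketch of that kind of cap (illustrative Python rather than the actual NetLogo code; the cap value here is just a placeholder):

```python
from collections import deque

MEMORY_CAP = 50   # assumed cap on remembered experiences; the real value isn't given in the post

memory = deque(maxlen=MEMORY_CAP)   # the oldest experience falls off automatically when full

def remember(situation, action, happiness):
    """Store one experience; once the cap is reached, the oldest memory is forgotten."""
    memory.append((situation, action, happiness))
```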

One last comment on pitch: while an earlier version of this piece mapped each step in space to a semitone, things sounded too mechanical. Even though this was the easiest and most intuitive decision from a technical standpoint, it was aesthetically lacking, so I have now integrated traditional musical scales. The minor scale, in my opinion, is the most interesting as it makes the predator/prey dynamic sound appropriately foreboding.
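
For illustration, snapping steps in space to scale degrees might look something like this Python sketch (the root note is an assumed placeholder; the intervals are the natural minor scale):

```python
MINOR_STEPS = [0, 2, 3, 5, 7, 8, 10]   # natural minor scale, in semitones above the root
ROOT = 45                              # assumed root note (MIDI A2)

def step_to_pitch(step):
    """Snap the step-th position northward to a degree of the minor scale."""
    octave, degree = divmod(step, len(MINOR_STEPS))
    return ROOT + 12 * octave + MINOR_STEPS[degree]

print([step_to_pitch(s) for s in range(8)])   # [45, 47, 48, 50, 52, 53, 55, 57]
```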

Photo by deivorytower @Flickr.

You can play this piece yourself. Simply go to this link with Java enabled in your browser (recommended: Google Chrome). Pressing “Setup” then “Go” will create your own run of the piece. As it is running, you can adjust the slider above the graphic window to change the speed. Press “Go” again to stop the model, adjust any parameters you wish, and press “Setup” and “Go” again to see how the piece changes. Here are some parameters to try: instA and instB change the instruments associated with prey and predators; PlayEveryXSteps changes the pace of the piece (a higher value results in a slower-paced piece); Num-PackAs and Num-PackBs change the number of prey and predators; and the vertical PeterVol and WolfVol sliders adjust the overall volume of prey and predators.

In regards to my version of “Peter and the Wolf,” I have a number of things that I’m curious about.

First, how does this relate to what you think of as music? Do you like listening to it? Which elements do you like and which do you dislike? For example, what do you think about the tension and rhythm – do you agree that the first works and that the second could be improved? Would you listen to this for enjoyment’s sake, and what would it take for this to be more than a novelty? What do you think about the narrative that drives the piece? I chose the predator and prey narrative, admittedly, on a whim. Do you think there might be some other narrative or agent-specific goals that might better drive this piece? Is there any metaphor that might better describe this piece? As a listener, do you enjoy the experience of being able to customize and configure the piece? What would you like to have control over that is missing here? Would you like more interaction with the piece or less?

Finally, and perhaps most importantly, what do you think of the premise? Can simple electronic agents (albeit ones which interact socially) aspire to create music? Is there something promising in this act of simulation? Is music-making necessarily a human activity and is this kind of work destined to be artificial and uncanny?

Thanks for listening. I look forward to your thoughts.

“The Birth of Electronic Man.” Photo by xdxd_vs_xdxd @Flickr.

– – –

Andreas Duus Pape is an economist and a musician. As an economist, he studies microeconomic theory and game theory–that is, the analysis of strategy and the construction of models to understand social phenomena–and the theory of individual choice, including how to forecast the behavior of agents who construct models of social phenomena. As a musician, he plays folk in the tradition of Dylan and Guthrie, blues in the tradition of Williamson and McTell, and country in the tradition of Nelson and Cash. Pape is an Assistant Professor in the Department of Economics at Binghamton University and is a faculty member of the Collective Dynamics of Complex Systems (CoCo) Research Group.

– – –

Guest Respondents on the Comment Page (in alphabetical order)

Bill Bahng Boyer is a doctoral candidate in music at New York University who is completing a dissertation on public listening in the New York City subway system.

Maile Colbert  is an intermedia artist with a concentration in sound and video, living and working between New York and Portugal. She is an associated artist at Binaural/Nodar.

N. Adriana Knouf is a Ph.D. candidate in information science at Cornell University.

Primus Luta is a writer and an artist exploring the intersection of technology and art; he maintains his own AvantUrb site and is a founding member of the live electronic music collective Concrète Sound System.

Alejandro L. Madrid is Associate Professor of Latin American and Latino Studies at the University of Illinois at Chicago and a cultural theorist and music scholar whose research focuses on the intersection of modernity, tradition, globalization, and ethnic identity in popular and art music, dance, and expressive culture from Mexico, the U.S.-Mexico border, and the circum-Caribbean.

Tara Rodgers is an Assistant Professor of Women’s Studies and a faculty fellow in the Digital Cultures & Creativity program at the University of Maryland. As Analog Tara, she has released electronic music on compilations such as the Le Tigre 12″ and Source Records/Germany, and exhibited sound art at venues including Eyebeam (NYC) and the Museum of Contemporary Canadian Art (Toronto).

Jonathan Skinner founded and edits the journal ecopoetics, which features creative-critical intersections between writing and ecology. Skinner also writes ecocriticism on contemporary poetry and poetics.

Jonathan Sterne teaches in the Department of Art History and Communication Studies and the History and Philosophy of Science Program at McGill University. His latest book, MP3: The Meaning of a Format, comes out this fall from Duke University Press.

Jennifer Stoever-Ackerman is co-founder, Editor-in-Chief and Guest Posts Editor for Sounding Out! She is also Assistant Professor of English at Binghamton University and a former Fellow at the Society for the Humanities at Cornell University (2011-2012).

Aaron Trammell is Multimedia Editor of Sounding Out! and a Ph.D. Candidate in Media and Communications at Rutgers University.

About Andreas Duus Pape

I am an economist and a musician.

39 responses to “Further Experiments in Agent-based Musical Composition”

  1. sukandi93 says :

    Reblogged this on sukandi93 and commented:
    no koment

  2. ecopoetics says :

    Thank you, Andreas, for sharing your fascinating project with us and for letting us in to your process, as well as for inviting discussion and critique. The conversation playing out here is a provocative one. I am coming late to the game (pun not intended) so, rather than respond to particular comments, I’ll just launch into a set of concerns that emerged for me as I read Andreas’s post, listened to the composition, and read the above thread. Please excuse the notational nature of what follows.

    1) The concerns about whether or not the results of such and such an experimental compositional practice are or are not “music” always amuse and perplex me. (One would expect audiophiles and critics to be asking these questions, but it’s always the composers themselves who seem most anxious. Sometimes for good reasons, but also sometimes for regressive ones, as the long history of Cage’s recuperation by “music” — one he himself aided and abetted — attests.) As a non-musician with wide open ears, I found this piece very interesting to listen to, especially with the cyclical building and resolution of tension. (Jennifer’s recollection of 80s B-movie soundtracks is right on: perhaps because of the organ timbre, I’m reminded specifically of certain cheap horror flicks.) It’s suspenseful, in ways that I don’t think depend on the “narrative.”

    But I’ll listen to just about anything irregular. Sometimes, when I stream archived WFMU radio shows from my iPod and the file glitches, hangs, and repeats, I’m not sure I don’t like the results better than the “undistorted” sound (which often, given that station’s programming tastes, already is quite distorted to begin with). And I get endless pleasure out of listening to environmental sounds without needing to know whether or not it is “music.”

    Haven’t forty plus years of “soundscape” recording–as well as an ever-expanding awareness of musical systems outside the Western–presented enough material for us to explore and try to understand what makes listening for pleasure tick, what the mechanics, history, audile techniques involved are, how they work, and how we work in relation to them, without having to worry about whether or not to classify the phenomena as deserving of the special attention and cultural capital designated by “music”?

    2) That’s why the process part of this is so interesting. I was fascinated that Andreas ties his process to the production of narratives — where a simulation is, to his ears, a “little story.” I was caught up short by that assertion because I don’t necessarily view interactions, not even human interactions, as stories. And narrative isn’t the first thing I think of when I come to music. That’s the poet in me speaking, and perhaps the early 21st-century listener, and it may be that Andreas’s interest in stories connects back to his affinity for folk, blues and country, all strongly narrative forms. As a poet, I listen for pattern, for rhythm, texture, and surprise, for lyric sweetness and arresting dissonance, for the “duende” that emerges from the breaking of a form and disruption of its expectations. As an ecocritic, I study the sounding of nonhuman life via aesthetic production for promise of a break with our habit of projecting human stories onto other species’ ways. (As Haraway and others have shown, this extends to our conceptualization of machinic nonhuman actants, as well.) Current students of Darwin are returning to his work on sexual selection as a way to modify and open out a science too locked into its functional narratives about adaptation and fitness, etc. For me, the “predator/ prey” model falls too easily into the functionalist narrative groove that is finally starting to give way. At the same time, Andreas’s use of the term “play” indicates a much broader conception of what’s going on here.

    3) I was a little disappointed when Andreas retreated from his narrative, in the face of Maile’s quite apt critique, to confess that “the predator/prey animal story is supposed to be a metaphor, but not more than that.” I was interested in this project precisely for what it might reveal about human/ nonhuman interactions in the zone of aesthetics and play. (For a provocative, extended exploration of this question, see David Rothenberg’s new book, Survival of the Beautiful.) If we take seriously some of the comments in this thread about distributed and collaborative agency (I love to hear musicians talk about their analog equipment, but why limit it to the “warm” technologies?) — about agency of machines, humans, and animals, as well, perhaps, as the agency of game structures — then I think we need to come up with new, more finely-grained rhetoric for describing these actions and interactions across different systems. (And we need to be better observers of the other-than-human world.) “Story” might enable us to relate to the events, but in what ways does a focus on narrative limit our understanding? Also, in an era of anthropogenic mass extinction, I think we might become a bit more vigilant about our tendency to turn other-than-human life into “metaphors” for our own concerns and obsessions. Or worse, how we might even extract play from the life of these others. (I started to feel uncomfortable about the way in which our discussion around and enjoyment of “play” seemed to be happening at the expense of these poor wolves and rabbits, locked into their grim “predator/ prey” interaction. What, no play for them too?) Especially since there is such huge, untapped (or forgotten) potential to learn from that other life, if not to collaborate with it. The animal world is so much more interesting than wolf chases rabbit! Though that can be interesting enough, when one really studies it . . .

    How about wolf plays with rabbit?

    In that regard, rather than a retreat to “love stories,” I would want to see this project pushed further in the direction of the kinds of questions Bill asks (“Does the concept of rational subjectivity really work in the context of lion prides and gemsbok herds?”) or with the kinds of experiments Tara suggests (bringing in field recorded material). I think this project holds huge promise, Andreas — especially in regards to the ways in which studies of animal behavior can help us better relate to our own machinic dimensions; I’ve long been interested in various attempts to generate bird song or flocking behavior through artificial intelligence programs — so I hope you keep us posted on its evolution! Thanks again for letting us in on the project’s early, critical stages. I’m going to keep listening to it, even for the love story in the chase: for how human loves computer loves wolf loves rabbit loves sound . . .

    • Andreas Duus Pape says :

      1) “…[W]ithout having to worry about whether or not to classify the phenomena as deserving of the special attention and cultural capital designated by ‘music’?” That’s a fair point. Part of it is simply introspective: I’m trying to create a thing that can produce sound that I enjoy listening to. Perhaps the term “music” is too loaded for this. The sense in which this is an experiment is the sense in which I’m trying to play with the pieces of this program to come up with something I like, cultural context and all, I guess.

      2) That’s a very interesting comment about narrative vs., as you say, a broader conception of what’s going on. I think you’re right on, in that my interest in stories, musically, connects to my folk, blues, and country forms. I honestly had not thought to question much about seeking a narrative. Much to ponder here. What you say about “play” in the last sentence is interesting…I don’t think of “play” and “narrative” as in conflict. Do you? …. On the other hand, your comment about projecting human stories onto other species’ ways…that’s definitely going on, here.

      3) “I was interested in this project precisely for what it might reveal about human/ nonhuman interactions in the zone of aesthetics and play.” That’s a very interesting perspective. I guess I want to defend the “retreat” into love stories. Part of what I find appealing about this framework is exactly the ability to craft different kinds of interactions between the agents through changing their goals. I really don’t think of that as a retreat at all, but rather part of what I was thinking when I originally conceived of the project.

      Flocking behavior is definitely part of what I was thinking of when I originally conceived of this as well….I talk about that a little bit in response to Bill’s question about packs.

      I’m glad you’ll continue listening. I’m sure my next post here will be the next phase of this project, whatever direction that will take. I’m sure it will incorporate many of the ideas and questions and critiques presented here.

  3. Jonathan Sterne says :

    Two thoughts after reading and listening.

    On the listening end, this is entirely hearable as music, framed as music, and operates like music. Most of the music in my world is made by electronic devices with speakers attached to them. This piece arrived the same way (well, via headphones, but that’s frequent enough for me). There is plenty of music I’ve heard—often in academic settings—with less audible structure or interest in it. Sure, there is a difference in origins between this and the Deepchord I was listening to last night; but Deepchord is actually an interesting example. Here’s a line from an interview with Rod Modell (of Deepchord):

    Steve and I like analog because it’s alive. I used to love putting my old Korg MS-20 [synthesizer—JS] outside in the cold garage for a few hours during the winter months. I would then bring it inside the warm house, power it up, and program something simple with a SQ-10 sequencer, and that little twelve-step sequence would mutate for two hours. Constantly changing. It was amazing to me. You would leave the room, and come back and it would sound totally different. So organic and so alive. Its personality would change as it warmed up and became more comfortable, just like a human being’s would.

    You’ve got the same inverse-Cage-let’s-set-up-parameters-and-allow-it-to-unfold scenario that is present in this piece (okay, fewer parameters). Modell uses this as a morality lesson in analog-vs-digital but what’s interesting to me is precisely the opposite: the composition and performance practice is pretty much conceptually the same.

    I’m also interested in Andreas’ comment that

    “there is a lack of coherent rhythm and pacing. […] The closest I have come to a rhythmic structure is by ordering the way that the agents play. This technique makes the piece have a call-and-response feel. If only the piece had a coherent rhythm, then I could imagine playing harmonica along with it.”

    Well, there’s rhythm and there’s meter (granting that explaining the difference between the two is maddeningly difficult). What I think you’re looking for, Andreas, is straight meter, which is useful for any dance-oriented or -derived music, to be sure. But it’s also true that after a century or so of recordings, people expect their music to stick to a pretty straight pace from phrase to phrase. Yet, even 100 years ago, you could hear very different ideas about speed and pacing in musicians’ performances. I recently saw Aleks Kolkowski demonstrate this point live, with an old Enrico Caruso recording, an Auxetophone (a phonograph with an air-powered amp) and a small group of musicians. They needed a conductor to play along with the recording, because Caruso’s speed and phrasing were so variable and unpredictable by 21st century standards. The speeding up and slowing down seem like part of the piece here. Obviously, it’s more pronounced and harder to follow than on a Caruso record, but it sounds to me like the program does have a coherent sense of time, just one tied to “nonmusical” concerns like processor load.

    • Andreas Duus Pape says :

      Thanks for your comment. I think the comment about set-up-parameters-and-let-unfold aspect is right on. And I appreciate the distinction you’re drawing between rhythm and meter. No doubt, what I meant was meter, also no doubt based on the environment I’m used to improvising in, which is primarily blues. You seem to be suggesting that, in the programmers’ parlance, it’s not a bug, it’s a feature. I’ll think about that some more. Perhaps I should try to be more open-minded about what I am expecting in regards to straight meter. Thanks!

  4. Bill Bahng Boyer says :

    Thank you for sharing your creative process with us, Andreas. I doubt I would have the courage to share my personal thoughts on game theory with a group of economists. As a long-time Cage fan with an incredibly useful undergraduate degree in music composition, I really enjoy work that challenges assumptions about authorship and intentionality, and this piece fits rather nicely into a long history (Mozart, Hiller, Cage, Xenakis, Cope, Polli, etc) that the other respondents have already touched upon.

    In his lectures on biopolitics, Michel Foucault identified a naturalization of economic theory that led to the development of liberalism long ago. I am fascinated by your attempts in this piece to tie game theory to “the natural” by way of a shaky representation of wild animals as rational subjects making right-angle choices. For me, as a composer who has some experience in this sort of music making, listening to this piece is less about hearing the sonic manifestation of these animals’ geospatial psyches and more about hearing the disjuncture between the free market of sound and the aesthetic limits of the system’s regulator. Judging from the other comments above, I am not alone in this sentiment.

    Unfortunately, I was not able to get the program to run in Chrome on my Mac, so I have only heard the video embedded above. Some questions after listening: Why translate the x-y axis of space to the musical parameters of pitch and duration? Obviously the result of this decision is that agents are represented by melodic lines, and the struggle for survival is represented as a counterpoint that operates not according to the logic of tonal tension and resolution as most listeners would expect, but according to the logic of sonic warfare: he who is loudest is winning. Can a predator silence (kill) a prey agent? Will the piece end when the last surviving agent dies of old age or disease? Don’t animals work in packs? Does the concept of rational subjectivity really work in the context of lion prides and gemsbok herds? I wonder just how many possible ways you can convey the relationship between hunters and prey with musical material and how many of them you have tried before selecting this model.

    You mention that you are “curious if a program that generates social stories could be refashioned to generate musical pieces.” I’m curious about the inverse: can a program that generates musical pieces be refashioned to generate social stories? In this case, what social stories can your piece generate (and which has it already)? I leave this question for others to address. Thanks again for sharing. Talking about avant-garde musical craft in a public forum sure makes me happy!

    • Andreas Duus Pape says :

      Hello Bill, thanks for your comment.

      Maile Colbert made a comment in which she criticized my ecological metaphor as being somewhat confused, which she was right about; I think that’s related to your comment about connecting game theory to “the natural.” The predator/prey animal story is supposed to be a metaphor, but not more than that. I think it’s interesting what can be taken from that metaphor, but I wouldn’t push it too far. I’m more interested in narrative that can be constructed from the goals and roles of these agents. In my reply to her, I talked about possibly instead crafting a “love story” by letting the goals of the agents be to find each other in this landscape. I would be curious in that case whether the music would sound different in a way that fit with some kind of idea I or others had about what a love song would sound like. That is, predator and prey is just one possible way the agents’ goals might fit together into some kind of narrative. Maybe it would have been a more interesting piece if I was clearer about that.

      I’m sorry to hear you couldn’t get it to work on Chrome on your Mac…that’s exactly the environment I tested it in. Did the page simply appear blank? I would trouble-shoot a bit if you’re interested in trying to get it to work.

      I agree, since the agents take connected paths through the landscape, interpreting one of the coordinates as pitch forces a melodic line. I’m not sure I really understand your question here. It’s one way to interpret the space into sound, and I was experimenting with different ways to do that, evaluating the output for whether it was more or less musically interesting to me. It’s an interesting point that mapping happiness to volume implies the struggle for survival becomes sonic warfare. I did hope that there would be a way to map the struggle for survival into, as you say, tension and resolution. I think there is some tension and resolution in this piece. So I guess I don’t understand your comment from that point of view. Isn’t the existence of tension and resolution in the piece evidence that some kind of mapping was made? I don’t think there’s a simple solution to this; I don’t think mapping happiness into pitch, for example, would suffice (although the program can be set to do that). I was hoping that tension and resolution would emerge from the piece as a consequence of a simple interpretation of the agent behavior into sound combined with interaction between the agents. That was my goal, in any case.

      “Can a predator silence (kill) a prey agent? Will the piece end when the last surviving agent dies of old age or disease?”

      Yes, the agents have an internal variable which represents their current energy, and when it goes to zero, they fall silent. When the predator dies, the prey wanders forever; when the prey dies, the predator wanders until its energy goes to zero and it also dies.

      “Don’t animals work in packs?”

      It’s interesting you mention packs. I experimented with that, and thought that would be a bigger part of the piece. I implemented some kind of communication between members of the same pack. I also implemented reproduction, where agents that have enough energy spawn other members of the pack. I thought population cycles, which tend to emerge in agent-based predator/prey models like this one, might be the source of some musical cycles, as it were. Unfortunately, at least so far, those experiments proved unsuccessful. More than two agents in the model seemed to cause too much of a jumble in sound, at least to my ear, and the changing size of the population caused more problems in rhythm along the lines I described in the piece. Ultimately, I decided not to include these experiments in what I presented here, simply because I didn’t like how it sounded very much. I hope that I’ll find a way to rescue this vein, though. The social dynamics within and between packs, I think, have potential for this piece.

      Thanks again for your comment and taking the time to check out the piece.

      • Bill Bahng Boyer says :

        Re: tonal tension and resolution, I meant that there’s a long-standing logic of harmonic and melodic composition familiar, at least on an unconscious level, to a very very large portion of the world’s population. It’s the logic that dictates that twenty five measures of E7 at the end of Roy Orbison’s “Oh, Pretty Woman” be followed by two beats of A Major. It’s the logic that prevents Christina Aguilera from straying from a repetitive melodic contour in “Beautiful.”

        Your piece seems to disregard tonality as a signifying structure entirely, by which I mean that you could switch the x and y axes or even invert them, without the meaning of the music changing much. What matters here is not the pitch or the duration of the notes, but the volume of the voices, and that’s something I would like to hear more about.

      • Andreas Duus Pape says :

        In response to Boyer, 11:31a: “Your piece seems to disregard tonality as a signifying structure entirely, by which I mean that you could switch the x and y axes or even invert them, without the meaning of the music changing much. ”

        I agree with the second half of the sentence, after ‘by,’ but I disagree with the first, and I don’t think the second half implies the first. I also simply disagree with this: “What matters here is not the pitch or the duration of the notes, but the volume of the voices[.]” It does matter. Part of the struggle for survival is geographic, part of it is the agents’ happiness. Part of what matters is their location. And location is mapped to pitch and duration. So, it matters, at least in my mind. I guess what I’m rejecting in what you are saying (if this indeed is what you are saying, I’m not trying to put words in your mouth) is the idea that there is a simple connection between the mapping I choose to make between the variables and aspects of the notes and the qualities of the final piece. It’s a complicated connection, and it emerges from the process. At least, that’s the intent, although this incarnation may well have fallen short.

        I think I understood, or, at least, thought I understood, what you meant about what tension and resolution is.

  5. Maile Colbert says :

    Hi Andreas and everybody. My comment is going to start a little awkward, as the experience for me was slightly tainted by what for many might be a minor issue, but I feel I should mention it as the whole base of your narrative is an ecological one. Your two chosen species play out their drama in a savanna…a grassy plain on the margins of the tropics, in which a rabbit would have food aplenty throughout the region, and in which there would be no such thing as an oasis, which is found only in arid desert regions. This may seem nitpicky, but was enough to start me off with questioning the rigor of what was behind the program. For me, using patterns in nature to organize sound should include an element of chaos, as in nature itself. This may also add an interesting element to the sound as well. I feel that there is something not quite worked out in what you want from this work. Is “music” really your goal, rather than “unorganized sound”? But then what if that is not what is revealed? How much of your hand in the program, for example with tweaking the memory, is appropriate? And that would only be answered if you were clear within that initial goal. If you want an ecological narrative played out to experiment and experience what sound dynamics may form, then perhaps that should be the focus rather than music. Or, as you questioned yourself, is the wolf and rabbit relationship the right narrative…perhaps not. Because with this interference, what is the difference between this sound making program and other sound making programs? The question of whether a computer can make music was answered decades ago…it can, and does, and we hear it very often. I found the suggestions of Primus Luta very interesting, and they could help head in the direction of having the sonification bring with it more information and meaning, which would make it more emotively compelling as well. Tara brought up the work of Andrea Polli, and I would second this. Her sonification is rigorous in its what and why, as well as being aesthetically interesting and often beautiful. All in all, this could be an interesting work…and I commend your bravery and generosity on sharing and publicly asking for critique!

    • Andreas Duus Pape says :

      Hi Maile. I hear your comment about the inaccuracies of my ecological narrative. I agree, it’s not a real story, I should have been more careful.

      I think my goal is, in fact, music. My hope was that this narrative, a predator/prey narrative that was admittedly generic, might be a way to do it. I think there are other possible narratives that I could set up in this space by changing the goals of the agents: for example, I recently thought that a narrative that might be interesting would be a generic “love story” in this setting, perhaps one where the goal of the players is to find each other in this landscape.

      You mention rigor, and I hear you. I know that I don’t have a rigorous definition of music, which indeed may be a serious problem with this enterprise. But I am considering this as an exploratory enterprise first and foremost, I guess…I’m thinking about what elements I can remove, or add, and whether the resulting thing is something I like to listen to, whether I find it compelling. I admit it’s not a rigorous definition of music, but I do think there is some rigor to my experimental method, if only using my own experience (and the experience of others, through forums like this) as a guide of whether it’s a success or failure. I think I’m trying to use that process to evolve something that is musical and narrative-based.

      Does that make sense?

      I’ll investigate the people you and others are pointing me to, I’m curious. Thanks!

      • Maile Colbert says :

        That does make sense, and I think with that in mind you could find a narrative that fits well. I quite like the idea of the love story actually, it could have all sorts of complications as parameters for change and could be quite whimsical in certain moments…for example when things slow down.

      • Andreas Duus Pape says :

        (intended to be a reply to Maile’s 11:00 am comment)

        I like the idea of the love story, too. It might be what I try to develop for my next post on Sounding Out! I’m curious if it will “feel different” musically from the predator/prey piece. Maybe it won’t, maybe it will. I’m genuinely curious about how much my tinkering with the narrative comes out in the piece itself. Maybe the connection is too tenuous, maybe it isn’t.

        Thanks again for your thoughts about this, I really appreciate it.

  6. Nicholas Knouf says :

    Andreas, this is very interesting and raises a lot of ideas and questions, so my comments might come off as slightly disjointed! I’m going to try and pick up some threads by other commenters as well.

    There’s two things I wanted to bring up. The first actually is detached from the musical content of the piece and has more to do with the visual layout of the interface. It struck me how similar the interface for this agent-based program is to certain experimental music software such as pure data or Max/MSP (an example of a similarly complex pure data interface can be seen here: https://nicholasbuer.files.wordpress.com/2008/06/pd.jpg ). There’s a tendency within this type of software to return to a limited set of interface metaphors: sliders, text boxes, toggle buttons, etc. Perhaps we can chalk this up to a couple of things: the software designer wanting to hew closely to existing interface metaphors, or limitations of the underlying graphical toolkit used to build the software. Yet I’m always interested in how interfaces shape—for better and for worse—the interactions we can have with our software and hardware, something that is quite manifest to those of us who have tried to learn a musical instrument. As a violinist and violist I have to admit that the contortions I put my body through are, all else being equal, rather _uncomfortable_. Yet the hoops I have to jump through in order to configure my limbs and extremities just right enables some rather powerful sounds to be drawn out of my instrument. All this is to say that the challenges of an interface can potentially end up being rewarding if the result is considered good enough. I was wondering, then, if you could comment a bit on what aspects of the software were liberating or limiting for you. And you’ve already mentioned the problematic of the timing, but was there something you learned about the software from this musical piece that might help inform your use of it for game theoretic purposes?

    Returning to the music, I’ve recently been looking into the sounds of economic and financial exchange, and specifically how various noises get mobilized for the purposes of either profit or the signaling of changes in the market. I’d like to riff off of what Tara Rodgers asked regarding the choices regarding the organ and bass. Was this related to the constraints of the software, and how specifically did you decide on those two mappings? One of the things I’ve come across in my research is the ways in which the sounds of traders within open outcry pits can be used as a proxy for future movements in the market. So, mapping this onto a predator-prey simulation, one might, depending on one’s political orientation towards finance, map the predator to the sounds of traders and the prey to the casseroles of Quebec, or, in a choice more sympathetic to traders, map the prey to the sounds of traders and the predator to the sounds of high-frequency trading computers eating away at the traders’ profits. I’m obviously conflating the musical result here with your background in game theory and economics, and thus trying to construct a congruent mapping between the sounds and the software. But perhaps this is a conflation that you did not intend? In a certain sense this goes back to my question regarding the interface: in repurposing an interface (or system, or object, or what-have-you), how far can one push it beyond its given constraints? And how much does the original shape of the interface color any potential reframings?

    As you can see, this raised a lot of questions for me. Thanks for sharing it with us.

    • Andreas Duus Pape says :

      “Yet I’m always interested in how interfaces shape—for better and for worse—the interactions we can have with our software and hardware, something that is quite manifest to those of us who have tried to learn a musical instrument. … I was wondering, then, if you could comment a bit on what aspects of the software were liberating or limiting for you.”

      That’s a very interesting point. Yes, the interface no doubt shaped how I thought about this. I “built” that particular interface from a toolbox of sliders, pulldown menus, etc., that the programmers of NetLogo provided for me, and in the background those elements of the “public interface” interacted with the code, which for me, as the programmer, was also an interface. At a basic level, I had to boil down everything I wanted done to essentially mathematical steps that process input and deliver output. I think that part of the interface, the part that I’m interacting the most with, no doubt shaped this piece. The other part of it is how the agents process their memory into making choices about the future…that I treated like a black box in my expression here, but in fact that is a complicated and central part of my research. That structure – that agents face a situation (a location of themselves and others), have a certain set of actions (N, S, E, W), and make a guess about which way to go based on how similar the current situation is to past situations and whether those choices made them happy or sad – strongly shapes the piece. For example, the agents are focused on each step as a choice, as opposed to a series of choices…and that’s baked in, as it were, because of the choice theory I’m using in the program. Maybe it would/could sound “more musical” if agents were thinking over a longer time horizon, making choices of a series of steps. That’s a very interesting direction that this could go.
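
      A toy sketch of that kind of similarity-weighted choice rule (illustrative Python, not the model’s actual code; the similarity function and the example values are assumptions):

      ```python
      import math

      ACTIONS = ["N", "S", "E", "W"]

      def similarity(a, b):
          """Assumed similarity measure: nearby situations (2-D positions here) count for more."""
          return math.exp(-math.dist(a, b))

      def choose(current_situation, memory):
          """Pick the action whose past uses, weighted by similarity to the current
          situation, were followed by the most happiness (a toy case-based rule)."""
          scores = {a: 0.0 for a in ACTIONS}
          for past_situation, action, happiness in memory:
              scores[action] += similarity(current_situation, past_situation) * happiness
          return max(scores, key=scores.get)

      memory = [((1, 1), "N", 0.9), ((5, 5), "S", 0.2)]
      print(choose((1, 2), memory))   # "N": the happier, more similar memory wins
      ```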

      “And you’ve already mentioned the problematic of the timing, but was there something you learned about the software from this musical piece that might help inform your use of it for game theoretic purposes?”

      Also a very interesting question, and one that I had not thought about at all, how this could inform my game theory work, that is, go in the other direction. I really have no idea. I guess as a social scientist I am trying to play with choice theories that are human-ish in one way or another. Maybe this will teach me something more about my choice theory. (That said, my response to your previous question is already something of an answer!)

      “…regarding the choices regarding the organ and bass. Was this related to the constraints of the software, and how specifically did you decide on those two mappings?”

      A lot of it is constraints of the software: the software has a certain library of musical instruments and I’m allowed only a certain degree of control over those instruments.

      I definitely think there could be a way to make this program model sounds and music of economic and financial situations. I could map the characters of the predator and prey to financial and economic actors as you say, but I could also come up with completely different characters and a different game. That is, instead of taking the roles and goals of predators and prey as given, I could directly attempt to model the behavior of traders, their goals and how they interact, and assign sound/notes to their behavior in a way similar to what I did with the predators and prey. Whereas the combination of NetLogo and I are constrained in our ability to produce arbitrary sounds with arbitrary control, we are much less limited in designing a model of a different social situation. I strongly suspect that would get closer to what you’re looking for. Is that interesting to you?

  7. Alejandro L. Madrid says :

    I have to confess that at the beginning I was a bit baffled by Andreas’s questions. To me it is clear that this is music, there is no question about it. As Aaron writes, it is sound organized into a logical system. I would even go as far as to state that the “author” of this particular arrangement of sounds (regardless of the computer mediation) is Andreas in the same sense that John Cage is the author of a piece like *Music of Changes*; the only difference I see between these two works lies in who makes the final decisions that result in a specific arrangement of sounds or pitches. Evidently, the piece would not exist if Andreas had not created the computer code and the program. Like Aaron, I believe Andreas’s experiment brings up the question of authorship, but it is an old debate that goes back to Cage as well as Barthes and Foucault. One of the directions that debate moved us towards was precisely a questioning of meaning in music in relation to authorship and reception. I am one who believes (as I guess Jennifer does too, by her response) that what makes music meaningful to people is the personal relation they establish with it; sometimes responding to the author’s aesthetic discourse, sometimes ignoring it, but always transcending it. So, I think it is possible to establish such a connection to this music; it really is not up to the author or the medium but to the listener.

    • Alejandro L. Madrid says :

      Sorry if my comment may not be responding to the latest posts. I wrote it this morning but forgot to post it and did it just now, without reading the most recent comments.

  8. Primus Luta says :

    Thank you Andreas for the work you’ve put into this. Before getting into it I’d just like to point out that Peter and The Wolf is indeed one of the best narrative compositional pieces ever IMHO. It’s quite appropriate here I think to use it as the ‘analog’ model for what you are trying to accomplish. That said tough shoes to fill.

    I have to say I was far more pleased with the results than I expected. One of the things that this type of thing lends itself to is quickly overcomplicating itself. With all of the data involved I was certain that it wouldn’t translate, but I do believe it did. It’s interesting that you chose prey and predator rather than specifics as that leaves it open to the general tension of the hunt which I think is captured here.

    One of my first thoughts though was the use of pitch to map space. It was one of the points which I was certain wouldn’t work, and though I feel it has I’ll still put some of my initial concerns on the table.

    I think zeroing in and actually identifying the species involved in the hunt should be audibly definable. Right now what is being modeled is an abstraction and as such the results sound abstract. By zeroing in so that it is something more tangibly being modeled, I think the sound will achieve a higher level of definition.

    Take, for example, your issue around rhythm: it could be addressed by going to the two rhythms present in whatever the prey and predator are – the heartbeat and the walking pattern. The more intense the chase, the more the rate of both goes up. As life slips away, one (or both) go into decline. But they also (likely) have distinct patterns to their heartbeats and their walking that could make for interesting counter-rhythms.

    Similarly while the choice of instrument works well here, I’d love to hear the sound of each more modeled around the subject. Thinking not just around pitch but other instrument characteristics (dynamics, distortion, detuning, etc). These factors change in parallel with things like the pace of the chase as mentioned above.

    Of course the more variables you add to the equation as you experienced, the harder to process the information and produce a reliable sound. Perhaps there’s a way to compute offline rather than live. As much as the live aspect of getting to adjust the data and such makes it interactive, I think that the modeling itself is worth the sacrifice to get strong results.

    Hope this helps. Looking forward to how things progress.

    • Andreas Duus Pape says :

      “I have to say I was far more pleased with the results than I expected. … With all of the data involved I was certain that it wouldn’t translate, but I do believe it did.” Thanks!

      I really like your idea of tying the factors of the music to the closest factors within real animals, rhythm for example. I could imagine a way to implement heartbeat and walking rhythm.

      You’re right, that I could compute offline instead of live, too. I’m used to letting it play out in real time, but I should be able to precalculate all the moves and then have it play out in any time. It’s a good idea.
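
      A sketch of that precompute-then-play-back idea (illustrative Python, not existing code; the placeholder notes and fixed step timing are assumptions):

      ```python
      import time

      def simulate(n_steps):
          """Placeholder for the agent-based run: returns, per step, a list of (pitch, duration, volume)."""
          return [[(60 + step % 12, 0.25, 100)] for step in range(n_steps)]

      def play(notes_per_step, seconds_per_step=0.5):
          for notes in notes_per_step:
              for pitch, duration, volume in notes:
                  print(pitch, duration, volume)   # stand-in for sending the note to a synth
              time.sleep(seconds_per_step)         # fixed pacing, independent of simulation cost

      play(simulate(16))
      ```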

      Thanks so much!

  9. j. stoever-ackerman says :

    Mary–your comment about the opening really caught my attention, not because I read it in the same way that you did, but because I remembered being surprised at the strict division Andreas made between “play” and “work” there. This made me think about the labor involved in making music (and music as a labor and an industry) as well as the “play” that is almost certainly involved in being a professor. It also reminded me of a conversation I had with Aaron discussing the seriousness of play.

    At the risk of sounding like Adorno here, all of this is to say that, for non musicians–those of us well-versed in what Daniel Cavicchi calls “audiencing” in his book _Listening and Longing in the Nineteenth Century_, the “reception of performances and works, the history of music consumption, the development of audience practices, and the uses of music in daily life” (4), but not so much practiced in the act of making music–this may involve a different type of labor altogether, one that blurs work and play more dramatically, calling attention to the “work” of listening. It might take “practice” to listen to this piece.

    Also–on a different note, Andreas–do the predator and prey listen to each other in this piece as time moves on–is that what you mean by them “learning” more about their surroundings? That is something I really value as a listener of a musical piece–and in experiencing live music–the fact that I can hear the players listening to each other in the sounds produced (and even as the musicians have their own agenda/expertise), what Fred Moten calls the “improvisation of the ensemble.” I think this may be something that, for me, ultimately separates music from non-music.

    And Steven–excellent points about authorship, something to think of especially in regards to how recorded music is multiply authored–by the band, the engineers, the producers, the mixers, the record company (if one is involved).

    • Andreas Duus Pape says :

      “Also–on a different note, Andreas–do the predator and prey listen to each other in this piece as time moves on–is that what you mean by them “learning” more about their surroundings?”

      Indeed they do. In particular, what a particular agent is taking into account in encoding her memories is her own location and the other agent’s location at a particular point in time, and how happy she is at the time.

      I’m enjoying this conversation!

  10. Mary Caton Lingold says :

    This is a really wonderful discussion and a fascinating piece, Andreas. I also found myself enjoying the music and particularly the disjuncture in rhythm. The “slow-down” heightened my attention and brought out the component parts.

    I’d like to bring up the concept of “play.” When I first looked at the opening lines of your post, I misread them to read like this: “I’m a musician and an economist. Sometimes you will find me playing acoustic folk rock and blues on guitar, harmonica and voice. And at other times I will be at work, where I play my expertise in game theory to the computer modeling of social phenomena.” You actually wrote that you “apply your expertise in game theory to computer modeling of social phenomena,” but it strikes me that my misreading is actually really in line with what you are experimenting with. What do we mean when we say that we “play” music and what would it mean to “play” our academic work?

    I think the concept of play de-centers the notion of authorship by implying that one’s engagement with music, or an intellectual project is exploratory and experiential rather than authorial and fixed (and therefore publishable.) When you play music, the music is happening to you and you are making it happen all at the same time. Not unlike being chased by a predator or hunting for lunch! Which brings me to the notion of time, which Jennifer discussed eloquently above.

    Andreas writes, “I found that as agents learnt more about their surroundings (and more system resources are allocated to this “memory”), they became slower and slower. To fix this, I capped the size of their memory banks so that they would forget their oldest memories.” I find myself wanting to make something really theoretically deep out of your description of the technical challenge and its workaround. I have to remind myself that the computational interface is not necessarily analogous to human experience. But, play with me for a second. There is a profound relationship between memory and musical performance. I find I play best when I remember just enough to know the structure of a tune, but not enough to remember all my mistakes and limiting habits. In fact, I love it when I forget my oldest memories and am simply ‘in the moment’. I agree very much with Jennifer that sound and music profoundly shape the way we experience time. This is one of the challenges of interacting with large quantities of sound content in digital or analog environments — you just can’t “skim” it. Thank you for sharing this project and for starting a wonderful discussion!

    • Andreas Duus Pape says :

      “There is a profound relationship between memory and musical performance. I find I play best when I remember just enough to know the structure of a tune, but not enough to remember all my mistakes and limiting habits. In fact, I love it when I forget my oldest memories and am simply ‘in the moment’.”

      I love it. I totally like what you’re doing here with the concept of play and pushing this as a metaphor. That’s part of what I was thinking with this… I was trying to write some of the code following some introspection into my own experience of making music.
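
      And since Mary quoted the memory-cap workaround, here is a similarly minimal sketch of that idea. Again, this is illustrative Python rather than the model’s actual NetLogo code, and the cap value is an assumption; the point is only that a bounded buffer forgets its oldest entries automatically once it fills up.

      ```python
      from collections import deque

      # Hypothetical sketch of capping an agent's memory bank. 200 is an
      # assumed value for illustration; the real cap in the model may differ.
      MEMORY_CAP = 200

      memory_bank = deque(maxlen=MEMORY_CAP)

      def remember(observation) -> None:
          """Store a new observation; once the bank is full, the oldest one is dropped."""
          memory_bank.append(observation)
      ```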

    • Aaron Trammell says :

      Great connections, Mary! They helped me work through some things I have been struggling with for some time.

      I agree and disagree about the practice of playing music. Yes, it is true that music making is an embodied practice, and that when we make music questions of authorship do become fuzzy. But I think that the exploratory and experiential aspects of play are not necessarily those highlighted by the software that Andreas created. Andreas’ program is, in my opinion, anathema to these more “creative” aspects of play because it focuses more on the game-like space of simulation and music. Games, as opposed to free-form play, rely on rules to organize their internal machinations. The imposition of rules upon play has deep cultural implications – it is the moment where the social order is defined. This occurs most clearly here the second we impose the Western musical order of timbres, textures, notes and scales upon the agents. But it also occurs on a procedural level – these agents have desires that model something new, something that challenges the status quo of the Western musical order.

      I responded, though, to a question of consumerism, one which is firmly embedded within the discourse and logic of capitalism. Is this the sort of music that I would listen to for fun? And for me the honest truth is that I would not. When I consume music, it is rarely because I seek enlightenment or intellectual stimulation; it is instead to commiserate with the authors of the piece, the experiences they are sharing, the stories (both scripted and sonic) that they are telling. I seek an emotional connection, not an intellectual one. And it’s exactly this affective connection that record companies exploit with their pre-packaged nostalgia, predictable song structures, predictable lyrics, and mid-tempo grooves.

      The sounds and textures produced here speak to a discourse of classical music which (to borrow Adorno from Jennifer) courts an audience which genuinely believes in things like progress and enlightenment. This discourse, as someone like Foucault might problematize, relies exactly upon a dogmatic, singular sense of truth to push it forward. While this discourse of enlightenment and truth does run in contrast to the consumer ideals of the record industry which I noted earlier, I contend that it may not be the space of freedom and possibility that it might seem.

      This is mostly because as listeners we can’t intuitively crack (or even begin to attribute a source to) the algorithmic code that Andreas produced. Certainly, authorship is a much problematized term, but accountability is not. And while we could refer to Andreas as the author of the “rules” which structure the simulation and ourselves as authors of the “parameters,” our agency to begin a dialogue, or mount any sort of critique, through or about this structure is minimal.

      That said, this music-making machine has inspired some fantastic dialogue between us all, and I look forward to reading more as time progresses!

      • primusluta says :

        “Andreas’ program is, in my opinion, anathema to these more “creative” aspects of play because it focuses more on the game-like space of simulation and music. Games, as opposed to free-form play, rely on rules to organize their internal machinations. ”

        I’m going to question this a bit as a means of furthering the discussion. I think a lot of what has happened on the visualization side of digital music, from plugin UIs to performance visuals, has encouraged this game-like perception of music from computers. In an example like this there is indeed something like a game being played between predator and prey, but that perception can be deceiving.

        The parallel between this and Peter and the Wolf shows that the subject of exploration is one with musical goals, not just game play. In Peter and the Wolf the story guides the direction of the music, but that story too is based on the variable interaction of the prey and the predator. What Andreas has done is removed the narration and used only the variables for musical guidance. One can imagine that it would not require the story of Peter and the Wolf to come up with the music for it.

        Another thought: this reminds me of Goodiepal’s Radical Computer Music and, IMO, its forward-thinkingness. If we evaluate this piece by comparing it to human music playing, we come dangerously close to missing the point. This is music composed for the performance of a machine. It is a machine interpretation of predator and prey based on the learnings applied programmatically by Andreas. It should never lose its machineness, IMO, and it should be evaluated on those grounds, toward answering the question “how well is the computer able to interpret the dynamic of prey and predator through music?” Based on what’s been presented here, very well, and with plenty of room to grow.

        In fact the memory hurdle which Andreas finds himself up against is a clear line of distinction between the computer and its human counterpart. The human can process an almost unlimited number of variables both consciously and unconsciously (or fixed memory versus RAM) to translate into a performance. Because of the technological limitations of the moment, the computer can only do so much while playing in real time before the processing required eats into the performability. But it can still render the music, and perhaps, with enough processing power working on enough variables, render a piece of music with a level of intricate and yet contextually relevant detail beyond what a human could formulate on their own.

      • Mary Caton Lingold says :

        Aaron, thank you for these thoughts. And also, thank you to primusluta, Andreas, and Jennifer for engaging in some of the things I commented on as well. This is a great conversation! Aaron, I want to push back a little against some of your claims about the algorithmic design of the game/performance making it unlike free-form play. There are layers and layers of opaque formal structures (if not rules) that go into making all kinds of music. Even musical instruments are objects with a fixed entity.

        I’m fascinated by the fact that you need generic structures and social convention to make play and improvisation possible. Otherwise, you just get lost. I’m always reminded of the anecdote about Robert Frost. He supposedly said that writing poetry in free verse is like playing tennis without a net. Although I’m all in favor of free verse poetry (and net-less tennis), something about that notion rings true to me — that in order to be innovative, you have to have a formal constraint in the first place. To me, the formal constraints of Andreas’ piece are precisely what makes it music as opposed to some other thing. There had to be such a thing as a guitar for Jimi Hendrix to set one on fire.

        Primusluta suggests that the machine itself is the one doing the playing in Andreas’ piece and that really jibes with me.

      • Andreas Duus Pape says :

        Mary says: “Aaron, I want to push back a little against some of your claims about the algorithmic design of the game/performance making it unlike free-form play. There are layers and layers of opaque formal structures (if not rules) that go into making all kinds of music. Even musical instruments are objects with a fixed entity. … I’m fascinated by the fact that you need generic structures and social convention to make play and improvisation possible.”

        That’s very interesting, and I think I agree, at least in my own experience. It seems that play and improvisation mean playing within a structure and pushing on that structure. That somehow play comes from that interaction between structure and randomness, or structure and free will, or something like that. I think there is a tension there, but I think that tension is compelling. When I’m jamming musically with others, for example, we’re jamming within a structure, like 12 bar blues or some other chord progression. That’s certainly part of it for me.

      • Aaron Trammell says :

        Man, I wish that WordPress would allow nested replies beyond the third level… Primus Luta and Mary, thanks for giving me some pushback – this is exciting!

        I see your point, Mary: I shouldn’t rag on the digital machine when the same critique can be mounted on the analog machines that preceded it. Is the rhetoric of the guitar any different than the rhetoric of agent modeling software?

        Again, I agree, but only to a point. I think it is much easier to break the rules with a guitar than with agent-modeling software. The sounds made by strings over a hollow body are far easier to modulate and modify than the sounds made by Andreas’ software. A cantankerous player can tap the body of a guitar, tap the strings, change the strings (different strings have different sounds), change their pick, play with the tuning, modulate tuning in real time; the possibilities are limited only by the imagination. Although there are many parameters in the software that we can modify, I contend that they only provide the illusion of choice, for a subset of sounds that have already been programmed and imagined. Plus, I would critique both the synthesizer (in this and other forms) and the guitar (and all its colorful pedals) for inviting a technological consumer fetishism at the margins of creativity.

        Where has Hendrix’s burning guitar gotten us? It has supported millions of advertisements on AOR radio, been featured as a selling point of thousands of guitar magazines, been printed as a banner at Guitar Centers and Sam Ashes everywhere, and been used to sell hundreds and hundreds of Hendrix – signature edition – left-handed Fender Stratocasters. This technological fetishism only gets worse with computers, which have always relied upon the rhetoric of creativity and possibility to sell the latest, greatest model. Now I’m not arguing that there is no potential for creativity or possibility to be found in this software, but I am arguing that this idea of technological innovation is deeply integrated into a consumer marketing machine.

        So where does this all leave us when theorizing the practice of play in these contexts? Well, I’m also fascinated by this. I agree that without a social context, play meanders. But, when rules are introduced, there is a double bind. There is the potential in changing rules to introduce new social potentials and possibilities, but there is also a potential (and in my opinion, this is an inevitability in today’s America) that these rules will be exploited – particularly by for-profit industries. So I’ll rephrase my earlier statement as a new question: Why do we value the sort of creativity which plays neatly within the rules of our social system (game-like, innovation) instead of the sort of creativity that moves without direction (playful meandering)? Or is this a question of why we value value at all?

        Now to address Primus Luta’s points – I don’t agree with the distinction you are drawing between musical goals and game-like goals. Haven’t musicians always been playing some sort of game? Hence – “playing music.” I think this is what Mary was hinting at above when she said that I ought to critique guitars as well as machines as there are few relevant differences between the two. But that said, I’m interested in what you mean by the forward-thinkingness of Goodiepal’s Radical Computer Music. What do you mean by forward-thinking? To me that sounds like there may be some political agency or intent to the composition – what exactly do you think these politics are?

        To bring it back to my previous point, I’m not sure what the value of “intricate yet contextually relevant detail” is when it goes beyond the limits of human perception. This seems like the sort of marketing song and dance the music industry has been using to construct a discourse of audiophilia for the last century.

        Is it live, or is it Memorex?

  12. j. stoever-ackerman says :

    Interesting point, Aaron. So for you, the politics of authorship are too diluted in this piece to sustain any real affective connection to it. I myself had an affective connection, but mainly because of memory–on Andreas’s demo, it initially sounded like a b-movie soundtrack from an 80s television show (21 Jump Street comes to mind for some reason)–during that immediately dateable period in Hollywood when everyone was trading in orchestras and bands for one synth player. It got increasingly less interesting for me as it wore on and kind of fell apart. I think Andreas’s point about the piece being unable to unfold in real time has something to do with that–I am increasingly coming to think of music (and sound in general) as one of the ways we come to know and experience the passage of time, and I felt alienated from this piece in regard to that.

    I think that perhaps the questions asked here are a little fallacious in the sense that I saw the human all over this piece. Because of the cultural codes you bring up, we are really asking the computer if it can make music for *us*, specifically–and there may not even be a unified *us* in our group of listeners/respondents. Embedded in the computer code for this program are all sorts of culturally specific codes about what qualities make a collection of sounds “music”–right down to the allusive reference to “Peter and the Wolf” in the naming of the “agents” and the embedding of Western musical scales into the algorithm.

    Perhaps computers already make music for computers and we just don’t recognize it as music at all?

    • Steven Hammer says :

      Great piece and interesting reactions here, especially regarding (human?) authorship. I found the composition to be really engaging and rich, in much the same way that I find circuit-bent compositions and instruments to be so captivating. They violate convention and authorial paradigms in really useful ways. True, I feel significantly less “passion” in this sonic space, but I find significant mental/psychedelic stimulation, which I think has been missing from sonic conversations (likely because it’s tricky to make distinctions). In any case, I enjoy the kinds of sonic spaces created by pseudo-indeterminacy. The loss of human control in the creative process almost always yields results worthy of examination.

      I’m reminded of a video I viewed recently (https://vimeo.com/49484255) in which objects were scanned and those data triggered sonic events. In this example (and many others like it), there seems to be a moment in which we forget to acknowledge the human hands in the preproduction work. In other words, as Jennifer hints at above, parameters/sounds/effects must be predefined and programmed, objects must be chosen, and sounds nearly always correspond to some culturally canonized sonic event classified as music. Sometimes this translates into unstated yet implicit claims of (pure) indeterminacy and nonhuman composition/authorship. We’re not engaging in indeterminacy of composition here, but rather undetermined inputs matched to predetermined parameters. Maybe I’m being picky here…

      Anyway, to pick up Aaron’s questions re: authorship:
      Where authorship lies is going to, of course, depend on one’s philosophical leanings. Humanist? Romantic? Posthumanist? Hell, maybe the author is dead anyway (Barthes) or only a function of discursive performance (Foucault). I find that Latour and other OOO/ANT thinkers are useful in situations like this. Maybe the author is not a single entity, but a collection of actants working in relationship with one another to create meaning (here in the form of sound). Andreas gives us a great example of a human working with code, sound, narrative structures, cultural references, and audiences. Meaning is made (and therefore authorship is performed) in these interactions between actors/ideas. This network of actors, in effect, becomes the author. This might be hard to swallow given the philosophical baggage we inherited from the Romantic poets, but it is an appealing approach to understanding the complexities of creation, authorship, and reaction.

      So many big and interesting ideas here. Big thanks to Andreas!

      • Andreas Duus Pape says :

        “I found the composition to be really engaging and rich, in much the same way that I find circuit-bent compositions and instruments to be so captivating.” Thanks!

        “We’re not engaging in indeterminacy of composition here, but rather undetermined inputs matched to predetermined parameters. ”
        I think that’s totally right, and I agree with Jennifer’s comment about this, too. There’s definitely a way I’m mapping these agent behaviors through my prism of what music is.

    • Tara Rodgers says :

      Hello, all, and thank you, Andreas, for sharing this work and for the interesting questions you pose. I’ll enter the conversation here with a reply that riffs on Jennifer’s observations about the ways that human-ness inhabits code.

      My initial impressions of this piece have to do with the various ways in which it can have a communicative function. On one level, you’ve set up the intra-communication of agents within the composition, which provides the piece with a certain form. On another level, this piece seems to be at least in part striving to communicate something “about” predators and prey to listeners. (I found myself, at least, closing my eyes at various points to see if I could hear this story, or these relational dynamics, in the music.) This kind of communicative function is often central to agent-based composition more generally, as well as data sonification projects–such as works by Andrea Polli (http://www.andreapolli.com/) and Carrie Bodle (http://carriebodle.com/)–which use sound to communicate patterns and meanings of social or scientific data. But I’m especially interested in the relationship that you as the composer seem to have to this work–especially the comparisons you draw between your process of making this piece vs. harmonica improvisation. It seems there is an interesting set of issues here about musical instruments or sound-making tools (e.g., NetLogo software, a harmonica, a synthesized organ sound, etc.) as partners or companions (thinking with Donna Haraway here) in music/sonic communication. For example, the programmer’s cultural, habitual, and/or technological frame may be to default to Western musical scales; the software/hardware’s constraint is characterized by slowing down the tempo when there is a heavy processing load… and from such multiple, sometimes complex, overlapping human and nonhuman trajectories, the piece itself emerges. (I’ve been thinking of these issues a lot when I do noise improv performances with one or two analog synths–which I think of, together with myself, as a “duo” or “trio” where each of “us” brings a set of possibilities to the table.) I wonder if you’ve tried playing your harmonica along with this piece, despite the fact that you find it lacking in coherent rhythm? Or if there might be ways of using the NetLogo composition as a kind of score or starting point for a harmonica piece?

      Another impression and set of questions that I have has to do with the relationship between sonic content and musical form in this piece. In other words, perhaps what many listeners hear as “musical” in this composition results from the mapping of agent movements and interactions to the instrumental sounds of organ and brass. I understand that these may be limited options within the software, but it piqued my curiosity to at least imagine what the concepts of “predators” and “prey” might sound like if those concepts were detached from traditional Western instrumentation (even mapped to field recordings or noise textures, for a couple of examples). Here, again, one might experiment with moving this exercise over to the harmonica–what would the same story sound like with that instrument as a companion in its production?

      • Andreas Duus Pape says :

        I think that’s right; I do think the piece is trying to communicate something about predators and prey.

        I have tried to play harmonica along with the piece, but I couldn’t get it to work for me. I wish I could say more about it, but my improvisation is unschooled and mostly intuitive, and I rarely know why it doesn’t work; I just know when it does. I also tried playing rhythm guitar along with the piece, allowing the predator and prey to play “lead.” That seemed to work better. I may attempt to record a bit of that and put it up here.

  13. Aaron Trammell says :

    I think that I’ll start this off by saying that I found the ambience of the music surprisingly enjoyable. I found that the more I listened to it, the more I found myself challenged by heady questions like: “What is music, really?” I believe that music is essentially just organized tones; it’s our cultural systems that give these patterns and tones meaning.

    This program does half of this work: it organizes tones into a coherent system. And it even simulates many of the things that we expect to encounter in this system – instruments and scales. But to the question of whether or not I would seek this out, I would not. I would not seek this out because there are no creators here, and therefore this program tells us very little about one’s life experience.

    So, I pose the question: where does authorship lie in this context? Is Andreas the author, is the algorithm the author, or am I (if I program the machine)? When negotiating authorship through so many perspectives, what gets through? What are the politics of authorship, and therefore where is the passion?

  14. Andreas Duus Pape says :

    Looking forward to hearing your feedback!
