Sounding Out! Podcast #31: Game Audio Notes III: The Nature of Sound in Vessel
This post continues our summer Sound and Pleasure series, as the third and final podcast in a three-part series by Leonard J. Paul. What is the connection between sound and enjoyment, and how are pleasing sounds designed? Pleasure is, after all, what brings y’all back to Sounding Out! weekly, is it not?
Part of the goal of this series of podcasts has been to reveal the interesting and often invisible labor practices involved in sound design. In this final entry, Leonard J. Paul breaks down his process in designing living sounds for the game Vessel. How does one design empathetic or aggressive sounds? If you need to catch up, read Leonard’s last entry, where he breaks down the vintage sounds of Retro City Rampage. Also, be sure to check out the first episode in the series, where Leonard breaks down his process in designing sound for Sim Cell. But first, listen to this! -AT, Multimedia Editor
–
CLICK HERE TO DOWNLOAD: Game Audio Notes III: The Nature of Sound in Vessel
SUBSCRIBE TO THE SERIES VIA ITUNES
ADD OUR PODCASTS TO YOUR STITCHER FAVORITES PLAYLIST
–
Game Audio Notes III: The Nature of Sound in Vessel
Strange Loop Games’ Vessel is set in an alternate world history where a servant class of liquid automatons (called fluros) has gone out of control. The player explores the world and solves puzzles in an effort to restore order. While working on Vessel, I personally recorded all of the sounds so that I could have full control over the soundscape. I recorded all of the game’s samples with a Zoom H4n portable recorder. This emphasis on real, recorded sounds was intended to deepen the player’s immersion in the game.
This realistic soundscape was supplemented with a variety of techniques that produced sounds that dynamically responded to changes in the physics engine. Water and other fluids in the game were difficult to model with both the physics engine and the audio engine (FMOD Designer). Because fluids are fundamentally connected to the game’s physics engine, they take on a variety of dynamic forms as players interact with them in different ways. To address this, Kieran Lord, the game’s audio coder, and I considered factors like the amount of liquid involved in a collision, the hardness of the surface it strikes, the type of liquid in motion, whether the player is experiencing an extreme form of that sound because the liquid is colliding with their head, and, of course, how fast the liquid is travelling.
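To make this concrete, here is a minimal Python sketch of the kind of physics-to-audio parameter mapping described above. The field names, numeric ranges, and mapping curves are my own illustrative assumptions, not Vessel’s actual FMOD Designer integration.

```python
from dataclasses import dataclass

@dataclass
class FluidCollision:
    amount: float            # how much liquid is involved, 0.0-1.0
    surface_hardness: float  # 0.0 (soft) to 1.0 (hard)
    liquid_type: str         # e.g. "water", "lava", "goo"
    hits_player_head: bool   # triggers the extreme, close-up variant
    speed: float             # metres per second

def collision_to_audio_params(c: FluidCollision) -> dict:
    """Translate one physics collision into sound playback parameters."""
    gain = min(1.0, c.amount * (0.5 + 0.5 * c.speed / 10.0))
    if c.hits_player_head:
        gain = 1.0  # the player's head gets the full-intensity version
    return {
        "event": f"fluid_{c.liquid_type}_impact",
        "gain": gain,
        "filter_cutoff_hz": 500 + 7500 * c.surface_hardness,  # harder = brighter
        "close_variant": c.hits_player_head,
    }

print(collision_to_audio_params(FluidCollision(
    amount=0.7, surface_hardness=0.9, liquid_type="water",
    hits_player_head=False, speed=6.0)))
```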
Although there was a musical score, I designed the effects to be played without music. Each element of the game, for instance the footsteps of a lava fluro (one of the game’s rebellious automatons), required layers of sound. The footsteps were composed of water sizzling on a hot pan, a gloopy slap of oatmeal and a wet rag hitting the ground. Finding the correct emotional balance to support the game’s story was fundamental to my work as a sound designer. The game’s sound effects were constantly competing with the adaptive music (which is also contingent on player action) that plays throughout the game, so it was important to give them an informative quality. The sound effects inform you about the environment, while the music sets the emotional underscore of the gameplay and helps guide you through the puzzles.
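The layering itself amounts to summing recordings with chosen gains. Here is a small Python sketch of the idea; the synthetic stand-in sounds and gain values are hypothetical, not the actual Vessel layers.

```python
import numpy as np

def layer(sounds, gains):
    """Mix several mono sample arrays into one, padding to the longest."""
    length = max(len(s) for s in sounds)
    mix = np.zeros(length)
    for sound, gain in zip(sounds, gains):
        mix[:len(sound)] += gain * np.asarray(sound)
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix  # simple clip protection

# Demo with synthetic stand-ins for the recorded layers.
t = np.linspace(0, 1, 44100)
sizzle = 0.3 * np.random.randn(44100)               # noisy hiss
slap = np.sin(2 * np.pi * 80 * t) * np.exp(-6 * t)  # low, damped thud
footstep = layer([sizzle, slap], gains=[0.6, 1.0])
```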
Defining the character of the fluros was difficult because I wanted players to have empathy for them. This was important to me because there is often no way to avoid destroying them when solving the game’s puzzles. While recording sounds in the back of an antique shop, I came across a vintage Dick Tracy gun that made a fantastic clanking sound while producing its siren wail. Since the gun allowed me to control how quickly the siren rose and fell, it was a great way to produce vocalizations for the fluros. I simply recorded the gun’s siren, chopped the recording into smaller pieces, and then played back different segments randomly. The metal clanking gave the resulting sound a mechanical feel, and the siren’s tone lent it a vocal quality that was perfect for the fluros. I could make the fluros sound excited by choosing a higher pitch range from the sample grains, informing the player when they approached their goal.
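A bare-bones Python sketch of this chop-and-shuffle technique might look like the following. The grain length, pitch ranges, and naive resampling method are illustrative assumptions rather than the actual Vessel implementation, and a synthesized wobbling tone stands in for the siren recording.

```python
import numpy as np

rng = np.random.default_rng(0)
SR = 44100  # sample rate in Hz

# Stand-in for the antique siren recording: a wobbling two-second tone.
t = np.linspace(0, 2, 2 * SR)
siren = np.sin(2 * np.pi * (400 + 200 * np.sin(2 * np.pi * 0.5 * t)) * t)

def resample(grain, ratio):
    """Naive pitch shift via linear-interpolation resampling.
    ratio > 1 shortens the grain, raising its pitch on playback."""
    idx = np.arange(0, len(grain) - 1, ratio)
    return np.interp(idx, np.arange(len(grain)), grain)

def granular_vocalize(source, grain_len=2048, n_grains=40, excited=False):
    """Chop `source` into grains, then replay random grains in random order,
    drawing pitch ratios from a higher range for an 'excited' voice."""
    grains = [source[i:i + grain_len]
              for i in range(0, len(source) - grain_len, grain_len)]
    lo, hi = (1.1, 1.6) if excited else (0.8, 1.2)
    out = [resample(grains[rng.integers(len(grains))], rng.uniform(lo, hi))
           for _ in range(n_grains)]
    return np.concatenate(out)

voice = granular_vocalize(siren, excited=True)
print(len(voice) / SR, "seconds of fluro vocalization")
```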
I wanted a fluid-based scream to announce a fluro’s death. I tried screaming underwater, screaming into a glass of water, and a few other things, but nothing worked. Eventually, while recording a rubber ear syringe, I found that squeezing the water out quickly produced a real shriek as it spat out the last of the water. Not only did this sound cut through the din of clanking gears in the mix, but it also bonded a watery yell with the sense of being crushed and running out of breath.

Vessel’s Lava boss with audio debug output. Used with permission (c) 2014 Strange Loop Games
For the final boss, I tried many combinations of glurpy sounds to signify its lava form. Eventually I recorded a nail in a board being dragged across a large rusty metal sheet. The raw recording was quite excruciating to listen to, but once I pitched it down and combined it with a pitched-down, granulated recording of myself growling into a cup of water, it captured exactly the emotion I wanted players to feel when encountering a final boss. Although it can take a long time to arrive at the “obvious” sound, simplicity is often the key.
Anticipation is fundamental to a player’s sense of immersion. It carves out a larger space for tension to build: a small crescendo of creaking, for instance, can develop a tension that pays off in a sudden, large impact. A whoosh before a punch lands adds extra weight to the force of the punch. These cues are often naturally present in real-world sounds, such as a rush of air sweeping in before a door slams. A small pause can be added just for suspense, intensifying the effect of the door slamming. Dreading the impact is half of the emotion of a large hit.

Recording inside of a clock tower with my H4n recorder for Vessel. Used with permission by the author.
Recording all of the sounds for Vessel was a large undertaking, but since I viewed each recording as a performance, I was able to make the feeling of the world very cohesive. Each sound was designed to immerse the player in the soundscape, but also to allow players enough time to solve puzzles without becoming annoyed with the audio. All sounds have a life of their own and a resonance of memory and time that stays with them during each playthrough of a game. In Retro City Rampage I left a sonic space for the player to wax nostalgic. In Sim Cell, I worked to breathe life into a set of sterile, synthesized sounds. Each recorded sound in Vessel is alive in comparison, telling its own story of time, place and recording.
The common theme of my audio work on Retro City Rampage, Sim Cell and Vessel is that I enjoy putting constraints on myself to inspire my creativity. I focus on what works and remove non-essential elements. Exploring the limits of constraints often provokes interesting and unpredictable results. I like “sculpting” sounds and will often proceed from a rough sketch, polishing and reducing elements until I like what I hear. Typically I remove layers that don’t add an emotive aspect to the sound design. In games, many sounds can play at once, so clarity and focus are necessary to keep individual sounds from getting lost in a sonic goo.
In this post I have shown how play and experimentation are fundamental to my creative process. For aspiring sound artists, spending time with Pure Data, FMOD Studio or Wwise and a personal recorder is a great way to improve their game audio skills. This series of articles has aimed to reveal the tacit decisions behind the production of game audio that get obscured by the fun of the creative process. I also hope it offers a bit of inspiration to those creating their own sounds in the future.
Additional Resources:
- My GDC 2012 talk – My site includes an MP3 recording of the talk and the full PowerPoint presentation. A full video of the talk is in the GDC Vault but requires membership.
- See my “Jon Hopkins Soundtrack in the Video Game Vessel” video on YouTube, which shows the use of Lua with music in the custom level editor: https://www.youtube.com/watch?v=KOyjMPPvaY4
–
Leonard J. Paul attained his Honours degree in Computer Science at Simon Fraser University in BC, Canada, with an Extended Minor in Music concentrating in Electroacoustics. He began his work in video games on the Sega Genesis and Super Nintendo Entertainment System and has a twenty-year history in composing, sound design and coding for games. He has worked on over twenty major game titles totalling over 6.4 million units sold since 1994, including award-winning AAA titles such as EA’s NBA Jam 2010, NHL11, Need for Speed: Hot Pursuit 2 and NBA Live ’95, as well as the award-winning indie title Retro City Rampage.
He is the co-founder of the School of Video Game Audio and has taught game audio students from over thirty different countries online since 2012. His new media works have been exhibited in cities including Surrey, Banff, Victoria, São Paulo, Zürich and San Jose. As a documentary film composer, he had the good fortune of scoring the original music for the multi-award-winning documentary The Corporation, which remains the highest-grossing Canadian documentary in history to date. He has performed live electronic music in cities such as Osaka, Berlin, San Francisco, Brooklyn and Amsterdam under the name Freaky DNA.
He is an internationally renowned speaker on the topic of video game audio and has been invited to speak in Vancouver, Lyon, Berlin, Bogotá, London, Banff, San Francisco, San Jose, Porto, Angoulême and other locations around the world.
His writings and presentations are available at http://VideoGameAudio.com
–
Featured image: Courtesy of Vblank Entertainment (c)2014 – Artwork by Maxime Trépanier.
—
REWIND! . . .If you liked this post, you may also dig:
Sounding Out! Podcast #29: Game Audio Notes I: Growing Sounds for Sim Cell- Leonard J. Paul
Sounding Out! Podcast #30: Game Audio Notes II: Hand Made Music in Retro City Rampage– Leonard J. Paul
Papa Sangre and the Construction of Immersion in Audio Games- Enongo Lumumba-Kasongo
Further Experiments in Agent-based Musical Composition
Editor’s Note: WARNING: THE FOLLOWING POST IS INTERACTIVE!!! This week’s post is especially designed by one of our regulars, Andreas Duus Pape, to spark conversation and provoke debate on its comment page. I have directly solicited feedback and commentary from several top sound studies scholars, thinkers, artists, and musicians, who will be posting at various times throughout the day and the week–responding to Andreas, responding to each other, and responding to your comments. Look for commentary by Bill Bahng Boyer (NYU), Maile Colbert (Binaural/Nodar, Faculdade de Belas Artes da Universidade do Porto), Adriana Knouf (Cornell University), Primus Luta (AvantUrb, Concrète Sound System), Alejandro L. Madrid (University of Illinois at Chicago), Tara Rodgers (University of Maryland), Jonathan Skinner (ecopoetics), Jonathan Sterne (McGill University), Aaron Trammell (Rutgers University, Sounding Out!) and yours truly (Binghamton University, Sounding Out!). Full bios of our special respondents follow the post. We wholeheartedly wish to entice you this Monday to play. . .and listen. . .and then share your thoughts via the comment page. . .and play again. . .listen again. . .read the comments. . .and share more thoughts. . .yeah, just go ahead and loop that. –JSA, Editor-in-Chief
—
I’m a musician and an economist. Sometimes you will find me playing acoustic folk rock and blues on guitar, harmonica and voice. At other times I will be at work, where I apply my expertise in game theory to the computer modeling of social phenomena. I create simulations of people interacting – for instance, how people decide which way to vote on an issue such as a tax levy, or how people learn to sort objects given to them in an experiment. In these simulations, the user can set up characteristics of the environment, such as the number of people and their individual goals. After things are set up, users watch these interactions unfold. The simulation is a little story, and one need only tweak the inputs to see how the story changes.
As a musician, I was curious if a program that generates social stories could be refashioned to generate musical pieces. I wanted to build a music-generation engine that the listener could tweak in order to get a different piece each time. But not just any tune – a piece with some flow, some story. I like that tension between randomness and structure. On one hand, I want every song to vary in unpredictable ways; on the other hand, I want to create music and not structureless noise.
I created a basic story of predators and prey, whimsically naming the prey “Peters,” represented by rabbits, and the predators “Wolves.” My simulation depicts a plain in the savannah with a green oasis. The prey seek the oasis and the predators seek the prey. Each character has its own goals and the closer they are to achieving them, the happier they are. Both predators and prey want to have stomachs full of food, so naturally they want to be close to their target (be it prey or oasis). As they travel through the savannah, they learn what choices (directions of movement) make them happier, and use this experience to guide them.
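For readers curious how such a learning loop might look in code, here is a toy Python sketch under my own assumptions: a simple running-average reinforcement rule with occasional exploration. The actual NetLogo model may use a different update rule entirely.

```python
import random

DIRECTIONS = ["north", "south", "east", "west"]
avg_happiness = {d: 0.0 for d in DIRECTIONS}  # running average per direction
counts = {d: 0 for d in DIRECTIONS}

def choose_direction(explore=0.1):
    """Usually exploit the direction with the best track record; sometimes explore."""
    if random.random() < explore or all(c == 0 for c in counts.values()):
        return random.choice(DIRECTIONS)
    return max(DIRECTIONS, key=lambda d: avg_happiness[d])

def record_outcome(direction, happiness):
    """Fold the happiness a move produced into that direction's average."""
    counts[direction] += 1
    avg_happiness[direction] += (happiness - avg_happiness[direction]) / counts[direction]

d = choose_direction()
record_outcome(d, happiness=0.4)  # e.g. the move brought the agent closer to the oasis
```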
So how does this story become music? There are two answers to this question: a technical one and an intuitive one. The intuitive answer is that in real life the story of predators and prey plays out geographically on the savannah, but musically it plays out over a sonic landscape. To elaborate, I abstracted the movement of the prey and predators on the geography of the plain into the musical geometry of a sonic landscape. The farther north an agent travels, the higher the pitch; the farther east an agent travels, the longer the duration. In other words, an agent drifting north and east makes tones that are higher pitched and longer lasting. I also mapped happiness to volume, so that happy agents make louder tones. Finally, so that each agent would have a distinct voice as it traveled through this space, I chose a different instrument for each agent.
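The mapping itself is simple enough to sketch. The Python function below follows the three stated rules (north maps to pitch, east to duration, happiness to volume); the coordinate ranges, constants, and MIDI-style output are my assumptions, not the model’s actual code.

```python
def agent_to_note(x, y, happiness, world_size=50):
    """Map an agent's state to (midi_pitch, duration_s, velocity)."""
    pitch = int(36 + (y / world_size) * 48)  # farther north (larger y) = higher pitch
    duration = 0.1 + (x / world_size) * 0.9  # farther east (larger x) = longer tone
    velocity = int(40 + happiness * 87)      # happier agent = louder note
    return pitch, duration, velocity

print(agent_to_note(x=10, y=40, happiness=0.8))  # high-pitched, shortish, loud
```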
In the video below I assigned the “church organ” sound to prey, and the “brass section” sound to predators.
Ultimately, there are some things that I like about this piece and others that I do not.
As a harmonica player, I improvise by creating and resolving tension. I think this piece does that well. The predator will pursue the prey into a quiet, low-pitch corner, creating a distant, rumbling sound – only to watch prey escape to the densely polyphonic northwest corner. There is an ebb and flow to this chase that I recognize from blues harmonica solos. In contrast to my experience as a harmonica player, however, I have found that some of the most compelling parts of the dynamics come from the layering of notes. The addition of notes yields a rich sonic texture, much like adding notes to a chord on an organ.
Unfortunately, for largely technical reasons, the piece lacks coherent rhythm and pacing. The programming platform (agent-based modeling software called NetLogo) is not designed to have the interface proceed in real time, so the overall speed of the piece can change as the processing load increases or decreases. I found that as agents learned more about their surroundings (and more system resources were allocated to this “memory”), they became slower and slower. To fix this, I capped the size of their memory banks so that they would forget their oldest memories. The closest I have come to a rhythmic structure is by ordering the way that the agents play, which gives the piece a call-and-response feel. If only the piece had a coherent rhythm, I could imagine playing harmonica along with it.
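The memory cap can be illustrated with a fixed-length queue that silently discards the oldest experience when a new one arrives. This is a sketch of the idea in Python, not the NetLogo implementation; the cap value is arbitrary.

```python
from collections import deque

MEMORY_CAP = 100
memory = deque(maxlen=MEMORY_CAP)  # oldest entries are discarded automatically

for step in range(250):
    memory.append(("direction", "outcome", step))  # placeholder experience
print(len(memory))  # 100: the oldest 150 experiences have been forgotten
```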
One last comment on pitch: an earlier version of this piece mapped each step in space to a semitone, but things sounded too mechanical. Even though this was the easiest and most intuitive decision from a technical standpoint, it was aesthetically lacking, so I have now integrated traditional musical scales. The minor scale, in my opinion, is the most interesting, as it makes the predator/prey dynamic sound appropriately foreboding.
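One simple way to realize this scale mapping, sketched here in Python under my own assumption about the snapping rule, is to quantize each raw pitch to the nearest degree of a natural minor scale.

```python
# Natural minor scale degrees, in semitones above the root.
MINOR_STEPS = [0, 2, 3, 5, 7, 8, 10]

def quantize_to_minor(midi_pitch, root=57):  # 57 = A3, so A minor
    """Snap a raw MIDI pitch to the nearest note of the minor scale."""
    octave, degree = divmod(midi_pitch - root, 12)
    nearest = min(MINOR_STEPS, key=lambda s: abs(s - degree))
    return root + 12 * octave + nearest

print([quantize_to_minor(p) for p in range(57, 70)])
```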
You can play this piece yourself. Simply go to this link with Java enabled in your browser (recommended: Google Chrome). Pressing “Setup” and then “Go” will create your own run of the piece. While it is running, you can adjust the slider above the graphic window to change the speed. Press “Go” again to stop the model, adjust any parameters you wish, and press “Setup” and “Go” again to see how the piece changes. Here are some parameters to try: instA and instB change the instruments associated with prey and predators; PlayEveryXSteps changes the pace of the piece (higher values result in a slower pace); Num-PackAs and Num-PackBs change the number of prey and predators; the vertical PeterVol and WolfVol sliders adjust the overall volume of prey and predators.
With regard to my version of “Peter and the Wolf,” there are a number of things I’m curious about.
First, how does this relate to what you think of as music? Do you like listening to it? Which elements do you like and which do you dislike? For example, what do you think about the tension and rhythm – do you agree that the first works and that the second could be improved? Would you listen to this for enjoyment’s sake, and what would it take for this to be more than a novelty? What do you think about the narrative that drives the piece? I chose the predator and prey narrative, admittedly, on a whim. Do you think there might be some other narrative, or agent-specific goals, that might better drive this piece? Is there a metaphor that might better describe this piece? As a listener, do you enjoy the experience of being able to customize and configure the piece? What would you like to have control over that is missing here? Would you like more interaction with the piece, or less?
Finally, and perhaps most importantly, what do you think of the premise? Can simple electronic agents (albeit ones which interact socially) aspire to create music? Is there something promising in this act of simulation? Is music-making necessarily a human activity and is this kind of work destined to be artificial and uncanny?
Thanks for listening. I look forward to your thoughts.
– – –
Andreas Duus Pape is an economist and a musician. As an economist, he studies microeconomic theory and game theory–that is, the analysis of strategy and the construction of models to understand social phenomena–and the theory of individual choice, including how to forecast the behavior of agents who construct models of social phenomena. As a musician, he plays folk in the tradition of Dylan and Guthrie, blues in the tradition of Williamson and McTell, and country in the tradition of Nelson and Cash. Pape is an Assistant Professor in the Department of Economics at Binghamton University and is a faculty member of the Collective Dynamics of Complex Systems (CoCo) Research Group.
– – –
Guest Respondents on the Comment Page (in alphabetical order)
Bill Bahng Boyer is a doctoral candidate in music at New York University who is completing a dissertation on public listening in the New York City subway system.
Maile Colbert is an intermedia artist with a concentration in sound and video, living and working between New York and Portugal. She is an associated artist at Binaural/Nodar.
N. Adriana Knouf is a Ph.D. candidate in information science at Cornell University.
Primus Luta is a writer and an artist exploring the intersection of technology and art; he maintains his own AvantUrb site and is a founding member of the live electronic music collective Concrète Sound System.
Alejandro L. Madrid is Associate Professor of Latin American and Latino Studies at the University of Illinois at Chicago and a cultural theorist and music scholar whose research focuses on the intersection of modernity, tradition, globalization, and ethnic identity in popular and art music, dance, and expressive culture from Mexico, the U.S.-Mexico border, and the circum-Caribbean.
Tara Rodgers is an Assistant Professor of Women’s Studies and a faculty fellow in the Digital Cultures & Creativity program at the University of Maryland. As Analog Tara, she has released electronic music on compilations such as the Le Tigre 12″ and Source Records/Germany, and exhibited sound art at venues including Eyebeam (NYC) and the Museum of Contemporary Canadian Art (Toronto).
Jonathan Skinner founded and edits the journal ecopoetics, which features creative-critical intersections between writing and ecology. Skinner also writes ecocriticism on contemporary poetry and poetics.
Jonathan Sterne teaches in the Department of Art History and Communication Studies and the History and Philosophy of Science Program at McGill University. His latest book, MP3: The Meaning of a Format, comes out this fall from Duke University Press.
Jennifer Stoever-Ackerman is co-founder, Editor-in-Chief and Guest Posts Editor for Sounding Out! She is also Assistant Professor of English at Binghamton University and a former Fellow at the Society for the Humanities at Cornell University (2011-2012).
Aaron Trammell is Multimedia Editor of Sounding Out! and a Ph.D. Candidate in Media and Communications at Rutgers University.