What does finance sound like? Is it the clanging of the opening and closing bells at the New York Stock Exchange? The shouting of offers to buy or sell? The beeps made by cash registers as a credit card is swiped? The whirring of fans working overtime to cool computers? What is this noise?
Noise, however, is not purely a sonic phenomenon. Since the late 1940s, noise has been intimately linked with theories of communication and information, as Aaron Trammell discusses in Sounding Out! posts such as “What Mixtapes Can Teach Us About Noise.” My research attempts to bring these two aspects of noise—the sonic and informatic—into conversation. I trace the interferences noise makes within a set of disparate disciplines: I listen to the history of the impact of information theory on experimental and electronic music; investigate the interferences of “fearless speech,” artistic robotics, and the public; and examine how noises digital and sonic have impacted the development of finance. Rather than creating my own definition of noise, I follow how other disciplines deal with their encounters with noise as both a material phenomenon—something that interferes with a signal, or a sound that is deemed unwanted—and as something to be theorized, asking questions such as what are the meanings of these noises? or should we be controlling noise at all?
In this post, I discuss three vignettes that outline the different ways in which noise (sonic and informatic) interferes with different aspects of finance: the shouts of open-outcry pits and the information they may or may not convey; new forms of electronic trading and the noises of server farms and trading behavior; and the Flash Crash of May 6th, 2010 that provoked noises from both traders and artists. Each reflects a particular conjunction of the sonic and informatic aspects of noise. When we attend to both components simultaneously, we discover that financial noises are complex entities that are not inherently revolutionary nor regressive, but are rather an elusive combination of both.
Noisy Trading: The Pits
My interest in the noises of finance comes in part from listening to open-outcry trading, following the work of Caitlin Zaloom’s Out of the Pits: Traders And Technology from Chicago to London and the documentary Floored (2008). An open-outcry pit, such as that found on the floor of the Chicago Board of Trade (CBOT), pairs buyers and sellers through a bodily practice of trading involving the extremities of behavior. Shouting, pushing, and shoving occur on the steps of the pit as buyers and sellers work to match their orders by nearly any means necessary.
In the wonderfully titled article “Is Sound Just Noise?”, one of the few academic articles related to the sounds of the pits, the business school professors Joshua Coval and Tyler Shumway ask whether the shouting might convey information that is not necessarily available on the computer screens that were then coming to dominate trading:
we ask whether there exists information that is regularly communicated across an open outcry pit but cannot be easily transmitted over a computer network. Any signals that convey information regarding the emotion of market participants—fear, excitement, uncertainty, eagerness, and so forth—are likely to be difficult to transmit across an electronic network (1890).
Coval and Shumway found that the ambient sound level of the pits did have predictive power regarding various aspects of the market: in short, the louder the pits got, the higher the volatility in the prices of securities and the lower the likelihood that a trade would be conducted.
Noisy Trading, Redux: Datacenters
Yet changes in the structure of the market have not only shifted the location of activity to people behind computer screens and away from these types of sounds; they have also shifted the actual location of the exchanges themselves. No longer do most trades take place in the physical location of, for example, the NYSE; rather, they take place in buildings like this one, at 1700 MacArthur Boulevard in Mahwah, NJ.
This is the location of the NYSE’s new datacenter, a 400,000 square foot facility. (In the linked video, note the whirring of the fans, a new noise of finance beyond that of the pits.) The servers in these datacenters—run by highly-capitalized financial firms large and small alike—are able to respond much more quickly to market information the closer they are to the computers that run the exchange. And what can be closer than being co-located in the same datacenter as the exchange? This need for speed has led to all sorts of interesting situations, such as new fibre-optic lines being laid to shave a millisecond or two off the travel time between New Jersey and Chicago, or the effects of special relativity being taken into account when siting future datacenters. The new High-Frequency Trading (HFT) algorithms run on servers in these datacenters.
Noisy Trades, Sonified: May 6th 2010
The voice on this recording, made on May 6th, 2010, belongs to Ben Lichenstein, an employee of a firm called Trader’s Audio. Now, Trader’s Audio provides live coverage of market movements from a person on the floor of an exchange in order for day traders and others to get an idea of the “sentiment” of a market. It’s kind of like a play-by-play of market activity, a running commentary of major market movements that can’t be discerned solely by watching the numbers on a screen. What, then, could have been going on for Ben Lichenstein to be in such a frenzy, for his voice to be inflected in such a way? What are we to make of this noise?
Well, May 6th, 2010 was the day of what has infamously become known as the Flash Crash. The full details of this day are beyond the scope of this post, so I will outline it schematically, following the findings of the official US report produced by the Commodity Futures Trading Commission (CFTC) and the Securities and Exchange Commission (SEC). (For a different take on this, see the sociologist of finance Donald MacKenzie’s “How to Make Money in Microseconds”.) In short, between the hours of 2 and 3PM Eastern Time the New York Stock Exchange (NYSE) had both its largest single-day loss and its largest single-day gain, a swing of over 600 points. A series of trades made by algorithms that failed to take into account their impact on the market caused the prices of securities to swing to extremes, exacerbated by the activity of High-Frequency Trading (HFT) algorithms. While the market eventually recovered—in part due to the activity of the same algorithms that caused the problem in the first place—the event indicated the precariousness of the stock market, the potential for things to spiral quickly out of control, and the difficulty of forecasting the behavior of an ecosystem of opaque algorithms.
How do the HFT algorithms relate to the Flash Crash that took place on May 6th, 2010? While the report of the CFTC and the SEC regarding the Flash Crash does not lay blame on HFT in particular, it does indicate how these algorithms contributed to the large price swings, the immense number of shares traded, and the drying up of liquidity (that is, the ability to find buyers and sellers in the market). One of the reasons the market swings were so severe on May 6th, 2010 is that HFT algorithms react immediately to small fluctuations of price, a quality of markets that financial economists call microstructure noise, a fascinating topic that is unfortunately beyond the scope of this particular post. In general, HFT and these datacenters go hand-in-hand, as it is a truism that data will take longer to travel between a machine in New Jersey and one in Chicago than between two machines in the same datacenter in New Jersey. HFT works to take advantage of this shorter latency in order to exploit market movements on the timescale of milliseconds, accelerating trading far beyond the open-outcry pit.
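The latency gap behind this truism is easy to estimate from first principles. The figures below are rough, round-number assumptions of my own (straight-line distance, light traveling through fiber at about two-thirds the speed of light in a vacuum), not measurements of any actual trading link:

```python
# Back-of-the-envelope one-way latencies; all figures are approximate
SPEED_IN_FIBER_KM_PER_S = 200_000   # light in fiber travels at roughly 2/3 c

nj_to_chicago_km = 1_150            # rough straight-line distance
within_datacenter_km = 0.1          # ~100 meters of cabling between co-located servers

long_haul_ms = nj_to_chicago_km / SPEED_IN_FIBER_KM_PER_S * 1000
local_ms = within_datacenter_km / SPEED_IN_FIBER_KM_PER_S * 1000

print(f"NJ to Chicago:         ~{long_haul_ms:.2f} ms one way")
print(f"within one datacenter: ~{local_ms:.4f} ms one way")
```

Several milliseconds versus a fraction of a microsecond: on the timescales at which HFT algorithms compete, that gap is the whole game, which is why co-location commands such a premium.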
Noisy Finance: The Sonic and the Informatic
Let’s conclude with a sonic artifact of the Flash Crash from the French collective rybn. Their work has explored the concept of “antidatamining,” that is, the use of the “data mining” techniques of computational capitalism in order to shed light on the intersection of data and society. Consider their piece FLASHCRASH SONIFICATION (one of the few artistic responses to the Flash Crash), where rybn took trading data from nine different exchanges on the afternoon of the Flash Crash and created an austere, digitally-sharp yet undulating soundscape that recalls the work of artists Ryoji Ikeda or Carsten Nicolai without the rhythmic precision. If you can, listen to their online-available, two-channel mix on headphones in order to appreciate the details of the piece.
The building towards the end of “FLASHCRASH SONIFICATION” was meant to “emphasize the moment of the crash, [by] adding an effect of resonance, which propagates slowly, making it more tense, as the krach goes on” (all quotes in this paragraph from author’s personal interview with rybn). Thus instead of merely transparently translating the data into sound, rybn constructed the sonification in order to bring out this resonance: “resonance is pointed [to] as one of the major risk[s] of HFT by many economists and the feedback phenomenon was in the center of our discussions when we were preparing the piece.” Isolating the Flash Crash was important for rybn as it was perhaps the “moment when people started to understand financ[ial] orientations more clearly” thereby highlighting the symptomatic nature of the “speculative short-term loop finance seems to be stuck in.”
In FLASHCRASH SONIFICATION, sonic noise becomes a translation of the data from the market—abstract yet eminently material—into a different abstract form that does not immediately signify. FLASHCRASH SONIFICATION suggests rather than indicates; listening to it cannot provide us with rational information regarding the dynamics of the Flash Crash. Instead it produces a dark foreboding of the mechanisms at work, the high-frequency pulses first recalling heartbeats that soon speed up beyond any ability to distinguish them. In FLASHCRASH SONIFICATION, rybn comments on the inability of computation—and by extension, the market—to be the perfectly rational, ordered space it is ideally understood to be.
In Noise We Cannot Trust
If there is one thing clear about the examples of noises heard and encountered in this post—the shouting in the pits, the fluctuations of prices, the whirring of air conditioning, the sonification of the Flash Crash—it is that noise cannot be counted upon for positive or negative disruption. Noise cannot be counted upon as a political exploit in the market, as it can signify the potential of a trade, or be recuperated into profit through the activity of HFT algorithms. Yet noise can also provide an alternative experience of the Flash Crash beyond that of bureaucratic reports and figures. It is thus through the interferences noise causes within the dynamics of finance that we come into contact with the equivocality of noise as a phenomenon, and thus become attuned to a particular need to not confine noise to preconceived notions of positivity or negativity.
Nicholas Knouf is a PhD candidate in information science at Cornell University in Ithaca, NY. His research explores the interstitial spaces between information science, critical theory, digital art, and science and technology studies. His dissertation, “Noisy Fields: Interference, Elusiveness, and Embodied Temporality in Sonic Practices,” examines the sonic and informatic characteristics of noise across a set of disparate disciplines, arguing for an attention to the equivocality of noise as a material-discursive phenomenon. He is also a media artist whose pieces engage with academic publishing, ad-hoc networking, and non-speech vocalizations. More information about his research and practice can be found at http://zeitkunst.org.
Editor’s Note: WARNING: THE FOLLOWING POST IS INTERACTIVE!!! This week’s post is especially designed by one of our regulars, Andreas Duus Pape, to spark conversation and provoke debate on its comment page. I have directly solicited feedback and commentary from several top sound studies scholars, thinkers, artists, and musicians, who will be posting at various times throughout the day and the week–responding to Andreas, responding to each other, and responding to your comments. Look for commentary by Bill Bahng Boyer (NYU), Maile Colbert (Binaural/Nodar, Faculdade de Belas Artes da Universidade do Porto), Nick Knouf (Cornell University), Primus Luta (AvantUrb, Concrète Sound System), Alejandro L. Madrid (University of Illinois at Chicago), Tara Rodgers (University of Maryland), Jonathan Skinner (ecopoetics), Jonathan Sterne (McGill University), Aaron Trammell (Rutgers University, Sounding Out!) and yours truly (Binghamton University, Sounding Out!). Full bios of our special respondents follow the post. We wholeheartedly wish to entice you this Monday to play. . .and listen. . .and then share your thoughts via the comment page. . .and play again. . .listen again. . .read the comments. . .and share more thoughts. . .yeah, just go ahead and loop that. –JSA, Editor-in-Chief
I’m a musician and an economist. Sometimes you will find me playing acoustic folk rock and blues on guitar, harmonica and voice. And at other times I will be at work, where I apply my expertise in game theory to the computer modeling of social phenomena. I create simulations of people interacting – such as how people decide which way to vote on an issue such as a tax levy, or how people learn to sort objects given to them in an experiment. In these simulations, the user can set up characteristics of the environment, such as the number of people and their individual goals. After things are set up, users watch these interactions unfold. The simulation is a little story, and one need only tweak the inputs to see how the story changes.
As a musician, I was curious if a program that generates social stories could be refashioned to generate musical pieces. I wanted to build a music-generation engine that the listener could tweak in order to get a different piece each time. But not just any tune – a piece with some flow, some story. I like that tension between randomness and structure. On one hand, I want every song to vary in unpredictable ways; on the other hand, I want to create music and not structureless noise.
I created a basic story of predators and prey, whimsically naming the prey “Peters,” represented by rabbits, and the predators “Wolves.” My simulation depicts a plain in the savannah with a green oasis. The prey seek the oasis and the predators seek the prey. Each character has its own goals and the closer they are to achieving them, the happier they are. Both predators and prey want to have stomachs full of food, so naturally they want to be close to their target (be it prey or oasis). As they travel through the savannah, they learn what choices (directions of movement) make them happier, and use this experience to guide them.
So how does this story become music? To this question there are two answers: a technical one and an intuitive one. The intuitive answer is that in real life the story of predators and prey plays out geographically on the savannah, but musically this is a story that plays out over a sonic landscape. To elaborate, I abstracted the movement of the prey and predator on the geography of the plain into the musical geometry of a sonic landscape. The farther north an agent travels, the higher the pitch. And the farther west an agent travels, the longer the duration. In other words, as an agent travels to the northwest, she makes longer-lasting tones that are higher pitched. I also mapped happiness to volume, so that happy agents make louder tones. Finally, so that each agent would have a distinct voice as they traveled through this space, I chose different instruments for each agent.
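The mapping from geography to sound can be sketched in a few lines. This is a minimal illustration in Python rather than NetLogo; the grid size, pitch range, and duration range are my own assumptions, not values from Pape's model:

```python
def agent_to_tone(x, y, happiness, grid_size=100):
    """Map an agent's grid position and happiness to (pitch, duration, volume).

    The north-south axis (y) controls pitch and the east-west axis (x)
    controls duration, as in the piece; happiness controls volume. Here
    duration simply grows with x -- swap in (grid_size - x) to run the
    compass convention the other way. All numeric ranges are illustrative.
    """
    low_midi, high_midi = 36, 84        # roughly C2 to C6
    min_dur, max_dur = 0.25, 2.0        # seconds
    pitch = low_midi + round((y / grid_size) * (high_midi - low_midi))
    duration = min_dur + (x / grid_size) * (max_dur - min_dur)
    volume = max(0.0, min(1.0, happiness))  # clamp happiness into [0, 1]
    return pitch, duration, volume
```

A sequencer loop would then call something like this once per simulation tick for every agent, voicing each one with its assigned instrument.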
In the video below I assigned the “church organ” sound to prey, and the “brass section” sound to predators.
Ultimately, there are some things that I like about this piece and others that I do not.
As a harmonica player, I improvise by creating and resolving tension. I think this piece does that well. The predator will pursue the prey into a quiet, low-pitch corner, creating a distant, rumbling sound – only to watch prey escape to the densely polyphonic northwest corner. There is an ebb and flow to this chase that I recognize from blues harmonica solos. In contrast to my experience as a harmonica player, however, I have found that some of the most compelling parts of the dynamics come from the layering of notes. The addition of notes yields a rich sonic texture, much like adding notes to a chord on an organ.
Unfortunately, for largely technical reasons, there is a lack of coherent rhythm and pacing. The programming platform (agent-based modeling software called NetLogo) is not designed to have the interface proceed in real-time. Basically, the overall speed of the piece can change as the processing load increases or decreases. I found that as agents learnt more about their surroundings (and more system resources were allocated to this “memory”), they became slower and slower. To fix this, I capped the size of their memory banks so that they would forget their oldest memories. The closest I have come to a rhythmic structure is by ordering the way that the agents play. This technique gives the piece a call-and-response feel. If only the piece had a coherent rhythm, then I could imagine playing harmonica along with it.
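The memory-capping trick generalizes beyond NetLogo. A bounded buffer keeps the per-step cost of learning constant, so the piece's tempo doesn't drift as a run goes on. This toy sketch uses Python's `deque`; the cap value and the payoff bookkeeping are illustrative inventions, not Pape's actual code:

```python
from collections import deque

class LearningAgent:
    """Toy agent with a capped experience memory."""

    def __init__(self, memory_cap=50):
        # deque(maxlen=...) silently discards the oldest entries once full,
        # so the memory never grows and per-step learning cost stays flat
        self.memory = deque(maxlen=memory_cap)

    def remember(self, move, happiness_change):
        self.memory.append((move, happiness_change))

    def best_move(self, default="north"):
        """Return the remembered move with the best average payoff."""
        if not self.memory:
            return default
        totals = {}
        for move, delta in self.memory:
            count, total = totals.get(move, (0, 0.0))
            totals[move] = (count + 1, total + delta)
        return max(totals, key=lambda m: totals[m][1] / totals[m][0])
```

The trade-off is exactly the one Pape describes: the agents stay fast, but they forget, and old lessons about the savannah quietly fall off the back of the queue.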
One last comment on pitch: while an earlier version of this piece mapped each step in space to a semitone, things sounded too mechanical. Even though this was the easiest and most intuitive decision from a technical standpoint, it was aesthetically lacking, so I have now integrated traditional musical scales. The minor scale, in my opinion, is the most interesting as it makes the predator/prey dynamic sound appropriately foreboding.
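Snapping each spatial step to a scale degree rather than a semitone might look like the following. This is a sketch of the general technique; the tonic and the way Pape's NetLogo model actually indexes the scale may differ:

```python
# Natural-minor scale degrees, in semitones above the tonic
NATURAL_MINOR = [0, 2, 3, 5, 7, 8, 10]

def step_to_midi(step, tonic_midi=57):
    """Map the nth spatial step to the nth degree of a natural-minor
    scale (tonic 57 = A3 here), rather than to the nth semitone.
    Every seven degrees the mapping wraps into the next octave."""
    octave, degree = divmod(step, len(NATURAL_MINOR))
    return tonic_midi + 12 * octave + NATURAL_MINOR[degree]
```

Because adjacent steps now land on scale tones instead of chromatic neighbors, simultaneous agents tend to sound consonant with one another, while the minor intervals supply the foreboding color Pape mentions.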
You can play this piece yourself. Simply go to this link with Java enabled in your browser (recommended: Google Chrome). Pressing “Setup” then “Go” will create your own run of the piece. As it is running, you can adjust the slider above the graphic window to change the speed. Press “Go” again to stop the model, adjust any parameters you wish and press “Setup” and “Go” again to see how the piece changes. Here are some parameters to try: instA and instB to change the instruments associated with prey and predators; PlayEveryXSteps to change the pace of the piece (higher results in a slower paced piece); Num-PackAs and Num-PackBs changes the number of prey and predators; the vertical PeterVol and WolfVol adjust the overall volume of prey and predators.
In regards to my version of “Peter and the Wolf,” I have a number of things that I’m curious about.
First, how does this relate to what you think of as music? Do you like listening to it? Which elements do you like and which do you dislike? For example, what do you think about the tension and rhythm – do you agree that the first works and that the second could be improved? Would you listen to this for enjoyment’s sake, and what would it take for this to be more than a novelty? What do you think about the narrative that drives the piece? I chose the predator and prey narrative, admittedly, on a whim. Do you think there might be some other narrative or agent-specific goals that might better drive this piece? Is there any metaphor that might better describe this piece? As a listener do you enjoy the experience of being able to customize and configure the piece? What would you like to have control over that is missing here? Would you like more interaction with the piece or less?
Finally, and perhaps most importantly, what do you think of the premise? Can simple electronic agents (albeit ones which interact socially) aspire to create music? Is there something promising in this act of simulation? Is music-making necessarily a human activity and is this kind of work destined to be artificial and uncanny?
Thanks for listening. I look forward to your thoughts.
– – –
Andreas Duus Pape is an economist and a musician. As an economist, he studies microeconomic theory and game theory–that is, the analysis of strategy and the construction of models to understand social phenomena–and the theory of individual choice, including how to forecast the behavior of agents who construct models of social phenomena. As a musician, he plays folk in the tradition of Dylan and Guthrie, blues in the tradition of Williamson and McTell, and country in the tradition of Nelson and Cash. Pape is an Assistant Professor in the Department of Economics at Binghamton University and is a faculty member of the Collective Dynamics of Complex Systems (CoCo) Research Group.
– – –
Guest Respondents on the Comment Page (in alphabetical order)
Bill Bahng Boyer is a doctoral candidate in music at New York University who is completing a dissertation on public listening in the New York City subway system.
Maile Colbert is an intermedia artist with a concentration in sound and video, living and working between New York and Portugal. She is an associated artist at Binaural/Nodar.
Nicholas Knouf is a Ph.D. candidate in information science at Cornell University.
Primus Luta is a writer and an artist exploring the intersection of technology and art; he maintains his own AvantUrb site and is a founding member of the live electronic music collective Concrète Sound System.
Alejandro L. Madrid is Associate Professor of Latin American and Latino Studies at the University of Illinois at Chicago and a cultural theorist and music scholar whose research focuses on the intersection of modernity, tradition, globalization, and ethnic identity in popular and art music, dance, and expressive culture from Mexico, the U.S.-Mexico border, and the circum-Caribbean.
Tara Rodgers is an Assistant Professor of Women’s Studies and a faculty fellow in the Digital Cultures & Creativity program at the University of Maryland. As Analog Tara, she has released electronic music on compilations such as the Le Tigre 12″ and Source Records/Germany, and exhibited sound art at venues including Eyebeam (NYC) and the Museum of Contemporary Canadian Art (Toronto).
Jonathan Skinner founded and edits the journal ecopoetics, which features creative-critical intersections between writing and ecology. Skinner also writes ecocriticism on contemporary poetry and poetics.
Jonathan Sterne teaches in the Department of Art History and Communication Studies and the History and Philosophy of Science Program at McGill University. His latest book, MP3: The Meaning of a Format, comes out this fall from Duke University Press.
Jennifer Stoever-Ackerman is co-founder, Editor-in-Chief and Guest Posts Editor for Sounding Out! She is also Assistant Professor of English at Binghamton University and a former Fellow at the Society for the Humanities at Cornell University (2011-2012).
Aaron Trammell is Multimedia Editor of Sounding Out! and a Ph.D. Candidate in Media and Communications at Rutgers University.