Archive | Digital Media

Sounding Out Podcast #36: Anne Zeitz and David Boureau’s “Retention”


Sound and Surveillance

It’s an all too familiar movie trope. A bug hidden in a flower jar. A figure in shadows crouched listening at a door. The tape recording that no one knew existed, revealed at the most decisive of moments. Even the abrupt disconnection of a phone call manages to arouse the suspicion that we are never as alone as we may think. And although surveillance derives its meaning from the Latin “vigilare” (to watch) and the French “sur-” (over), its deep connotations of listening have all but obliterated that distinction.

In the final entry to our series on Sound and Surveillance, sound artist Anne Zeitz dissects the theory behind her installation Retention. What are the sounds of capture, and how do the sounds produced in and around spaces of capture affect our bodies? Listen in to find out. -AT


CLICK HERE TO DOWNLOAD: Anne Zeitz and David Boureau’s “Retention”

SUBSCRIBE TO THE SERIES VIA ITUNES

ADD OUR PODCASTS TO YOUR STITCHER FAVORITES PLAYLIST

This podcast presents Retention, a quadraphonic sound installation made with David Boureau that considers the sounds of surveillance, detention, and migration. Retention concentrates on the “soundscape” of the Mesnil-Amelot 2+3 detention center for “illegal immigrants,” situated north of Paris just beside Charles de Gaulle airport. This center constitutes the largest complex for detaining “illegal immigrants” in France, with 240 places for individuals and families. Approximately 350 airplanes pass closely above the center over a 24-hour span, creating intervals of very high sound levels that regularly drown out all other ambient sounds. Retention uses quadraphonic recording technology to capture and diffuse a live transmission of communication between pilots and the Charles de Gaulle control tower. The work also integrates recordings made from inside the center via mobile phone communications. In the short intervals of silence (which always imply sounds of some sort), the atmosphere seems suspended. This suspension is paradigmatic of the clash between the local and the global, between those who are trapped in a state of detention before being expelled by the engines moving over their heads and those who circulate freely (though nonetheless under surveillance) in our global society. Retention exhibits a changing sonic space in order to consider how “waiting zones” and processes of mobility meet.

-

Featured Image (c) Anne Zeitz and David Boureau, Retention, 2012.

-

Anne Zeitz is a researcher and artist working with photography, video, and sound media. Born in Berlin in 1980, she lives and works in Paris. Her research focuses on mechanisms of surveillance and mass media, theories of observation and attention, and practices of counter-observation in contemporary art. Her doctoral thesis (University Paris 8 / Esthétique, Sciences et Technologies des Arts, dissertation defence November 2014) is entitled (Counter-)observations, Relations of Observation and Surveillance in Contemporary Art, Literature and Cinema. Anne Zeitz was responsible for organizing the project Movement-Observation-Control (2007/2008) for the Goethe-Institut Paris and collaborated on the exhibition and conference Armed Response (2008) at the Goethe-Institut Johannesburg. She is a former member of the Observatoire des nouveaux médias (Paris 8/Ensad) and of the research project Média Médiums (Université Paris 8, ENSAPC, EnsadLAB, Archives Nationales, 2013/2014). Her most recent research concentrates on the work of the American artist Max Neuhaus, with the publication of De Max-Feed a Radio Net (2014), part of the Média Médiums book series. She is the artist of this year’s Urban Photo Fest and participated in Urban Encounters at Tate Britain in October 2014.

REWIND! . . . If you liked this post, you may also dig:

Toward a Civically Engaged Sound Studies, or ReSounding Binghamton – Jennifer Stoever

Sounding Out! Podcast #34: Sonia Li’s “Whale” – Sonia Li

Playing with Bits, Pieces, and Lightning Bolts: An Interview with Sound Artist Andrea Parkins – Maile Colbert

Acousmatic Surveillance and Big Data


Sound and Surveillance

It’s an all too familiar movie trope. A bug hidden in a flower jar. A figure in shadows crouched listening at a door. The tape recording that no one knew existed, revealed at the most decisive of moments. Even the abrupt disconnection of a phone call manages to arouse the suspicion that we are never as alone as we may think. And although surveillance derives its meaning from the Latin “vigilare” (to watch) and the French “sur-” (over), its deep connotations of listening have all but obliterated that distinction.

Moving on from cybernetic games, this post turns to modes of surveillance that work through composition and patterns. Here, Robin James challenges us to consider the unfamiliar resonances produced by our IP addresses, search histories, credit trails, and Facebook posts. How does the NSA transform our data footprints into the sweet, sweet music of surveillance? Shhhhhhhh! Let’s listen in. . . -AT

Kate Crawford has argued that there’s a “big metaphor gap in how we describe algorithmic filtering.” Specifically, its “emergent qualities” are particularly difficult to capture. This process, algorithmic dataveillance, finds and tracks dynamic patterns of relationships amongst otherwise unrelated material. I think that acoustics can fill the metaphor gap Crawford identifies. Because it focuses on identifying emergent patterns within a structure of data, rather than on the data’s cause or source, algorithmic dataveillance isn’t panoptic, but acousmatic. Algorithmic dataveillance is acousmatic because it does not observe identifiable subjects, but ambient data environments, and it “listens” for harmonics to emerge as variously combined data points fall into and out of phase/statistical correlation.
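To make the metaphor a little more concrete, here is a minimal, purely illustrative sketch in Python (invented data and thresholds, not any agency’s or platform’s actual pipeline) of what “listening” for statistical correlation might look like: two otherwise unrelated streams are monitored, and a flag is raised only during the stretch where they drift into phase.

```python
import numpy as np

def rolling_correlation(x, y, window):
    """Pearson correlation of two equal-length series over a sliding window."""
    corr = np.full(len(x), np.nan)
    for t in range(window, len(x) + 1):
        corr[t - 1] = np.corrcoef(x[t - window:t], y[t - window:t])[0, 1]
    return corr

rng = np.random.default_rng(0)
n, window = 400, 50

# Two "ambient" data streams: mostly independent noise...
a = rng.normal(size=n)
b = rng.normal(size=n)
# ...except for a stretch where they fall into phase (a shared pattern).
b[200:300] = a[200:300] + 0.3 * rng.normal(size=100)

corr = rolling_correlation(a, b, window)
flagged = np.where(corr > 0.7)[0]   # the "harmonics" emerging from the aggregate
print(f"correlation exceeds threshold in windows ending at samples {flagged.min()}-{flagged.max()}")
```

Nothing in either stream is inspected for its content; what gets flagged is an emergent relationship between them, which is the sense in which the listening here is acousmatic rather than panoptic.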

Dataveillance defines the form of surveillance that saturates our consumer information society. As this promotional Intel video explains, big data transcends the limits of human perception and cognition – it sees connections we cannot. And, as is the case with all superpowers, this is both a blessing and a curse. Although I appreciate emails from my local supermarket that remind me when my favorite bottle of wine is on sale, data profiling can have much more drastic and far-reaching effects. As Frank Pasquale has argued, big data can determine access to important resources like jobs and housing, often in ways that reinforce and deepen social inequities. Dataveillance is an increasingly prominent and powerful tool that determines many of our social relationships.

The term dataveillance was coined in 1988 by Roger Clarke, and refers to “the systematic use of personal data systems in the investigation or monitoring of the actions or communications of one or more persons.” In this context, the person is the object of surveillance and data is the medium through which that surveillance occurs. Writing 20 years later, Michael Zimmer identifies a phase-shift in dataveillance that coincides with the increased popularity and dominance of “user-generated and user-driven Web technologies” (2008). These technologies, found today in big social media, “represent a new and powerful ‘infrastructure of dataveillance,’ which brings about a new kind of panoptic gaze of both users’ online and even their offline activities” (Zimmer 2007). Metadataveillance and algorithmic filtering, however, are not variations on panopticism, but practices modeled—both historically/technologically and metaphorically—on acoustics.

In 2013, Edward Snowden’s infamous leaks revealed the nuts and bolts of the National Security Agency’s massive dataveillance program. They were collecting data records that, according to the Washington Post, included “e-mails, attachments, address books, calendars, files stored in the cloud, text or audio or video chats and ‘metadata’ that identify the locations, devices used and other information about a target.” The most enduringly controversial aspect of NSA dataveillance programs has been the bulk collection of Americans’ data and metadata—in other words, the “big data”-veillance programs.

 


Borrowed from thierry ehrmann @Flickr CC BY.

Instead of intercepting only the communications of known suspects, this big dataveillance collects everything from everyone and mines that data for patterns of suspicious behavior; patterns that are consistent with what algorithms have identified as, say, “terrorism.” As Cory Doctorow writes in BoingBoing, “Since the start of the Snowden story in 2013, the NSA has stressed that while it may intercept nearly every Internet user’s communications, it only ‘targets’ a small fraction of those, whose traffic patterns reveal some basis for suspicion.” “Suspicion,” here, is an emergent property of the dataset, a pattern or signal that becomes legible when you filter communication (meta)data through algorithms designed to hear that signal amidst all the noise.

Hearing a signal from amidst the noise, however, is not sufficient to consider surveillance acousmatic. “Panoptic” modes of listening and hearing, though epitomized by the universal and internalized gaze of the guards in the tower, might also be understood as the universal and internalized ear of the confessor. This is the ear that, for example, listens for conformity between bodily and vocal gender presentation. It is also the ear of audio scrobbling, which, as Calum Marsh has argued, is a confessional, panoptic music listening practice.

Therefore, when President Obama argued that “nobody is listening to your telephone calls,” he was correct. But only insofar as nobody (human or AI) is “listening” in the panoptic sense. The NSA does not listen for the “confessions” of already-identified subjects. For example, this court order to Verizon doesn’t demand recordings of the audio content of the calls, just the metadata. Again, the Washington Post explains:

The data doesn’t include the speech in a phone call or words in an email, but includes almost everything else, including the model of the phone and the “to” and “from” lines in emails. By tracing metadata, investigators can pinpoint a suspect’s location to specific floors of buildings. They can electronically map a person’s contacts, and their contacts’ contacts.

NSA dataveillance listens acousmatically because it hears the patterns of relationships that emerge from various combinations of data—e.g., which people talk and/or meet where and with what regularity. Instead of listening to identifiable subjects, the NSA identifies and tracks emergent properties that are statistically similar to already-identified patterns of “suspicious” behavior. Legally, the NSA is not required to identify a specific subject to surveil; instead they listen for patterns in the ambience. This type of observation is “acousmatic” in the sound studies sense because the sounds/patterns don’t come from one identifiable cause; they are the emergent properties of an aggregate.
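As a purely hypothetical illustration (made-up names and records, not any real schema or program), a few lines of Python show how bare call metadata, with no audio content at all, already yields a map of a person’s contacts and their contacts’ contacts:

```python
from collections import defaultdict

# Hypothetical call-detail records: (caller, callee) pairs only -- no recorded speech.
call_records = [
    ("alice", "bob"), ("bob", "carol"), ("carol", "dave"),
    ("alice", "erin"), ("erin", "frank"),
]

# Build an undirected contact graph from the metadata alone.
contacts = defaultdict(set)
for caller, callee in call_records:
    contacts[caller].add(callee)
    contacts[callee].add(caller)

def two_hop(person):
    """People reachable through exactly one intermediary: the contacts' contacts."""
    first = contacts[person]
    second = {c for c1 in first for c in contacts[c1]}
    return second - first - {person}

print(sorted(two_hop("alice")))   # -> ['carol', 'frank']
```

The analysis never touches what anyone said; the pattern of who-called-whom is enough to generate the second-degree map the Post describes.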


Borrowed from david @Flickr CC BY-NC.

Acousmatic listening is a particularly appropriate metaphor for NSA-style dataveillance because the emergent properties (or patterns) of metadata are comparable to harmonics or partials of sound, the resonant frequencies that emerge from a specific combination of primary tones and overtones. If data is like a sound’s primary tone, metadata is its overtones. When two or more tones sound simultaneously, harmonics emerge when overtones vibrate with and against one another. In Western music theory, something sounds dissonant and/or out of tune when the harmonics don’t vibrate synchronously or proportionally; conversely, tones that are perfectly in tune create consonant harmonics. The NSA is listening for harmonics. They seek metadata that statistically correlates with a pattern (such as “terrorism”), or that is suspiciously out of correlation with a pattern (such as US “citizenship”). Instead of listening to identifiable sources of data, the NSA listens for correlations among data.
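For readers who want the acoustics behind the metaphor spelled out, here is a small worked example (standard overtone arithmetic, simplified to exact integer partials): two tones tuned to a simple 3:2 ratio share coinciding partials, while a slight mistuning turns those shared partials into audible beats.

```python
def overtones(fundamental, count=6):
    """First few partials of a tone: integer multiples of its fundamental, in Hz."""
    return [fundamental * k for k in range(1, count + 1)]

a = overtones(220.0)          # A3
e_in_tune = overtones(330.0)  # a perfect fifth above (3:2 ratio)
e_detuned = overtones(333.0)  # the same fifth, slightly out of tune

# In tune: partials line up exactly, and shared "harmonics" emerge.
print("coinciding partials:", sorted(set(a) & set(e_in_tune)))   # [660.0, 1320.0]

# Out of tune: nearby partials no longer coincide; their difference is the beat rate.
print("beat rate near 660 Hz:", abs(a[2] - e_detuned[1]), "Hz")  # 6.0 Hz
```

The analogy maps onto both cases: what matters is not the individual tones (data points) themselves but whether their combination locks into, or falls out of, a recognizable pattern.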

Both panopticism and acousmaticism are technologies that incite behavior and compel people to act in certain ways. However, they use different methods, which, in turn, incite different behavioral outcomes. Panopticism maximizes efficiency and productivity by compelling conformity to a standard or norm. According to Michel Foucault, the outcome of panoptic surveillance is a society where everyone synchs to an “obligatory rhythm imposed from the outside” (151-2), such as the rhythmic divisions of the clock (150). In other words, panopticism transforms people into interchangeable cogs in an industrial machine. Methodologically, panopticism demands self-monitoring. Foucault emphasizes that panopticism functions most efficiently when the gaze is internalized, when one “assumes responsibility for the constraints of power” and “makes them play…upon himself” (202). Panopticism requires individuals to synchronize themselves with established compulsory patterns.

Acousmaticism, on the other hand, aims for dynamic attunement between subjects and institutions, an attunement that is monitored and maintained by a third party (in this example, the algorithm). For example, Facebook’s News Feed algorithm facilitates the mutual adaptation of norms to subjects and subjects to norms. Facebook doesn’t care what you like; instead it seeks to transform your online behavior into a form of efficient digital labor. In order to do this, Facebook must adjust, in part, to you. Methodologically, this dynamic attunement is not a practice of internalization; unlike Foucault’s panopticon, big dataveillance leverages outsourcing and distribution. There is so much data that no one individual—indeed, no one computer—can process it efficiently and intelligibly. The work of dataveillance is distributed across populations, networks, and institutions, and the surveilled “subject” emerges from that work (see, for example, Rob Horning’s concept of the “data self”). Acousmaticism tunes into the rhythmic patterns that synch up with and amplify its cycles of social, political, and economic reproduction.


Sonic Boom! Borrowed from NASA’s Goddard Space Flight Center @Flickr CC BY.

Unlike panopticism, which uses disciplinary techniques to eliminate noise, acousmaticism uses biopolitical techniques to allow profitable signals to emerge as clearly and frictionlessly as possible amid all the noise (for more on the relation between sound and biopolitics, see my previous SO! essay). Acousmaticism and panopticism are analytically discrete, yet applied in concert. For example, certain tiers of the North Carolina state employees’ health plan require so-called “obese” and tobacco-using members to commit to weight-loss and smoking-cessation programs. If these members are to remain eligible for their selected level of coverage, they must track and report their program-related activities (such as exercise). People who exhibit patterns of behavior that are statistically risky and unprofitable for the insurance company are subject to extra layers of surveillance and discipline. Here, acousmatic techniques regulate the distribution and intensity of panoptic surveillance. To use Nathan Jurgenson’s turn of phrase, acousmaticism determines “for whom” the panoptic gaze matters. To be clear, acousmaticism does not replace panopticism; my claim is more modest. Acousmaticism is an accurate and productive metaphor for theorizing both the aims and methods of big dataveillance, which is, itself, one instrument in today’s broader surveillance ensemble.

-

Featured image “Big Brother 13/365” by Dennis Skley CC BY-ND.

-

Robin James is Associate Professor of Philosophy at UNC Charlotte. She is author of two books: Resilience & Melancholy: pop music, feminism, and neoliberalism will be published by Zer0 books this fall, and The Conjectural Body: gender, race and the philosophy of music was published by Lexington Books in 2010. Her work on feminism, race, contemporary continental philosophy, pop music, and sound studies has appeared in The New Inquiry, Hypatia, differences, Contemporary Aesthetics, and the Journal of Popular Music Studies. She is also a digital sound artist and musician. She blogs at its-her-factory.com and is a regular contributor to Cyborgology.

REWIND!…If you liked this post, check out:

“Cremation of the senses in friendly fire”: on sound and biopolitics (via KMFDM & World War Z)–Robin James

The Dark Side of Game Audio: The Sounds of Mimetic Control and Affective Conditioning–Aaron Trammell

Listening to Whisperers: Performance, ASMR Community, and Fetish on YouTube–Joshua Hudelson

The Dark Side of Game Audio: The Sounds of Mimetic Control and Affective Conditioning


Sound and Surveillance

It’s an all too familiar movie trope. A bug hidden in a flower jar. A figure in shadows crouched listening at a door. The tape recording that no one knew existed, revealed at the most decisive of moments. Even the abrupt disconnection of a phone call manages to arouse the suspicion that we are never as alone as we may think. And although surveillance derives its meaning from the Latin “vigilare” (to watch) and the French “sur-” (over), its deep connotations of listening have all but obliterated that distinction.

This month, SO! Multimedia Editor Aaron Trammell curates a forum on Sound and Surveillance, featuring the work of Robin James and Kathleen Battles.  And so it begins, with Aaron asking. . .”Want to Play a Game?” –JS

It’s eleven o’clock on a Sunday night and I’m in the back room of a comic book store in Scotch Plains, NJ. Game night is wrapping up. Just as I’m about to leave, someone suggests that we play Pit, a classic game about trading stocks in the early 20th century. Because the game is short, I decide to give it a go and pull a chair up to the table. In Pit, players are given a hand of nine cards of various farm-related suits and frantically trade cards with other players until their entire hand matches the same suit. As play proceeds, players hold up a set of similar cards they are willing to trade and shout, “one, one, one!,” “two, two, two!,” “three, three, three!,” until another player is willing to trade them an equivalent amount of cards in a different suit. The game only gets louder as the shouting escalates and builds to a cacophony.

As I drove home that night, I came to the uncomfortable realization that maybe the game was playing me. I and the rest of the players had adopted similar dispositions over the course of the play. As we fervently shouted to one another trying to trade between sets of indistinguishable commodities, we took on similar, intense, and excited mannerisms. Players who would not scream, who would not participate in the reproduction of the game’s sonic environment, simply lost the game, faded out. As for the rest of us, we became like one another, cookie-cutter reproductions of enthusiastic, stressed, and aggravated stock traders, getting louder as we cornered the market on various goods.

We were caught in a cybernetic loop, one that encouraged us to take on the characteristics of stock traders. And, for that brief period of time, we succumbed to systems of control with far-reaching implications. As I’ve argued before, games are cybernetic mechanisms that facilitate particular modes of feedback between players and the game state. Sound is one of the channels through which this feedback is processed. In a game like Pit, players both listen to other players for cues regarding their best move and shout numbers to the table representing potential trades. In other games, such as Monopoly, players must announce when they wish to buy properties. Although it is no secret that understanding sound is essential to good game design, it is less clear how sound defines the contours of power relationships in these games. This essay offers two games, Mafia and Escape: The Curse of the Temple, as case studies for the ways in which sound is used in the most basic of games: board games. By fostering environments that encourage both mimetic control and affective conditioning, game sound draws players into the devious logic of cybernetic systems.

Understanding the various ways that sound is implemented in games is essential to understanding how game sound operates as both a form of mimetic control and a form of affective conditioning. Mimetic control is, at its most simple, the power of imitation. It is the degree to which we become alike when we play games. Mostly, it happens because the rules invoke a variety of protocols that encourage players to interact according to a particular standard of communication. The mood set by game sound is the power of affective conditioning. Because we decide what we interact with on account of our moods, moments of affective conditioning prime players to feel things (such as pleasure), which can encourage them to interact in compulsive, excited, subdued, or frenetic ways with game systems.

A game where sound plays a central and important role is Mafia (which has a number of variants, like Werewolf and The Resistance). In Mafia, some players take the secret role of mafia members who choose players to “kill” at night, while the eyes of the others are closed. Because mafia-team players shuffle around during the game and point to others in order to indicate which players to eliminate while the eyes of the other players are closed, the rules of the game suggest that players tap on things, whistle, chirp, and make other ambient noises while everyone’s eyes are closed. This allows the mafia-team players to conduct their business secretly, as their motions are well below the din created by the other players. Once players open their eyes, they must work together to deduce which players are part of the mafia, and then vote on whom to eliminate from the game. Here players are, in a sense, controlled by the game to provide a soundtrack. What’s more, the eeriness of the sounds produced by the players only accentuates the paranoia they feel when taking part in what’s essentially a lynch mob.

The ambient sounds produced by players of Mafia have overtones of mimetic control. Protocols governing the use of game audio as a form of communication between bodies and other bodies, or bodies and machines, require that we communicate in particular ways at set intervals. Unlike the brutal and martial forms of discipline that drove apparatuses like Bentham’s panopticon, the form of control exerted through interactive game audio relies on precisely the opposite premise. What is often termed “The Magic Circle of Play” is suspect here, as it promises players a space that is safe and fundamentally separate from events in the outside world. Within this space, somewhat hypnotic behavior patterns take place under the auspices of being just fun, or mere play. Players who refuse to play by the rules are often exiled from this space, as they refuse to enter into this contract of soft social norms with others.


Not all panopticons are in prisons. “Singing Ringing Tree at Sunset,” Dave Leeming CC BY.

Escape: The Curse of the Temple relies on sound to set a game mood that governs the ways that players interact with each other. In Escape, players have ten minutes (of real time) in which they must work together to navigate a maze of cardboard tiles. Over the course of the game there are two moments when players must return to the tile on which they started, and these are announced by a CD playing in the background of the room. When this occurs, a gong rings on the CD and rhythms of percussion mount in intensity until players hear a door slam. At this point, if players haven’t returned to their starting tile, they are limited in the actions they can take for the rest of the game. In the moments of calm before players make a mad dash for the entrance, the soundtrack waxes ambient. It offers the sounds of howling winds, rattling chimes, and yawning corridors.
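As a toy model only, the timed cue structure described above can be sketched in a few lines of Python. The ten-minute length comes from the game’s rules, but the specific cue times and wording below are invented for illustration; the actual game uses a fixed, pre-recorded soundtrack rather than code.

```python
import time

GAME_LENGTH = 10 * 60   # ten minutes of real time, per the game's rules
CUES = [                # (seconds from start, event) -- times below are invented
    (3 * 60,      "gong: percussion builds, return to your starting tile"),
    (3 * 60 + 30, "door slam: players not back in time are limited for the rest of the game"),
    (7 * 60,      "gong: percussion builds, return to your starting tile"),
    (7 * 60 + 30, "door slam: players not back in time are limited for the rest of the game"),
    (GAME_LENGTH, "final slam: time is up"),
]

def run_soundtrack(speedup=60):
    """Step through the cue list, compressed by `speedup` so the demo finishes quickly."""
    start = time.monotonic()
    for cue_time, event in CUES:
        delay = cue_time / speedup - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        print(f"[{cue_time // 60:02d}:{cue_time % 60:02d}] {event}")

run_soundtrack()
```

The point of the sketch is simply that the soundtrack, not a referee, enforces the game’s tempo: players attune themselves to cues they cannot pause or negotiate with.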

The game is spooky, overall. The combination of haunting ambient sounds and moments where gameplay is rushed and timed makes for an adrenaline-fueled experience contained and produced by the game’s ambient soundtrack. The game’s most interesting moments come from points where one player is trapped and players must decide whether they should help their friend or help themselves. The tense, haunting soundtrack evokes feelings of high-stakes immersion. The game is fun because it produces a tight, stressful, and highly interactive experience. It conditions its players through the clever use of its soundtrack to feel the game in an embodied and visceral way. Like the ways that horror movies have used ambient sound to great effect in producing tension in audiences (pp. 26-27), Escape: The Curse of the Temple encourages players to immerse themselves in the game world by playing upon the tried and true affective techniques that films have used for years. Immersed players feel an increased sense of engagement with the game, and because of this they are willingly primed to engage in the mimetic interactive behaviors that enmesh them within the game’s cybernetic logic.

These two forms of power, mimetic control and affective conditioning, often overlap and coalesce in games. Sometimes they meet in the middle, in games that offer a more or less adaptive form of sound, like Mafia. Players work together and mimic each other when reproducing the ambient forms of quiet that constitute the atmosphere of terror that permeates the game space. Even the roar of bids which occurs in Pit constitutes a form of affective conditioning that encourages players to buy, buy, buy as fast as possible, effectively simulating the pressure of the stock exchange.

Although there is now a growing discipline around the production of game audio, there is relatively little discourse that attempts to understand how the implementation of sound in games functions as a mode of social control. By looking at the ways that sound is implemented in board and card games, we can gain insight into the ways in which it is implemented in larger technical systems (such as computer games), larger aesthetic systems (such as performance art), economic systems (like casinos and the stock market), and even social systems (like parties). Furthermore, it becomes easier to describe clearly the ways in which game audio functions as a form of soft power through techniques of mimetic control and affective conditioning. It is only by understanding how these techniques affect our bodies that we can begin to recognize our interactions with large-scale cybernetic systems whose effects reach beyond the game itself.

-

Aaron Trammell is co-founder and Multimedia Editor of Sounding Out! He is also a Media Studies PhD candidate at Rutgers University. His dissertation explores the fanzines and politics of underground wargame communities in Cold War America. You can learn more about his work at aarontrammell.com.

Featured image “Psychedelic Icon,” by Gwendal Uguen CC BY-NC-SA.

REWIND!…If you liked this post, you may also dig:

Papa Sangre and the Construction of Immersion in Audio Games- Enongo Lumumba-Kasongo 

Sounding Out! Podcast #31: Game Audio Notes III: The Nature of Sound in Vessel- Leonard J. Paul

Experiments in Aural Resistance: Nordic Role-Playing, Community, and Sound- Aaron Trammell

Erratic Furnaces of Infrasound: Volcano Acoustics

Surface flows as seen by thermal cameras at Pu’u O’o crater, June 27th, 2014. Image: USGS

Hearing the Unheard II

Welcome back to Hearing the UnHeard, Sounding Out!’s series on how the unheard world affects us, which started out with my post on hearing large and small, continued with a piece by China Blue on the sounds of catastrophic impacts, and now continues with the deep sounds of the Earth itself by Earth Scientist Milton Garcés.

A faculty member at the University of Hawaii at Manoa and founder of the Infrasound Laboratory in Kona, Hawaii, Milton Garcés is an explorer of the infrasonic: sounds so low that they circumvent our ears but can be felt resonating through our bodies as they do through the Earth. Using global networks of specialized detectors, he explores the deepest sounds of our world, from the depths of volcanic eruptions to the powerful forces driving tsunamis to the trails left by meteors through our upper atmosphere. And while the raw power behind such events is overwhelming to those caught in them, his recordings let us appreciate the sense of awe felt by those who dare to immerse themselves.

In this installment of Hearing the UnHeard, Garcés takes us on an acoustic exploration of volcanoes, transforming what might seem a vision of the margins of hell into a near-poetic immersion within our planet.

– Guest Editor Seth Horowitz

The sun rose over the desolate lava landscape, a study of red on black. The night had been rich in aural diversity: pops, jetting, small earthquakes, all intimately felt as we camped just a mile away from the Pu’u O’o crater complex and lava tube system of Hawaii’s Kilauea Volcano.

The sound records and infrared images captured over the night revealed a new feature downslope of the main crater. We donned our gas masks, climbed the mountain, and confirmed that indeed a new small vent had grown atop the lava tube and was radiating throbbing bass sounds. We named our acoustic discovery the Uber vent. But, as with most things volcanic, our find was transitory – the vent eventually melted and was recycled into the continuously changing landscape, as ephemeral as the sound that led us there in the first place.

Volcanoes are exceedingly expressive mountains. When quiescent, they are pretty and fertile, often coyly cloud-shrouded, sometimes snowcapped. When stirring, they glow, swell and tremble, strongly scented, exciting, unnerving. And in their full fury, they are a menacing incandescent spectacle. Excess gas pressure in the magma drives all eruptive activity, but that activity varies. Kilauea volcano in Hawaii has primordial, fluid magmas that degas well, so violent explosive activity is not as prominent as in volcanoes that have more evolved, viscous material.

Well-degassed volcanoes pave their slopes with fresh lava, but they seldom kill in violence. In contrast, the more explosive volcanoes demolish everything around them, including themselves; seppuku by fire. Such massive, disruptive eruptions often produce atmospheric sounds known as infrasounds, an extreme basso profondo that can propagate for thousands of kilometers. Infrasounds are usually inaudible, as they reside below the 20 Hz threshold of human hearing and tonality. However, when intense enough, we can perceive infrasound as beats or sensations.

Like a large door slamming, the concussion of a volcanic explosion can be startling and terrifying. It immediately compels us to pay attention, and it’s not something one gets used to. The roaring is also disconcerting, especially if one thinks of a volcano as an erratic furnace with homicidal tendencies. But occasionally, amidst the chaos and cacophony, repeatable sound patterns emerge, suggestive of a modicum of order within the complex volcanic system. These reproducible, recognizable patterns permit the identification of early warning signals, and keep us listening.

Each of us now has technology within close reach to capture and distribute Nature’s silent warning signals, be they from volcanoes, tsunamis, meteors, or rogue nations testing nukes. Infrasounds, long hidden under the myth of silence, will be everywhere revealed.


The “Cookie Monster” skylight on the southwest flank of Pu`u `O`o. Photo by J. Kauahikaua, 27 September 2002.

I first heard these volcanic sounds in the rain forests of Costa Rica. As a graduate student, I was drawn to Arenal Volcano by its infamous reputation as one of the most reliably explosive volcanoes in the Americas. Arenal was cloud-covered and invisible, but its roar was audible and palpable. Here is a tremor (a sustained oscillation of the ground and atmosphere) recorded at Arenal Volcano in Costa Rica with a 1 Hz fundamental and its overtones:


In that first visit to Arenal, I tried to reconstruct in my mind’s eye what was going on at the vent from the diverse sounds emitted behind the cloud curtain. I thought I could blindly recognize rockfalls, blasts, pulsations, and ground vibrations, until the day the curtain lifted and I could confirm that my aural reconstruction closely matched the visual scene. I had imagined a flashing arc from the shock wave as it compressed the steam plume, and by patient and careful observation I could see it, a rapid shimmer slashing through the vapor. The sound of rockfalls matched large glowing boulders bouncing down the volcano’s slope. But there were also some surprises. Some visible eruptions were slow, so I could not hear them above the ambient noise. By comparing my notes to the infrasound records I realized these eruptions had left their deep acoustic mark, hidden in plain sight just below aural silence.


Arenal, Costa Rica, May 1, 2010. Image by Flickr user Daniel Vercelli.

I then realized one could chronicle an eruption through its sounds, and recognize different types of activity that could be used for early warning of hazardous eruptions even under poor visibility. At the time, I had only thought of the impact and potential hazard mitigation value to nearby communities. This was in 1992, when there were only a handful of people on Earth who knew or cared about infrasound technology. With the cessation of atmospheric nuclear tests in 1980 and the promise of constant vigilance by satellites, infrasound was deemed redundant and had faded to near obscurity over two decades. Since there was little interest, we had scarce funding, and were easily ignored. The rest of the volcano community considered us a bit eccentric and off the main research streams, but patiently tolerated us. However, discussions with my few colleagues in the US, Italy, France, and Japan were open, spirited, and full of potential. Although we didn’t know it at the time, we were about to live through Gandhi’s quote: “First they ignore you, then they laugh at you, then they fight you, then you win.”

Fast forward 22 years. A computer revolution took place in the mid-90’s. The global infrasound network of the International Monitoring System (IMS) began construction before the turn of the millennium, in its full 24-bit broadband digital glory. Designed by the United Nations’ Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), the IMS infrasound network detects minute pressure variations produced by clandestine nuclear tests at standoff distances of thousands of kilometers. This new, ultra-sensitive global sensor network and its cyberinfrastructure triggered an Infrasound Renaissance and opened new opportunities in the study and operational use of volcano infrasound.

Suddenly endowed with super sensitive high-resolution systems, fast computing, fresh capital, and the glorious purpose of global monitoring for hazardous explosive events, our community rapidly grew and reconstructed fundamental paradigms early in the century. The mid-aughts brought regional acoustic monitoring networks in the US, Europe, Southeast Asia, and South America, and helped validate infrasound as a robust monitoring technology for natural and man-made hazards. By 2010, infrasound was part of the accepted volcano monitoring toolkit. Today, large portions of the IMS infrasound network data, once exclusive, are publicly available (see links at the bottom), and the international infrasound community has grown to the hundreds, with rapid evolution as new generations of scientists join in.

In order to capture infrasound, a microphone with a low-frequency response or a barometer with a high-frequency response is needed. The sensor data then needs to be digitized for subsequent analysis. In the pre-millennium era, you’d drop a few thousand dollars to get a single, basic data acquisition system. But, in the very near future, there’ll be an app for that. Once the sound is sampled, it looks much like your typical sound track, except you can’t hear it. A single sensor record is of limited use because it does not have enough information to unambiguously determine the arrival direction of a signal. So we use arrays and networks of sensors, using the time of flight of sound from one sensor to another to recognize the direction and speed of arrival of a signal. Once we associate a signal type to an event, we can start characterizing its signature.
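Here is a minimal sketch of the array principle just described, under a plane-wave assumption and with made-up sensor coordinates and arrival times (not data from any real station): a least-squares fit of the slowness vector to the relative arrival times recovers the direction and apparent speed of the incoming signal.

```python
import numpy as np

# Hypothetical four-element array: sensor coordinates in meters (east, north).
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [-80.0, -60.0]])

SPEED = 340.0                 # assumed horizontal sound speed, m/s
true_azimuth = 25.0           # back-azimuth of the source, degrees clockwise from north
u = -np.array([np.sin(np.radians(true_azimuth)),   # unit vector of propagation
               np.cos(np.radians(true_azimuth))])
slowness_true = u / SPEED

# Synthetic relative arrival times for a plane wave crossing the array: t_i = s . r_i
arrivals = sensors @ slowness_true
arrivals += np.random.default_rng(1).normal(scale=1e-3, size=len(arrivals))  # timing noise

# Least-squares fit of the slowness vector from positions and arrival times.
slowness, *_ = np.linalg.lstsq(sensors, arrivals, rcond=None)

apparent_speed = 1.0 / np.linalg.norm(slowness)
back_azimuth = np.degrees(np.arctan2(-slowness[0], -slowness[1])) % 360.0
print(f"back-azimuth ~ {back_azimuth:.1f} deg, apparent speed ~ {apparent_speed:.0f} m/s")
```

Real array processing adds cross-correlation to measure the arrival-time differences in the first place, plus checks against wind noise, but this time-of-flight geometry is the core of how a direction is “heard” from a patch of ground.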

Consider Kilauea Volcano. Although we think of it as one volcano, it actually consists of various crater complexes, each with a number of sounds. Here is the sound of a collapsing structure:

As you might imagine, it is very hard to classify volcanic sounds. They are diverse, and often superposed on other competing sounds (often from wind or the ocean). As with human voices, each vent, volcano, and eruption type can have its own signature. Identifying transportable scaling relationships as well as constructing a clear notation and taxonomy for event identification and characterization remains one of the field’s greatest challenges. A 15-year collection of volcanic signals can be perused here, but here are a few selected examples to illustrate the problem.

First, the only complete acoustic record of the birth of Halemaumau’s vent at Kilauea, 19 March 2008:


Here is a bench collapse of lava near the shoreline, which usually leads to explosions as hot lava comes in contact with the ocean:


Here is one of my favorites, from Tungurahua Volcano, Ecuador, recorded by an array near the town of Riobamba 40 km away. Although not as violent as the eruptive activity that followed it later that year, this sped-up record shows the high degree of variability of eruption sounds:


The infrasound community has had an easier time when it comes to the biggest and meanest eruptions, the kind that can inject ash to cruising altitudes and bring down aircraft. Our Acoustic Surveillance for Hazardous Eruptions (ASHE) project in Ecuador identified the acoustic signature of this type of eruption. Here is one from Tungurahua:


Our data center crew was at work when such a signal scrolled through the monitoring screens, arriving first at Riobamba, then at our station near the Colombian border. It was large in amplitude and just kept on going, with super heavy bass – and very recognizable. Such signals resemble jet noise — if a jet was designed by giants with stone tools. These sustained hazardous eruptions radiate infrasound below 0.02 Hz (50 second periods), so deep in pitch that they can propagate for thousands of kilometers to permit robust acoustic detection and early warning of hazardous eruptions.
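As a hedged illustration of the detection idea rather than the ASHE algorithm itself, the sketch below low-pass filters a synthetic pressure record at the 0.02 Hz boundary mentioned above and watches for a sustained rise of energy in that ultra-long-period band. All numbers are invented for the demo.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1.0                                  # sample rate in Hz (synthetic record)
t = np.arange(0, 6 * 3600, 1 / FS)        # six hours of synthetic pressure data

# Background noise plus a sustained 0.01 Hz (100-second period) signal
# that switches on halfway through the record.
rng = np.random.default_rng(2)
pressure = 0.5 * rng.normal(size=t.size)
pressure += 2.0 * np.sin(2 * np.pi * 0.01 * t) * (t > 3 * 3600)

# Low-pass filter at 0.02 Hz to isolate the ultra-long-period band.
b, a = butter(4, 0.02 / (FS / 2), btype="low")
ulp = filtfilt(b, a, pressure)

# Smoothed energy envelope; a persistent exceedance suggests a sustained source.
window = int(600 * FS)                          # ten-minute smoothing window
envelope = np.convolve(ulp ** 2, np.ones(window) / window, mode="same")
onset = np.argmax(envelope > 1.0)               # first sample above an arbitrary threshold
print(f"band-limited energy first exceeds the threshold ~{t[onset] / 3600:.1f} hours in")
```

An operational detector would work on continuous multi-station data and require the signal to be coherent across an array before raising an alarm, but the band-limited, sustained-energy logic is the part the passage above describes.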

In collaboration with our colleagues at the Earth Observatory of Singapore (EOS) and the Republic of Palau, we infrasound scientists will be turning our attention to early detection of hazardous volcanic eruptions in Southeast Asia. One of the primary obstacles to technology evolution in infrasound has been the exorbitant cost of infrasound sensors and data acquisition systems, sometimes compounded by export restrictions. However, as everyday objects are increasingly vested with sentience under the Internet of Things, this technological barrier is rapidly collapsing. Instead, the questions of the decade are how to receive, organize, and distribute the wealth of information beneath our perception of sound so as to construct a better informed and safer world.

IRIS Links

http://www.iris.edu/spud/infrasoundevent

http://www.iris.edu/bud_stuff/dmc/bud_monitor.ALL.html, search for IM and UH networks, infrasound channel name BDF

Milton Garcés is an Earth Scientist at the University of Hawaii at Manoa and the founder of the Infrasound Laboratory in Kona. He explores deep atmospheric sounds, or infrasounds, which are inaudible but may be palpable. Milton taps into a global sensor network that captures signals from intense volcanic eruptions, meteors, and tsunamis. His studies underscore our global connectedness and enhance our situational awareness of Earth’s dynamics. You are invited to follow him on Twitter @iSoundHunter for updates on things Infrasonic and to get the latest news on the Infrasound App.

Featured image: surface flows as seen by thermal cameras at Pu’u O’o crater, June 27th, 2014. Image: USGS


REWIND! If you liked this post, check out …

SO! Amplifies: Ian Rawes and the London Sound Survey – Ian Rawes

Sounding Out Podcast #14: Interview with Meme Librarian Amanda Brennan – Aaron Trammell

Catastrophic Listening — China Blue
