Sounds of Science: The Mystique of Sonification


Welcome to the final installment of Hearing the UnHeard, Sounding Out!'s series on what we don't hear and how this unheard world affects us. The series started out with my post on hearing, large and small, continued with a piece by China Blue on the sounds of catastrophic impacts, and Milton Garcés's piece on the infrasonic world of volcanoes. To cap it all off, we introduce The Sounds of Science by professor, cellist, and interactive media expert Margaret Schedel.

Dr. Schedel is an Associate Professor of Composition and Computer Music at Stony Brook University. Through her work, she explores the relatively new field of data sonification, generating new ways to perceive and interact with information through the use of sound. While everyone is familiar with infographics, the graphs and images used to convey complex information, her work explores how we can expand our understanding of even complex scientific information by using our fastest and most emotionally compelling sense: hearing.

– Guest Editor Seth Horowitz

With the invention of digital sound, the number of scientific experiments using sound has skyrocketed in the 21st century, and as Sounding Out! readers know, sonification has started to enter the public consciousness as a new and refreshing alternative modality for exploring and understanding many kinds of datasets emerging from research into everything from deep space to the underground. We seem to be in a moment in which “science that sounds” has a special magic, a mystique that relies to some extent on misunderstandings in popular awareness about the processes and potentials of that alternative modality.

For one thing, using sound to understand scientific phenomena is not actually new. Diarist Samuel Pepys wrote about meeting scientist Robert Hooke in 1666 that “he is able to tell how many strokes a fly makes with her wings (those flies that hum in their flying) by the note that it answers to in musique during their flying.” Unfortunately Hooke never published his findings, leading researchers to speculate on his methods. One popular theory is that he tied strings of varying lengths between a fly and an ear trumpet, recognizing that sympathetic resonance would cause the correct length string to vibrate, thus allowing him to calculate the frequency. Even Galileo used sound, showing the constant acceleration of a ball due to gravity by using an inclined plane with thin moveable frets. By moving the placement of the frets until the clicks created an even tempo he was able to come up with a mathematical equation to describe how time and distance relate when an object falls.


Illustration from Robert Hooke’s Micrographia (1665)

There have also been other scientific advances using sound in the more recent past. The stethoscope was invented in 1816 for auscultation, listening to the sounds of the body. It was later applied to machines, listening for the operation of their parts. Underwater sonar was patented in 1913 and is still used to navigate and communicate using hydroacoustic phenomena. The Geiger counter was developed in 1928 using principles discovered in 1908; it is unclear exactly when the distinctive sound was added. These are all examples of auditory display (AD); sonification, the generation or manipulation of sound using data, is a subset of AD. As the foreword to The Sonification Handbook states, "[Since 1992] Technologies that support AD have matured. AD has been integrated into significant (read "funded" and "respectable") research initiatives. Some forward thinking universities and research centers have established ongoing AD programs. And the great need to involve the entire human perceptual system in understanding complex data, monitoring processes, and providing effective interfaces has persisted and increased" (Thomas Hermann, Andy Hunt, John G. Neuhoff, The Sonification Handbook, iii).

Sonification clearly enables scientists, musicians, and the public to interact with data in a very different way, particularly compared to the more numerous techniques involving vision. Indeed, because hearing functions quite differently than vision, sonification offers an alternative kind of understanding of data (sometimes more accurate), which would not be possible using eyes alone. Hearing is multi-directional—our ears don't have to be pointing at a sound source in order to sense it. Furthermore, the temporal resolution of our hearing is far finer than that of our vision: to reproduce a moving image, film only needs a sampling rate (the frame rate) of 24 frames per second, while audio has to be sampled 44,100 times per second to be reproduced accurately. In addition, aural perception works on simultaneous time scales—we can take in multiple streams of audio data at once at many different dynamics, while our pupils dilate and contract, limiting how much visual data we can absorb at a single time. Our ears are also amazing at detecting regular patterns over time in data; we hear these patterns as frequency, harmonic relationships, and timbre.

Image credit: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.


But hearing isn’t simple, either. In the current fascination with sonification, the fact that aesthetic decisions must be made in order to translate data into the auditory domain can be obscured. Headlines such as “Here’s What the Higgs Boson Sounds Like” are much sexier than headlines such as “Here is What One Possible Mapping of Some of the Data We Have Collected from a Scientific Measuring Instrument (which itself has inaccuracies) Into Sound.” To illustrate the complexity of these aesthetic decisions, which are always interior to the sonification process, I focus here on how my collaborators and I have been using sound to understand many kinds of scientific data.

My husband, Kevin Yager, a staff scientist at Brookhaven National Laboratory, works at the Center for Functional Nanomaterials using scattering data from X-rays to probe the structure of matter. One night I asked him how exactly the science of X-ray scattering works. He explained that X-rays "scatter" off of all the atoms/particles in the sample and the intensity is measured by a detector. He can then calculate the structure of the material using the Fast Fourier Transform (FFT) algorithm. He started to explain the FFT to me, but I interrupted him because I use FFTs all the time in computer music. The same algorithm he uses to determine the structure of matter, musicians use to separate frequency content from time. When I was researching this post, I found a computer music site that actually discusses X-ray scattering as a precursor of the FFT's use in sonic applications.
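To make that shared machinery concrete, here is a minimal sketch, in Python with NumPy, of the transform at work in the audio domain. The 440 Hz and 880 Hz test tone is invented for illustration; it simply stands in for any time-domain signal whose frequency content we want to pull apart.

```python
import numpy as np

# A minimal sketch: the same FFT used to analyze X-ray scattering data
# also separates a sound into its frequency components.
# The 440 Hz / 880 Hz test tone below is purely illustrative.
sample_rate = 44100                        # samples per second
t = np.arange(0, 1.0, 1.0 / sample_rate)   # one second of time points
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

spectrum = np.fft.rfft(signal)                           # complex spectrum
freqs = np.fft.rfftfreq(len(signal), 1.0 / sample_rate)  # frequency of each bin
magnitudes = np.abs(spectrum)

# The two loudest bins should sit near 440 Hz and 880 Hz.
loudest = freqs[np.argsort(magnitudes)[-2:]]
print(sorted(loudest))
```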

To date, most sonifications have used data which changes over time – a fly's wings flapping, a heartbeat, a radiation signature. Except in special cases, Kevin's data does not exist in time – it is a single snapshot. But because data from X-ray scattering is a Fourier transform of the real-space density distribution, we could use additive synthesis, combining multiple simultaneous sine waves, to represent different spatial modes. Using this method, we swept through his data radially, like a clock hand, making timbre-based sonifications from the data by synthesizing sine waves with loudness based on the intensity of the scattering data and frequency based on position.

We played a lot with the settings of the additive synthesis, including the length of the sound, the highest frequency, and even the number of frequency bins (going back to the clock metaphor: pretend the clock hand is a ruler; the number of frequency bins would be the number of demarcations on the ruler), arriving eventually at a set of optimized variables.
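For readers who want to see the shape of this mapping in code, here is a minimal sketch of the kind of additive synthesis described above. It is not the code we used in the lab: the function name, the random test image, and the parameter values are stand-ins for illustration only.

```python
import numpy as np

def sonify_scattering(image, n_bins=50, duration=5.0, max_freq=4000.0,
                      sample_rate=44100):
    """Additive-synthesis sketch: sweep a 2-D intensity image radially
    (like a clock hand) and map radius -> frequency, intensity -> loudness.
    `image` is a hypothetical square array of detector intensities."""
    n_samples = int(duration * sample_rate)
    t = np.arange(n_samples) / sample_rate
    cx, cy = np.array(image.shape) / 2.0
    max_r = min(cx, cy) - 1
    audio = np.zeros(n_samples)

    # Angle of the "clock hand" at every output sample.
    angles = np.linspace(0, 2 * np.pi, n_samples)

    for b in range(n_bins):
        radius = (b + 1) / n_bins * max_r   # position along the ruler
        freq = (b + 1) / n_bins * max_freq  # position -> frequency
        # Intensity under the clock hand at this radius, for every sample.
        xs = (cx + radius * np.cos(angles)).astype(int)
        ys = (cy + radius * np.sin(angles)).astype(int)
        loudness = image[xs, ys]            # intensity -> loudness
        audio += loudness * np.sin(2 * np.pi * freq * t)

    return audio / np.max(np.abs(audio))    # normalize to [-1, 1]

# Illustrative use with random "scattering" data:
fake_data = np.random.rand(512, 512)
track = sonify_scattering(fake_data, n_bins=50)
```

Changing n_bins, duration, or max_freq in a sketch like this corresponds to the variables we tuned by ear.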

Here is one version of the track we created using 10 frequency bins:

[audio]

Here is one we created using 2000:

[audio]

And here is one we created using 50 frequency bins, which we settled on:

[audio]

On a software synthesizer this would be like the default setting. In the future we hope to have an interactive graphical user interface where sliders control these variables, just like a musician tweaks the sound of a synth, so scientists can bring out or mask aspects of the data.

To hear what that would be like, here are a few tracks that vary length:

[audio]

[audio]

[audio]

Finally, here is a track we created using different mappings of frequency and intensity:

[audio]

Having these sliders would reinforce to the scientists that we are not creating “the sound of a metallic alloy,” we are creating one sonic representation of the data from the metallic alloy.

It is interesting that such a representation can be vital to scientists. At first, my husband went along with this sonification project as more of a thought experiment than something he thought would actually be useful in the lab, until he heard something distinct in one of those sounds, suggesting that there was a misaligned sample. Once Kevin heard that glitched sound (you can hear it in the video above), he was convinced that sonification was a useful tool for his lab. He and his colleagues are dealing with measurements 1/25,000th the width of a human hair, aiming an X-ray through twenty pieces of equipment to get the beam focused just right. If any piece of equipment is out of kilter, the data can't be collected. This is where our ears' non-directionality is useful. The scientist can be working on his or her computer and, using ambient sound, know when a sample is misaligned.


It remains to be seen (or heard) whether the sonifications will be useful for actually understanding the material structures. We are currently running an experiment using Mechanical Turk to determine whether this kind of multi-modal display (using vision and audio) is actually helpful. Basically, we are training one group of people on just the images of the scattering data and another group on the images plus the sonification, then testing how well each group does.

I'm also working with collaborators at Stony Brook University on sonification of data. In one experiment we are using ambisonic (3-dimensional) sound to create a sonic map of the brain to understand drug addiction. Standing in the middle of the ambisonic cube, we hope to find relationships between voxels, the small cubes of brain tissue analogous to pixels. When neurons fire in areas of the brain simultaneously, there is most likely a functional relationship between those areas, which can help scientists decode the brain activity of addiction. Computer vision researchers have been searching for these relationships unsuccessfully; we hope that our sonification will allow us to hear associations between distinct parts of the brain which are not easily recognized with sight. We are hoping to leverage the temporal pattern recognition of our auditory system, but we have been running into problems doing the sonification; each slice of data from the fMRI has about 300,000 data points. We have it working with 3,000 data points, but either our programming needs to get more efficient, or we need a much more powerful computer in order to work with all of the data.
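Stripped of the ambisonic spatialization, the core move is to find voxels whose activity rises and falls together and give those pairings a sound. Here is a hypothetical sketch of that idea in Python; the random data, the correlation threshold, and the frequency mapping are placeholders for illustration, not our actual pipeline.

```python
import numpy as np

# Sketch of the idea (not the lab's pipeline): find pairs of voxels whose
# fMRI time series rise and fall together, then give each strongly
# correlated pair its own sine tone so co-activation becomes audible.
rng = np.random.default_rng(0)
n_voxels, n_timepoints = 3000, 200          # the reduced set mentioned above
voxels = rng.standard_normal((n_voxels, n_timepoints))

corr = np.corrcoef(voxels)                  # n_voxels x n_voxels correlations
np.fill_diagonal(corr, 0.0)

# Keep only the most strongly correlated pairs (threshold is arbitrary here).
i, j = np.where(np.triu(corr, k=1) > 0.9)
pairs = list(zip(i, j))

sample_rate, duration = 44100, 2.0
t = np.arange(int(sample_rate * duration)) / sample_rate
audio = np.zeros_like(t)
for k, (a, b) in enumerate(pairs[:50]):     # one tone per correlated pair
    freq = 200 + 10 * k                     # arbitrary frequency mapping
    audio += np.sin(2 * np.pi * freq * t)
if pairs:
    audio /= np.max(np.abs(audio))
```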

On another project we are hoping to sonify gait data using smartphones. I'm working with some of my music students and a professor of physical therapy, Lisa Muratori, who works on understanding the underlying mechanisms of mobility problems in Parkinson's Disease (PD). The physical therapy lab has a digital motion-capture system and a split-belt treadmill for asymmetric stepping—the patients are supported by a harness so they don't fall. PD is a progressive nervous system disorder characterized by slow movement, rigidity, tremor, and postural instability. Because of degeneration of specific areas of the brain, individuals with PD have difficulty using internally driven cues to initiate and drive movement. However, many studies have demonstrated an almost normal movement pattern when persons with PD are provided external cues, including significant improvements in gait with rhythmic auditory cueing. So far the research with PD and sound has been unidirectional: the patients listen to sound and try to match their gait to the external rhythms of the auditory cues. In our system we will use biofeedback to sonify data from sensors the patients will wear and feed error messages back to the patient through music. Eventually we hope that patients will be able to adjust their gait by listening to self-generated musical distortions on a smartphone.
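As a sketch of what "error messages through music" could mean in code, here is one hypothetical mapping: compare left and right step intervals from wearable sensor timestamps and turn the asymmetry into a detuning amount. The function name, numbers, and timestamps below are invented for illustration, not the app we are building.

```python
import numpy as np

def asymmetry_to_detune(left_step_times, right_step_times, max_detune=200.0):
    """Hypothetical biofeedback mapping: compare average left/right step
    durations and return a detune amount in cents, so a symmetric gait
    sounds in tune and asymmetry bends the pitch of the music."""
    left = np.mean(np.diff(left_step_times))    # average left step interval (s)
    right = np.mean(np.diff(right_step_times))  # average right step interval (s)
    asymmetry = (left - right) / ((left + right) / 2.0)  # signed, ~0 when even
    return float(np.clip(asymmetry, -1.0, 1.0) * max_detune)

# Illustrative use with made-up timestamps from wearable sensors:
left = [0.0, 1.10, 2.21, 3.30]    # seconds at each left heel strike
right = [0.55, 1.50, 2.45, 3.40]  # seconds at each right heel strike
print(asymmetry_to_detune(left, right))  # cents to detune the feedback music
```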

As sonification becomes more prevalent, it is important to understand that aesthetic decisions are inevitable and even essential in every kind of data representation. We are so accustomed to looking at visual representations of information—from maps to pie charts—that we may forget that these are also arbitrary transcodings. Even a photograph is not an unambiguous record of reality; the mechanics of the camera and artistic choices of the photographer control the representation. So too, in sonification, do we have considerable latitude. Rather than view these ambiguities as a nuisance, we should embrace them as a freedom that allows us to highlight salient features, or uncover previously invisible patterns.

__

Margaret Anne Schedel is a composer and cellist specializing in the creation and performance of ferociously interactive media. She holds a certificate in Deep Listening with Pauline Oliveros and has studied composition with Mara Helmuth, Cort Lippe and McGregor Boyle. She sits on the boards of 60×60 Dance, the BEAM Foundation, Devotion Gallery, the International Computer Music Association, and Organised Sound. She contributed a chapter to the Cambridge Companion to Electronic Music, and is a joint author of Electronic Music published by Cambridge University Press. She recently edited an issue of Organised Sound on sonification. Her research focuses on gesture in music, and the sustainability of technology in art. She ran SUNY’s first Coursera Massive Open Online Course (MOOC) in 2013. As an Associate Professor of Music at Stony Brook University, she serves as Co-Director of Computer Music and is a core faculty member of cDACT, the consortium for digital art, culture and technology.

Featured Image: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.

Research carried out at the Center for Functional Nanomaterials, Brookhaven National Laboratory, is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-AC02-98CH10886.

REWIND! If you liked this post, you might also like:

The Noises of Finance–Nicholas Knouf

Revising the Future of Music Technology–Aaron Trammell

A Brief History of Auto-Tune–Owen Marshall

SO! Amplifies: Mendi+Keith Obadike and Sounding Race in America


SO! Amplifies . . . a highly-curated, rolling mini-post series by which we editors hip you to cultural makers and organizations doing work we really really dig. You're welcome!

Several years ago—after working on media art, myths, songs about invisible networks and imaginary places—we started a series of sound art projects about America. In making these public sound artworks about our country we ask ourselves questions about funk, austerity, debt and responsibility, aesthetics, and inheritance. We also attempt to reckon with data, that which orders so much of our lives with its presence or absence.

We are interested in how data might be understood differently once sonified or made musical. We want to explore what kinds of codes are embedded in the architecture of American culture.

Big House/Disclosure


The first sound art project in this vein that we completed, in 2007, was entitled Big House / Disclosure. Northwestern University commissioned Big House / Disclosure to commemorate the 200th anniversary of the abolition of the British slave trade. We began researching Chicago's recently (2002) issued Slavery Era Disclosure Ordinance, which states that any business seeking a city contract must publicly disclose (without penalty) its historical relationship, if any, to the slave trade. In that project we interviewed 200 citizens in the Chicago area about that city ordinance, how they (or their ancestors) arrived in this country, the origins of house music, and imaginary plantations, as well as their opinions about the legacy of slavery in their lives. Their answers were woven into a 200-hour house song & public sound installation on the Northwestern campus.

We used custom-built software to trigger changes in the sound (drums, bass lines, chords, etc.) of that installation as the stock prices of companies like Lehman Brothers and Wachovia Bank (listed by this city ordinance as having profited from the slave trade) rose and fell in 2007. In addition to the sound installation there were a number of performance scores and graphic scores to be performed in the project. The graphic scores were performed at the Stone (John Zorn's music venue in New York) by bassist Melvin Gibbs, turntablist Val Inc, percussionist Satoshi Takeshi, and pianist Shoko Nagai. The book and album for this project (recorded with percussionist Guillermo Brown, cornetist Taylor Ho Bynum, cellist Okkyung Lee and percussionist Tim Feeney) were released by 1913 Press.
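As a toy illustration of that kind of mapping (the actual 2007 software and its rules are not described here), a price-driven layer switcher might look like the sketch below; the loop names and the 2% threshold are invented for the example.

```python
# Toy sketch of the mapping idea: as a stock price rises or falls, switch
# which pre-made loops (drums, bass lines, chords) are layered into the mix.
def choose_layers(previous_close, current_price):
    """Return the names of hypothetical loops to play for this price tick."""
    change = (current_price - previous_close) / previous_close
    layers = ["drums"]                    # drums always run
    if change > 0:
        layers.append("bass_rising")      # price up: brighter bass line
    else:
        layers.append("bass_falling")     # price down: darker bass line
    if abs(change) > 0.02:                # big move (>2%): add chord stabs
        layers.append("chords")
    return layers

print(choose_layers(62.19, 60.75))  # e.g. ['drums', 'bass_falling', 'chords']
```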

https://www.youtube.com/watch?v=8Bdh5ykfHEE

American Cypher


In 2012-13 we created American Cypher. This project looked at American stories about race and DNA. The stories included narratives about Barack Obama, geneticist James Watson, Oprah Winfrey, and two men in the criminal justice system. At the center of the project was a multi-channel sound installation made from a small 18th-century bell that belonged to Sally Hemings (a woman enslaved by Thomas Jefferson and, as indicated by DNA testing, mother to his children). The bell was recorded and altered. It was tuned using DNA information (microsatellite STR analysis) from the Jefferson and Hemings families. That analysis gave us a pitch set that was used to compose the piece. The project was commissioned by Bucknell University's Samek Gallery and Griot Institute. The exhibition was mounted at the Studio Museum in Harlem and later traveled to the Institute of Visual Arts at the University of Wisconsin-Milwaukee.
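The artists' exact mapping from STR analysis to pitches is not spelled out here, but one simple, purely illustrative possibility is to fold each repeat count into the twelve pitch classes, as in this sketch (the repeat counts below are made up).

```python
# Toy sketch only: STR (short tandem repeat) analysis yields integer repeat
# counts; one simple way to turn them into a pitch set is to take each
# count modulo 12 and read off a pitch class.
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

def repeats_to_pitch_set(repeat_counts):
    """Map hypothetical STR repeat counts to a set of pitch-class names."""
    return sorted({PITCH_CLASSES[count % 12] for count in repeat_counts},
                  key=PITCH_CLASSES.index)

# Made-up repeat counts, for illustration only:
print(repeats_to_pitch_set([13, 16, 24, 29, 31]))  # ['C', 'C#', 'E', 'F', 'G']
```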

Mendi + Keith Obadike: American Cypher – Samek Gallery and The Studio Museum in Harlem, from Keith Obadike on Vimeo: http://vimeo.com/81574324

Free/Phase


Free/Phase is our latest project for 2014-15. This work uses the archives of Columbia College’s Center for Black Music Research (Chicago, Illinois) as its foundation. With this work we are doing conceptual remixes of African-American freedom songs found in the archives. We are thinking about how this music has been used over the past couple of centuries and all that is encoded in these songs musically, politically, and spiritually. There are three nodes to this project. These nodes will be presented and produced in several venues throughout the city of Chicago and will include audience participation.

1) Beacon

"Beacon" is a distributed, site-specific sound installation that "rings" morning, noon, and evening, playing a short melodic phrase from specific spirituals found in the CBMR archives. Each spiritual chosen contains musical and lyrical messages that could have been used for pre-emancipation navigation on the Underground Railroad or for inspiration.

2) Overcome

“Overcome” is a video work that is inspired by ways that music was used during the American Civil Rights Movement.

3) Dialogue

"Dialogue" consists of "listening posts" throughout Chicago, where a number of DJs engage audiences in a discussion about the canon of African-American freedom songs.

***

Across this series, we hope to invite new ways of thinking about the archives that hold information about our existence—the records of profit during the era of American slavery, the relationships marked in our genetic information, and the strategies for survival encoded in our music. Our work in this area reflects on the information that sometimes vanishes from view, whether because it is ephemeral or because it has been buried. We hope our sounding of the archives invites new ways of listening to the past and the future at the same time.

Mendi + Keith Obadike make music, art and literature. Their works include The Sour Thunder, an Internet opera (Bridge Records), Crosstalk: American Speech Music (Bridge Records), Black.Net.Art Actions, a suite of new media artworks (published in re:skin by MIT Press), Big House / Disclosure, a 200-hour public sound installation (Northwestern University), Phonotype, a book & CD of media artworks, and a poetry collection, Armor and Flesh (Lotus Press). They have contributed sounds/music to projects by a wide range of artists, including loops for soul singer D'Angelo's first album and a score for playwright Anna Deavere Smith at the Lincoln Center Institute. You can find out more about them at http://obadike.com.

Featured image from authors’ website.

REWIND! If you liked this post, you may also dig:

Wayback Sound Machine: Sound Through Time, Space, and Place-Maile Colbert

SO! Amplifies: Regina Bradley’s Outkasted Conversations-Regina Bradley

Or Does it Explode?: Sounding Out the U.S. Metropolis in Hansberry’s A Raisin in the Sun-Liana Silva-Ford

Erratic Furnaces of Infrasound: Volcano Acoustics

Surface flows as seen by thermal cameras at Pu’u O’o crater, June 27th, 2014. Image: USGS

Welcome back to Hearing the UnHeard, Sounding Out!'s series on how the unheard world affects us, which started out with my post on hearing, large and small, continued with a piece by China Blue on the sounds of catastrophic impacts, and now continues with the deep sounds of the Earth itself by Earth scientist Milton Garcés.

Faculty member at the University of Hawaii at Manoa and founder of the Earth Infrasound Laboratory in Kona, Hawaii, Milton Garcés is an explorer of the infrasonic, sounds so low that they circumvent our ears but can be felt resonating through our bodies as they do through the Earth. Using global networks of specialized detectors, he explores the deepest sounds of our world, from the depths of volcanic eruptions to the powerful forces driving tsunamis to the trails left by meteors through our upper atmosphere. And while the raw power behind such events is overwhelming to those caught in them, his recordings let us appreciate the sense of awe felt by those who dare to immerse themselves.

In this installment of Hearing the UnHeard, Garcés takes us on an acoustic exploration of volcanoes, transforming what would seem a vision of the margins of hell to a near-poetic immersion within our planet.

– Guest Editor Seth Horowitz

The sun rose over the desolate lava landscape, a study of red on black. The night had been rich in aural diversity: pops, jetting, small earthquakes, all intimately felt as we camped just a mile away from the Pu’u O’o crater complex and lava tube system of Hawaii’s Kilauea Volcano.

The sound records and infrared images captured over the night revealed a new feature downslope of the main crater. We donned our gas masks, climbed the mountain, and confirmed that indeed a new small vent had grown atop the lava tube and was radiating throbbing bass sounds. We named our acoustic discovery the Uber vent. But, as with most things volcanic, our find was transitory – the vent eventually melted back into the continuously changing landscape, as ephemeral as the sound that led us there in the first place.

Volcanoes are exceedingly expressive mountains. When quiescent they are pretty and fertile, often coyly cloud-shrouded, sometimes snowcapped. When stirring, they glow, swell and tremble, strongly-scented, exciting, unnerving. And in their full fury, they are a menacing incandescent spectacle. Excess gas pressure in the magma drives all eruptive activity, but that activity varies. Kilauea volcano in Hawaii has primordial, fluid magmas that degas well, so violent explosive activity is not as prominent as in volcanoes that have more evolved, viscous material.

Well-degassed volcanoes pave their slopes with fresh lava, but they seldom kill in violence. In contrast, the more explosive volcanoes demolish everything around them, including themselves; seppuku by fire. Such massive, disruptive eruptions often produce atmospheric sounds known as infrasounds, an extreme basso profondo that can propagate for thousands of kilometers. Infrasounds are usually inaudible, as they reside below the 20 Hz threshold of human hearing and tonality. However, when intense enough, we can perceive infrasound as beats or sensations.

Like a large door slamming, the concussion of a volcanic explosion can be startling and terrifying. It immediately compels us to pay attention, and it’s not something one gets used to. The roaring is also disconcerting, especially if one thinks of a volcano as an erratic furnace with homicidal tendencies. But occasionally, amidst the chaos and cacophony, repeatable sound patterns emerge, suggestive of a modicum of order within the complex volcanic system. These reproducible, recognizable patterns permit the identification of early warning signals, and keep us listening.

Each of us now has technology within close reach to capture and distribute Nature's silent warning signals, be they from volcanoes, tsunamis, meteors, or rogue nations testing nukes. Infrasounds, long hidden under the myth of silence, will be everywhere revealed.


The “Cookie Monster” skylight on the southwest flank of Pu`u `O`o. Photo by J. Kauahikaua 27 September 2002

I first heard these volcanic sounds in the rain forests of Costa Rica. As a graduate student, I was drawn to Arenal Volcano by its infamous reputation as one of the most reliably explosive volcanoes in the Americas. Arenal was cloud-covered and invisible, but its roar was audible and palpable. Here is a tremor (a sustained oscillation of the ground and atmosphere) recorded at Arenal Volcano in Costa Rica with a 1 Hz fundamental and its overtones:

[audio]

In that first visit to Arenal, I tried to reconstruct in my mind's eye what was going on at the vent from the diverse sounds emitted behind the cloud curtain. I thought I could blindly recognize rockfalls, blasts, pulsations, and ground vibrations, until the day the curtain lifted and I could confirm my aural reconstruction closely matched the visual scene. I had imagined a flashing arc from the shock wave as it compressed the steam plume, and by patient and careful observation I could see it, a rapid shimmer slashing through the vapor. The sound of rockfalls matched large glowing boulders bouncing down the volcano's slope. But there were also some surprises. Some visible eruptions were slow, so I could not hear them above the ambient noise. By comparing my notes to the infrasound records I realized these eruptions had left their deep acoustic mark, hidden in plain sight just below aural silence.


Arenal, Costa Rica, May 1, 2010. Image by Flickr user Daniel Vercelli.

I then realized one could chronicle an eruption through its sounds, and recognize different types of activity that could be used for early warning of hazardous eruptions even under poor visibility. At the time, I had only thought of the impact and potential hazard mitigation value to nearby communities. This was in 1992, when there were only a handful of people on Earth who knew or cared about infrasound technology. With the cessation of atmospheric nuclear tests in 1980 and the promise of constant vigilance by satellites, infrasound was deemed redundant and had faded to near obscurity over two decades. Since there was little interest, we had scarce funding, and were easily ignored. The rest of the volcano community considered us a bit eccentric and off the main research streams, but patiently tolerated us. However, discussions with my few colleagues in the US, Italy, France, and Japan were open, spirited, and full of potential. Although we didn’t know it at the time, we were about to live through Gandhi’s quote: “First they ignore you, then they laugh at you, then they fight you, then you win.”

Fast forward 22 years. A computer revolution took place in the mid-90s. The global infrasound network of the International Monitoring System (IMS) began construction before the turn of the millennium, in its full 24-bit broadband digital glory. Designed by the United Nations' Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), the IMS infrasound network detects minute pressure variations produced by clandestine nuclear tests at standoff distances of thousands of kilometers. This new, ultra-sensitive global sensor network and its cyberinfrastructure triggered an Infrasound Renaissance and opened new opportunities in the study and operational use of volcano infrasound.

Suddenly endowed with super-sensitive high-resolution systems, fast computing, fresh capital, and the glorious purpose of global monitoring for hazardous explosive events, our community rapidly grew and reconstructed fundamental paradigms early in the century. The mid-aughts brought regional acoustic monitoring networks in the US, Europe, Southeast Asia, and South America, and helped validate infrasound as a robust monitoring technology for natural and man-made hazards. By 2010, infrasound was part of the accepted volcano monitoring toolkit. Today, large portions of the IMS infrasound network data, once exclusive, are publicly available (see links at the bottom), and the international infrasound community has grown to the hundreds, with rapid evolution as new generations of scientists join in.

In order to capture infrasound, you need either a microphone with a low-frequency response or a barometer with a high-frequency response. The sensor data then need to be digitized for subsequent analysis. In the pre-millennium era, you'd drop a few thousand dollars to get a single, basic data acquisition system. But, in the very near future, there'll be an app for that. Once the sound is sampled, it looks much like your typical sound track, except you can't hear it. A single sensor record is of limited use because it does not have enough information to unambiguously determine the arrival direction of a signal. So we use arrays and networks of sensors, using the time of flight of sound from one sensor to another to recognize the direction and speed of arrival of a signal. Once we associate a signal type with an event, we can start characterizing its signature.
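To give a flavor of that time-of-flight reasoning, here is a minimal two-sensor sketch in Python. Real arrays use more sensors and more careful processing; the sample rate, sensor spacing, and delayed test tone below are invented for illustration.

```python
import numpy as np

def arrival_angle(sig_a, sig_b, sample_rate, spacing_m, sound_speed=340.0):
    """Two-sensor sketch: estimate the time of flight between sensors by
    cross-correlation, then convert the delay into an apparent arrival angle
    measured from the line joining the two sensors."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # lag in samples
    delay = lag / sample_rate                  # seconds between sensors
    # For a plane wave the delay can't exceed spacing / sound_speed;
    # clip so arccos stays defined on noisy data.
    cos_theta = np.clip(delay * sound_speed / spacing_m, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Illustrative use: a 1 Hz tremor-like tone arriving 0.05 s later at sensor B.
fs = 200                                   # infrasound sample rate (Hz)
t = np.arange(0, 30, 1.0 / fs)
source = np.sin(2 * np.pi * 1.0 * t)
sig_a = source
sig_b = np.roll(source, int(0.05 * fs))    # delayed copy at the second sensor
print(arrival_angle(sig_a, sig_b, fs, spacing_m=100.0))
```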

Consider Kilauea Volcano. Although we think of it as one volcano, it actually consists of various crater complexes with a number of sounds. Here is the sound of a collapsing structure:

As you might imagine, it is very hard to classify volcanic sounds. They are diverse, and often superposed on other competing sounds (often from wind or the ocean). As with human voices, each vent, volcano, and eruption type can have its own signature. Identifying transportable scaling relationships as well as constructing a clear notation and taxonomy for event identification and characterization remains one of the field’s greatest challenges. A 15-year collection of volcanic signals can be perused here, but here are a few selected examples to illustrate the problem.

First, the only complete acoustic record of the birth of Halemaumau’s vent at Kilauea, 19 March 2008:

[audio]

Here is a bench collapse of lava near the shoreline, which usually leads to explosions as hot lava comes in contact with the ocean:

[audio]

[audio]

Here is one of my favorites, from Tungurahua Volcano, Ecuador, recorded by an array near the town of Riobamba 40 km away. Although not as violent as the eruptive activity that followed it later that year, this sped-up record shows the high degree of variability of eruption sounds:

[audio]

[audio]

The infrasound community has had an easier time when it comes to the biggest and meanest eruptions, the kind that can inject ash to cruising altitudes and bring down aircraft. Our Acoustic Surveillance for Hazardous Eruptions (ASHE) project in Ecuador identified the acoustic signature of these types of eruptions. Here is one from Tungurahua:

[audio]

Our data center crew was at work when such a signal scrolled through the monitoring screens, arriving first at Riobamba, then at our station near the Colombian border. It was large in amplitude and just kept on going, with super heavy bass – and very recognizable. Such signals resemble jet noise — if a jet were designed by giants with stone tools. These sustained hazardous eruptions radiate infrasound below 0.02 Hz (50-second periods), so deep in pitch that they can propagate for thousands of kilometers to permit robust acoustic detection and early warning of hazardous eruptions.

In collaboration with our colleagues at the Earth Observatory of Singapore (EOS) and the Republic of Palau, we infrasound scientists will be turning our attention to early detection of hazardous volcanic eruptions in Southeast Asia. One of the primary obstacles to technology evolution in infrasound has been the exorbitant cost of infrasound sensors and data acquisition systems, sometimes compounded by export restrictions. However, as everyday objects are increasingly vested with sentience under the Internet of Things, this technological barrier is rapidly collapsing. Instead, the questions of the decade are how to receive, organize, and distribute the wealth of information under our perception of sound so as to construct a better informed and safer world.

IRIS Links

http://www.iris.edu/spud/infrasoundevent

http://www.iris.edu/bud_stuff/dmc/bud_monitor.ALL.html, search for IM and UH networks, infrasound channel name BDF

Milton Garcés is an Earth Scientist at the University of Hawaii at Manoa and the founder of the Infrasound Laboratory in Kona. He explores deep atmospheric sounds, or infrasounds, which are inaudible but may be palpable. Milton taps into a global sensor network that captures signals from intense volcanic eruptions, meteors, and tsunamis. His studies underscore our global connectedness and enhance our situational awareness of Earth’s dynamics. You are invited to follow him on Twitter @iSoundHunter for updates on things Infrasonic and to get the latest news on the Infrasound App.

Featured image: surface flows as seen by thermal cameras at Pu’u O’o crater, June 27th, 2014. Image: USGS


REWIND! If you liked this post, check out …

SO! Amplifies: Ian Rawes and the London Sound Survey – Ian Rawes

Sounding Out Podcast #14: Interview with Meme Librarian Amanda Brennan – Aaron Trammell

Catastrophic Listening — China Blue

Beyoncé’s New Weave Swing, or How to Snatch Wigs With Hair Choreography


This September, Sounding Out! challenged a #flawless group of scholars and critics to give Beyoncé Knowles-Carter a close listen, re-examining the complex relationship between her audio and visuals and amplifying what goes unheard, even as her every move–whether on MTV or in that damn elevator–faces intense scrutiny. Last week, Regina Bradley (writer, scholar, and freelance researcher of African American Life and Culture) introduced us to the sonic ratchetness of Baddie Bey; the week before you heard our Beyoncé roundtable podcast featuring our first two writers, Priscilla Peña Ovalle (English, University of Oregon) and Kevin Allred (Women and Gender Studies, Rutgers)–as well as Courtney Marshall (English, University of New Hampshire) and Liana Silva (Editor, Women in Higher Education, Managing Editor, Sounding Out!), who will close out our series next week. Today, madison moore gives us not only great face but killer hair choreo. Mic drop. Hair flip. –Editor-in-Chief Jennifer Stoever

"Which Beyoncé are you trying to do?" a sales associate at Beauty Full, the largest beauty supply house in Richmond, Virginia, asked me. It was a good question, because the shop had whole rows of wigs and ponytails that conjured Beyoncé enough for what I needed to do. Choosing just one would be tough.

“This one is very Beyoncé,” he said, pointing to a style reminiscent of the huge, teased out curly Afro Beyoncé worked in the early 2000s. I wasn’t really feeling this particular style but I could tell my sales guy was living it. “It’s not fierce enough!” I told him. “I need something that really moves!” I’d been invited to give an hour-long lecture on Beyoncé at a university, one of my first gigs, and I was out at the last minute shopping for a wig to wear during my talk so that I could give the children a little sip of Beyoncé. That’s when I saw it: a long, black and dark brown two-toned wig with curls for eternity that I knew would look great on stage. Come on, wig!

I wanted to wear a wig "that really moves!" during this talk to demonstrate what I feel is the creative genius of Beyoncé's performance persona: what I call "hair choreography." Not unlike dance moves intended for the body, "hair choreography" is a mode of performance that uses hair to add visual drama to the overall texture of sound, and it's the special genius of Beyoncé's stagecraft. On one occasion some friends and I were drinking wine and downing live Beyoncé videos on YouTube when one of us was like "I am living for her hair choreography!" I'm not sure we invented the concept but the phrase "hair choreography" has certainly stuck with me. Hair choreography is one of the secret weapons of the pop diva, those places in a live performance where she flips and whips her hair at exactly the right point, using "haircrobatics" to punctuate a moment, a feeling, raising the stakes, the sex appeal, and even the energy in the audience.

Hair choreography is exciting because it tells a story, but even more than telling a good story in performance, "hair choreography" punctuates everything else happening on stage: the lights, the dance moves, the glitter, the sequins, the music. In this way, "hair choreography" becomes part of the spectacular offering of stage presence; a type of magnetism that, despite everything else happening on stage, draws us into a single performer – the star – whose singular energy needs to fill up the whole space. "Hair choreography" occurs in those moments of a live performance where the hair is flipped, whipped, dipped, spun and amplified during the most exciting, emotion-filled sounds and dance moves.

Even though many scholars still often approach them as separate practices, sound and motion are fluidly entangled, as Jennifer Stoever has revealed. In this way, "hair choreography" builds on performance studies scholar Imani Kai Johnson's call for the "aural-kinesthetic." The "aural-kinesthetic" is not a method or a theory but simply a way for scholars to think about how music and movement happen at the same time. "Hair choreography" is about the relationship between sound, body, and movement, and how each of those comes together to leave a visceral impact on an audience.

One video that shows the importance of hair choreography to Beyoncé's package is her medley "If I Were A Boy/You Oughta Know," a mélange of the soft hard rock of her own track coupled with the aggressive rock of Alanis Morissette's iconic break-up jam. In the clip, as Beyoncé segues from "If I Were A Boy" into "You Oughta Know," the wind machines appear to blow her hair faster, and with every emotional note or beat she knocks her head to the side with attitude, forcing her straight hair with it. By the time Beyoncé sings "And I'm here, to remind you…," the most emotional (and recognizable) transition of the song, the hair is already going full blast. Guitars and drums go off while strobe lights engulf the stage in a frenzy of chaos.

At “You, you, you oughta know” she falls to her knees and performs a choreographed head bang while sliding across the floor using only her knees. It’s important to note here that the singing has stopped because this is a moment of “hair choreography,” a transition indicating an impending change in mood.

Everyone loves Beyoncé's hair. In her will, the late comedian Joan Rivers requested "a wind machine so that even in the casket my hair is blowing just like Beyoncé's." There are countless YouTube tutorials showing young girls how they too can achieve that Beyoncé look with weaves, wigs and lace fronts. Even the comedian Sommore, who starred in the 2001 film Queens of Comedy, had something to say about Beyoncé's hair:

Beyoncé is a bad motherfucker. Oh this bitch bad. Let me tell ya’ll how bad this bitch is. I went to see her concert in Atlantic City after she had her baby. I sat in the second row – this bitch was flawless. I mean I’m talking about the bitch was flawless. Only problem I had with Beyoncé…she had on too much hair! This bitch came out she had at least 18 packs of hair on. She came out I thought the bitch was the cowardly lion from The Wiz. I’m sitting there in awe of this bitch neck, I’m like, “This bitch neck is strong as a motherfucker!”

All jokes aside, the mystery of Beyoncé's hair–and all of the technologies involved in keeping it moving–is part of the genius of her brand image, particularly because it works to make her ethnically ambiguous. Having various types of hairstyles allows her creole body to infinitely play with race, and this makes her marketable to nearly everyone. Is she black? Is she Spanish? Is she biracial? Could she be Brazilian or from Latin America? Yes. In this way, her hair choreography not only punctuates her sound, but it shapes the very way it is heard, enabling her to morph into more personalities and fit into more demographics than even Lady Gaga or Madonna. It's why she's able to sound sexy or inspirational, "hood" or "classy," vampy or masculine, vocal or dance-y. Look at a video like "XO," to me the most mass-marketable song on BEYONCE. First of all she looks fabulous, but I think it's hard to watch that video and not feel like it's specifically pitched to 15-year-old white girls in Connecticut. Everything about the video, especially her sweeping hair flourishes, positions Beyoncé as relatable to teenage girls all over the US.

As dance studies scholar Melissa Blanco Borelli sees it, the mulatta body engages with a practice she calls "Hip(g)nosis," a type of hypnosis enacted by the yellow-bodied performer on fascinated audiences. This type of hypnotics, via the hips, "exposes the male gaze" by thinking through the "pleasure and consumption of the mulatta…" (She Is Cuba, forthcoming, Oxford University Press). Through hip(g)nosis Beyoncé has learned to use her ambiguous skin color and hair optics to her (monetary) advantage as a way to slide in and out of ethnic categories. Indeed, what does the fact that she is the lightest member of Destiny's Child and also the group's most successful member have to do with her celebrity? The irony in all of this race play is that she was recently awarded the Michael Jackson Video Vanguard Award after her jaw-dropping 15-minute performance at the MTV Video Music Awards, and she is one of the few contemporary black pop singers who can play with race in the same way Michael Jackson did.

When I watched her recent MTV VMA performance I screamed a lot during her show, but the one moment I remember specifically, and still keep rewinding back to, happened right at the end of "Mine," to me the best track on BEYONCE. She vamps "MTV, Welcome to My World," and quickly spins and flips that hair back around baby, giving face to the camera, making millions of queens all over America scream YAASSS!!! at the top of their lungs. Beyoncé herself nodded back to queer performance and performers during a performance of "XO" on February 28, 2014 at the O2 Arena in London, after one overzealous fan threw a wig at Beyoncé as she sauntered off the stage and into the crowd (3:17).

When she turned around to pick the wig up she ad-libbed "You got me snatching wigs, snatching wigs" into her microphone, knowing perfectly well that "Beyoncé snatching wigs" is one of the most popular fan-created Internet memes. In black gay male performance culture people often talk about "snatching wigs" or "coming for your wig," and to this end scholars like E. Patrick Johnson and Marlon Bailey have done important work in theorizing the interplay between black gay colloquialisms and performance. If you're "snatching wigs" then you're performing better than everybody else while completely eradicating the competition. You're seemingly indefatigable. Snatching a wig means a particular performance was highly effective or unique, and a snatched wig implies how an audience might surrender itself to a strong performer, as was the case with the aforementioned wig thrower. Beyoncé definitely understands the power of stage presence; a type of magnetism that, despite everything else happening on stage, draws us into a single performer – the star – whose singular energy needs to fill up the whole space. Filling up an empty stage with a single body is a lot of space to fill, if you think about it. And making an audience focus on you when 10,000 other things are happening around you is an even more challenging task.

But "snatching wigs" can also mean you're revealing someone's deepest secrets, something you know they're hiding. What's underneath a wig but a secret – your real hair texture, a bald spot you don't want anyone else to know about. A snatched wig can mean a break of the illusion. When I wore that wig during my Beyoncé talk to demonstrate hair choreography everyone knew it was fake – I put it on in front of them – but if the wig came off the illusion would have been broken nonetheless.

Part of Beyoncé’s monumental fame has to do with the fact that while she synchronizes, punctuates, captivates, and performs, she never lets us see underneath her wig. She just lets it whip.

madison moore (Ph.D., American Studies, Yale University, 2012) is a research associate in the Department of English at King's College London. Trained in performance studies and popular culture, madison is a DJ, writer and pop culture scholar with expertise in nightlife culture, fashion, queer studies, contemporary art and performance, alternative subcultures and urban aesthetics. He is a staff writer at Thought Catalog and Splice Today, and his other writing has appeared in Vice, Interview magazine, Art in America, Dancecult: Journal of Electronic Dance Music Culture, the Journal of Popular Music Studies and Theater magazine. He is the author of the Thought Catalog original e-book How to Be Beyoncé. His first book, The Theory of the Fabulous Class, will be published by Yale University Press.

REWIND! If you liked this post, check out:

Karaoke and Ventriloquism: Echoes and Divergences-Karen Tongson and Sarah Kessler

On Sound and Pleasure: Meditations on the Human Voice – Yvon Bonenfant

"New Wave Saved My Life*"-Wanda Alarcon