Archive | Sound and Science

SO! Amplifies: The Electric Golem (Trevor Pinch and James Spitznagel)

SO! Amplifies. . .a highly-curated, rolling mini-post series by which we editors hip you to cultural makers and organizations doing work we really really dig.  You’re welcome!

On March 24, 2019, the record release party for The Electric Golem’s 6th CD, Golemology, was held at the Loft in Ithaca, New York. The Electric Golem is an avant-garde synthesizer duo, featuring Trevor Pinch and James Spitznagel, that has been performing together for about ten years.

Trevor Pinch is a local sound artist and professor at Cornell University, where he works in STS (Science and Technology Studies) and Sound Studies. A key thinker in STS, Trevor co-developed theories of the Sociology of Scientific Knowledge, the Social Construction of Technology (SCOT), and the role of users in technological history and innovation. However, Trevor’s interest in synthesizers dates back much farther; he built his first modular synthesizer when he was a physics student in London in the 1970s.

The other half of The Electric Golem, James Spitznagel, is a multi-media artist who uses the iPad as a musical instrument and to create digital paintings. While he has played many roles in the music and culture industries—guitarist in a rock band, record store owner, art gallery and guitar shop investor, and even business manager for the Andy Warhol Museum—he moved to Ithaca to focus on producing abstract art: digital paintings and experimental, improvisational music. An energetic and enthusiastic person with an unrestrained imagination, James finds that everything around him can be his inspiration.

Pinch and Spitznagel formed the group after Spitznagel read Analog Days: The Invention and Impact of the Moog Synthesizer (by Trevor Pinch and Frank Trocco) and realized Pinch also lived in Ithaca. Spitznagel simply looked his name up in the phone book and called him up: “I go, ‘is this Trevor Pinch?’ He said, ‘yes.’ I said, ‘well, you don’t know me, but I just read your book and I love it.’” They got together for a beer and have been best friends and collaborators ever since. Once Spitznagel heard about Pinch’s homemade synthesizer, he asked Trevor to try making something together, and it turned out to be a fascinating mixture of analog (Trevor’s synth, a Moog Prodigy, and a Minimoog) and James’s digital instruments.

Building from this first moment of discovery, The Electric Golem’s music is electronic, experimental, and totally improvised. Typically, the pieces last twenty minutes to half an hour and express the duo’s interaction with the machines and with each other in the studio. James exerts more control over tone and rhythm, patching the sound as he goes along, whereas Trevor is much more about making spontaneous weird sounds. They complement each other, and the creative process is usually random and spontaneous, as Spitznagel describes: “I didn’t tell Trevor what to do or what to play, but I said, here’s the piece of music I’ve written. He just instinctively knew what to add to it.” Reciprocally, “he might just play something that I go, oh, I can weave in and out of the ambient sound he’s putting there.”

Trevor Pinch, Electric Golem at Elmira College, 2012

For the duo, the process of producing music becomes a shared experience with their listeners. The music is ever changing and evolving. In addition, unexpected drama adds vitality to the palette. “The iPad might freeze up or the synthesizer might break somehow,” Spitznagel notes. “That’s happened to us, but we carry on. Like Trevor looks at me and says, it’s not working there. Or, I look at him and go, I have to reboot my computer, it’s not working. But, those times actually inspire us to try new things and go beyond what we are doing.” Their inspiration comes from the unknown, which emerges from their practice. “Generally, this sort of music is completely unique to Electric Golem,” Trevor concluded.

The name “Electric Golem” comes from a series of books with Golem in the titles that Trevor collaborated on with his mentor Harry Collins. “The golem is a creature of Jewish mythology,” Pinch and Collins wrote in The Golem: What You Should Know about Science. “It is a humanoid made by man with clay and water, with incantations and spells. It is powerful, it grows a little more powerful every day.  It will follow orders, do your work, and protect you from the ever threatening enemy.  But it is clumsy and dangerous.  Without control, a golem may destroy its masters with its flailing vigour” (1).  Noting Trevor’s association with the concept of the Golem, Spitznagel added the “Electric” twist not just as a metaphor for their sound but also because “it’s kind of like a retro name.” The Electric Golem mushroomed from there: over the past decade the duo has fielded many invitations and bookings to play out, received their first recording contract from the Ricochet Dream label, and played with a number of notable musicians, such as Malcolm Cecil of Tonto’s Expanding Head Band, Simeon of Silver Apples, and “Future Man” (aka Roy Wooten). And they haven’t stopped there.

According to Pinch, the key feature of The Electric Golem’s music is its ability to encompass different moods. “I think Electric Golem has become good at one thing: its changing and transitioning from one sort of mood of music to another. And we have become quite good at those transitions. I think people would say that’s what they kind of like about us.” These slow transitions construct a unique texture of sound that can be quite cinematic, so much so that in 2012 the Electric Golem performed the accompaniment to the silent movie A Trip to the Moon at a special Cornell Cinema event. Overall, as improvised experimental music, it is sometimes challenging to listen to, with no regular rhythm or reliable melody. Trevor produces warm, rich drones from the analog side that contrast with the sharper digital rhythms that James programs. In short, the Electric Golem varies between these two affects, but the music goes far beyond the representation of emotional states; sometimes it conjures up the feeling of the vastness of space and time.

Experimental music is a process of collaboration and negotiation between instruments and their users.  Whether analog or digital, instruments have autonomy; they are non-human actors with their own agency, to some extent. As Trevor Pinch intimates, “I understand the general sort of sound that can be produced, but the particular details of how it will work out, you don’t really know, that’s much more spontaneous, you have to react to that.”  Instruments can often be uncontrollable (making their own sounds), so that Electric Golem must respond in kind. “So, it’s sort of like higher level meta-control versus actually doing what you’re doing in response to the instrument that combines together,” Trevor describes, “which I think is the secret to controlling these sorts of instruments.” It is incredible that Pinch and Spitznagel know each other so well, and each knows his instruments so well, that they can improvise for long periods with no trouble. Trevor says: “Follow the use of these instruments! Follow the instruments! They are not essentialized. They are just stabilized temporarily.”

On the whole, The Electric Golem shows an artistic form which breaks the traditional paradigm, deconstructs and then reconstructs it, seeking to free sound from the instruments. Their music is beyond pure melody and rhythm, beyond the expression of existence, expressing more of an aesthetic state of transcendence. They challenge what music is, and what musical instruments are; they challenge divisions between the identities of engineer and musician. Electric Golem’s music co-constructs art and technology and binds them together; art, for them, is a mode of presenting technology, and vice versa, technology is a pathway through which art can flourish.

My favorite Electric Golem piece is called “Heart of the Golem.” What is the heart of the Golem? According to Pinch, “It is a mystery, a process of unfolding and discovery. It is somewhere where analog and digital sound meet, and an improvisation.” What the magic is remains unknown and unlimited, just like the future of the Electric Golem.

Featured Image: Courtesy of The Electric Golem

Qiushi Xu is a PhD candidate in Philosophy of Science and Technology at Tsinghua University, Beijing, and in a joint PhD program in the Department of Science and Technology Studies at Cornell University, working with Prof. Trevor Pinch. Her research areas are Sound Studies, STS, Cultural Studies and Gender Studies. Her current research focuses on the sociology of piano sound and the negotiation and construction of piano sound in the recording studio (her PhD dissertation), gender issues in the recording industry, experimental music, auscultation and sound therapy. She holds an MA in Cultural and Creative Industries from King’s College London, a BA in Recording Arts, and a BA in Journalism and Communication from the University of China, Beijing. She is also an amateur pianist, writer, and traditional Chinese painter. As a multiculturalist, she is fascinated by different forms of art and culture in different cultural contexts.

On the Poetics of Balloon Music: Sounding Air, Body, and Latex (Part One)

I see them in the streets and in the subway, at dollar stores, hospital rooms, and parties. I see them silently dangling from electrical cables and tethered to branches of trees. Balloons are ghost-like entities floating through the cracks of places and memories. They are part of our rituals of loss, celebration and apology. Yet, they are also part of larger systems, weather sciences, warfare and surveillance technologies, colonialist forces and the casual UFO conspiracy theory. For a child, the ephemeral life of the balloon contrasts with the joy of its bright colors and squeaky sounds. Psychologists encourage the use of the balloon as an analogy for death, while astronomers use it as a representation for the cosmological inflation of the universe. In between metaphors of beginning and end, the balloon enables dialogues about air, breath, levity, and vibration.

The philosopher Luce Irigaray argues that Western thought has forgotten air despite being founded on it. “Air does not show itself. As such, it escapes appearing as (a) being. It allows itself to be forgotten,” writes Irigaray. Air is confused with absence because it “never takes place in the mode of an ‘entry into presence.'” Gaston Bachelard, in Air and Dreams, calls for a philosophy of poetic imagination that grows out of air’s movement and fluidity. For Bachelard, an aerial imagination brings forth a sense of the sonorous, of transparency and mobility. In this article, I propose exploring the balloon as a sonic device that turns our attention to the element of air and opens space for musical practices outside classical traditions. Here, the balloon is defined broadly as an envelope for air, breath, and lighter-than-air gases, including toy balloons, weather balloons, hydrogen and hot-air balloons.


Vertical Dimension: Early Experiments in Ballooning, Sounding, and Silence

In September 1939, Jean-Paul Sartre was assigned to serve the French military in a meteorological station in Alsace behind the frontline. His duties consisted of launching weather balloons, monitoring them every two hours, and radioing the meteorological observations to another station. Faced with the dread of war and an immediate geography that he compared to a “madmen’s delusion,” Sartre turned his gaze upwards to the weather balloon and its surrounding atmosphere to find refuge. In Notebooks from a Phony War, Sartre describes the sky as “my vertical dimension, a vertical prolongation of myself, and also abode beyond my reach.” The balloon becomes a vessel for an affective relationship with the atmosphere, mediated by the sounding of meteorological data. While gazing into the upper air, Sartre experiences a tension between the withdrawn “frozen blackness” of the atmosphere and the pull for feelings of oneness with it.

Falling Stars as Observed From the Balloon, Travels in the Air by James Glaisher, 1871

The first balloonists to explore the atmosphere felt similar sensations of belonging by moving along masses of air, and at the same time, experiencing a deep sense of otherworldliness. Despite the dangerous enterprise, early balloon travelers repeatedly recounted expressions of the sublime associated with the acoustic qualities of the upper air. Late 18th and 19th-century balloon literature features countless textual soundscapes of balloon ascents that reveal how the experience of sound and silence helped frame early narratives of “being in air/being one with air.”

Ballooning developed in France and England among the emergent noise of industrialized urban life. The balloon prospect, as the author Jesse Taylor put it, spoke to “the Victorian fantasy of rising above the obscurity of urban experience.”  Floating over the city, the English aeronaut Henry Coxwell describes hearing “the roar of London as one unceasing rich and deep sound.” In the same spirit, the balloonist James Glaisher compares the “deep sound of London” to the “roar of the sea,” whose “murmuring noise” is heard at great elevations. Ascending to higher altitudes, Coxwell hears the sounds from the earth become “fainter and fainter, until we were lost in the clouds when a solemn silence reigned.”

L’exploration de l’air, in Histoire des ballons et des aéronautes célèbres, 1887

The balloon not only allowed access to a panoramic, surveilling gaze in the midst of boundless space but also privileged access to a place of quietude and silence. In the memoir Aeronautica (1838), Thomas Monck Mason speaks to this point when he writes, “no human sound vibrated (…) a universal Silence reigned! An empyrean Calm! Unknown to Mortals upon Earth.” According to Mason, when the balloonist goes “undisturbed by interferences of ordinary impressions,” like the sounds from terrestrial life, “his mind more readily admits the influence of those sublime ideas of extension and space.”

The experience of silence in the upper air brought forward in the Victorian white elite a longing for freedom, individuality, and the assertion of social identity. Balloon flights provided a form of escapism from the confines of city walls reverberating with the aural manifestations of the Other. In Victorian Soundscapes, John Picker examines the struggles of London’s upper class of creatives (academics, doctors, artists and clergy) in finding spaces of silence away from the bustling noise of the urban environment. During the mid-19th century, the influx of immigration and the rise of commercial trade and street musicians altered the soundscape of the city. As Picker documents, the English elites rallied against this emergent aurality through racialized listening, made evident by the use of sonic descriptors like “invasion” and “containment” that underlined anxieties related to the dilution of national identity, culture, class division and territory. For the elite, to physically ascend above the noise of the Other into the silent regions of the atmosphere via balloon, an instrument that dramatizes scientific prowess, validated an auditory construction of whiteness organized around ideals of order, rationality and harmony.

Circular View From the Balloon in Airopaidia by Thomas Baldwin, 1786

The descriptions of balloon ascents featured in James Glaisher’s book Travels in the Air (1871) are a vivid manifestation of these ideals. Experiences of floating at high altitudes were often met with poetic reports on the “sublime harmony of colors, light and silence,” the “perfect stillness,” and the “absolute silence” reigning “supreme in all its sad majesty.” The nineteenth century’s constructs of “harmony” and “quietude,” argues Jennifer Stoever, were markers of whiteness used to segregate and de-humanize those who embodied an alternative way of sounding. The Victorian balloon memoir echoes the construction of this sonic identity rooted in the white privilege of being lighter-than-air and claiming atmospheric silence. The balloonist Camille Flammarion, upon hearing “various noises” from the “dark earth” below, questions what prompts “the listening ear” to be sensitive to difference. “Is it the universal silence which causes our ears to be more attentive?” asks the aeronaut.

Balloon Prospect, In Airopaidia, Thomas Baldwin, 1786

Balloonists’ encounters with silence in the upper air and the sight of “boundless planes” and “infinite expanse of sky” were accompanied by feelings of safety and overwhelming serenity. Elaine Freedgood argues that the balloon, with its silk folds and wicker baskets, was a perfect container for states of regression and for suspending the boundaries of the self into an oceanic feeling of at-oneness with the atmosphere. According to the author, the self and the sublime become momentarily entangled, originating a sense of heroic masculinity, power, and the rehearsal of imperial and colonial ventures. This emotional state justified an unprecedented mobility and the sense of losing oneself to the whims of the wind with no preoccupation about where to land. However, in an image that contrasts with the privileges of mobility, Frederick Douglass used the balloon as a metaphor for the terrifying anxiety of an uncertain landing – either in freedom or slavery. The novel Washington Black (2018) by Esi Edugyan deals with similar issues by fictionalizing the balloon ascent and travels of a young slave, whose hearing is tuned to the “ghostly sound of human suffering coming from beneath.”

By the late 1780s, thousands of people had witnessed the European wave of balloon flights, but only a small fraction had access to them. Mi Gyung Kim, author of The Imagined Empire, draws attention to the silence imposed on the figure of the “balloon spectator,” whose dissident voices were erased by the dominant colonial narrative of aerial empire. Mostly, the balloon spectator is featured in Victorian texts within a soundscape of affects characterized by “vociferations of joy,” “shrieks of fear” and “expressions of applause” that advanced the dominant colonial narrative.

Ascent of a Balloon in the Presence of the Court of Charles IV by Antonio Carnicero, 1783

Although explorations in sound were among the many goals meant to legitimize the balloon as an instrument of modern natural philosophy, the balloon’s scientific utility succumbed to spectacle and entertainment. Early aeronauts tried to use their voices and speaking trumpets to sound the atmosphere and to experiment with echo as a measurement of distance. Derek McCormack, in his book Atmospheric Things, says that these balloonists were most of all “generating a sonorous affective-aesthetic experience” with the atmosphere. Along with scientific tools, balloonists often ascended with musical instruments and, in other instances, the balloon itself became the stage for operatic performances. More than a century before modern composers had transformative encounters with silence in anechoic chambers, aeronauts had already described its subjective qualities and effects in detail. In 1886, the photographer and reluctant balloon traveler John Doughty, while floating in a silent ocean of air, recalled hearing only two bodily sounds: “the blood is plainly heard as it pulses through the brain; while in moments of extra excitement the beating of the heart sounds so loud as almost to constitute an interruption to our thoughts.”

Travels in the Air, James Glaisher, 1871


“I feel like a balloon going up into the atmosphere, looking, gathering information, and relaying it back.” – Rachel Rosenthal, 1985

The first untethered balloon ascents happened between 1783 and 1784. In current literature, this period is most cited for the patenting of the steam engine, the beginning of the carbonification of the atmosphere through the burning of coal, and the start of the Anthropocene. In industrialized society, the balloon floats through irreversibly modified atmospheres. “We are still rooted in air,” writes Philippopoulos-Mihalopoulos. However, this air is partitioned and engineered to facilitate consumerism, war, terror and pollution.

Contemporary art practices using the balloon address some of these concerns. The balloon functions as an atmospheric probe that reveals “invisible topographies” and “politics of air” such as human interference, air quality, air ownership, borders, surveillance and the privileges of buoyancy. As a playful, non-threatening object, the balloon can elicit practices of inclusivity (e.g. balloon mapping) and affect. The transmission and reception of sound and music through the balloon help manifest air’s qualities and warrant artistic and social encounters with weather systems.

“Travels in the Air” by James Glaisher, 1871

During the 6th Annual Avant-Garde Festival parade going up Central Park West in 1968, the body of the cellist Charlotte Moorman rose a few feet above the ground, attached to a bouquet of helium-filled balloons. This led the police to chase her and demand an FCC license for flying, to which Moorman replied: “I’m not flying – I’m floating.” Moorman was performing a piece called Sky Kiss, conceived by the visual artist Jim McWilliams, that involved playing the cello while suspended by balloons.

In an interview for the book Topless Cellist by Joan Rothfuss, McWilliams explains that the original concept of Sky Kiss was to sever the connection between the cello’s endpin and the floor and expand the idea of the kiss into an aerial experience. According to Rothfuss, McWilliams intended the piece as an expression of the ethereal, but Moorman preferred the playfulness and communal experience of the airspace. Instead of avant-garde music, she played popular tunes like “Up, Up and Away” and “The Daring Young Man on the Flying Trapeze.” Dressed in a super-heroine’s satin cape, Moorman infused Sky Kiss with humor and visual spectacle, posing a challenge to the restrictive access to buoyancy.

Charlotte Moorman and Nam June Paik, Sky Kiss by Jim McWilliams, above the Sydney Opera House Forecourt, 1976, Kaldor Public Art Project 5, Photo by Kerry Dundas

Furthermore, Charlotte Moorman collaborated with the sky artist Otto Piene to establish the right quantities of lighter-than-air gas to reach higher altitudes. Piene was a figure of the postwar movement Zero and coined the term Sky Art to describe his flying sculptures, multimedia balloon operas, and kinetic installations. For Piene, a child growing up during World War II, “the blue sky had been a symbol of terror in the aerial war.” The balloon collaboration between Moorman and Piene was a form of acknowledging aerial space in a musical and peaceful way. In his manifesto Paths to Paradise (1961), Piene asks: “why do we have no exhibitions in the sky? (…) up to now we have left it to war to dream up a naive light ballet for the night skies, we have left it up to war to light up the sky.”

Phil Dadson’s work Breath of Wind (2008) lifts an entire brass band of 24 musicians into the sky with 17 hot-air balloons. Brass instruments, usually associated with moments of revelation in religious texts, serve here as a calling for an aesthetic experience of wind and air currents. Since the 1970s, Dadson’s environmental activism has brought forward sonic tensions between the human subject and Aeolian forces, as in Hoop Flags (1970), Flutter (2003) and Aerial Farm (2004).

Similarly, the artist Luke Jerram displaces the experience of a concert hall to the sky. His project Sky Orchestra comprises seven hot-air balloons floating across a city, with speakers playing soundscapes designed to induce peaceful dreams. The hot-air balloon orchestra ascends at dawn or dusk so the airborne music can reach people’s homes during sleep or states of semi-consciousness. The sound-targeting of residential areas during periods of dimmed awareness exposes the entangling capacities of airspace and the vulnerability of private space.

The artist and architect Usman Haque utilizes a cloud of helium balloons as a platform to identify and sonify changes in the electromagnetic spectrum. This project, Sky Ear (2004), reveals our meddling with urban Hertzian culture via mobile phones and other electronic devices. Andrea Polli’s environmental work features sonifications of data sets captured by weather balloons; these sonifications give audiences an emotional window onto complex climate data. In Sound Ship (descender 1) by Joyce Hinterding and David Haines, an Aeolian harp is attached to a weather balloon that ascends to the edge of space. The result is a musical trace of the vertical volume of our atmosphere and a sonification of masses of air as the balloon journeys upwards.

Haines and Hinterding, Sound Ship (descender 1), 4-min extract, 2016

Yoko Ono and John Lennon created a similar exercise in sounding in the film Apotheosis (1970). A boom microphone and camera attached to a hydrogen balloon ascend over a small English town, documenting a sonic geography of the upper air. The artists stay on the ground as the balloon rises. In a period of great media spectacle, the couple chooses to stay with the trouble while the balloon records Earth’s utterances slowly fading into atmospheric silence.

It is important to note that these musical and sound-based works, which expose the physicality of air movements and assemble affective meanings with atmosphere and weather systems, are not particular to contemporary practices. The scholar Janine Randerson draws attention to Indigenous modes of knowing and sensing air and weather that incorporate sounding instruments. In Weather as Medium, Randerson writes: “in Indigenous cosmologies, the sense of interconnectedness ‘discovered’ in late modern meteorological science merely described what many cultures already sensed and encoded in social and environmental lore.”

The balloon, as a lighter-than-air object, mediates our relationship with airspace and offers opportunities to expand our aerial imagination. By sensing changes in the atmosphere, the balloon is a platform that generates knowledge and can help us experiment with new forms of being-in-air: some inclusive and empowering, others much more invested in exclusivity, sounded through the rare air of silence and the silencing power dynamics fostered by the view from above.

I would like to express my immense gratitude to Jennifer Stoever for editing this paper and for sharing her scholarship and input on this article. Thank you to Phil Dadson for sharing his video.


Featured Image: Scientific Balloon of James Glaisher, 1862, Georges Naudet Collection, Creative Commons

Carlo Patrão is a Portuguese radio producer and independent researcher based in New York city. 

REWIND! . . . If you liked this post, you may also dig:

Instrumental: Power, Voice, and Labor at the Airport – Asa Mendelsohn

Botanical Rhythms: A Field Guide to Plant Music -Carlo Patrão

Sounding Out! Podcast #58: The Meaning of Silence – Marcella Ernest

Sounds of Science: The Mystique of Sonification

Hearing the Unheard II

Welcome to the final installment of Hearing the UnHeard, Sounding Out!’s series on what we don’t hear and how this unheard world affects us. The series started out with my post on hearing, large and small, continued with a piece by China Blue on the sounds of catastrophic impacts, and Milton Garcés’s piece on the infrasonic world of volcanoes. To cap it all off, we introduce The Sounds of Science by professor, cellist and interactive media expert Margaret Schedel.

Dr. Schedel is an Associate Professor of Composition and Computer Music at Stony Brook University. Through her work, she explores the relatively new field of data sonification, generating new ways to perceive and interact with information through the use of sound. While everyone is familiar with the graphs and images used to convey complex information, her work explores how we can expand our understanding of even complex scientific information by using our fastest and most emotionally compelling sense: hearing.

– Guest Editor Seth Horowitz

With the invention of digital sound, the number of scientific experiments using sound has skyrocketed in the 21st century, and as Sounding Out! readers know, sonification has started to enter the public consciousness as a new and refreshing alternative modality for exploring and understanding many kinds of datasets emerging from research into everything from deep space to the underground. We seem to be in a moment in which “science that sounds” has a special magic, a mystique that relies to some extent on misunderstandings in popular awareness about the processes and potentials of that alternative modality.

For one thing, using sound to understand scientific phenomena is not actually new. Diarist Samuel Pepys wrote about meeting scientist Robert Hooke in 1666 that “he is able to tell how many strokes a fly makes with her wings (those flies that hum in their flying) by the note that it answers to in musique during their flying.” Unfortunately, Hooke never published his findings, leading researchers to speculate on his methods. One popular theory is that he tied strings of varying lengths between a fly and an ear trumpet, recognizing that sympathetic resonance would cause the correct length of string to vibrate, thus allowing him to calculate the frequency. Even Galileo used sound, showing the constant acceleration of a ball due to gravity by using an inclined plane with thin moveable frets. By moving the placement of the frets until the clicks created an even tempo, he was able to come up with a mathematical equation to describe how time and distance relate when an object falls.
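Galileo’s even tempo follows directly from the law of constant acceleration. A brief sketch (our illustration, not a historical reconstruction): if the frets sit at distances proportional to the square numbers, so the gaps between them grow as the odd numbers 1, 3, 5, 7, then s = ½at² implies the ball clicks past each fret at evenly spaced times, whatever the acceleration.

```python
import math

# Frets placed at distances proportional to 1, 4, 9, 16, 25
# (gaps of 1, 3, 5, 7, 9). Under constant acceleration a,
# s = (1/2) a t^2  =>  t = sqrt(2 s / a),
# so the click times fall at equal intervals: an even tempo.
a = 2.0  # arbitrary acceleration; the units cancel out of the comparison
fret_positions = [k**2 for k in range(1, 6)]              # 1, 4, 9, 16, 25
click_times = [math.sqrt(2 * s / a) for s in fret_positions]
intervals = [t2 - t1 for t1, t2 in zip(click_times, click_times[1:])]
print(intervals)  # every interval is identical
```

Listening for that steady click-click-click is a far more precise instrument than any seventeenth-century stopwatch, which is exactly why the ear was the right sensor for the job.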

Illustration from Robert Hooke's Micrographia (1665)


There have also been other scientific advances using sound in the more recent past. The stethoscope was invented in 1816 for auscultation, listening to the sounds of the body; it was later applied to machines, listening for the operation of technological gear. Underwater sonar was patented in 1913 and is still used to navigate and communicate using hydroacoustic phenomena. The Geiger counter was developed in 1928 using principles discovered in 1908; it is unclear exactly when its distinctive sound was added. These are all examples of auditory display (AD); sonification (generating or manipulating sound by using data) is a subset of AD. As the foreword to The Sonification Handbook states, “[Since 1992] Technologies that support AD have matured. AD has been integrated into significant (read “funded” and “respectable”) research initiatives. Some forward thinking universities and research centers have established ongoing AD programs. And the great need to involve the entire human perceptual system in understanding complex data, monitoring processes, and providing effective interfaces has persisted and increased” (Thomas Hermann, Andy Hunt, John G. Neuhoff, The Sonification Handbook, iii).

Sonification clearly enables scientists, musicians and the public to interact with data in a very different way, particularly compared to the more numerous techniques involving vision. Indeed, because hearing functions quite differently than vision, sonification offers an alternative kind of understanding of data (sometimes more accurate) that would not be possible using eyes alone. Hearing is multi-directional; our ears don’t have to be pointing at a sound source in order to sense it. Furthermore, the frequency response of our hearing is thousands of times more accurate than our vision. In order to reproduce a moving image, the sampling rate (called the frame rate) for film is 24 frames per second, while audio has to be sampled at 44,100 frames per second in order to accurately reproduce sound. In addition, aural perception works on simultaneous time scales: we can take in multiple streams of audio data at once at many different dynamics, while our pupils dilate and contract, limiting how much visual data we can absorb at a single time. Our ears are also amazing at detecting regular patterns over time in data; we hear these patterns as frequency, harmonic relationships, and timbre.

Image credit: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.

Image credit: Dr. Kevin Yager, Brookhaven National Lab.

But hearing isn’t simple, either. In the current fascination with sonification, the fact that aesthetic decisions must be made in order to translate data into the auditory domain can be obscured. Headlines such as “Here’s What the Higgs Boson Sounds Like” are much sexier than headlines such as “Here is What One Possible Mapping of Some of the Data We Have Collected from a Scientific Measuring Instrument (which itself has inaccuracies) Into Sound.” To illustrate the complexity of these aesthetic decisions, which are always interior to the sonification process, I focus here on how my collaborators and I have been using sound to understand many kinds of scientific data.

My husband, Kevin Yager, a staff scientist at Brookhaven National Laboratory, works at the Center for Functional Nanomaterials using scattering data from x-rays to probe the structure of matter. One night I asked him how exactly the science of x-ray scattering works. He explained that X-rays “scatter” off of all the atoms/particles in the sample and the intensity is measured by a detector. He can then calculate the structure of the material, using the Fast Fourier Transform (FFT) algorithm. He started to explain FFT to me, but I interrupted him because I use FFT all the time in computer music. The same algorithm he uses to determine the structure of matter, musicians use to separate frequency content from time. When I was researching this post, I found a site for computer music which actually discusses x-ray scattering as a precursor for FFT used in sonic applications.
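
As a rough illustration of that shared machinery, here is a minimal Python sketch (using numpy; the one-second 440 Hz test tone is an invented stand-in, not scattering data) in which the same FFT routine a musician uses to separate frequency content from time recovers the pitch of an audio frame:

```python
import numpy as np

# A one-second, 440 Hz test tone sampled at 44,100 Hz
sr = 44100
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)

# The same FFT that turns time-domain samples into frequency
# content, whether the input is music or detector data
spectrum = np.fft.rfft(audio)
peak_hz = np.argmax(np.abs(spectrum)) * sr / len(audio)
print(peak_hz)  # 440.0
```

The identical call could just as well be pointed at a detector’s intensity profile; only the interpretation of the frequency bins changes.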

To date, most sonifications have used data which changes over time – a fly’s wings flapping, a heartbeat, a radiation signature. Except in special cases, Kevin’s data does not exist in time – it is a single snapshot. But because data from x-ray scattering is a Fourier Transform of the real-space density distribution, we could use additive synthesis, using multiple simultaneous sine waves, to represent different spatial modes. Using this method, we swept through his data radially, like a clock hand, making timbre-based sonifications from the data by synthesizing sine waves, with loudness based on the intensity of the scattering data and frequency based on the position.
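
A hedged sketch of how such a radial sweep might be coded in Python with numpy; the image, the frequency range, and every parameter value here are invented placeholders illustrating the technique, not our actual processing chain:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((128, 128))  # stand-in for a 2D scattering image

sr = 8000            # audio sample rate (Hz)
n_bins = 50          # radial "ruler" demarcations
dur_per_step = 0.05  # seconds of audio per clock-hand position
cy, cx = 64, 64      # beam center of the image
max_r = 60

freqs = np.linspace(200, 2000, n_bins)  # radius -> frequency mapping
radii = np.linspace(1, max_r, n_bins)
t = np.arange(int(sr * dur_per_step)) / sr

frames = []
for angle in np.linspace(0, 2 * np.pi, 90, endpoint=False):
    # read intensities along the "clock hand" at each radius
    ys = (cy + radii * np.sin(angle)).astype(int)
    xs = (cx + radii * np.cos(angle)).astype(int)
    amps = img[ys, xs]
    # additive synthesis: one sine per radial bin,
    # loudness from intensity, frequency from position
    frame = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))
    frames.append(frame / n_bins)
audio = np.concatenate(frames)
```

Each pass of the loop advances the clock hand one tick, so the spectrum of the output at any moment is a timbre built directly from one radial slice of the data.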

We played a lot with the settings of the additive synthesis, including the length of the sound, the highest frequency, and even the number of frequency bins (going back to the clock metaphor – pretend the clock hand is a ruler – the number of frequency bins would be the number of demarcations on the ruler), eventually arriving at a set of optimized variables.

Here is one version of the track we created using 10 frequency bins:


Here is one we created using 2000:


And here is one we created using 50 frequency bins, which we settled on:


On a software synthesizer this would be like the default setting. In the future we hope to have an interactive graphic user interface where sliders control these variables, just like a musician tweaks the sound of a synth, so scientists can bring out or mask aspects of the data.

To hear what that would be like, here are a few tracks that vary length:




Finally, here is a track we created using different mappings of frequency and intensity:


Having these sliders would reinforce to the scientists that we are not creating “the sound of a metallic alloy,” we are creating one sonic representation of the data from the metallic alloy.

It is interesting that such a representation can be vital to scientists. At first, my husband went along with this sonification project more as a thought experiment than as something he thought would actually be useful in the lab, until he heard something distinct about one of those sounds, suggesting that there was a misaligned sample. Once Kevin heard that glitched sound (you can hear it in the video above), he was convinced that sonification was a useful tool for his lab. He and his colleagues are dealing with measurements 1/25,000th the width of a human hair, aiming an X-ray through twenty pieces of equipment to get the beam focused just right. If any piece of equipment is out of kilter, the data can’t be collected. This is where our ears’ non-directionality is useful. The scientist can be working on his/her computer and, using ambient sound, know when a sample is misaligned.


It remains to be seen/heard if the sonifications will be useful to actually understand the material structures. We are currently running an experiment using Mechanical Turk to determine whether this kind of multi-modal display (using vision and audio) is actually helpful. Basically, we are training people on just the images of the scattering data and testing how well they do, and training another group of people on the images plus the sonification and testing how well they do.

I’m also working with collaborators at Stony Brook University on sonification of data. In one experiment we are using ambisonic (3-dimensional) sound to create a sonic map of the brain to understand drug addiction. Standing in the middle of the ambisonic cube, we hope to find relationships between voxels, cubes of brain tissue analogous to pixels. When neurons fire in areas of the brain simultaneously, there is most likely a causal relationship which can help scientists decode the brain activity of addiction. Computer vision researchers have been searching for these relationships unsuccessfully; we hope that our sonification will allow us to hear associations in distinct parts of the brain which are not easily recognized with sight. We are hoping to leverage the temporal pattern recognition of our auditory system, but we have been running into problems doing the sonification; each slice of data from the fMRI has about 300,000 data points. We have it working with 3,000 data points, but either our programming needs to get more efficient, or we have to get a much more powerful computer in order to work with all of the data.
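
To give a feel for the data-reduction problem, here is one possible way (a hypothetical sketch, not our actual pipeline, with array sizes scaled down as stand-ins) to shrink a voxel set to something a sonification engine can currently drive, by keeping only the voxels that actually vary:

```python
import numpy as np

# Stand-in fMRI slice, scaled down from the real ~300,000 voxels:
# 30,000 voxels, each with 50 time points of invented data.
rng = np.random.default_rng(1)
data = rng.standard_normal((30_000, 50))

# One pragmatic reduction: keep only the 3,000 voxels whose signal
# varies most over time; nearly flat voxels would add oscillators
# without adding much audible information.
variances = data.var(axis=1)
top = np.argsort(variances)[-3_000:]
reduced = data[top]
print(reduced.shape)  # (3000, 50)
```

Whether variance is the right criterion is itself an aesthetic and scientific decision of exactly the kind discussed above.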

On another project we are hoping to sonify gait data using smartphones. I’m working with some of my music students and a professor of Physical Therapy, Lisa Muratori, who works on understanding the underlying mechanisms of mobility problems in Parkinson’s Disease (PD). The physical therapy lab has a digital motion-capture system and a split-belt treadmill for asymmetric stepping—the patients are supported by a harness so they don’t fall. PD is a progressive nervous system disorder characterized by slow movement, rigidity, tremor, and postural instability. Because of degeneration of specific areas of the brain, individuals with PD have difficulty using internally driven cues to initiate and drive movement. However, many studies have demonstrated an almost normal movement pattern when persons with PD are provided external cues, including significant improvements in gait with rhythmic auditory cueing. So far the research with PD and sound has been unidirectional – the patients listen to sound and try to match their gait to the external rhythms from the auditory cues. In our system we will use bio-feedback to sonify data from sensors the patients will wear and feed error messages back to the patient through music. Eventually we hope that patients will be able to adjust their gait by listening to self-generated musical distortions on a smartphone.
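
As a toy sketch of the bio-feedback idea (the mapping function, its name, and all the numbers are invented for illustration, not the system we are building), step-time asymmetry could be turned into a musical “distortion” like this:

```python
def asymmetry_to_detune(left_ms, right_ms, max_detune_semitones=2.0):
    """Map left/right step-time asymmetry to a pitch detune.

    A symmetric gait returns 0.0 (the music plays normally); the
    more asymmetric the steps, the more the playback is detuned,
    giving the patient an audible error signal to correct.
    """
    asym = abs(left_ms - right_ms) / max(left_ms, right_ms)
    return min(asym, 1.0) * max_detune_semitones

# left step 600 ms, right step 540 ms -> 10% asymmetry
print(asymmetry_to_detune(600, 540))  # 0.2
```

The choice of what counts as an error, and how harshly the music distorts, is again an aesthetic decision baked into the sonification.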

As sonification becomes more prevalent, it is important to understand that aesthetic decisions are inevitable and even essential in every kind of data representation. We are so accustomed to looking at visual representations of information—from maps to pie charts—that we may forget that these are also arbitrary transcodings. Even a photograph is not an unambiguous record of reality; the mechanics of the camera and artistic choices of the photographer control the representation. So too, in sonification, do we have considerable latitude. Rather than view these ambiguities as a nuisance, we should embrace them as a freedom that allows us to highlight salient features, or uncover previously invisible patterns.


Margaret Anne Schedel is a composer and cellist specializing in the creation and performance of ferociously interactive media. She holds a certificate in Deep Listening with Pauline Oliveros and has studied composition with Mara Helmuth, Cort Lippe and McGregor Boyle. She sits on the boards of 60×60 Dance, the BEAM Foundation, Devotion Gallery, the International Computer Music Association, and Organised Sound. She contributed a chapter to the Cambridge Companion to Electronic Music, and is a joint author of Electronic Music published by Cambridge University Press. She recently edited an issue of Organised Sound on sonification. Her research focuses on gesture in music, and the sustainability of technology in art. She ran SUNY’s first Coursera Massive Open Online Course (MOOC) in 2013. As an Associate Professor of Music at Stony Brook University, she serves as Co-Director of Computer Music and is a core faculty member of cDACT, the consortium for digital art, culture and technology.

Featured Image: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.

Research carried out at the Center for Functional Nanomaterials, Brookhaven National Laboratory, is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-AC02-98CH10886.

REWIND! ….. If you liked this post, you might also like:

The Noises of Finance–Nicholas Knouf

Revising the Future of Music Technology–Aaron Trammell

A Brief History of Auto-Tune–Owen Marshall

Erratic Furnaces of Infrasound: Volcano Acoustics

Hearing the Unheard II

Welcome back to Hearing the UnHeard, Sounding Out‘s series on how the unheard world affects us, which started out with my post on hearing large and small, continued with a piece by China Blue on the sounds of catastrophic impacts, and now continues with the deep sounds of the Earth itself by Earth Scientist Milton Garcés.

Faculty member at the University of Hawaii at Manoa and founder of the Earth Infrasound Laboratory in Kona, Hawaii, Milton Garcés is an explorer of the infrasonic, sounds so low that they circumvent our ears but can be felt resonating through our bodies as they do through the Earth. Using global networks of specialized detectors, he explores the deepest sounds of our world, from the depths of volcanic eruptions to the powerful forces driving tsunamis, to the trails left by meteors through our upper atmosphere. And while the raw power behind such events is overwhelming to those caught in them, his recordings let us appreciate the sense of awe felt by those who dare to immerse themselves.

In this installment of Hearing the UnHeard, Garcés takes us on an acoustic exploration of volcanoes, transforming what would seem a vision of the margins of hell to a near-poetic immersion within our planet.

– Guest Editor Seth Horowitz

The sun rose over the desolate lava landscape, a study of red on black. The night had been rich in aural diversity: pops, jetting, small earthquakes, all intimately felt as we camped just a mile away from the Pu’u O’o crater complex and lava tube system of Hawaii’s Kilauea Volcano.

The sound records and infrared images captured over the night revealed a new feature downslope of the main crater. We donned our gas masks, climbed the mountain, and confirmed that indeed a new small vent had grown atop the lava tube, and was radiating throbbing bass sounds. We named our acoustic discovery the Uber vent. But, as most things volcanic, our find was transitory – the vent was eventually melted and recycled into the continuously changing landscape, as ephemeral as the sound that led us there in the first place.

Volcanoes are exceedingly expressive mountains. When quiescent they are pretty and fertile, often coyly cloud-shrouded, sometimes snowcapped. When stirring, they glow, swell and tremble, strongly-scented, exciting, unnerving. And in their full fury, they are a menacing incandescent spectacle. Excess gas pressure in the magma drives all eruptive activity, but that activity varies. Kilauea volcano in Hawaii has primordial, fluid magmas that degas well, so violent explosive activity is not as prominent as in volcanoes that have more evolved, viscous material.

Well-degassed volcanoes pave their slopes with fresh lava, but they seldom kill in violence. In contrast, the more explosive volcanoes demolish everything around them, including themselves; seppuku by fire. Such massive, disruptive eruptions often produce atmospheric sounds known as infrasounds, an extreme basso profondo that can propagate for thousands of kilometers. Infrasounds are usually inaudible, as they reside below the 20 Hz threshold of human hearing and tonality. However, when intense enough, we can perceive infrasound as beats or sensations.

Like a large door slamming, the concussion of a volcanic explosion can be startling and terrifying. It immediately compels us to pay attention, and it’s not something one gets used to. The roaring is also disconcerting, especially if one thinks of a volcano as an erratic furnace with homicidal tendencies. But occasionally, amidst the chaos and cacophony, repeatable sound patterns emerge, suggestive of a modicum of order within the complex volcanic system. These reproducible, recognizable patterns permit the identification of early warning signals, and keep us listening.

Each of us now has technology within close reach to capture and distribute Nature’s silent warning signals, be they from volcanoes, tsunamis, meteors, or rogue nations testing nukes. Infrasounds, long hidden under the myth of silence, will be everywhere revealed.

Cookie Monster

The “Cookie Monster” skylight on the southwest flank of Pu`u `O`o. Photo by J. Kauahikaua, 27 September 2002.

I first heard these volcanic sounds in the rain forests of Costa Rica. As a graduate student, I was drawn to Arenal Volcano by its infamous reputation as one of the most reliably explosive volcanoes in the Americas. Arenal was cloud-covered and invisible, but its roar was audible and palpable. Here is a tremor (a sustained oscillation of the ground and atmosphere) recorded at Arenal Volcano in Costa Rica with a 1 Hz fundamental and its overtones:


In that first visit to Arenal, I tried to reconstruct in my mind’s eye what was going on at the vent from the diverse sounds emitted behind the cloud curtain. I thought I could blindly recognize rockfalls, blasts, pulsations, and ground vibrations, until the day the curtain lifted and I could confirm my aural reconstruction closely matched the visual scene. I had imagined a flashing arc from the shock wave as it compressed the steam plume, and by patient and careful observation I could see it, a rapid shimmer slashing through the vapor. The sound of rockfalls matched large glowing boulders bouncing down the volcano’s slope. But there were also some surprises. Some visible eruptions were slow, so I could not hear them above the ambient noise. By comparing my notes to the infrasound records I realized these eruptions had left their deep acoustic mark, hidden in plain sight just below aural silence.

Arenal, Costa Rica, May 1, 2010. Image by Flickr user Daniel Vercelli.


I then realized one could chronicle an eruption through its sounds, and recognize different types of activity that could be used for early warning of hazardous eruptions even under poor visibility. At the time, I had only thought of the impact and potential hazard mitigation value to nearby communities. This was in 1992, when there were only a handful of people on Earth who knew or cared about infrasound technology. With the cessation of atmospheric nuclear tests in 1980 and the promise of constant vigilance by satellites, infrasound was deemed redundant and had faded to near obscurity over two decades. Since there was little interest, we had scarce funding, and were easily ignored. The rest of the volcano community considered us a bit eccentric and off the main research streams, but patiently tolerated us. However, discussions with my few colleagues in the US, Italy, France, and Japan were open, spirited, and full of potential. Although we didn’t know it at the time, we were about to live through Gandhi’s quote: “First they ignore you, then they laugh at you, then they fight you, then you win.”

Fast forward 22 years. A computer revolution took place in the mid-90’s. The global infrasound network of the International Monitoring System (IMS) began construction before the turn of the millennium, in its full 24-bit broadband digital glory. Designed by the United Nations’ Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), the IMS infrasound network detects minute pressure variations produced by clandestine nuclear tests at standoff distances of thousands of kilometers. This new, ultra-sensitive global sensor network and its cyberinfrastructure triggered an Infrasound Renaissance and opened new opportunities in the study and operational use of volcano infrasound.

Suddenly endowed with super sensitive high-resolution systems, fast computing, fresh capital, and the glorious purpose of global monitoring for hazardous explosive events, our community rapidly grew and reconstructed fundamental paradigms early in the century. The mid-aughts brought regional acoustic monitoring networks in the US, Europe, Southeast Asia, and South America, and helped validate infrasound as a robust monitoring technology for natural and man-made hazards. By 2010, infrasound was part of the accepted volcano monitoring toolkit. Today, large portions of the IMS infrasound network data, once exclusive, are publicly available (see links at the bottom), and the international infrasound community has grown to the hundreds, with rapid evolution as new generations of scientists join in.

In order to capture infrasound, a microphone with a low frequency response or a barometer with a high frequency response is needed. The sensor data then needs to be digitized for subsequent analysis. In the pre-millennium era, you’d drop a few thousand dollars to get a single, basic data acquisition system. But, in the very near future, there’ll be an app for that. Once the sound is sampled, it looks much like your typical sound track, except you can’t hear it. A single sensor record is of limited use because it does not have enough information to unambiguously determine the arrival direction of a signal. So we use arrays and networks of sensors, exploiting the time of flight of sound from one sensor to another to recognize the direction and speed of arrival of a signal. Once we associate a signal type to an event, we can start characterizing its signature.
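
A simplified two-sensor sketch of that time-of-flight idea, in Python with numpy; the sample rate, sensor spacing, and plane-wave geometry are invented for illustration, and real arrays use more sensors and far more careful processing:

```python
import numpy as np

sr = 100.0        # infrasound sample rate (Hz)
c = 340.0         # nominal speed of sound (m/s)
spacing = 1000.0  # separation between the two sensors (m)

# Synthetic signal reaching sensor B 0.25 s (25 samples) after A
rng = np.random.default_rng(2)
sig = rng.standard_normal(500)
delay_samples = 25
a = np.concatenate([sig, np.zeros(delay_samples)])
b = np.concatenate([np.zeros(delay_samples), sig])

# The lag with peak cross-correlation is the time-of-flight
# difference between the two sensors
corr = np.correlate(b, a, mode="full")
lag = np.argmax(corr) - (len(a) - 1)
dt = lag / sr

# For a plane wave, the delay fixes the arrival angle off the
# line joining the sensors
theta = np.degrees(np.arcsin(np.clip(c * dt / spacing, -1, 1)))
print(dt, theta)  # dt = 0.25 s, theta ≈ 4.9 degrees
```

With three or more sensors, the same delays also yield the propagation speed across the array, which helps separate acoustic arrivals from wind noise.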

Consider Kilauea Volcano. Although we think of it as one volcano, it actually consists of various crater complexes with a number of sounds. Here is the sound of a collapsing structure:

As you might imagine, it is very hard to classify volcanic sounds. They are diverse, and often superposed on other competing sounds (often from wind or the ocean). As with human voices, each vent, volcano, and eruption type can have its own signature. Identifying transportable scaling relationships as well as constructing a clear notation and taxonomy for event identification and characterization remains one of the field’s greatest challenges. A 15-year collection of volcanic signals can be perused here, but here are a few selected examples to illustrate the problem.

First, the only complete acoustic record of the birth of Halemaumau’s vent at Kilauea, 19 March 2008:


Here is a bench collapse of lava near the shoreline, which usually leads to explosions as hot lava comes in contact with the ocean:



Here is one of my favorites, from Tungurahua Volcano, Ecuador, recorded by an array near the town of Riobamba 40 km away. Although not as violent as the eruptive activity that followed it later that year, this sped-up record shows the high degree of variability of eruption sounds:



The infrasound community has had an easier time when it comes to the biggest and meanest eruptions, the kind that can inject ash to cruising altitudes and bring down aircraft. Our Acoustic Surveillance for Hazardous Eruptions (ASHE) project in Ecuador identified the acoustic signature of this type of eruption. Here is one from Tungurahua:


Our data center crew was at work when such a signal scrolled through the monitoring screens, arriving first at Riobamba, then at our station near the Colombian border. It was large in amplitude and just kept on going, with super heavy bass – and very recognizable. Such signals resemble jet noise — if a jet were designed by giants with stone tools. These sustained hazardous eruptions radiate infrasound below 0.02 Hz (50 second periods), so deep in pitch that they can propagate for thousands of kilometers to permit robust acoustic detection and early warning of hazardous eruptions.

In collaborations with our colleagues at the Earth Observatory of Singapore (EOS) and the Republic of Palau, infrasound scientists will be turning our attention to early detection of hazardous volcanic eruptions in Southeast Asia. One of the primary obstacles to technology evolution in infrasound has been the exorbitant cost of infrasound sensors and data acquisition systems, sometimes compounded by export restrictions. However, as everyday objects are increasingly vested with sentience under the Internet of Things, this technological barrier is rapidly collapsing. Instead, the questions of the decade are how to receive, organize, and distribute the wealth of information under our perception of sound so as to construct a better informed and safer world.

IRIS Links, search for IM and UH networks, infrasound channel name BDF

Milton Garcés is an Earth Scientist at the University of Hawaii at Manoa and the founder of the Infrasound Laboratory in Kona. He explores deep atmospheric sounds, or infrasounds, which are inaudible but may be palpable. Milton taps into a global sensor network that captures signals from intense volcanic eruptions, meteors, and tsunamis. His studies underscore our global connectedness and enhance our situational awareness of Earth’s dynamics. You are invited to follow him on Twitter @iSoundHunter for updates on things Infrasonic and to get the latest news on the Infrasound App.

Featured image: surface flows as seen by thermal cameras at Pu’u O’o crater, June 27th, 2014. Image: USGS


REWIND! If you liked this post, check out …

SO! Amplifies: Ian Rawes and the London Sound Survey — Ian Rawes

Sounding Out Podcast #14: Interview with Meme Librarian Amanda Brennan — Aaron Trammell

Catastrophic Listening — China Blue
