Archive | Acoustics

The Eldritch Voice: H. P. Lovecraft’s Weird Phonography

Welcome to the last installment of Sonic Shadows, Sounding Out!'s limited series pursuing the question of what it means to have a voice. In the most recent post, Dominic Pettman encountered several traumatized birds who acoustically and uncannily mirror the human, a feedback loop that composes what he called "the creaturely voice." This week, James Steintrager investigates the strange meaning of a "metallic" voice in the stories of H.P. Lovecraft, showing how early sound recording technology exposed an alien potential lingering within the human voice. This alien voice – between human and machine – was fodder for techniques of defamiliarizing the world of the reader.
 
I’ll leave James to tell us more. Thanks for reading!

— Guest Editor Julie Beth Napolin

A decade after finding itself downsized to a dwarf planet, Pluto has managed to spark wonder in the summer of 2015 as pictures of its remarkable surface features and those of its moon are delivered to us by NASA's New Horizons space probe. As scientists begin tentatively to name these features, they have drawn from speculative fiction for what they see on the moon Charon, giving craters names including Spock, Sulu, Uhura, and—mixing franchises—Skywalker. From Doctor Who there will be a Tardis Chasma and a Gallifrey Macula. The names proposed for Pluto's own features stretch back a bit further: there will also be a Cthulhu Regio, named after the unspeakable interstellar monster-cum-god invented by H. P. Lovecraft.

We can imagine that Lovecraft would have been thrilled, since back when Pluto was first discovered in early 1930 and was the evocative edge of the solar system, he had turned the planet into the putative home of secretive alien visitors to Earth in his short story “The Whisperer in Darkness.” First published in the pulp magazine Weird Tales in 1931, “The Whisperer in Darkness” features various media of communication—telegraphs, telephones, photographs, and newspapers—as well as the possibilities of their manipulation and misconstruing. The phonograph, however, plays the starring role in this tale about gathering and interpreting the eerie and otherworldly—the eldritch, in a word—signs of possible alien presence in backwoods Vermont.

In the story, Akeley, a farmer with a degree of erudition and curiosity, captures something strange on a record. This something, when played back by the protagonist Wilmarth, a folklorist at Lovecraft’s fictional Miskatonic University, goes like this:

Iä! Shub-Niggurath! The Black Goat of the Woods with a Thousand Young! (219)

The sinister resonance of a racial epithet in what appears to be a foreign or truly alien tongue notwithstanding, this story features none of the more obvious and problematic invocations of race and ethnicity—the primitive rituals in the swamps of Louisiana of “The Call of Cthulhu” or the anti-Catholic immigrant panic of “The Horror at Red Hook”—for which Lovecraft has achieved a degree of infamy. Moreover, the understandable concern with Lovecraft’s social Darwinism and bad biology in some ways tends to miss how for the author—and for us as well—power and otherness are bound up with technology.

The transcription of these exclamations, recorded on a "blasphemous waxen cylinder," is prefaced with an emphatic remark about their sonic character: "A BUZZING IMITATION OF HUMAN SPEECH" (219-220). The captured voice is further described as an "accursed buzzing which had no likeness to humanity despite the human words which it uttered in good English grammar and a scholarly accent" (218). It is glossed yet again as a "fiendish buzzing… like the drone of some loathsome, gigantic insect ponderously shaped into the articulate speech of an alien species" (220). If such a creature tried to utter our tongue and to do so in our manner—both of which would be alien to it—surely we might expect an indication of the difference in vocal apparatuses: a revelatory buzzing. Lovecraft's story figures this "eldritch sound" as it is transduced through the corporeal: as the timbral indication of something off when the human voice is embodied in a fundamentally different sort of being. It is the sound that happens when a fungoid creature from Yuggoth—the supposedly native term for Pluto—speaks our tongue with its insectile mouthparts.

Yet, reading historically, we might understand this transduction as the sound of technical mediation itself: the brazen buzz of phonography, overlaying and, in a sense, inhabiting the human voice.

For listeners to early phonographic recordings, metallic sounds—inevitable given the materials used for styluses, tone arms, diaphragms, amplifying horns—were simply part of the experience. Far from capturing “the unimaginable real” or registering “acoustic events as such,” as media theorist Friedrich Kittler once put the case about Edison’s invention, which debuted in 1877, phonography was not only technically incapable of recording anything like an ambient soundscape but also drew attention to the very noise of itself (23).

For the first several decades of the medium's existence, patent registers and admen's pitches show that clean capture and reproduction were elusive rather than given. An account of Valdemar Poulsen's Telegraphone in the Literary Digest Advertiser explains the problem:

The talking-machine records sound by the action of a steel point upon some yielding substance like wax, and reproduces it by practically reversing the operation. The making of the record itself is accompanied by a necessary but disagreeably mechanical noise—that dominating drone—that ‘b-r-r-r-r’ that is never in the human voice, and always in its mechanical imitations. One hears metallic sounds from a brazen throat—uncanny and inhuman. The brittle cylinder drops on the floor, breaks—and the neighbors rejoice!

The Telegraphone, which recorded sounds “upon imperishable steel through the intangible but potent force of electromagnetism” such that no “foreign or mechanical noise is heard or is possible,” of course promised to make the neighbors happy not by breaking the cylinder but rather by taking the inhuman ‘b-r-r-r-r’ out of the phonographically reproduced voice. Nonetheless, etching sound on steel, the Telegraphone was still a metal machine and unlikely to overcome the buzz entirely.

In his account of “weird stories” and why the genre suited him best, Lovecraft explained that one of his “strongest and most persistent wishes” was “to achieve, momentarily, the illusion of some strange suspension or violation of the galling limitations of time, space, and natural law which forever imprison us and frustrate our curiosity about the infinite cosmic spaces beyond the radius of our sight and analysis.” In “The Whisperer in Darkness,” Lovecraft put to work a technology that was rapidly becoming commonplace to introduce a buzz into the fabric of the everyday. This is the eldritch effect of Lovecraft’s evocation of phonography. While we might wonder whether a photograph has been tampered with, who really sent a telegram, or with whom we are actually speaking over a telephone line—all examples from Lovecraft’s tale—the central, repeated conundrums for the scholar Wilmarth remain not only whose voice is captured on the recorded cylinder but also why it sounds that way.

The phonograph transforms the human voice, engineers a cosmic transduction, suggesting that within our quotidian reality something strange might lurk. This juxtaposition and interplay of the increasingly ordinary and the eldritch is also glimpsed in accounts of Charles Parsons's invention, the Auxetophone, which used a column of pressurized air rather than the usual metallic diaphragm. Here is how the "Matters Musical" column of The Bystander described the voice of the Auxetophone in 1905: "Long ago reconciled to the weird workings of the phonograph, we had come to regard as inevitable the metallic nature of its inhuman voice." The new invention might well upset our listening habits, for Mr. Parsons's invention "bids fair to modify, if not entirely to remove," the phonograph's "somewhat unpleasant timbre."

What the phonograph does as a medium is to make weird. And what making weird means is that instead of merely reproducing the human voice—let alone rendering acoustic events as such—the phonograph transforms the voice into its own: an uncanny approximation, one that fails to simulate the original perfectly, with regard to timbre in particular. Phonography reveals that the materials of reproduction are not vocal cords, breath, labial and dental friction—not flesh and spirit, but vibrating metal.

Although we can only speculate in this regard, I would suggest that "The Whisperer in Darkness" was weirder for readers for whom phonographs still spoke with metallic timbre. The rasping whisper of the needle on the cylinder created what the Russian Formalist Viktor Shklovsky was formulating at almost exactly the same time as the function of the literary tout court: defamiliarization or, better, estrangement. Nonetheless, leading the reader to infer an alien presence behind this voice was equally necessary for the effect. After all, if we are to take the Auxetophone as our example—an apparatus announced in 1905, a quarter of a century before Lovecraft composed his tale, and one that joined a marketplace burgeoning with metallic-voice-reducing cabinets, styluses, dampers, and other devices—phonographic listeners had long since become habituated to the inhumanity of the medium. That inhumanity had to be recalled and reactivated in the context of Lovecraft's story.

To understand fully the nature of this reactivation, moreover, we need to know precisely what Lovecraft's evocative phonograph was. When Akeley takes his phonograph into the woods, he adds that he brought "a dictaphone attachment and a wax blank" (209). Further, to play back the recording, Wilmarth must borrow the "commercial machine" from the college administration (217). The device most consistent with Lovecraft's descriptions and terms is not a record player, as we might imagine, but the Columbia Graphophone Company's Dictaphone. By the time of the story's setting, Edison's phonographs had long since switched to more durable celluloid cylinders (Blue Amberol Records, 1912-1929) in an effort to stave off competition from flat records. Only Dictaphones, aimed at businessmen rather than leisure listeners, still used wax cylinders, since recordings could be scraped off and the cylinder reused. The vinyl Dictabelt, which eventually replaced them, would not arrive until 1947.

Meanwhile, precisely when the events depicted in “The Whisperer in Darkness” are supposed to have taken place, phonography was experiencing a revolutionary transformation: electronic sound technologies developed by radio engineers were hybridizing the acoustic machines, and electro-acoustic phonographs were in fact becoming less metallic in tone. Yet circa 1930, as the buzz slipped toward silence, phonography was still the best means of figuring the sonic uncanny valley. It was a sort of return of the technologically repressed: a reminder of the original eeriness of sound reproduction—recalled from childhood or perhaps parental folklore—at the very moment that new technologies promised to hide such inhumanity from sensory perception. Crucially, in Lovecraft’s tale, estrangement is not merely a literary effect. Rather, the eldritch is what happens when the printed word at a given moment of technological history calls up and calls upon other media of communication, phonography not the least.

I have remarked on the apparent absence of race as a concern in "The Whisperer in Darkness," but something along the lines of class is subtly but insistently at work in the tale. The academic Wilmarth and his erudite interlocutor Akeley are set in contrast with the benighted, uncomprehending agrarians of rural Vermont. Both men also display a horrified fascination with the alien technology that will allow human brains to be fitted into hearing, seeing, and speaking machines for transportation to Yuggoth. These machines are compared to phonographs: cylinders for storing brains much like those for storing the human voice. In this regard, the fungoid creatures resemble not so much bourgeois users or consumers of technology as scientists and engineers. Moreover, they do so just as a discourse of technocracy—rule by a technologically savvy elite—was being articulated in the United States. Here we might see the discovery of Pluto as a pretext for exploring anxieties closer to home: how new technologies were redistributing power, how their improvement—the fading of the telltale buzz—was making it more difficult to determine where humanity stopped and technology began, and whether acquiescence in these changes was laudable or resistance feasible. As usual with Lovecraft, these topics are handled with disconcerting ambivalence.

James A. Steintrager is a professor of English, Comparative Literature, and European Languages and Studies at the University of California, Irvine. He writes on a variety of topics, including libertinism, world cinema, and auditory cultures. His translation of and introduction to Michel Chion’s Sound: An Acoulogical Treatise will be published by Duke University Press in fall of 2015.

Featured image: Taken from “Global Mosaic of Pluto in True Color” in NASA’s New Horizons Image Gallery, public domain. All other images courtesy of the author.

REWIND! . . . If you liked this post, you may also dig:

Sound and Sanity: Rallying Against “The Voice” — Mark Brantner

DIANE… The Personal Voice Recorder in Twin Peaks — Tom McEnaney

Reproducing Traces of War: Listening to Gas Shell Bombardment, 1918 — Brían Hanrahan

Future Memory: Womb Sound As Shared Experience Crossing Time and Space


This month will feature a two-part post by SO! regular writer Maile Colbert. Look for Part Two on Monday, January 19th.

I was a child obsessed with time travel. Beyond favorites such as A Wrinkle in Time and Time Bandits, I perpetually daydreamed of the ability to pause, reverse, and fast-forward my life. I had a book on the “olden days” and it amazed me that my great-grandparents, whom I had the fortune to know, had lived them. I wanted to fast forward and see myself their current age, telling stories to the next generations of a good life lived. I used to entertain the thought that if I let my breath go and let myself sink to the bottom of a body of water, I could pause time, or at least slow it down, as the sound of the fluid world around me seemed to suggest. Whenever my family moved, I made a time capsule, and I always scanned the ocean for long lost bottled messages. These were the beginnings of my future in time-based media–both image and sound–my love for found footage, and my recent research and writing on sound back in time.

Now as a new mother, I am beginning to think about the future in a way I hadn’t before. I see my mother in my daughter, and I see her mother, and my partner’s mother. I recognize my grandfather’s eyebrow when furrowed, and her grandfather’s nose. My mouth when smiling, my partner’s mouth when in concentration.

And our ears. . .our very sensitive hearing, almost like a punch line. Our daughter is truly the daughter of sound artists. In this first post of a two-part series on humans' earliest interactions with sound, I document our work sounding and listening together, which began in a future-oriented past I am still learning about.

Womb

There was a study in which doctors gave babies only a day old pacifiers connected to tape recorders. Depending on the pattern of the newborn's sucking, the tape recorder would switch on either the sound of the mother's voice or a stranger's.

“Within 10 to 20 minutes, the babies learned to adjust their sucking rate on the pacifier to turn on their own mother’s voice,” says the study’s coauthor William Fifer, Ph.D., an associate professor of psychiatry and pediatrics at Columbia University’s College of Physicians and Surgeons. “This not only points out a newborn’s innate love for his mother’s voice but also a baby’s unique ability to learn quickly.”

-"What Babies Learn in the Womb," 2014, Lara Flynn Maccarthy, Parenting

My daughter Odette knew my voice the moment she was born. In a strange, bright, cold new world, it seemed one constant she could rely upon. When she was first placed upon my chest, I started to sing to her, and she calmed, staring at me as much as her newborn eyes would let her, with an expression of surprised recognition as this familiar voice sang a familiar song, one I had sung to her often in the womb and one I knew by heart because my mother would sing it to me when I was a child.

 

Are you going to Scarborough Fair

Parsley, sage, rosemary and thyme

Remember me to the one who lives there

She once was a true love of mine. . .

The mother's voice comes to the fetus not solely as ambient sound through the abdomen, as other external sounds and voices would, but also through the vocal cords' internal vibration. There is a direct connection, a shared space. As early as the seventh month, a fetal heartbeat will slow and calm to the sound of the mother's voice, and research has shown that newborns even prefer a version of their mother's voice filtered to sound as it did in the womb, muffled and low. When Odette suffered colic in her early months, one sure way to help comfort her was to sing to her while she was on my chest. Aside from the close contact of skin, the familiar smell, the warmth, it could be that hearing my voice also through the chest mimicked the womb filter.

In the tape recorder study, researchers also noted that newborns would suck more intensely to recordings of people speaking in the language of their mothers, most likely picking up on the melody and rhythm. We are beginning to understand that learning starts in the womb.

Fetal Soap Addiction

Carmen Bank found her 1985 pregnancy rather boring. So, to pass the time, she started doing something she would never have dreamed of: watching a soap opera.

Unexpectedly, she found herself hooked. And so she spent almost every morning in front of her television set, ready for the familiar theme of “Ryan’s Hope.” After Melissa was born that October, Bank bought a videocassette recorder so she could tape the show when she was too busy to watch.

Bank isn’t sure when she discovered the behavior, but, shortly after Melissa was born, Bank realized that the baby seemed to recognize the “Ryan’s Hope” theme and would stop fussing when the program began.

“She’d just sit there and watch the whole introduction and then she would start imitating what they do on the show,” Bank said. “This has been going on forever.”

-The Very Young and Restless, Do Soaps Hook the Unborn? June 28, 1988, Allan Parachini, The New York Times

 

My third trimester was a rough one. I was a walking swimming pool of about forty pounds of baby and amniotic fluid. My pelvis had gone completely out of alignment, making even the pregnancy waddle slow and difficult. Needless to say, I was less and less mobile. I was lucky that much of my remaining work was writing and studio based, but I often found myself having to take mental breaks as well. My body/mind chemistry was working overtime. Something that happens in pregnancy, as you prepare mentally for your new, shared life, is that you think a lot about your own childhood. I was lucky to have had a happy one, and so strong nostalgic feelings and memories would come up, particularly around the television show Doctor Who. I used to spend a happy hour with my father once a week during the 1980s watching reruns from the 1970s.

Doctor Who returned to broadcast in the 2000s in a few successful new regenerations. The new iteration uses a lot of the classic themes and characters, and even remixes and re-masters the original opening score, written by Ron Grainer and realized by the great Delia Derbyshire for the BBC Radiophonic Workshop in 1963. The Doctor Who theme was one of the very first signature electronic music tunes, and it was produced well before commercial synthesizers were even available. Derbyshire used musique concrète techniques, cutting each note individually on analogue tape and speeding the tape up or slowing it down to create the notes from recordings of a single plucked string, white noise, and the simple harmonic waveforms of test-tone oscillators. (After hearing Derbyshire's magic, Grainer famously asked, "Did I write that?" Derbyshire replied, "Most of it." The BBC, which kept members of the Radiophonic Workshop anonymous, prevented Grainer from giving Derbyshire a co-composer credit and a share of the royalties.)
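As a rough digital illustration of that varispeed principle (a minimal sketch in Python, not Derbyshire's actual tape workflow; the synthesized 110 Hz "pluck" and the chosen intervals are invented stand-ins), playing the same recording back at a different speed shifts pitch and duration together:

```python
import numpy as np

def varispeed(samples, semitones):
    """Play a mono recording back at a different "tape speed":
    pitch and duration shift together, as on a varispeed tape machine."""
    factor = 2 ** (semitones / 12.0)                   # speed ratio for the desired interval
    old_idx = np.arange(len(samples))
    new_idx = np.arange(0, len(samples) - 1, factor)   # read the "tape" faster or slower
    return np.interp(new_idx, old_idx, samples)

# A stand-in for a recorded pluck: a decaying 110 Hz tone (purely illustrative).
rate = 44100
t = np.linspace(0, 1.0, rate, endpoint=False)
pluck = np.exp(-3 * t) * np.sin(2 * np.pi * 110 * t)

# Build a short melodic fragment by "re-recording" the same pluck at different speeds.
melody = np.concatenate([varispeed(pluck, s) for s in (0, 7, 12, 5)])
```

Each note here is just the same "tape" read faster or slower; Derbyshire did the equivalent by hand, cutting and splicing lengths of analogue tape.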

It is a really, really catchy tune:

While Odette was in the womb, I watched all of those decades addictively, one after another. When I came across the soap opera study after she was born, I decided my obsessive Who-watching had set up a perfect laboratory to try it out myself. We started in 1963 and moved through time with the Doctor. Odette looked up in surprise and her brow furrowed in concentration. She looked around slowly at first, then faster and faster. She smiled; she cooed; she laughed. She started to flap her arms.


When I finally turned it off, she stopped everything and looked concerned. I turned it on again and we danced together in clear recognition of this already-shared future past sonic moment, one I had with my father and now with her. Now I understood that as I consumed Dr. Who, Odette was not only hearing, she was learning, and beginning the act of listening.

Sounds have a surprising impact upon the fetal heart rate: a five second stimulus can cause changes in heart rate and movement which last up to an hour. Some musical sounds can cause changes in metabolism. “Brahm’s Lullaby,” for example, played six times a day for five minutes in a premature baby nursery produced faster weight gain than voice sounds played on the same schedule (Chapman, 1975)

-The Fetal Sense, A Classical View, David B. Chamberlain, Birth Psychology

Wombscapes 

Odette's very first movement, her first "quickening," was in response to David Bowie's "Starman." This was around 16 weeks, often the time for first movements in the fetus, and, interestingly, also the time when hearing has developed. The fetus floats in a rich and complex soundscape; it is anything but quiet. The womb filter: the amniotic fluid, embryonic membranes, uterus, and maternal abdomen pass mostly low frequencies, through which come blood whooshing in the veins, then the mother's voice and body noises such as hiccups and the gurgles of digestion, and, of course, the heartbeat. The mother's heartbeat can be as loud as a vacuum cleaner, and ultrasounds as loud as a subway car arriving in a train station. We can try to mimic the womb-scape, imagining sounds being filtered through the body. We can use a hydrophone, a pressure microphone designed to be sensitive to soundwaves through fluid matter, on the abdomen to get an idea and a sample for our womb-scape.
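One crude digital approximation of that filtering (again only a sketch; the fourth-order filter and the 400 Hz cutoff are illustrative assumptions, not measured properties of the womb) is to run a recording through a low-pass filter:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def womb_filter(voice, rate, cutoff_hz=400.0):
    """Crudely approximate the womb's low-pass effect: attenuate the high
    frequencies absorbed by the abdomen, membranes, and amniotic fluid."""
    sos = butter(4, cutoff_hz, btype="low", fs=rate, output="sos")
    return sosfilt(sos, voice)

# Stand-in "voice": one second of noise; any mono recording would do here.
rate = 44100
voice = np.random.randn(rate)
muffled = womb_filter(voice, rate)
```

A hydrophone recording taken at the abdomen, as described above, would of course give a far better sample than this stand-in noise.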

Perhaps it would sound something like this…

…reactive listening begins eight weeks before the ear is structurally complete at about 24 weeks. These findings indicate the complexity of hearing, lending support to the idea that receptive hearing begins with the skin and skeletal framework, skin being a multireceptor organ integrating input from vibrations, thermo receptors, and pain receptors. This primal listening system is then amplified with vestibular and cochlear information as it becomes available. With responsive listening proven at 16 weeks, hearing is clearly a major information channel operating for about 24 weeks before birth.

-The Fetal Sense, A Classical View

Sound artist and acoustic ecologist Andrea Williams has recently been working on a composition for Bellybuds for her as-yet-unborn nephew. Bellybuds are "a specialized speaker system that gently adheres to your belly & safely plays memory-shaping sound directly to the womb." Much of her work is composed with space in mind, using room sounds in a live performance situation. Williams told me it was interesting to think about the womb as a new "venue," with her little developing nephew as her audience. "What is he hearing?" she asked. "Will he recognize me right away upon meeting him for the first time if he only hears the sound of my voice through the Bellybuds while he is a fetus?" I love the idea that she could send a "hello" from one place to her nephew in the womb in another.

The more we understand and realize about fetal hearing and processing sound, the more we understand how fetuses can detect subtle changes and process complex information. Memory starts to form around 30 weeks, and it’s possible early sound interventions at this time could help babies with detected abnormal development. Speaking and singing to the unborn fetus, allowing them to experience different soundscapes while still in the womb, helps shape their brains. This is probably why the urge to do so is there.

. . .Odette’s first dance. Odette’s first songs. . . transcending time and space.

dedicated to Odette Helen, and to the family, daughter, and memory of Steven Miller

Featured Image: Odette’s Birth Cry, photo credit Rui Costa

The album Future Memory, for Odette will be released in 2015 through Wild Silence.  A dedication album to a newborn daughter…a mix of her parents’ recorded and shared sounds, memories, hopes, and dreams towards a future with her. Sounds of her womb-scape, birth, and first year…music in collaboration with friends and family across oceans and land…an album of lullabies for Odette.

Maile Colbert is a multi-media artist with a concentration on sound and video who relocated from Los Angeles, US to Lisbon, Portugal. She is a regular writer for Sounding Out!

REWIND! . . . If you liked this post, you may also dig:

On Sound and Pleasure: Meditations on the Human Voice– Yvon Bonenfant

This Is Your Body on the Velvet Underground– Jacob Smith

Sound Designing Motherhood: Irene Lusztig & Maile Colbert Open The Motherhood Archives– Maile Colbert

 

Sound at SEM 2014

"Musician" by Flickr user Joanna, CC BY-NC 2.0
http://www.flickr.com/photos/magandafille/2259728042/

Hot on the heels of the American Musicological Society and Society for Music Theory’s joint annual meeting in Milwaukee, the Society for Ethnomusicology will hold its 59th Annual Meeting in Pittsburgh, November 13-16, 2014, hosted by the University of Pittsburgh. SEM is arguably one of the conferences most hospitable to sound studies, and several panels feature strong papers.

On Wednesday, Nov. 12, the “Music and Labor” pre-conference symposium features some fascinating papers of interest to sound scholars and includes a keynote address by Dr. Marcus Rediker, Distinguished Professor of Atlantic History at the University of Pittsburgh. With panels titled “(Re) Conceptualizing Music and Labor,” “The Labor of Music in Transitioning Economies,” “Art as Work: Defying Capitalist Hegemony and National Narrative through Musical Activism and Creative Adaptation,” and “Transformation of Music Labor Regimes in Socialist and Post-Socialist Southeastern Europe,” even the papers that aren’t especially sound studies-related have the potential to demonstrate deft interdisciplinary approaches that would be applicable (and fruitful) in sound studies research.

One of the first sound studies events of the conference program is the annual meeting of the Sound Studies Special Interest Group. Dr. Allen Roda, Jane and Morgan Whitney Research Fellow at the Metropolitan Museum of Art in New York City, and I are currently co-chairs of the SIG; anyone interested in sound studies will not want to miss our meeting on Thursday, November 13 at 12:30-1:30 PM in the Duquesne Room. This year's meeting will be the SIG's sixth since it was formed in 2009. The group now has over 100 members and is represented on several panels at the 2014 conference in Pittsburgh. One co-chair seat will become vacant this year, and the group will hold elections to fill this position at the meeting; we also plan to discuss ways to increase the group's visibility online and among the academic community.

Before the meeting, come early to the 8:30-10:30 AM session in that same room to catch Molly McBride's paper, "The Sounds of Humor: Listening to Gender on Early Barn Dance Radio," or see a whole sound studies panel titled "Auditory Histories of the Indian Ocean: Hearing the Soundworlds of the Past" in the Alleghany Room.

"The Cathedral of Learning at UPitt" by Flickr user Carlos Hernandez, CC BY-NC-SA 2.0

“The Cathedral of Learning at UPitt” by Flickr user Carlos Hernandez, CC BY-NC-SA 2.0

If you can't make those early panels on the first day, the convention boasts numerous high-quality sound studies sessions, many of which convene simultaneously. There have been several sound studies-related panels and individual papers at past meetings, but the number of high-quality sound studies papers is certainly trending upward.

Also, the last several annual meetings have featured a soundwalk hosted by the Sound Studies SIG. This year is no different; however, rather than a guided walk around the host city, this year's soundwalk will be self-guided. Using the Twitter hashtag #semsoundwalk, participants will listen to Pittsburgh, the acoustic environment of the conference itself, the coffee shop where they stop for refreshment, or wherever they happen to find themselves between 1:15 and 6:00 PM on Friday, Nov. 14. Be sure to follow the hashtag – even if you're not in Pittsburgh – to "listen" along with conference participants.

I am delighted to see that this year's conference unites SEM's commitment to the study of world musics and cultures with sound studies, particularly in panels such as "Auditory Histories of the Indian Ocean: Hearing the Soundworlds of the Past," "Contemplating Voice in Cross-Cultural Perspective," and "Regulating Space, Regulating Sound: Musical Practice and Institutional Mediation in São Paulo, Brazil." This year also highlights the SEM's strong interdisciplinary bent and makes even more room at the epistemological table for the examination of technoculture and its implications for sound studies and the larger ethnomusicological community.

Because of the sheer volume of sound studies activities, rather than listing my "picks" for the conference, I've listed most of the relevant papers and sessions, leaving the hard decision up to you. In fact, there are so many genuine sound studies panels and papers (or papers on closely related topics) that it's easy to see why the blurry line that demarcates "sound studies" from "music studies" seems blurriest at SEM. For those who cannot attend the conference, some of this year's panels will be live-streamed. The Special Interest Groups for Sound Studies and Ecomusicology are also co-hosting a roundtable on Saturday morning. For more information about the conference and to catch the live-streamed sessions, visit the conference website at http://www.indiana.edu/~semhome/2014/.

Michael Austin is Assistant Professor of Media, Journalism, and Film and coordinator of the Interdisciplinary Studies Program in the School of Communications at Howard University, where he teaches courses in music production, sound design for film, and audio production. He holds a Ph.D. in Humanities – Aesthetic Studies (with a specialization in Arts and Technology) from the University of Texas at Dallas and music degrees from UT-San Antonio and UT-Austin. He is also affiliated with the Laboratoire Musique et Informatique de Marseille, an audio/music technology and informatics lab in Marseille, France, and is co-chair of the Society for Ethnomusicology's Special Interest Group for Sound Studies.

Featured image: “Musician” by Flickr user Joanna, CC BY-NC 2.0

"Cathedral of learning/Stephen Foster Memorial - Painted by Light" by Flickr user Sriram Bala, CC BY-NC 2.0

“Cathedral of learning/Stephen Foster Memorial – Painted by Light” by Flickr user Sriram Bala, CC BY-NC 2.0

WEDNESDAY, November 12

8:00 am – 8:00 pm

Ballroom 3, Wyndham Grand Pittsburgh Downtown Hotel
Pre-Conference Symposium: “Music and Labor”

THURSDAY, November 13

8:30 – 10:30 am

Duquesne Room
“The Sounds of Humor: Listening to Gender on Early Barn Dance Radio,” Molly McBride, Memorial University of Newfoundland

Alleghany Room
Session: Auditory Histories of the Indian Ocean: Hearing the Soundworlds of the Past
“Wonders and Strange Things: Practices of Auditory History before Recorded Sound,” Katherine Butler Schofield, King’s College London
“Notes in the Margins: Sumatran Religious Hybridity and the Efficacy of Sound,” Julia Byl, King’s College London
“Contact, Contestation and Compromise: Sound and Space in 19th-Century Singapore,” Jenny McCallum, King’s College London
“A ‘Wayang of the Orang Puteh’?: Theatres, Music Halls and Audiences in High-Imperial Calcutta, Madras, Penang and Singapore,” David Lunn, King’s College London

10:45am -12:15 pm

Sterling 3 Room
“Sounding Neoliberalism in the Richmond City Jail,” Andrew C. McGraw, University of Richmond

Heinz Room
“The Color of Sound: Timbre in Ralph Ellison’s Invisible Man,” Sydney A. Boyd, Rice University

12:30 – 1:30 pm

Duquesne Room
Special Interest Group for Sound Studies

1:45 – 3:45 pm

Sterlings 1 Room
“Radio Archives and the Art of Persuasion: Preserving Social Hierarchies in the Airwaves of Lima,” Carlos Odria, Florida State University

Ft. Pitt Room
Session: Mediated Musics, Mediated Lives
“Uploading Matepe: The Role of Online Learning Communities and the Desire to Connect to Northeastern Zimbabwe,” Jocelyn A. Moon, University of Washington; Zachary Moon, Independent Scholar
“Staging Overcoming: Disability, Meritocracy, and the Envoicing of Dreams,” William Cheng, Dartmouth College
“As Time Goes By: Car Radio and Spatiotemporal Manipulations of the Travel Experience in 20th-Century America,” Sarah Messbauer, University of California, Davis
“’How Can We Live in a Country Like This?’ Music, Talk Radio, and Moral Anxiety,” Karl Haas, Boston University

Sterling 3 Room
Session: Oxide and Memory: Tape Culture and the Communal Archive
“Magnetic Tape, Materiality, and the Interpretation of Non-Commercial Cassette and Reel-to-Reel Recordings from Quebec’s Gaspé Peninsula,” Laura Risk, McGill University
“Family Sense and Family Sound: Home Recordings and Greek-American Identity,” Panayotis League, Harvard University
“The Memory of Media: Autoarchivization and Empowerment in 1970s Jazz,” Michael C. Heller, University of Massachusetts, Boston
“Reimagining the Community Sound Archive: Cultural Memory and the Case for ‘Slow’ Archiving in a Gaspesian Village,” Glenn Patterson, Memorial University of Newfoundland

4:00 – 5:30 pm

Sterlings 1 Room
Panel: Contemplating Voice in Cross-Cultural Perspective
“The Gravest of Female Voices: Women and the Alto in Sacred Harp,” Sarah E. Kahre, Florida State University
“Re-sounding Waljinah: Aging and the Voice in Indonesia,” Russ P. Skelchy, University of California, Riverside
“Katajjaq: Between Vocal Games, Place and Identity,” Raj S. Singh, York University

Sterlings 3 Room
Session: Rumors, Sound Leakages and Individual Tales: Disruptive Listening in Zones of Conflict
“From the Struggle for Citizenship to the Fragmentation of Justice: Reflections on the Place of Dinka Songs in South Sudan’s Transitional Justice Process,” Angela Impey, School of Oriental and African Studies (SOAS), University of London
“Internet Rumors and the Changing Sounds of Uyghur Religiosity: The Case of the Snake Monkey Woman,” Rachel Harris, School of Oriental and African Studies (SOAS), University of London
“The Cantor and the Muezzin’s Duet at the Western Wall: Contesting Sound Spaces on the Frayed Seams of the Israel-Palestine Conflict,” Abigail Wood, University of Haifa

Heinz Room
Session: Historiography, Historicity, and Biography
“A Sonic Historiography of Early Sample-Based Hip-Hop Recordings,” Patrick Rivers, University of New Haven
“Biography as Methodology in the Study of Okinawan Folk Song,” Kirk A. King, University of British Columbia
“Sounding the Silent Image: Uilleann Piper as Ethnographic Object in Early Hollywood Film,” Ivan Goff, New York University

Untitled by Flickr user David Kent, CC BY-NC-ND 2.0

Untitled by Flickr user David Kent, CC BY-NC-ND 2.0

FRIDAY, November 14

7:00 – 8:00 am

Special Interest Group for Voice Studies

8:30 – 10:30 am

Commonwealth 1-2 Room, live streaming
Session: Sound Networks: Socio-Political Identity, Engagement, and Mobilization through Music in Cyberspace and Independent Media
*Sponsored by the Popular Music Section and Special Interest Group for Sound Studies
“Technological Factors Conditioning the Socio-Political Power of Music in Cyberspace,” Michael Frishkopf, University of Alberta
“Cyber-Mobilization, Informational Intimacy, and Musical Frames in Ukraine’s EuroMaidan Protests,” Adriana Helbig, University of Pittsburgh
“Countering Spirals of Silence: Protest Music and the Anonymity of Cyberspace in the Japanese Antinuclear Movement,” Noriko Manabe, Princeton University
“Living (and Dying) the Rock and Roll Dream: Alternative Media and the Politics of ‘Making It’ as an Iranian Underground Musician,” Farzaneh Hemmasi, University of Toronto

Sterling 1 Room
Session: Affective Environments and the Bioregional Soundscape
*Sponsored by the Special Interest Group for Ecomusicology
“’Landscape is Not Just What Your Eyes See’: Battery Radio, the Technological Soundscape, and Sonically Knowing the Battery,” Kate Galloway, Memorial University of Newfoundland
“Re-sounding Caribou: Musical Posthumanism in Being Caribou,” Erin Scheffer, University of Toronto
“Cold, Crisp, and Dry: Inuit and Southern Concepts of the Northern Soundscape,” Jeffrey van den Scott, Northwestern University
Discussant, Nancy Guy, University of California, San Diego

Duquesne Room
“The Sound of Affective Fact,” Matthew Sumera, University of Minnesota

1:15 – 6:30 pm

Soundwalk: A Sonic Environmental Survey of the SEM Annual Meeting
*Sponsored by the Special Interest Groups for Sound Studies and Ecomusicology. Follow the walk on Twitter: #semsoundwalk
(Meet in Wyndham Grand main lobby at 1:15pm. Reconvene in lobby at 6:00)

1:45 – 3:45 pm

Smithfield Room
Session: Strident Voices: Material and Political Alignments
*Sponsored by the Special Interest Group for Voice Studies
“Registering Protest: Voice, Precarity, and Assertion in Crisis Portugal,” Lila Ellen Gray, University of Amsterdam
“Quiet, Racialized Vocality at Fisk University,” Marti Newland, Columbia University
“’The Rough Voice of Tenderness’: Chavela Vargas and Mexican Song,” Kelley Tatro, North Central College
Discussant: Amanda Weidman, Bryn Mawr College

4:00 – 5:30 pm

Heinz Room
Session: Celebratory Sounds and the Politics of Engagement
“Creating Zakopower in Postsocialist Poland,” Louise J. Wrazen, York University
“Merry-Making and Loyalty to the Movement: Conviviality as a Core Parameter of Traditionalism in Aysén, Chile,” Gregory J. Robinson, George Mason University
“Sounding the Carnivalesque: Changing Identities for a Sonic Icon of the Popular,” Michael S. O’Brien, College of Charleston

"Musical Mystery" by Flickr user Robert Wilhoit, CC BY-NC-SA 2.0

“Musical Mystery” by Flickr user Robert Wilhoit, CC BY-NC-SA 2.0

SATURDAY, November 15

8:30 – 10:30 am

Sterlings 1 Room
Roundtable: Sound Studies, Ecomusicology, and Post-Humanism In/For/With Ethnomusicology
*Sponsored by the Special Interest Groups for Ecomusicology and for Sound Studies
P. Allen Roda, The Metropolitan Museum of Art
Jennifer Post, University of Arizona
Mark Pedelty, University of Minnesota
Michael Silvers, University of Illinois at Urbana-Champaign
Ben Tausig, Stony Brook University
Zeynep Bulut, King’s College London

10:45 am – 12:15 pm

Benedum Room, live streaming
Musical Instruments, Material Cultures, and Sound Ecologies
“Bulgarian Acoustemological Tales: Narrativity, Agrarian Ecology, and the Kaval’s Voice,” Donna A. Buchanan, University of Illinois at Urbana-Champaign

Sterling 1 Room
Session: Theorizing Sound
“Water Sounds: Distance Swimmers and Ecomusicology,” Niko Higgins, Columbia University
“Telephone, Vacuum Cleaner, Couch: Senses and Sounds of the Everyday in Postwar Japan,” Miki Kaneda, Boston University
Discussant: Benjamin Tausig, Stony Brook University

SUNDAY, November 16

8:30 – 10:30 am

Birmingham Room
Session: Regulating Space, Regulating Sound: Musical Practice and Institutional Mediation in São Paulo, Brazil
*Sponsored by the Latin American and Caribbean Section
“Music under Control? São Paulo’s Anti-Noise Agency in Action,” Leonardo Cardoso, University of Texas at Austin
“Music Producers in São Paulo’s Cultural Policy Worlds,” Daniel Gough, University of Chicago
“’Small Universes’: The Creation of Social Intimacy through Aesthetic Infrastructures in São Paulo’s Underground,” Shannon Garland, Columbia University
Discussant, Morgan Luker, Reed College

Heinz Room
“Hear What You Want: Sonic Politics, Blackness, and Racism-Canceling Headphones,” Alex Blue, University of California, Santa Barbara

Alleghany Room
“Sound and Silence in Festivals of the French Revolution: Sonic Analysis in History,” Rebecca D. Geoffroy-Schwinden, Duke University

10:45 am – 12:15 pm

Liberty Room
Session: Sounding Nations
“Building the Future through the Past: The Revival Movement in Iranian Classical Music and the Reconstruction of National Identity in the 1960s and the 1970s,” Hadi Milanloo, Memorial University of Newfoundland
“Sounding Citizenship in Southern Africa: Malawian Musicians and the Social Worlds of Recording Studios and Music Education Centers,” Richard M. Deja, University of Illinois
“Unity in (Spite of) Diversity: Tensions and Contradictions in Performing Surinamese National Identity,” Corinna S. Campbell, Williams College

"Music" by Flickr user Rich McPeek, CC BY-NC 2.0

“Music” by Flickr user Rich McPeek, CC BY-NC 2.0

Acousmatic Surveillance and Big Data



It's an all too familiar movie trope. A bug hidden in a flower jar. A figure in shadows crouched listening at a door. The tape recording that no one knew existed, revealed at the most decisive of moments. Even the abrupt disconnection of a phone call manages to arouse the suspicion that we are never as alone as we may think. And although surveillance derives its meaning from the Latin "vigilare" (to watch) and the French "sur-" (over), its deep connotations of listening have all but obliterated that distinction.

This installment moves on from cybernetic games to modes of surveillance that work through composition and patterns. Here, Robin James challenges us to consider the unfamiliar resonances produced by our IP addresses, search histories, credit trails, and Facebook posts. How does the NSA transform our data footprints into the sweet, sweet music of surveillance? Shhhhhhhh! Let's listen in. . . -AT

Kate Crawford has argued that there’s a “big metaphor gap in how we describe algorithmic filtering.” Specifically, its “emergent qualities” are particularly difficult to capture. This process, algorithmic dataveillance, finds and tracks dynamic patterns of relationships amongst otherwise unrelated material. I think that acoustics can fill the metaphor gap Crawford identifies. Because of its focus on identifying emergent patterns within a structure of data, rather than its cause or source, algorithmic dataveillance isn’t panoptic, but acousmatic. Algorithmic dataveillance is acousmatic because it does not observe identifiable subjects, but ambient data environments, and it “listens” for harmonics to emerge as variously-combined data points fall into and out of phase/statistical correlation.

Dataveillance defines the form of surveillance that saturates our consumer information society. As this promotional Intel video explains, big data transcends the limits of human perception and cognition – it sees connections we cannot. And, as is the case with all superpowers, this is both a blessing and a curse. Although I appreciate emails from my local supermarket that remind me when my favorite bottle of wine is on sale, data profiling can have much more drastic and far-reaching effects. As Frank Pasquale has argued, big data can determine access to important resources like jobs and housing, often in ways that reinforce and deepen social inequities. Dataveillance is an increasingly prominent and powerful tool that determines many of our social relationships.

The term dataveillance was coined in 1988 by Roger Clarke, and refers to “the systematic use of personal data systems in the investigation or monitoring of the actions or communications of one or more persons.” In this context, the person is the object of surveillance and data is the medium through which that surveillance occurs. Writing 20 years later, Michael Zimmer identifies a phase-shift in dataveillance that coincides with the increased popularity and dominance of “user-generated and user-driven Web technologies” (2008). These technologies, found today in big social media, “represent a new and powerful ‘infrastructure of dataveillance,’ which brings about a new kind of panoptic gaze of both users’ online and even their offline activities” (Zimmer 2007). Metadataveillance and algorithmic filtering, however, are not variations on panopticism, but practices modeled—both historically/technologically and metaphorically—on acoustics.

In 2013, Edward Snowden's infamous leaks revealed the nuts and bolts of the National Security Agency's massive dataveillance program. They were collecting data records that, according to the Washington Post, included "e-mails, attachments, address books, calendars, files stored in the cloud, text or audio or video chats and 'metadata' that identify the locations, devices used and other information about a target." The most enduringly controversial aspect of NSA dataveillance programs has been the bulk collection of Americans' data and metadata—in other words, the "big data"-veillance programs.

 

Borrowed from thierry ehrmann @Flickr CC BY.

Instead of intercepting only the communications of known suspects, this big dataveillance collects everything from everyone and mines that data for patterns of suspicious behavior; patterns that are consistent with what algorithms have identified as, say, “terrorism.” As Cory Doctorow writes in BoingBoing, “Since the start of the Snowden story in 2013, the NSA has stressed that while it may intercept nearly every Internet user’s communications, it only ‘targets’ a small fraction of those, whose traffic patterns reveal some basis for suspicion.” “Suspicion,” here, is an emergent property of the dataset, a pattern or signal that becomes legible when you filter communication (meta)data through algorithms designed to hear that signal amidst all the noise.

Hearing a signal from amidst the noise, however, is not sufficient to consider surveillance acousmatic. “Panoptic” modes of listening and hearing, though epitomized by the universal and internalized gaze of the guards in the tower, might also be understood as the universal and internalized ear of the confessor. This is the ear that, for example, listens for conformity between bodily and vocal gender presentation. It is also the ear of audio scrobbling, which, as Calum Marsh has argued, is a confessional, panoptic music listening practice.

Therefore, when President Obama argued that “nobody is listening to your telephone calls,” he was correct. But only insofar as nobody (human or AI) is “listening” in the panoptic sense. The NSA does not listen for the “confessions” of already-identified subjects. For example, this court order to Verizon doesn’t demand recordings of the audio content of the calls, just the metadata. Again, the Washington Post explains:

The data doesn’t include the speech in a phone call or words in an email, but includes almost everything else, including the model of the phone and the “to” and “from” lines in emails. By tracing metadata, investigators can pinpoint a suspect’s location to specific floors of buildings. They can electronically map a person’s contacts, and their contacts’ contacts.

NSA dataveillance listens acousmatically because it hears the patterns of relationships that emerge from various combinations of data—e.g., which people talk and/or meet where and with what regularity. Instead of listening to identifiable subjects, the NSA identifies and tracks emergent properties that are statistically similar to already-identified patterns of “suspicious” behavior. Legally, the NSA is not required to identify a specific subject to surveil; instead they listen for patterns in the ambience. This type of observation is “acousmatic” in the sound studies sense because the sounds/patterns don’t come from one identifiable cause; they are the emergent properties of an aggregate.

Borrowed from david @Flickr CC BY-NC.

Acousmatic listening is a particularly appropriate metaphor for NSA-style dataveillance because the emergent properties (or patterns) of metadata are comparable to harmonics or partials of sound, the resonant frequencies that emerge from a specific combination of primary tones and overtones. If data is like a sound's primary tone, metadata is its overtones. When two or more tones sound simultaneously, harmonics emerge as overtones vibrate with and against one another. In Western music theory, something sounds dissonant and/or out of tune when the harmonics don't vibrate synchronously or proportionally; conversely, tones that are perfectly in tune create consonant harmonics. The NSA is listening for harmonics. They seek metadata that statistically correlates with a pattern (such as "terrorism"), or that is suspiciously out of correlation with a pattern (such as US "citizenship"). Instead of listening to identifiable sources of data, the NSA listens for correlations among data.
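The harmonic metaphor can be made concrete with a deliberately toy sketch (every account name and number below is invented, and this is in no way the NSA's actual algorithm): a filter that never inspects the content of a call, only whether a stream of call-count metadata statistically correlates with a target pattern.

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation between two equal-length series."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

# Invented hourly call counts for three accounts: metadata only, never content.
rng = np.random.default_rng(0)
template = np.array([0, 0, 1, 3, 5, 3, 1, 0], dtype=float)  # the pattern being "listened" for
accounts = {
    "A": rng.poisson(2, 8).astype(float),   # ordinary background noise
    "B": template + rng.poisson(1, 8),      # in phase with the template
    "C": rng.poisson(2, 8).astype(float),
}

# Flag whichever streams resonate with the template, regardless of whose they are.
flags = {name: round(correlation(series, template), 2) for name, series in accounts.items()}
print(flags)  # "B" correlates strongly; "A" and "C" stay in the noise
```

The acousmatic point survives even in the toy: no subject is identified in advance, and "suspicion" is simply whatever falls into statistical phase with the template.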

Both panopticism and acousmaticism are technologies that incite behavior and compel people to act in certain ways. However, they use different methods, which, in turn, incite different behavioral outcomes. Panopticism maximizes efficiency and productivity by compelling conformity to a standard or norm. According to Michel Foucault, the outcome of panoptic surveillance is a society where everyone synchs to an "obligatory rhythm imposed from the outside" (151-2), such as the rhythmic divisions of the clock (150). In other words, panopticism transforms people into interchangeable cogs in an industrial machine. Methodologically, panopticism demands self-monitoring. Foucault emphasizes that panopticism functions most efficiently when the gaze is internalized, when one "assumes responsibility for the constraints of power" and "makes them play…upon himself" (202). Panopticism requires individuals to synchronize themselves with established compulsory patterns.

Acousmaticism, on the other hand, aims for dynamic attunement between subjects and institutions, an attunement that is monitored and maintained by a third party (in this example, the algorithm). For example, Facebook's News Feed algorithm facilitates the mutual adaptation of norms to subjects and subjects to norms. Facebook doesn't care what you like; instead it seeks to transform your online behavior into a form of efficient digital labor. In order to do this, Facebook must adjust, in part, to you. Methodologically, this dynamic attunement is not a practice of internalization; unlike Foucault's panopticism, big dataveillance leverages outsourcing and distribution. There is so much data that no one individual—indeed, no one computer—can process it efficiently and intelligibly. The work of dataveillance is distributed across populations, networks, and institutions, and the surveilled "subject" emerges from that work (for example, Rob Horning's concept of the "data self"). Acousmaticism tunes into the rhythmic patterns that synch up with and amplify its cycles of social, political, and economic reproduction.

Sonic Boom! Borrowed from NASA’s Goddard Space Flight Center @Flickr CC BY.

Unlike panopticism, which uses disciplinary techniques to eliminate noise, acousmaticism uses biopolitical techniques to allow profitable signals to emerge as clearly and frictionlessly as possible amid all the noise (for more on the relation between sound and biopolitics, see my previous SO! essay). Acousmaticism and panopticism are analytically discrete, yet applied in concert. For example, certain tiers of the North Carolina state employee’s health plan require so-called “obese” and tobacco-using members to commit to weight-loss and smoking-cessation programs. If these members are to remain eligible for their selected level of coverage, they must track and report their program-related activities (such as exercise). People who exhibit patterns of behavior that are statistically risky and unprofitable for the insurance company are subject to extra layers of surveillance and discipline. Here, acousmatic techniques regulate the distribution and intensity of panoptic surveillance. To use Nathan Jurgenson’s turn of phrase, acousmaticism determines “for whom” the panoptic gaze matters. To be clear, acousmaticism does not replace panopticism; my claim is more modest. Acousmaticism is an accurate and productive metaphor for theorizing both the aims and methods of big dataveillance, which is, itself, one instrument in today’s broader surveillance ensemble.

Featured image “Big Brother 13/365” by Dennis Skley CC BY-ND.

Robin James is Associate Professor of Philosophy at UNC Charlotte. She is author of two books: Resilience & Melancholy: pop music, feminism, and neoliberalism will be published by Zer0 books this fall, and The Conjectural Body: gender, race and the philosophy of music was published by Lexington Books in 2010. Her work on feminism, race, contemporary continental philosophy, pop music, and sound studies has appeared in The New Inquiry, Hypatia, differences, Contemporary Aesthetics, and the Journal of Popular Music Studies. She is also a digital sound artist and musician. She blogs at its-her-factory.com and is a regular contributor to Cyborgology.

REWIND!…If you liked this post, check out:

“Cremation of the senses in friendly fire”: on sound and biopolitics (via KMFDM & World War Z)–Robin James

The Dark Side of Game Audio: The Sounds of Mimetic Control and Affective Conditioning–Aaron Trammell

Listening to Whisperers: Performance, ASMR Community, and Fetish on YouTube–Joshua Hudelson
