Archive | Disability Studies

Misophonia: Towards a Taxonomy of Annoyance


This is the second post in Sounding Out!’s 4th annual July forum on listening in observation of World Listening Day on July 18th, 2015. World Listening Day is a time to think about the impacts we have on our auditory environments and, in turn, their effects on us. For Sounding Out!, World Listening Day necessitates discussions of the politics of listening and, as Carlo Patrão shares today, an examination of sounds that disturb, annoy, and threaten our mental health and well-being. –Editor-in-Chief JS

An important factor in coming to dislike certain sounds is the extent to which they are considered meaningful. The noise of the roaring sea, for example, is not far from white radio noise (…) We still seek meaning in nature and therefore the roaring of the sea is a blissful sound. –Torben Sangild, The Aesthetics of Noise

When hearing bodily sounds, we often react with discomfort, irritation, or even shame. The sounds of the body remind us of its fallible and vulnerable nature, calling to mind French surgeon René Leriche’s statement that “health is life lived in the silence of the organs” (1936). The mind rests when the inner works of the body are forgotten. Socially, sounds coming from the organic functions of the body such as chewing, lip smacking, breathing, sniffling, coughing, sneezing or slurping are considered annoying and perceived as intrusions. A recent study by Trevor Cox suggests that our reactions of disgust towards sounds of bodily excretions and secretions may be socially-learned and vary according to whether it is considered acceptable or unacceptable to make such sounds in public.


However, for people suffering from a condition called Misophonia, these bodily sounds aren’t simply annoying; rather, they become sudden triggers of aggressive impulses and involuntary fight-or-flight responses. Misophonia, meaning hatred of sound, is a chronic condition characterized by highly negative emotional responses to auditory triggers, including repetitive and social sounds produced by another person, like hearing someone eating an apple, crunching chips, slurping soup from a spoon, or even breathing.

The consequences of Misophonia can be very troublesome, leading to social isolation or the continuous avoidance of certain places and situations such as family dinners, the workplace, and recreational activities like going to the cinema. While the incidence of Misophonia in the population is still under investigation, the fast-growing number of online communities gathered around the dislike of certain sounds may indicate that the condition is more common than previously thought.

But why do people with Misophonia feel such strong reactions to trigger sounds? This fundamental question remains up for debate. Some audiologists suggest these heightened emotional responses can be explained by hyperconnectivity between the auditory, limbic and autonomic nervous systems. However, we continue to lack a comprehensive theoretical model to understand Misophonia, as well as an effective treatment to help sufferers of Misophonia cope with intrusive sound triggers.


The Art of Annoyance: is it possible to reframe misophonic trigger sounds as misophonic music?

Between 1966 and 1967, John Cage and Morton Feldman recorded four open-ended radio conversations, called Radio Happenings (WBAI, NYC). Among many topics, Feldman and Cage address the problem of being constantly intruded upon by unpleasant sounds. Feldman narrates his annoyance with the sounds blasted from several radios on a trip to the beach. Cage’s commentary on the growing annoyance of his friend reveals a shift of perception in dealing with unwanted sounds:

Well, you know how I adjusted to that problem of the radio in the environment (…) I simply made a piece using radios. Now, whenever I hear radios – even a single one, not just twelve at a time, as you must have heard on the beach, at least – I think, “Well, they’re just playing my piece.” –John Cage, Radio Happenings

Cage proposes a remedy via appropriation: once environmental intrusions are absorbed into the work, the negative emotional charge associated with them is neutralized. Sound intrusions no longer exist as absolute external entities forcing their way in. They become part of the self. Ultimately, there are no sonic intrusions, as the entire field of sound becomes desirable for composition.

Cage’s immersive compositional strategy anticipated an important technique for building resilience toward aversive sound: exposure-based cognitive-behavioral therapy, which proposes a gradual immersion in trigger sounds. And I suggest we can mine the history of avant-garde practice to productively further the idea of immersion; in the realms of sound poetry, utterance-based music, Fluxus events, and many other sound art practices, bodily sounds have consistently been exalted as a source of composition and performance. Much like Cage did with what he perceived as intrusive radio sounds, by performing chewing, coughs, slurps and hiccups, assembling snores and nose whistles, and singing the poetics of throat clearing, we may be able to elevate our body awareness and challenge the way we perceive unwanted sounds. In what follows, I sample these works with an ear toward misophonia, discussing their interventions in the often jarring world of everyday irritation.

Oral Oddities

As pointed out by Nancy Perloff, while the avant-garde progressively expanded to incorporate the entire scope of sound into composition, sound poetry followed a similar course by playing with the non-semantic properties of language and exploring new vocal techniques. The Russian avant-garde (zaum), the Italian futurists (parole in libertà), and the German Dada poets (Hugo Ball’s Verse ohne Worte) built the foundations of a new oral hyper-expression of the body through moans, clicks, hisses, hums, whooshes, whizzes, spits, and breaths.

Henri Chopin, Les Pirouettes Vocales Pour Les Pirouettements Vocaux

Sound poets like Henri Chopin created uncanny sonic textures using only ‘vocal micro-particles’, revealing a sounding body that can be violent and intrusive. François Dufrêne and Gil J Wolman brought forward rawer, more glottal performances.

Bridging the gap between Schwitters’s Dada-constructivism and a contemporary approach to sound poetry, Jaap Blonk’s inventive vocal performances cover a wide range of mouth sounds. In the same vein, Paul Dutton explores the limits of his voice, glottis, tongue, lips, and nose as the medium for compositions — as can be heard on the record Mouth Pieces: Solo Soundsinging.

Paul Dutton, Lips Is, Mouth Pieces: Solo Soundsinging

Fluxus: Eat, Chew, Burp, Cough, Perform!

The Event is a metarealistic trigger: it makes the viewer’s or user’s experience special. (…) Rather than convey their own emotional world abstractly, Fluxus artists directed their audiences’ attention to concrete everyday stuff addressing aesthetic metareality in the broadest sense. –Hannah Higgins, Fluxus Experience

The emergence of Fluxus is strongly linked to Cage’s 1957-59 class at the New School for Social Research in NYC. George Brecht’s Event Score was one of the best-known innovations to emerge from these classes. The Event Score was a performance technique drawn from short instructions that framed everyday actions as minimal performances. Daily acts like chewing, coughing, licking, eating, or preparing food were considered ready-made works of art in themselves. Many Fluxus artists such as Shigeko Kubota, Yoko Ono, Mieko Shiomi, and Alison Knowles saw these activities as forms of social music.

For instance, Alison Knowles produced several famous Fluxus food events such as Make a Salad (1962), Make a Soup (1962), and The Identical Lunch (1967-73).

 

Also, Mieko Shiomi‘s Shadow Piece No. 3 calls attention to the sound of amplified mastication, while Philip Corner’s Carrot Chew Performance centers solely on the act of chewing a carrot.

Philip Corner, Carrot Chew Performance, Tellus #24

In Nivea Cream Piece (1962), Alison Knowles invites the performers to rub their hands with cream in front of a microphone, producing a deluge of squeezing sounds:

Alison Knowles – Nivea Cream Piece (1962) – for Oscar (Emmett) Williams

Coughing is a form of love.

In 1961, the Fluxus artist Yoko Ono composed a 32-minute, 31-second audio recording called Cough Piece, a precursor to her instruction Keep coughing a year (Grapefruit). In the recording, the sound of Ono’s cough emerges periodically from the indistinct background noise. Cough Piece plays with the concept of time, prolonging the duration of an activity beyond what is considered socially acceptable. Listening to the piece, we are brought close to her body’s automatic reflexes, pulling back the veil on an indistinct inner turmoil. Coughing can be a bodily response to an irritating tickling feeling, troubled breathing, a sore throat, or a reaction to foreign particles or microbes. In response, coughing is a way of clearing, a freeing re-flux of air, a way out. Coughing is a form of love.

Yoko Ono – Cough Piece


Sonic Skin

In The Ego and the Id (1923), Sigmund Freud stated that the ego is ultimately derived from bodily sensations. The psychoanalyst Didier Anzieu expanded this idea by suggesting that early experiences of sound are crucial to consolidating the infant’s ego. The bath of sounds surrounding the child, created by the parents’ voices and their soothing sounds, provides a sonorous envelope or audio-phonic skin that protects the child against ego-assailing noises and helps create the first boundaries between the inside and the external world. The lack of a satisfactory sound envelope may compromise the development of a proper sense of self, leaving it vulnerable to invasions from outside.

It’s no surprise that conditions like Misophonia exist and are very common among us, considering how important our early exposure to sound is in building our sense of self and our sensory limits. For Misophonics, the everyday sounds we make without even thinking about them can be the source of a fractured and disruptive experience that we should not dismiss as the overreactions of a sensitive person. During the month we observe World Listening Day, our discourse usually praises the pleasures of listening and tends to focus on the sounds that soothe rather than annoy. However, conditions like Misophonia show us that there is much more that needs to be said on the subject of unpleasant sound experience. I can’t help but notice a disconnect between the vast exploration of annoying and irritating sounds in the avant-garde and the critical discourse in our sound communities that is dominated by the pleasures of listening. Cage’s call to embrace intrusive sounds urges us to consider all sounds regardless of where they fall on the spectrum of our emotions. For all of us who would consider ourselves philophonics, let’s create a critical discourse that addresses the struggles of listening as much as its pleasures.

Thanks to Jennifer Stoever for the thoughtful suggestions.

Carlo Patrão is a Portuguese radio artist and producer of the show Zepelim. His radio work began as a member of the Portuguese freeform station Radio Universidade de Coimbra (RUC). In his pieces, he aims to explore the diverse possibilities of radiophonic space through the medium of sound collage. He has participated in projects like Basic.fm, Radio Boredcast, and his work has been featured in several international sound festivals and has also been commissioned by Radio Arts (UK). He is currently working on a radio show for the Portuguese national public radio station RTP. In addition to his work in radio, he has a master’s in clinical psychology.

REWIND! . . . If you liked this post, you may also dig:

Optophones and Musical Print– Mara Mills

Optophones and Musical Print

The word type, as scanned by the optophone.

From E.E. Fournier d’Albe, The Moon Element (New York: D. Appleton & Company, 1924), 141.

 

In 1912, British physicist Edmund Fournier d’Albe built a device that he called the optophone, which converted light into tones. The first model—“the exploring optophone”—was meant to be a travel aid; it converted light into a sound of analogous intensity. A subsequent model, “the reading optophone,” scanned print using lamp-light separated into beams by a perforated disk. The pattern of light reflected back from a given character triggered a corresponding set of tones in a telephone receiver. d’Albe initially worked with 8 beams, producing 8 tones based on a diatonic scale. He settled on 5 notes: lower G, and then middle C, D, E and G. (Sol, do, re, mi, sol.) The optophone became known as a “musical print” machine. It was popularized by Mary Jameson, a blind student who achieved reading speeds of 60 words per minute.
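The tonal mapping is simple enough to sketch in a few lines of code. The snippet below is a rough, hypothetical simulation (assuming NumPy; it is not a reconstruction of d’Albe’s actual circuitry): each of the five beams is assigned one of the notes lower G, middle C, D, E, and G, and each scanned column of print becomes a chord built from the beams that meet ink. Whether ink sounded or silenced a tone depended on how the instrument was configured; this sketch uses a “black-sounding” convention.

```python
import numpy as np

SR = 44100  # audio sample rate (Hz)

# Approximate frequencies for d'Albe's five notes:
# lower G, then middle C, D, E, G (sol, do, re, mi, sol).
NOTE_FREQS = [196.0, 261.6, 293.7, 329.6, 392.0]

def column_to_tones(column, duration=0.05):
    """Render one scanned 'column' of print as a chord.

    `column` is a list of five booleans, one per scanning beam,
    True where the beam hits ink.  (Whether ink sounds or silences
    a tone depended on the instrument's configuration; this sketch
    sounds the tones where there is ink.)
    """
    t = np.linspace(0, duration, int(SR * duration), endpoint=False)
    chord = np.zeros_like(t)
    for beam_hits_ink, freq in zip(column, NOTE_FREQS):
        if beam_hits_ink:
            chord += np.sin(2 * np.pi * freq * t)
    return chord

# A letter becomes a sequence of columns; scanning left to right
# strings the chords together into the optophone's "musical print."
letter = [[False, True, True, True, False],   # hypothetical ink pattern
          [True, False, False, False, True],
          [False, True, True, True, False]]
signal = np.concatenate([column_to_tones(c) for c in letter])
```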

Photograph of the optophone, an early scanner with a rounded glass bookrest.

Reading Optophone, held at Blind Veterans UK (formerly St. Dunstan’s). Photographed by the author. With thanks to Robert Baker for helping me search through the storeroom to locate this item.

Scientific illustration of the optophone, showing a book on the bookrest and a pair of headphones for listening to the tonal output.

Schematic of optophone from Vetenskapen och livet (1922)

In the field of media studies, the optophone has become renowned through its imaginary repurposings by a number of modernist artists. For one thing, the optophone finds brief mention in Finnegans Wake. In turn, Marshall McLuhan credited James Joyce’s novel with being a new medium, turning text into sound. In “New Media as Political Forms,” McLuhan says that Joyce’s own “optophone principle” releases us from “the metallic and rectilinear embrace of the printed page.” More familiar within media studies today, Dada artist Raoul Hausmann patented (London 1935), but did not successfully build, an optophone presumably inspired by d’Albe’s model, which he hoped would be employed in audiovisual performances. This optophone was meant to convert sound into light as well as the reverse. It was part of a broader contemporary impulse to produce color music and synaesthetic art. Hausmann also wrote optophonetic poetry, based on the sounds and rhythms of “pure phonemes” and non-linguistic noises. In response, Francis Picabia painted two optophone portraits in 1921 and 1922. Optophone I, below, is composed of lines that might be sound waves, with a pattern that disorders vision.

Francis Picabia's Optophone I, a series of concentric black circles with a female figure at the center.

Francis Picabia, Optophone I (1922)

Theorists have repeatedly located Hausmann’s device at the origin of new media. Authors in the Audiovisuology, Media Archaeology, and Beyond Art: A Third Culture anthologies credit Hausmann’s optophone with bringing-into-being cybernetics, digitization, the CD-ROM, audiovisual experiments in video art, and “primitive computers.” It seems to have escaped notice that d’Albe also used the optophone to create electrical music. In his book, The Moon Element, he writes:

Needless to say, any succession or combination of musical notes can be picked out by properly arranged transparencies, and I have succeeded in transcribing a number of musical compositions in this manner, which are, of course, only audible in the telephone. These notes, in the absence of all other sounding mechanism, are particularly pure and free from overtones. Indeed, a musical optophone worked by this intermittent light, has been arranged by means of a simple keyboard, and some very pleasant effects may thus be obtained, more especially as the loudness and duration of the different notes is under very complete and separate control.

E.E. Fournier d’Albe, The Moon Element (New York: D. Appleton & Company, 1924), 107.

d’Albe’s device is typically portrayed as a historical cul-de-sac, with few users and no real technical influence. Yet optophones continued to be designed for blind people throughout the twentieth century; at least one model has users even today. Musical print machines, or “direct translators,” co-existed with more complex OCR devices—optical character recognizers that converted printed words into synthetic speech. Both types of reading machine contributed to today’s procedures for scanning and document digitization. Arguably, reading optophones intervened more profoundly into the order of print than did Hausmann’s synaesthetic machine: they not only translated between the senses, they introduced a new symbolic system by which to read. Like braille, later vibrating models proposed that the skin could also read.

In December 1922, the Optophone was brought to the United States from the United Kingdom for a demonstration before a number of educators who worked with blind children; only two schools ordered the device. Reading machine development accelerated in the U.S. around World War II. In his position as chair of the National Defense Research Committee, Vannevar Bush established a Committee on Sensory Devices in 1944, largely for the purpose of rehabilitating blind soldiers. The other options for reading—braille and Talking Books—were relatively scarce and had a high cost of production. Reading machines promised to give blind readers access to magazines and ephemeral print (recipes, signs, mail), which was arguably more important than access to books.

Piechowski, wearing a suit, scans the pen of the A-2 reader over a document.

Joe Piechowski with the A-2 reader. Courtesy of Rob Flory.

At RCA (Radio Corporation of America), the television innovator Vladimir Zworykin became involved with this project. Zworykin had visited Fournier d’Albe in London in the 19-teens and seen a demonstration of the optophone. Working with Les Flory and Winthrop Pike, Zworykin built an initial machine known as the A-2 that operated on the same principles, but used a different mechanism for scanning—an electric stylus, which was publicized as “the first pen that reads.” Following the trail of citations for RCA’s “Reading Aid for the Blind” patent (US 2420716A, filed 1944), it is clear that the “pen” became an aid in domains far afield from blindness. It was repurposed as an optical probe for measuring the oxygen content of blood (1958); an “optical system for facsimile scanners” (1972); and, in a patent awarded to Burroughs Corporation in 1964, a light gun. This gun, in turn, found its way into the handheld controls for the first home video game system, produced by Sanders Associates.

The A-2 optophone was tested on three blind research subjects, including ham radio enthusiast Joe Piechowski, who was more of a technical collaborator. According to the reports RCA submitted to the CSD, these readers were able to correlate the “chirping” or “tweeting” sounds of the machine with letters “at random with about eighty percent accuracy” after 60 hours of practice. Close spacing on a printed page made it difficult to differentiate between letters; readers also had difficulty moving the stylus at a steady pace and in a straight line. Piechowski achieved reading speeds of 20 words per minute, which RCA deemed too slow.

Attempts were made to incorporate “human factors” and create a more efficient tonal code, to reduce reading time as well as learning time and confusion between letters. One alternative auditory display was known as the compressed optophone. Rather than generate multiple tones or chords for a single printed letter, which was highly redundant and confusing to the ear, the compressed version identified only certain features of a printed letter, such as the presence of an ascender or descender. Below is a comparison between the tones of the original optophone and the compressed version, recorded by physicist Patrick Nye in 1965. The following eight lowercase letters make up the source material: f, i, k, j, p, q, r, z.

Original record in the author’s possession. With thanks to Elaine Nye, who generously tracked down two of her personal copies at the author’s request. The second copy is now held at Haskins Laboratories.

An image of the letter r as scanned by the optophone and compressed optophone.

From Patrick Nye, “An Investigation of Audio Outputs for a Reading Machine,” AFB Research Bulletin (July 1965): 30.
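To make the contrast concrete, here is a toy illustration (my own sketch, not Nye’s actual display, and the feature table is only approximate): where the original optophone produced a chord for every scanned column, a compressed display emits just one short sound per letter, encoding only coarse features such as the presence of an ascender or descender.

```python
import numpy as np

SR = 44100
BASE, ASCENDER, DESCENDER = 440.0, 880.0, 220.0  # illustrative tones only

# Toy feature table for the eight letters in Nye's 1965 recording.
FEATURES = {
    "f": {"ascender"},  "i": set(),          "k": {"ascender"},
    "j": {"descender"}, "p": {"descender"},  "q": {"descender"},
    "r": set(),         "z": set(),
}

def compressed_tone(letter, duration=0.2):
    """One short sound per letter: a base tone, plus a higher tone if
    the letter has an ascender and a lower tone if it has a descender --
    far less information per letter than a column-by-column chord
    sequence, and far easier on the ear."""
    t = np.linspace(0, duration, int(SR * duration), endpoint=False)
    out = np.sin(2 * np.pi * BASE * t)
    if "ascender" in FEATURES[letter]:
        out += np.sin(2 * np.pi * ASCENDER * t)
    if "descender" in FEATURES[letter]:
        out += np.sin(2 * np.pi * DESCENDER * t)
    return out

word = np.concatenate([compressed_tone(c) for c in "fizz" if c in FEATURES])
```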

 

Because of the seeming limitations of tonal reading, RCA engineers redirected their research to add character recognition to the scanning process. This was controversial: direct translators like the optophone were perceived as too difficult because they required blind people to do something akin to learning to read print—learning a symbolic tonal or tactile code. At an earlier moment, braille had been critiqued on similar grounds; many in the blind community have argued that mainstream anxieties about braille sprang from its symbolic difference. Speed, moreover, is relative. Reading machine users protested that direct translators like the optophone were inexpensive to build and already available—why wait for the refinement of OCR and synthetic speech? Nevertheless, between November 1946 and May 1947, Zworykin, Flory, and Pike worked on a prototype “letter reading machine,” today widely considered to be the first successful example of optical character recognition (OCR). Before reliable synthetic speech, this device spelled out words letter by letter using tape recordings. The Letter-Reader was too massive and expensive for personal use, however. It also had an operating speed of 20 words per minute—thus it was hardly an improvement over the A-2 translator.

Haskins Laboratories, another affiliate of the Committee on Sensory Devices, began working on the reading machine problem around the same time, ultimately completing an enormous amount of research into synthetic speech and—as argued by Donald Shankweiler and Carol Fowler—the “speech code” itself. In the 1940s, before workable text-to-speech, researchers at Haskins wanted to determine whether tones or artificial phonemes (“speech-like speech”) were easier to read by ear. They developed a “machine dialect of English,” named wuhzi: “a transliteration of written English which preserved the phonetic patterns of the words.” An example can be played below. The eight source words are: With, Will, Were, From, Been, Have, This, That.

Original record in the author’s possession. From Patrick Nye, “An Investigation of Audio Outputs for a Reading Machine” (1965). With thanks to Elaine Nye.

Based on the results of tests with several human subjects, the Haskins researchers concluded that aural reading via speech-like sounds was necessarily faster than reading musical tones. Like the RCA engineers, they felt that a requirement of these machines should be a fast rate of reading. Minimally, they felt that reading speed should keep pace with rapid speech, at about 200 words per minute.

Funded by the Veterans Administration, members of Mauch Laboratories in Ohio worked on both musical optophones and spelled-speech recognition machines from the 1950s into the 1970s. One of their many devices, the Visotactor, was a direct-translator with vibro-tactile output for four fingers. Another, the Visotoner, was a portable nine-channel optophone. All of the Mauch machines were tested by Harvey Lauer, a technology transfer specialist for the Veterans Administration for over thirty years, himself blind. Below is an excerpt from a Visotoner demonstration, recorded by Lauer in 1971.

Visotoner demonstration. Original 7” open reel tape in author’s possession. With thanks to Harvey Lauer for sharing items from his impressive collection and for collaborating with the author over many years.

Lauer's fingers are pictured in the finger-rests of the Visotactor, scanning a document.

Harvey Lauer reading with the Visotactor, a text-to-tactile translator, 1977.

Later on the same tape, Lauer discusses using the Visotoner to read mail, identify currency, check over his own typing, and read printed charts or graphics. He achieved reading speeds of 40 words per minute with the device. Lauer has also told me that he prefers the sound of the Visotoner to that of other optophone models—he compares its sound to Debussy, or the music for dream sequences in films.

Mauch also developed a spelled speech OCR machine called the Cognodictor, which was similar to the RCA model but made use of synthetic speech. In the recording below, Lauer demonstrates this device by reading a print-out about IBM fonts. He simultaneously reads the document with the Visotoner, which reveals glitches in the Cognodictor’s spelling.

Original 7” open reel tape in the author’s possession. With thanks to Harvey Lauer.

A hand uses the metal probe of the Cognodictor to scan a typed document.

The Cognodictor. Glendon Smith and Hans Mauch, “Research and Development in the Field of Reading Machines for the Blind,” Bulletin of Prosthetics Research (Spring 1977): 65.

In 1972, at the request of Lauer and other blind reading machine users, Mauch assembled a stereo-optophone with ten channels, called the Stereotoner. This device was distributed through the VA but never marketed, and most of the documentation exists in audio format, specifically in sets of training tapes that were made for blinded veterans who were the test subjects. Some promotional materials, such as the short video below, were recorded for sighted audiences—presumably teachers, rehabilitation specialists, or funding agencies.

Mauch Stereo Toner from Sounding Out! on Vimeo.

Video courtesy of Harvey Lauer.

Mary Jameson corresponded with Lauer about the stereotoner, via tape and braille, in the 1970s. In the braille letter pictured below she comments, “I think that stereotoner signals are the clearest I have heard.”

Scan of a braille letter from Jameson to Lauer.

Letter courtesy of Harvey Lauer. Transcribed by Shafeka Hashash.

In 1973, with the marketing of the Kurzweil Reader, funding for direct translation optophones ceased. The Kurzweil Reader was advertised as the first machine capable of multi-font OCR; it was made up of a digital computer and flatbed scanner and it could recognize a relatively large number of typefaces. Kurzweil recalls in his book The Age of Spiritual Machines that this technology quickly transferred to Lexis-Nexis as a way to retrieve information from scanned documents. As Lauer explained to me, the abandonment of optophones was a serious problem for people with print disabilities: the Kurzweil Readers were expensive ($10,000-$50,000 each); early models were not portable and were mostly purchased by libraries. Despite being advertised as omnifont readers, they could not in fact recognize most printed material. The very fact of captchas speaks to the continued failures of perfect character recognition by machines. And, as the “familiarization tapes” distributed to blind readers indicate, the early synthetic speech interface was not transparent—training was required to use the Kurzweil machines.

Original cassette in the author’s possession. 

A young Kurzweil stands by his reading machine, demonstrated by Jernigan, who is seated.

Raymond Kurzweil and Kenneth Jernigan with the Kurzweil Reading Machine (NFB, 1977). Courtesy National Federation of the Blind.

Lauer always felt that the ideal reading machine should have both talking OCR and direct-translation capabilities, the latter being used to get a sense of the non-text items on a printed page, or to “preview material and read unusual and degraded print.” Yet the long history of the optophone demonstrates that certain styles of decoding have been more easily naturalized than others—and symbols have increasingly been favored if they bear a close relation to conventional print or speech. Finally, as computers became widely available, the focus for blind readers shifted, as Lauer puts it, “from reading print to gaining access to computers.” Today, many electronic documents continue to be produced without OCR, and thus cannot be translated by screen readers; graphical displays and videos are largely inaccessible; and portable scanners are far from universal, leaving most “ephemeral” print still unreadable.

Mara Mills is an Assistant Professor of Media, Culture, and Communication at New York University, working at the intersection of disability studies and media studies. She is currently completing a book titled On the Phone: Deafness and Communication Engineering. Articles from this project can be found in Social Text, differences, the IEEE Annals of the History of Computing, and The Oxford Handbook of Sound Studies. Her second book project, Print Disability and New Reading Formats, examines the reformatting of print over the course of the past century by blind and other print disabled readers, with a focus on Talking Books and electronic reading machines. This research is supported by NSF Award #1354297.

Sounds of Science: The Mystique of Sonification


Welcome to the final installment of Hearing the UnHeard, Sounding Out!’s series on what we don’t hear and how this unheard world affects us. The series started out with my post on hearing, large and small, continued with a piece by China Blue on the sounds of catastrophic impacts, and Milton Garcés’s piece on the infrasonic world of volcanoes. To cap it all off, we introduce The Sounds of Science by professor, cellist, and interactive media expert Margaret Schedel.

Dr. Schedel is an Associate Professor of Composition and Computer Music at Stony Brook University. Through her work, she explores the relatively new field of data sonification, generating new ways to perceive and interact with information through the use of sound. While everyone is familiar with infographics—the graphs and images used to convey complex information—her work explores how we can expand our understanding of even complex scientific information by using our fastest and most emotionally compelling sense: hearing.

– Guest Editor Seth Horowitz

With the invention of digital sound, the number of scientific experiments using sound has skyrocketed in the 21st century, and as Sounding Out! readers know, sonification has started to enter the public consciousness as a new and refreshing alternative modality for exploring and understanding many kinds of datasets emerging from research into everything from deep space to the underground. We seem to be in a moment in which “science that sounds” has a special magic, a mystique that relies to some extent on misunderstandings in popular awareness about the processes and potentials of that alternative modality.

For one thing, using sound to understand scientific phenomena is not actually new. Diarist Samuel Pepys wrote about meeting scientist Robert Hooke in 1666 that “he is able to tell how many strokes a fly makes with her wings (those flies that hum in their flying) by the note that it answers to in musique during their flying.” Unfortunately Hooke never published his findings, leading researchers to speculate on his methods. One popular theory is that he tied strings of varying lengths between a fly and an ear trumpet, recognizing that sympathetic resonance would cause the correct length of string to vibrate, thus allowing him to calculate the frequency. Even Galileo used sound, demonstrating the constant acceleration of a ball due to gravity by using an inclined plane with thin moveable frets. By adjusting the placement of the frets until the clicks created an even tempo, he was able to come up with a mathematical equation describing how time and distance relate when an object falls.
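The arithmetic behind Galileo’s trick is easy to reproduce. Under constant acceleration, distance grows with the square of time, so frets that click at an even tempo must sit at positions proportional to 1, 4, 9, 16, and so on; equivalently, the gaps between successive frets grow as the odd numbers 3, 5, 7, 9. A quick check (a modern illustration, obviously not Galileo’s own method of calculation):

```python
# Constant acceleration: s = 0.5 * a * t**2, so an even click tempo
# requires fret positions proportional to the squares 1, 4, 9, 16, ...
a = 2.0                                            # acceleration along the plane (m/s^2), arbitrary
positions = [0.05 * n**2 for n in range(1, 9)]     # fret positions in metres

click_times = [(2 * s / a) ** 0.5 for s in positions]
intervals = [t2 - t1 for t1, t2 in zip(click_times, click_times[1:])]

print(intervals)   # all (nearly) equal: the clicks form an even tempo
# Gaps between successive frets grow as the odd numbers 3, 5, 7, 9, ...
print([round((p2 - p1) / positions[0], 1) for p1, p2 in zip(positions, positions[1:])])
```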


Illustration from Robert Hooke’s Micrographia (1665)

There have also been other scientific advances using sound in the more recent past. The stethoscope was invented in 1816 for auscultation, listening to the sounds of the body. It was later applied to machines—listening for the operation of technological gear. Underwater sonar was patented in 1913 and is still used to navigate and communicate using hydroacoustic phenomena. The Geiger counter was developed in 1928 using principles discovered in 1908; it is unclear exactly when its distinctive sound was added. These are all examples of auditory display [AD]; sonification—generating or manipulating sound by using data—is a subset of AD. As the foreword to The Sonification Handbook states, “[Since 1992] Technologies that support AD have matured. AD has been integrated into significant (read “funded” and “respectable”) research initiatives. Some forward thinking universities and research centers have established ongoing AD programs. And the great need to involve the entire human perceptual system in understanding complex data, monitoring processes, and providing effective interfaces has persisted and increased” (Thomas Hermann, Andy Hunt, John G. Neuhoff, The Sonification Handbook, iii).

Sonification clearly enables scientists, musicians and the public to interact with data in a very different way, particularly compared to the more numerous techniques involving vision. Indeed, because hearing functions quite differently than vision, sonification offers an alternative kind of understanding of data (sometimes more accurate), which would not be possible using eyes alone. Hearing is multi-directional—our ears don’t have to be pointing at a sound source in order to sense it. Furthermore, the frequency response of our hearing is thousands of times more accurate than our vision. In order to reproduce a moving image the sampling rate (called frame-rate) for film is 24 frames per second, while audio has to be sampled at 44,100 frames per second in order to accurately reproduce sound. In addition, aural perception works on simultaneous time scales—we can take in multiple streams of audio data at once at many different dynamics, while our pupils dilate and contract, limiting how much visual data we can absorb at a single time. Our ears are also amazing at detecting regular patterns over time in data; we hear these patterns as frequency, harmonic relationships, and timbre.

Image credit: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.


But hearing isn’t simple, either. In the current fascination with sonification, the fact that aesthetic decisions must be made in order to translate data into the auditory domain can be obscured. Headlines such as “Here’s What the Higgs Boson Sounds Like” are much sexier than headlines such as “Here is What One Possible Mapping of Some of the Data We Have Collected from a Scientific Measuring Instrument (which itself has inaccuracies) Into Sound.” To illustrate the complexity of these aesthetic decisions, which are always interior to the sonification process, I focus here on how my collaborators and I have been using sound to understand many kinds of scientific data.

My husband, Kevin Yager, a staff scientist at Brookhaven National Laboratory, works at the Center for Functional Nanomaterials using scattering data from x-rays to probe the structure of matter. One night I asked him how exactly the science of x-ray scattering works. He explained that X-rays “scatter” off of all the atoms/particles in the sample and the intensity is measured by a detector. He can then calculate the structure of the material, using the Fast Fourier Transform (FFT) algorithm. He started to explain FFT to me, but I interrupted him because I use FFT all the time in computer music. The same algorithm he uses to determine the structure of matter, musicians use to separate frequency content from time. When I was researching this post, I found a site for computer music which actually discusses x-ray scattering as a precursor for FFT used in sonic applications.
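The overlap is literal: the same library call serves both jobs. A minimal sketch, assuming NumPy (the variable names are mine, not Kevin’s analysis code): a 2-D FFT relates a real-space density to a scattering pattern, while a 1-D FFT pulls the frequency content out of a slice of audio.

```python
import numpy as np

# X-ray scattering: the measured intensity is (proportional to) the squared
# magnitude of the Fourier transform of the real-space density.
density = np.random.rand(256, 256)          # stand-in for a real-space structure
scattering_intensity = np.abs(np.fft.fft2(density)) ** 2

# Computer music: the same transform separates frequency content from time.
sr = 44100
t = np.arange(sr) / sr
audio_frame = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
spectrum = np.abs(np.fft.rfft(audio_frame))
peak_hz = np.argmax(spectrum) * sr / len(audio_frame)
print(peak_hz)   # ~440: the strongest frequency in the frame
```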

To date, most sonifications have used data that changes over time – a fly’s wings flapping, a heartbeat, a radiation signature. Except in special cases, Kevin’s data does not exist in time – it is a single snapshot. But because data from x-ray scattering is a Fourier transform of the real-space density distribution, we could use additive synthesis, with multiple simultaneous sine waves, to represent different spatial modes. Using this method, we swept through his data radially, like a clock hand, making timbre-based sonifications by synthesizing sine waves with loudness based on the intensity of the scattering data and frequency based on position.

We played a lot with the settings of the additive synthesis, including the length of the sound, the highest frequency, and even the number of frequency bins (going back to the clock metaphor – pretend the clock hand is a ruler – the number of frequency bins would be the number of demarcations on the ruler), arriving eventually at a set of optimized variables.
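A stripped-down version of that mapping might look like the following, assuming the detector image is already loaded as a 2-D NumPy array (function and parameter names are illustrative, not our production settings): sweep an angular “clock hand” around the image, sample the intensity at a fixed number of radii along each ray, and drive one sine oscillator per radial bin, with loudness taken from intensity and frequency from radial position.

```python
import numpy as np

def sonify_scattering(image, n_bins=50, duration=10.0, sr=44100,
                      f_lo=100.0, f_hi=4000.0, n_angles=360):
    """Radially sweep a 2-D scattering image and render it with additive
    synthesis: one sine per radial bin, amplitude from intensity,
    frequency from radius."""
    h, w = image.shape
    cy, cx = h / 2.0, w / 2.0
    max_r = min(cy, cx) - 1
    radii = np.linspace(1, max_r, n_bins)
    freqs = np.linspace(f_lo, f_hi, n_bins)          # radius -> frequency

    samples_per_angle = int(duration * sr / n_angles)
    t = np.arange(samples_per_angle) / sr
    out, phase = [], np.zeros(n_bins)
    for k in range(n_angles):                        # the "clock hand"
        theta = 2 * np.pi * k / n_angles
        ys = (cy + radii * np.sin(theta)).astype(int)
        xs = (cx + radii * np.cos(theta)).astype(int)
        amps = image[ys, xs]                         # intensity -> loudness
        amps = amps / (amps.max() + 1e-12)
        chunk = np.zeros(samples_per_angle)
        for i in range(n_bins):                      # one oscillator per bin
            chunk += amps[i] * np.sin(phase[i] + 2 * np.pi * freqs[i] * t)
            phase[i] += 2 * np.pi * freqs[i] * samples_per_angle / sr
        out.append(chunk / n_bins)
    return np.concatenate(out)

# e.g. audio = sonify_scattering(np.load("detector_image.npy"), n_bins=50)
```

Here the n_bins argument is the “number of demarcations on the ruler” described above; the tracks that follow vary exactly that setting.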

Here is one version of the track we created using 10 frequency bins:

.

Here is one we created using 2000:

.

And here is one we created using 50 frequency bins, which we settled on:

.

On a software synthesizer this would be like the default setting. In the future we hope to have an interactive graphical user interface where sliders control these variables, just as a musician tweaks the sound of a synth, so scientists can bring out, or mask, aspects of the data.

To hear what that would be like, here are a few tracks that vary length:

.

.

.

Finally, here is a track we created using different mappings of frequency and intensity:

.

Having these sliders would reinforce to the scientists that we are not creating “the sound of a metallic alloy,” we are creating one sonic representation of the data from the metallic alloy.

It is interesting that such a representation can be vital to scientists. At first, my husband went along with this sonification project more as a thought experiment than as something he thought would actually be useful in the lab—until he heard something distinct in one of those sounds, suggesting that there was a misaligned sample. Once Kevin heard that glitched sound (you can hear it in the video above), he was convinced that sonification was a useful tool for his lab. He and his colleagues are dealing with measurements 1/25,000th the width of a human hair, aiming an X-ray through twenty pieces of equipment to get the beam focused just right. If any piece of equipment is out of kilter, the data can’t be collected. This is where our ears’ non-directionality is useful. The scientist can be working on his/her computer and, using ambient sound, know when a sample is misaligned.


It remains to be seen/heard whether the sonifications will be useful for actually understanding the material structures. We are currently running an experiment using Mechanical Turk to determine whether this kind of multi-modal display (using vision and audio) is actually helpful. Basically, we are training one group of people on just the images of the scattering data and testing how well they do, and training another group on the images plus the sonification and testing how well they do.

I’m also working with collaborators at Stony Brook University on sonification of data. In one experiment we are using ambisonic (3-dimensional) sound to create a sonic map of the brain to understand drug addiction. Standing in the middle of the ambisonic cube, we hope to find relationships between voxels—cubes of brain tissue, analogous to pixels. When neurons fire in areas of the brain simultaneously, there is most likely a causal relationship, which can help scientists decode the brain activity of addiction. Computer vision researchers have been searching for these relationships unsuccessfully; we hope that our sonification will allow us to hear associations between distinct parts of the brain that are not easily recognized with sight. We are hoping to leverage the temporal pattern recognition of our auditory system, but we have been running into problems doing the sonification; each slice of data from the fMRI has about 300,000 data points. We have it working with 3,000 data points, but either our programming needs to get more efficient, or we need a much more powerful computer in order to work with all of the data.
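A much-simplified sketch of the analysis step behind such a map (hypothetical names and thresholds; the real pipeline and the ambisonic rendering are far more involved): correlate voxel time series and keep the strongly co-firing pairs as candidates for spatialized tones.

```python
import numpy as np

# fmri: hypothetical array of shape (n_voxels, n_timepoints),
# e.g. the ~3,000-voxel subset we can currently handle.
# Random data stands in here for a real scan.
rng = np.random.default_rng(0)
fmri = rng.standard_normal((3000, 200))

# Pairwise correlation between voxel time series.
corr = np.corrcoef(fmri)

# Keep strongly correlated voxel pairs (threshold is illustrative);
# these are the pairs that would be assigned tones placed at the
# voxels' positions in the ambisonic field.
thresh = 0.8
mask = np.triu(np.abs(corr) > thresh, k=1)
pairs = np.argwhere(mask)
print(len(pairs), "candidate voxel pairs to sonify")
```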

On another project we are hoping to sonify gait data using smartphones. I’m working with some of my music students and a professor of physical therapy, Lisa Muratori, who works on understanding the underlying mechanisms of mobility problems in Parkinson’s Disease (PD). The physical therapy lab has a digital motion-capture system and a split-belt treadmill for asymmetric stepping—the patients are supported by a harness so they don’t fall. PD is a progressive nervous system disorder characterized by slow movement, rigidity, tremor, and postural instability. Because of degeneration of specific areas of the brain, individuals with PD have difficulty using internally driven cues to initiate and drive movement. However, many studies have demonstrated an almost normal movement pattern when persons with PD are provided external cues, including significant improvements in gait with rhythmic auditory cueing. So far the research with PD and sound has been unidirectional: the patients listen to sound and try to match their gait to the external rhythms of the auditory cues. In our system we will use biofeedback to sonify data from sensors the patients wear and feed error messages back to the patient through music. Eventually we hope that patients will be able to adjust their gait by listening to self-generated musical distortions on a smartphone.
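As a toy sketch of the biofeedback idea—entirely hypothetical, since the actual sensors, features, and musical mapping are still being designed—one could compare left and right step durations and detune a cue tone in proportion to the asymmetry, so that a symmetric gait simply sounds “in tune”:

```python
import numpy as np

SR = 44100
CUE_HZ = 440.0          # reference cue tone

def asymmetry(left_step_s, right_step_s):
    """Symmetry index: 0 for a perfectly symmetric gait."""
    return (left_step_s - right_step_s) / ((left_step_s + right_step_s) / 2)

def feedback_tone(left_step_s, right_step_s, duration=0.5, max_detune=6.0):
    """Detune the cue tone (in semitones) in proportion to asymmetry,
    so the 'error message' is heard as the tone drifting out of tune."""
    semitones = np.clip(asymmetry(left_step_s, right_step_s), -1, 1) * max_detune
    freq = CUE_HZ * 2 ** (semitones / 12)
    t = np.arange(int(SR * duration)) / SR
    return np.sin(2 * np.pi * freq * t)

# e.g. per stride, from hypothetical smartphone/sensor timestamps:
tone = feedback_tone(left_step_s=0.62, right_step_s=0.48)
```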

As sonification becomes more prevalent, it is important to understand that aesthetic decisions are inevitable and even essential in every kind of data representation. We are so accustomed to looking at visual representations of information—from maps to pie charts—that we may forget that these are also arbitrary transcodings. Even a photograph is not an unambiguous record of reality; the mechanics of the camera and artistic choices of the photographer control the representation. So too, in sonification, do we have considerable latitude. Rather than view these ambiguities as a nuisance, we should embrace them as a freedom that allows us to highlight salient features, or uncover previously invisible patterns.

__

Margaret Anne Schedel is a composer and cellist specializing in the creation and performance of ferociously interactive media. She holds a certificate in Deep Listening with Pauline Oliveros and has studied composition with Mara Helmuth, Cort Lippe and McGregor Boyle. She sits on the boards of 60×60 Dance, the BEAM Foundation, Devotion Gallery, the International Computer Music Association, and Organised Sound. She contributed a chapter to the Cambridge Companion to Electronic Music, and is a joint author of Electronic Music published by Cambridge University Press. She recently edited an issue of Organised Sound on sonification. Her research focuses on gesture in music, and the sustainability of technology in art. She ran SUNY’s first Coursera Massive Open Online Course (MOOC) in 2013. As an Associate Professor of Music at Stony Brook University, she serves as Co-Director of Computer Music and is a core faculty member of cDACT, the consortium for digital art, culture and technology.

Featured Image: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.

Research carried out at the Center for Functional Nanomaterials, Brookhaven National Laboratory, is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-AC02-98CH10886.

REWIND! ….. If you liked this post, you might also like:

The Noises of Finance–Nicholas Knouf

Revising the Future of Music Technology–Aaron Trammell

A Brief History of Auto-Tune–Owen Marshall

Pleasure Beats: Using Sound for Experience Enhancement 

"Biophonic Garden" by Flickr user Rene Passet, CC BY-NC-ND 2.0

After a rockin’ (and seriously informative) series of podcasts from Leonard J. Paul, a Drrty South banger dropped by SO! regular Regina Bradley, a screamtastic meditation from Yvon Bonenfant, a heaping plate of food sounds from Steph Ceraso, and crowd chants courtesy of Kariann Goldschmidt’s work on live events in Brazil, our summer Sound and Pleasure series comes to a stirring (and more intimate) conclusion. Tune into Justyna Stasiowska’s frequency below. And thanks for engaging the pleasure principle this summer! –JS, Editor-in-Chief

One of my greatest pleasures is lying in bed, eyes closed and headphones on. I attune to a single stimulus while being enveloped in sound. Using sensory deprivation techniques like blindfolding and isolating headphones is a simple recipe for relaxation, but the website Digital Drugs offers you more. A user can play its mp3 files and be surrounded by an acoustical downpour that intensifies and then develops into gradient waves. The user feels as if caught in a hailstorm, surrounded by constant, gritty aural movement. Transfixed by the feeling of noise, the listener finds the outside indistinguishable from the inside.

Screenshot courtesy of the author


Sold by the i-Doser company, Digital Drugs use mp3 files to deliver binaural beats in order to “simulate a desired experience.” The user manual advises lying in a dark and silent room with headphones on while listening to the recording. Simply purchase the mp3, and fill the prescription by listening. Depending on the user’s needs, the experience can be preprogrammed with a specific scenario. In this way users can condition themselves with Digital Drugs in order to feel a certain way. The user can control the experience by choosing, say, the “student” or “confidence” dose, and by deciding whether they’d like their high to feel like a mild dose of marijuana or an intense dose of cocaine. The receiver is able to perceive every reaction of their body as a drug experience, which they themselves produced. The “dosing” of these aural drugs is restricted by a medical warning, and “dose advisors” are available for consultation.


Screenshot courtesy of the author

Thus, the overall presentation of Digital Drugs resembles a crisscross of medical and narcotic clichés, under the slogan “Binaural Brainwave doses for every imaginable mood.” While researching the phenomenon of Digital Drugs, I have tried not to dismiss it as another gimmick or a new age meditation prop. Rather, I argue that the I-Doser company offers a simulation of a drug experience by using the discourse of psychoactive substances to describe sounds: the user becomes an actor taking part in a performance.

By tracing these strategies on a macro and micro scale, I show a body emerging from a new paradigm of health. I argue that we have become a psychosomatic creature called the inFORMational body: a body that is formed by information, which shapes the practices of health undertaken to feel good and which forms us. This body is networked, much like a fractal, and connects different agencies operating at both macro (society) and micro (individual) scales.

Macroscale Epidemic: The Power of Drug Representation

Heinrich Wilhelm Dove described binaural beats in 1839 as a specific brain stimulus resulting in low-frequency pulsations perceivable when two tones at slightly different frequencies are presented separately, through stereo headphones, to each of the subject’s ears. The difference between the tones must be relatively small, up to only 30 Hz, and the tones themselves must not exceed 1000 Hz. Subsequently, scientific authorities presented the phenomenon as a tool for stimulating the brain in therapy for neurological afflictions. Gerard Oster described the applications in 1968, and the Monroe Institute later continued this research in order to use binaural beats in meditation and “expanding consciousness” as a crucial part of its self-improvement programs.
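The acoustic phenomenon itself is trivial to reproduce at home. A minimal sketch using NumPy and Python’s standard-library wave module (the parameter values are mine, chosen to respect the constraints just described): a 400 Hz tone in the left ear and a 410 Hz tone in the right yield a perceived 10 Hz beat, with the frequency difference well under 30 Hz and both carriers below 1000 Hz.

```python
import numpy as np
import wave

SR = 44100
DURATION = 30.0          # seconds
CARRIER = 400.0          # left-ear tone (Hz), below the ~1000 Hz ceiling
BEAT = 10.0              # perceived beat frequency (Hz), below ~30 Hz

t = np.arange(int(SR * DURATION)) / SR
left = np.sin(2 * np.pi * CARRIER * t)
right = np.sin(2 * np.pi * (CARRIER + BEAT) * t)

# Interleave into a 16-bit stereo stream: each ear gets only its own tone,
# so the 10 Hz beat exists only in the listener's head, not in the file.
stereo = np.empty(2 * len(t), dtype=np.int16)
stereo[0::2] = (left * 32767 * 0.3).astype(np.int16)
stereo[1::2] = (right * 32767 * 0.3).astype(np.int16)

with wave.open("binaural_10hz.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)      # 16-bit samples
    f.setframerate(SR)
    f.writeframes(stereo.tobytes())
```

Note that neither channel contains a 10 Hz component; the beat is constructed by the listener’s auditory system, which is what distinguishes binaural beats from ordinary amplitude beats mixed into a single channel.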

I-Doser then molded this foundational research into a narrative presenting binaural beats as brain stimulation for a desired experience. Binaural beats can instead be understood simply as an acoustic phenomenon with applications in practices like meditation or medical therapy.

I-Doser also weaves unverified claims about binaural beats into a narrative assembled from scattered pieces of research; it connects these authorities with YouTube recordings of human reactions to Digital Drugs. Video testimonies of Digital Drugs users caused a considerable stir among both parents and teachers in American schools two years ago. One American school even banned mp3 players as a precautionary measure. In a typical YouTube video one can see a person lying down with headphones on. After a while we see an involuntary body movement that in some videos might resemble a seizure. Losing control over one’s body becomes the highlight of the footage, alongside a subjective account also present in the video. The body movements are framed as a drug experience both for the viewer, who is a vicarious witness, and for the participant, who has an active experience.

This type of footage as evidence was popularized as early as the 1960s when military footage showed reactions to psychoactive substances such as LSD.

In the same manner as the Digital Drugs video, the army footage highlights the process of losing control over one’s body, complete with subjective testimonies as evidence of the psychoactive substance’s power.

This kind of visualization is usually fueled by paranoia akin to Cold War fears, depicting daily attacks by an invisible enemy upon unaware subjects. Information from authoritative agencies about binaural beats created a reference base that fueled concern, framing the YouTube videos as evidence of a drug experience. It shows that the angst isn’t triggered by the technology itself, in this case Digital Drugs, but by the form in which the “invisible attack” is presented: through sound waves. The manner of framing is more important than the hypothetical action itself. Context, then, changes recognition.

Microscale Paradigm Shift: Health as Feeling 

On an individual level, did feeling better always mean being healthy? In Histoire des pratiques de santé: Le sain et le malsain depuis le Moyen Age, Georges Vigarello, a continuator of the Foucauldian school of biopolitics, explains that well-being became a medicalized condition in the 20th century with growing attention to mental health. Being healthy was no longer only about the good condition of the body but became a state of mind; feeling was important as an overall recognition of oneself. In the biopolitical perspective, Vigarello points out, health became more than just the government’s concern for individual well-being; it was maintained by medical techniques and technologies.

In the case of Digital Drugs, the well-being of children was safely governed by parents and by media coverage that prompted school-based prevention against the “sound drugs.” Similarly, the UAE called for a ban on “hypnotic music,” citing it as an illegal drug like cannabis or ecstasy. Using this perspective, I would add that feeling better then becomes never-ending warfare; well-being comes to be understood as a state (as in condition, and as in governed territory).

Well-being is also an obligation to society, carried out by specific practices. What does a healthy lifestyle actually mean? Its meaning includes self-governance: controlling yourself, keeping fit, discipline (embodying the rules). In order to do this you need guidance: authorities (health experts and trainers) and common knowledge (the “google it” modus operandi). All of these agencies create a strategy to make you feel good every day and perform at a high rate. Digital Drugs, then, become products that promise to boost your energy, increase your endurance, and extend your mental capabilities. High performance is redefined as a state that enables instant access to happiness, pleasure, and relaxation.

"Submerged" by Flickr user Rene Passet, CC BY-NC-ND 2.0

“Submerged” by Flickr user Rene Passet, CC BY-NC-ND 2.0

inFORMational Body 

Vigarello reflects that understanding health in terms of low/high performance—itself based on the logic of consumption—created the concept of limitless enhancement. Here he refers to the information model, connecting past assumptions about health with a technique of self-governing. It is based on the senses and an awareness of oneself, using “intellectual” practices like relaxation and “probing oneself” (or knowing what vitamins you should take). The medical apparatus’s priority, moreover, shifted from keeping someone in good health to maintaining well-being. The subjective account became the crucial element of a diagnosis, supported by information from different sources, in order to imply the feeling of a limitless “better.” This strategy relies strongly on the use of technologies, attention to the sensual, and self-recognition—precisely the methodology behind Digital Drugs’ focus on enhancing well-being.

Still, this inFORMational body needs a regulatory system. How do we know that we really feel better? Apart from the media well-being campaign (and the amount of surveillance it involves), we are constantly asked about our health status in the common greeting phrase, whose unheimlich quality becomes apparent only to non-native English speakers. These checkpoint techniques become an everyday instrument of discipline and rely on an obligation to express oneself in social interactions.

So how do we feel? As for now, everything seems “OK.”

Featured image: “Biophonic Garden” by Flickr user Rene Passet, CC BY-NC-ND 2.0

Justyna Stasiowska is a PhD student in the Performance Studies Department at Jagiellonian University. She is preparing a dissertation under the working title “Noise: The Performativity of Sound Perception,” in which she argues that frequencies don’t have a strictly programmed effect on the receiver and that the way sounds are experienced is determined by frames or modes of perception, established by the situation and cognitive context. Justyna earned her M.A. in Drama and Theater Studies. Her thesis was devoted to the notion of liveness in the context of the strategies used by contemporary playwrights to manipulate the recipient’s cognitive apparatus using the DJ figure. You can find her on Twitter and academia.edu.

REWIND!…If you liked this post, check out:

Papa Sangre and the Construction of Immersion in Audio Games–Enongo Lumumba-Kasongo

On Sound and Pleasure: Meditations on the Human Voice–Yvon Bonenfant

This is Your Body on the Velvet Underground–Jacob Smith

 
