Archive | Disability Studies

Deaf Latin@ Performance: Listening with the Third Ear


This is the fourth and final post in Sounding Out!'s 4th annual July forum on listening in observation of World Listening Day on July 18th, 2015. World Listening Day is a time to think about the impacts we have on our auditory environments and, in turn, their effects on us. For Sounding Out!, World Listening Day necessitates discussions of the politics of listening, and, as Trevor Boffone prescribes, a much wider and more corporeal understanding of the practice that goes beyond an emphasis on the ear and even on sound itself. –Editor-in-Chief JS

As Kent, a Deaf man, stands on stage in Tamales de Puerco, signing his story of struggling and growing up in a hearing family, the only aural sounds in the theater come from the audience: the sounds of crying. Performed in English, Spanish, and American Sign Language (ASL), Tamales offers a glimpse into the seldom seen realities of life as a single mother to a Deaf child as it intersects with Latinidad. The play presents the story of Norma, a young mother who confronts her abusive husband and challenges a country that rejects and oppresses her as an undocumented immigrant. She overcomes the hardships of being Latina, undocumented, and having a Deaf child (Mauricio) without any support from her husband, her mother, and local and state institutions. Ultimately, Norma must negotiate cultural citizenship and notions of belonging to the Deaf Latin@ community so that her son can have more opportunities. The play uses—and calls attention to—silence as an essential building block in the process of constructing, remixing, and performing the complexities of Latin@ identity.

[Image: Norma and Tana in Tamales de Puerco]

Listening to the silences in Latin@ theatre performance offers crucial insight into how the Latin@ population and Latinidad fit into the fabric of the United States in the 21st Century, as Marci R. McMahon notes in "Soundscapes of Narco Silence." In Tamales, the staging of Deafness creates a particular kind of silence that promotes new listening strategies. What I find most compelling is how Deafness on stage–and the particular silences Deafness can create–opens up a space for what Steph Ceraso calls "multimodal listening," listening as a full-bodied event not solely linked to the ears, but rather connected to "bodies, affects, behaviors, design, space, and aesthetics." Calling attention to the body as they do, the silences in the play give weight to Kent's story and affect the viewer beyond the limits of voiced acting by encouraging spectators to concentrate on the actors' physical emotions and how actors' bodies work to transmit messages without verbal cues. I argue Tamales promotes multimodal listening by forcing spectators to use their "Third Ear"—a mode of listening across domains of silence, sound, and the moving body—as a device to understand a seemingly silent world.

To do this, I engage with the playscript and recordings of the 2013 production of Mercedes Floresislas's Tamales de Puerco at CASA 0101 Theater under Edward Padilla's direction. While Floresislas's script raised many complex issues surrounding the Deaf Latin@ community, Padilla's staging focused on the intersections of Deafness and Latinidad by foregrounding the use of silence in the production. [Note: I use the capitalized versions of Deaf and Deafness. A standard dictionary definition of "deaf" describes one who is partially or wholly unable to hear (deaf and hearing impaired are essentially interchangeable). Deaf with a capital D, however, refers to the community that self-identifies as belonging to Deaf culture. Deafness, therefore, is a sign of health and a prognosis of well-being among sign-language-dependent hearing-impaired people. Likewise, hearing versus Hearing represents a similar biological/cultural binary.]

In Hearing Difference: The Third Ear in Experimental, Deaf, and Multicultural Theater, one of the few studies to devote critical attention to Deaf theater as it relates to multicultural experience and identity, Kanta Kochhar-Lindgren introduces the "Third Ear," a useful term that focuses one's attention on performative forms of expression. Blending sensory, spatial, and visual elements generates a Third Ear that acts as a "Deaf-gain," a hybrid mode of hearing and coming to know the world. When specific senses are lost, the mind adapts in ways that allow affected individuals to continue actively engaging with their surroundings and their community. Deaf people, therefore, do not lack a vital sense; rather, they gain a new sense—one typically inaccessible to hearing individuals—that enables them to successfully navigate their surroundings. Kochhar-Lindgren's work focuses attention on the "sense" of performance and the different movements that work together to form speech sensed by the "Third Ear." For audience members, learning to perceive the mixing of forms as communication is fundamental to understanding the messages presented on stage; the Third Ear operates in auditory silence, yet it establishes that a lack of sound does not necessarily correspond with a lack of understanding. By removing all sound, silence gains power.

[Image: Tamales de Puerco poster art]

The evocation of the Third Ear separates Tamales from the majority of Latin@ theater productions grounded in aural languages such as English, Spanish, and Spanglish. Deafness is seldom represented onstage in any type of theater, aside from revivals of William Gibson's The Miracle Worker and Mark Medoff's Children of a Lesser God, more contemporary works such as Suzan Zeder's Ware Trilogy and Bruce Norris's Clybourne Park, and the work of Deaf West Theatre in Hollywood, whose most recent production, Spring Awakening, received rave reviews and will move to Broadway in September 2015. The work of Deaf West has been of particular interest to Sound Studies scholars for its unique contributions to the American Theatre. In her 2012 SO! post, Cara Cardinale discusses Deaf West's production of Tennessee Williams' A Streetcar Named Desire in which the roles were reversed: the production's interpreters were for the hearing audience and, thus, sign language took center stage. Yet all of these more well-known works focus on Anglo experiences, neglecting the specific intersectional challenges that Deaf people of color face, such as limited access to state-funded resources like counseling services, educational inequality and the achievement gap, not to mention that the majority of Deaf Latin@s do not have parents who can sign with them (re: effectively communicate).

The Third Ear, as evoked in Tamales, seems especially suited for representing Latin@ Deafness onstage and evoking a concomitant visceral understanding in audiences. Floresislas’s writing and Padilla’s direction work together to strategically allow audience members to develop a Third Ear at key moments in the play, enabling them to fill silences they might have otherwise perceived as gaps. Entering Tamales’ silent world not only compels hearing audiences to recognize their supposed privilege, but pushes toward a deeper understanding of the relativity of hearing-as-privilege. In a Deaf world, hearing is not a privilege, but rather one of many ways to come to know the world. In this regard, Tamales reiterates Liana Silva’s argument that “deafness complicates what it means to listen” by calling attention to the many non-auditory signals that are vital to the act.

In addition, Tamales deliberately fosters moments of uncomfortable silence that are among the production's strengths. For example, silence plays a key role in an early scene in which Norma decides to leave her abusive husband, Reynaldo. In this violent episode–whether by a deafening blow or dissociation–everything in her world goes silent. While Reynaldo yells at her and throws things around the house, his voice fades out. However, as Norma sits in silence, she becomes better able to navigate her abusive marriage. Norma hears the silence. Her hypervigilance increases her ability to identify potential threats and, ultimately, she takes her son and flees from the situation. While Norma taps into her Third Ear on stage, the audience also enters a silent world in which they must seek alternative methods to actively engage with the production. By "losing" their hearing along with Norma, the audience must pay a different kind of attention to her to gain an understanding of the scene.

Along with recognizing certain hearing privileges, listening with the Third Ear both connects and separates the audience. For instance, in the scene in which Norma attends an AA meeting for Deaf people, Padilla’s direction activates the Third Ear by removing sound from the stage. In the original playscript, Floresislas wanted Kent’s monologue to include a voice-over, but during rehearsals, Padilla saw the potential to foreground the silence in this scene (and throughout the piece, as well); his direction transformed the staging from an aural scene to a silent one. Listening with the Third Ear enables the audience to blend sensory and visual hearing in order to understand the emotional depth of the action transpiring on stage. As Kent stands in silence, signing his story about the difficulties of connecting with his hearing father, many in the audience were audibly moved. During Kent’s monologue, the actor remained silent while supertitles revealed his speech:

Yesterday, my father had a heart attack and I got called to his bedside at the hospital. I had not seen him for almost 15 years! I had never had a conversation with my father; yes, he was hearing and I was his only deaf child. (…) I always believed my dad hated me; nothing I did was ever good enough. He was always watching me and looking angry for everything I ever did or asked. I actually wished he'd ignore me like the rest of the family! (15)

[Image: Kent (Dickie Hearts) in Tamales de Puerco]

Particularly gripping, this scene acts as a crucial building block in the need to create opportunities for her son that drives Norma's story forward, and it calls attention to the fact that reading isn't necessarily a silent act. Kent's story reveals much to a hearing audience who may be unfamiliar with the Deaf Latin@ community. Kent's experience is typical of Deaf Latin@s, only 20% of whom have parents who can sign. It compels an understanding of the reasons why Norma learns ASL and pushes for a better life for her son. She does not want him to be in the same position that Kent finds himself in. And she does not want the regret of having never learned to communicate with him. Kent continues:

Yesterday, he looked frail; he was paralyzed on one side. When he saw me, he moved his hand like this (brushes his left hand up the center of his chest then points at). At first, I didn't understand what he was doing. But when he did it again, I understood. He said, "I'm proud of you." Then he signed "I love you." (…) My niece told me he had been learning ASL for the last 3 months because he wanted to tell me how sorry he was for not being able to talk to me. My dad didn't hate me; he hated himself for not being able to talk to me! (…) But yesterday, I also had my first and last conversation with my dad; he signed for me! That…makes me feel very proud! (15-16)

As Kent stands in silence, his emotional journey is given life through his hands and body. Interestingly, the silences enacted onstage by Tamales actually create sound, amplifying the sobbing that emanates from the audience in both its auditory and visual manifestations. The way in which silence allows the audience's sonic reactions to become part of the play itself suggests that how–and why–the audience responds may actually be more important than the performance itself. How much are the sobs about the heartbreaking nature of Kent's story and how much of it is recognizing one's own privileges? How much of it is the audience connecting with the story? How much of it is about seeing themselves represented? And how does silence amplify "listening" to Kent's story?



While not exhaustive, my reading of Tamales widens the conversation about the intricacies of Deaf Latin@ performance. The 2013 production of Tamales hints at the possibilities of Latin@ performance in Boyle Heights and at how community-based theater companies such as CASA 0101 can work to provide more access to Deaf people, thus forging both an inclusive community and an inclusive theater company. More plays featuring Deaf characters, incorporating Deaf actors, and written by Deaf dramatists are needed, something Floresislas is already exploring. Still, much research remains as to how Deaf Latinidad is heard and how this identity fits into a performance framework. Through multimodal listening, Tamales urges spectators to leave the theater considering how they may or may not alter their actions to better benefit underprivileged and underrepresented communities such as the Latin@ Deaf community. Quite frankly, Tamales opens the "eyes and ears" of audiences. Now is the time to listen to Deaf Latinidad. What will we choose to hear in the silence?

Still Images from Tamales de Puerco, permission courtesy of CASA 0101 Theatre. Featured Image: Olin Tonatiuh and Cristal Gonzalez in “Tamales De Puerco.” Photo by Ed Krieger.

Trevor Boffone is a Houston-based scholar, educator, dramaturge, and producer. He is a co-founder of Amaranto Productions and a member of the Latina/o Theatre Commons Steering Committee. Trevor is a doctoral candidate in the Department of Hispanic Studies at the University of Houston where he holds a Graduate Certificate in Women’s, Gender, & Sexuality Studies. His dissertation, Performing Eastside Latinidad: Josefina López and Theater for Social Change in Boyle Heights, is a study of theater and performance in East Los Angeles, focusing primarily on Josefina López’s role as a playwright, mentor, and community leader. He has published and presented original research on Chicana Feminist Teatro, the body in performance, Deaf Latinidad, Queer Latinidad, as well as the theater of Adelina Anthony, Nilo Cruz, Virginia Grise, Josefina López, Cherríe Moraga, Monica Palacios, and Carmen Peláez. Trevor recently served as a Research Fellow at LLILAS Benson Latin American Studies and Collections at the University of Texas at Austin for his project Bridging Women in Mexican-American Theater from Villalongín to Tafolla (1848-2014).


Misophonia: Towards a Taxonomy of Annoyance


This is the second post in Sounding Out!'s 4th annual July forum on listening in observation of World Listening Day on July 18th, 2015. World Listening Day is a time to think about the impacts we have on our auditory environments and, in turn, their effects on us. For Sounding Out!, World Listening Day necessitates discussions of the politics of listening, and, as Carlo Patrão shares today, an examination of sounds that disturb, annoy, and threaten our mental health and well-being. –Editor-in-Chief JS

An important factor in coming to dislike certain sounds is the extent to which they are considered meaningful. The noise of the roaring sea, for example, is not far from white radio noise (…) We still seek meaning in nature and therefore the roaring of the sea is a blissful sound. – Torben Sangild, The Aesthetics of Noise

When hearing bodily sounds, we often react with discomfort, irritation, or even shame. The sounds of the body remind us of its fallible and vulnerable nature, calling to mind French surgeon René Leriche’s statement that “health is life lived in the silence of the organs” (1936). The mind rests when the inner works of the body are forgotten. Socially, sounds coming from the organic functions of the body such as chewing, lip smacking, breathing, sniffling, coughing, sneezing or slurping are considered annoying and perceived as intrusions. A recent study by Trevor Cox suggests that our reactions of disgust towards sounds of bodily excretions and secretions may be socially-learned and vary according to whether it is considered acceptable or unacceptable to make such sounds in public.


However, for people suffering from a condition called Misophonia, these bodily sounds aren't simply annoying; rather, they become sudden triggers of aggressive impulses and involuntary fight-or-flight responses. Misophonia, meaning hatred of sound, is a chronic condition characterized by highly negative emotional responses to auditory triggers, including repetitive and social sounds produced by another person, like hearing someone eating an apple, crunching chips, slurping on a soup spoon or even breathing.

The consequences of Misophonia can be very troublesome, leading to social isolation or the continuous avoidance of certain places and situations such as family dinners, the workplace, and recreational activities like going to the cinema. While the incidence of Misophonia in the population is still under investigation, the fast-growing number of online communities gathered around the dislike of certain sounds may indicate that this condition is more common than previously thought.

But why do people with Misophonia feel such strong reactions to trigger sounds? This fundamental question remains up for debate. Some audiologists suggest these heightened emotional responses can be explained by hyperconnectivity between the auditory, limbic and autonomic nervous systems. However, we continue to lack a comprehensive theoretical model to understand Misophonia, as well as an effective treatment to help sufferers of Misophonia cope with intrusive sound triggers.

[Image: misophonia awareness]

The Art of Annoyance: Is It Possible to Reframe Misophonic Trigger Sounds as Misophonic Music?

Between 1966 and 1967, John Cage and Morton Feldman recorded four open-ended radio conversations, called Radio Happenings (WBAI, NYC). Among many topics, Feldman and Cage address the problem of being constantly intruded upon by unpleasant sounds. Feldman narrates his annoyance with the sounds blasted from several radios on a trip to the beach. Cage’s commentary on the growing annoyance of his friend reveals a shift of perception in dealing with unwanted sounds:

Well, you know how I adjusted to that problem of the radio in the environment (…) I simply made a piece using radios. Now, whenever I hear radios – even a single one, not just twelve at a time, as you must have heard on the beach, at least – I think, "Well, they're just playing my piece." – John Cage, Radio Happenings

Cage proposes a remedy via appropriation of environmental intrusions. The negative emotional charge associated with them is neutralized. Sound intrusions no longer exist as absolute external entities trying to intrude their way in. They become part of the self. Ultimately, there are no sonic intrusions, as the entire field of sound is desirable for composition.

Cage’s immersive compositional anticipated an important strategy to build resilience towards aversive sound: exposure-based cognitive-behavioral therapy, which  proposes a gradual immersion in trigger sounds. And I suggest we can mine the history of avant-garde practice to productively further the idea of immersion; in the realms of sound poetry, utterance based music, Fluxus events and many other sound art practices, bodily sounds have consistently been exalted as source of composition and performance. Much like Cage did with what he perceived as intrusive radio sounds, by performing chewing, coughs, slurps and hiccups, assembling snores and nose whistles, and singing the poetics of throat clearing, we may be able to elevate our body awareness and challenge the way we perceive unwanted sounds. In what follows, I sample these works with an ear toward misophonia, discussing their interventions in the often jarring world of everyday irritation.

Oral Oddities

As pointed out by Nancy Perloff, while the avant-garde progressively expanded to incorporate the entire scope of sound into composition, sound poetry followed a similar course by playing with the non-semantic properties of language and exploring new vocal techniques. The Russian avant-garde (zaum), the Italian futurists (parole in libertà) and the German Dada movement (Hugo Ball's Verse ohne Worte, "verses without words") built the foundations of a new oral hyper-expression of the body through moans, clicks, hisses, hums, whooshes, whizzes, spits, and breaths.

Henri Chopin, Les Pirouettes Vocales Pour Les Pirouettements Vocaux

Sound poets like Henri Chopin created uncanny sonic textures using only "vocal micro-particles," revealing a sounding body that can be violent and intrusive. François Dufrêne and Gil J Wolman brought forward rawer, more glottal performances.

Bridging the gap between the Schwitter’s Dada-constructivism and a contemporary approach to sound poetry, Jaap Blonk’s inventive vocal performances cover a wide range of mouth sounds. In the same vein, Paul Dutton explores the limits of his voice, glottis, tongue, lips and nose as the medium for compositions — as can be heard on the record Mouth Pieces: Solo Soundsinging.

Paul Dutton, Lips Is, Mouth Pieces: Solo Soundsinging

Fluxus: Eat, Chew, Burp, Cough, Perform!

The Event is a metarealistic trigger: it makes the viewer's or user's experience special. (…) Rather than convey their own emotional world abstractly, Fluxus artists directed their audiences' attention to concrete everyday stuff addressing aesthetic metareality in the broadest sense. – Hannah Higgins, Fluxus Experience

The emergence of Fluxus is strongly linked to Cage's 1957-59 class at the New School for Social Research in NYC. George Brecht's Event Score was one of the best known innovations to emerge from these classes. The Event Score was a performance technique drawn from short instructions that framed everyday life actions as minimal performances. Daily acts like chewing, coughing, licking, eating or preparing food were considered by themselves ready-made works of art. Many Fluxus artists such as Shigeko Kubota, Yoko Ono, Mieko Shiomi, and Alison Knowles saw these activities as forms of social music.

For instance, Alison Knowles produced several famous Fluxus food events such as Make a Salad (1962), Make a Soup (1962), and The Identical Lunch (1967-73).


Also, Mieko Shiomi's Shadow Piece No. 3 calls attention to the sound of amplified mastication, while Philip Corner's piece Carrot Chew Performance is centered solely on the activity of chewing a carrot.

Philip Corner, Carrot Chew Performance, Tellus #24

In Nivea Cream Piece (1962), Alison Knowles invites the performers to rub their hands with cream in front of a microphone, producing a deluge of squeezing sounds:

Alison Knowles – Nivea Cream Piece (1962) – for Oscar (Emmett) Williams

Coughing is a form of love.

In 1961, the Fluxus artist Yoko Ono composed a 32-minute, 31-second audio recording called Cough Piece, a precursor to her instruction Keep coughing a year (Grapefruit). In this recording, the sound of Ono's cough emerges periodically from the indistinct background noise. Cough Piece plays with the concept of time, prolonging the duration of an activity beyond what is considered socially acceptable. In this piece, Yoko Ono brings us close to her body's automatic reflexes, pulling back the veil of an indistinct inner turmoil. Coughing can be a bodily response to an irritating tickling feeling, troubled breathing, a sore throat or a reaction to foreign particles or microbes. In response, coughing is a way of clearing, a freeing re-flux of air, a way out. Coughing is a form of love.

Yoko Ono – Cough Piece


Sonic Skin

In The Ego and the Id (1923), Sigmund Freud stated that the ego is ultimately derived from bodily sensations. The psychoanalyst Didier Anzieu expanded this idea by suggesting that early experiences of sound are crucial to consolidating the infant's ego. The bath of sounds surrounding the child, created by the parents' voices and their soothing sounds, provides a sonorous envelope or an audio-phonic skin that protects the child against ego-assailing noises and helps create the first boundaries between the inside and the external world. The lack of a satisfactory sound envelope may compromise the development of a proper sense of self, leaving it vulnerable to invasions from outside.

It’s no surprise that conditions like Misophonia exist and are very common among us, considering how important our early exposure to sound is in building our sense of self and our sensory limits. For Misophonics, the everyday sounds we make without even thinking about them can be the source of a fractured and disruptive experience that we should not dismiss as the overreactions of a sensitive person. During the month we observe World Listening Day, our discourse usually praises the pleasures of listening and tends to focus on the sounds that soothe rather than annoy. However, conditions like Misophonia show us that there is much more that needs to be said on the subject of unpleasant sound experience. I can’t help but notice a disconnect between the vast exploration of annoying and irritating sounds in the avant-garde and the critical discourse in our sound communities that is dominated by the pleasures of listening. Cage’s call to embrace intrusive sounds urges us to consider all sounds regardless of where they fall on the spectrum of our emotions. For all of us who would consider ourselves philophonics, let’s create a critical discourse that addresses the struggles of listening as much as its pleasures.

Thanks to Jennifer Stoever for the thoughtful suggestions.

Carlo Patrão is a Portuguese radio artist and producer of the show Zepelim. His radio work began as a member of the Portuguese freeform station Radio Universidade de Coimbra (RUC). In his pieces, he aims to explore the diverse possibilities of radiophonic space through the medium of sound collage. He has participated in projects like Radio Boredcast, and his work has been featured in several international sound festivals and has also been commissioned by Radio Arts (UK). He is currently working on a radio show for the Portuguese national public radio station RTP. In addition to his work in radio, he has a master's in clinical psychology.

REWIND! . . . If you liked this post, you may also dig:

Optophones and Musical Print– Mara Mills

Optophones and Musical Print

The word type, as scanned by the optophone.

From E.E. Fournier d’Albe, The Moon Element (New York: D. Appleton & Company, 1924), 141.


In 1912, British physicist Edmund Fournier d’Albe built a device that he called the optophone, which converted light into tones. The first model—“the exploring optophone”—was meant to be a travel aid; it converted light into a sound of analogous intensity. A subsequent model, “the reading optophone,” scanned print using lamp-light separated into beams by a perforated disk. The pattern of light reflected back from a given character triggered a corresponding set of tones in a telephone receiver. d’Albe initially worked with 8 beams, producing 8 tones based on a diatonic scale. He settled on 5 notes: lower G, and then middle C, D, E and G. (Sol, do, re, mi, sol.) The optophone became known as a “musical print” machine. It was popularized by Mary Jameson, a blind student who achieved reading speeds of 60 words per minute.
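To make the tone mapping concrete, here is a minimal sketch in Python of how a reading optophone's output can be modeled. It is a toy reconstruction, not d'Albe's mechanism: the frequencies approximate the five notes named above (G3, then C4, D4, E4, G4), and I assume a design in which ink under a beam sounds its tone (the historical device also existed in a "white-sounding" variant with the logic reversed).

```python
import numpy as np

SR = 44100                                        # audio sample rate (Hz)
NOTES = [196.00, 261.63, 293.66, 329.63, 392.00]  # G3, C4, D4, E4, G4

def scan_column_to_chord(column, dur=0.05):
    """Render one scanned column (5 ink flags, top to bottom) as audio."""
    t = np.arange(int(SR * dur)) / SR
    chord = np.zeros_like(t)
    for ink, freq in zip(column, NOTES):
        if ink:                                   # ink under this beam: its tone sounds
            chord += np.sin(2 * np.pi * freq * t)
    return chord / len(NOTES)                     # keep amplitude in range

# A crude 5x3 bitmap of the letter "L": ink down the left edge,
# ink across the bottom row.
letter_L = [
    [1, 0, 0],
    [1, 0, 0],
    [1, 0, 0],
    [1, 0, 0],
    [1, 1, 1],
]
columns = zip(*letter_L)                          # transpose: scan left to right
audio = np.concatenate([scan_column_to_chord(c) for c in columns])
```

Scanning the "L" this way yields a sustained low chord down the stem, then fuller chords as the bottom stroke passes under all five beams; each letter shape produces its own recognizable chord sequence.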

Photograph of the optophone, an early scanner with a rounded glass bookrest.

Reading Optophone, held at Blind Veterans UK (formerly St. Dunstan’s). Photographed by the author. With thanks to Robert Baker for helping me search through the storeroom to locate this item.

Scientific illustration of the optophone, showing a book on the bookrest and a pair of headphones for listening to the tonal output.

Schematic of optophone from Vetenskapen och livet (1922)

In the field of media studies, the optophone has become renowned through its imaginary repurposings by a number of modernist artists. For one thing, the optophone finds brief mention in Finnegans Wake. In turn, Marshall McLuhan credited James Joyce's novel for being a new medium, turning text into sound. In "New Media as Political Forms," McLuhan says that Joyce's own "optophone principle" releases us from "the metallic and rectilinear embrace of the printed page." More familiar within media studies today, Dada artist Raoul Hausmann patented (London 1935), but did not successfully build, an optophone presumably inspired by d'Albe's model, which he hoped would be employed in audiovisual performances. This optophone was meant to convert sound into light as well as the reverse. It was part of a broader contemporary impulse to produce color music and synaesthetic art. Hausmann also wrote optophonetic poetry, based on the sounds and rhythms of "pure phonemes" and non-linguistic noises. In response, Francis Picabia painted two optophone portraits in 1921 and 1922. Optophone I, below, is composed of lines that might be sound waves, with a pattern that disorders vision.

Francis Picabia's Optophone I, a series of concentric black circles with a female figure at the center.

Francis Picabia, Optophone I (1922)

Theorists have repeatedly located Hausmann’s device at the origin of new media. Authors in the Audiovisuology, Media Archaeology, and Beyond Art: A Third Culture anthologies credit Hausmann’s optophone with bringing-into-being cybernetics, digitization, the CD-ROM, audiovisual experiments in video art, and “primitive computers.” It seems to have escaped notice that d’Albe also used the optophone to create electrical music. In his book, The Moon Element, he writes:

Needless to say, any succession or combination of musical notes can be picked out by properly arranged transparencies, and I have succeeded in transcribing a number of musical compositions in this manner, which are, of course, only audible in the telephone. These notes, in the absence of all other sounding mechanism, are particularly pure and free from overtones. Indeed, a musical optophone worked by this intermittent light, has been arranged by means of a simple keyboard, and some very pleasant effects may thus be obtained, more especially as the loudness and duration of the different notes is under very complete and separate control.

E.E. Fournier d’Albe, The Moon Element (New York: D. Appleton & Company, 1924), 107.

d’Albe’s device is typically portrayed as a historical cul-de-sac, with few users and no real technical influence. Yet optophones continued to be designed for blind people throughout the twentieth century; at least one model has users even today. Musical print machines, or “direct translators,” co-existed with more complex OCR-devices—optical character recognizers that converted printed words into synthetic speech. Both types of reading machine contributed to today’s procedures for scanning and document digitization. Arguably, reading optophones intervened more profoundly into the order of print than did Hausmann’s synaesthetic machine: they not only translated between the senses, they introduced a new symbolic system by which to read. Like braille, later vibrating models proposed that the skin could also read.

In December 1922, the Optophone was brought to the United States from the United Kingdom for a demonstration before a number of educators who worked with blind children; only two schools ordered the device. Reading machine development accelerated in the U.S. around World War II. In his position as chair of the National Defense Research Committee, Vannevar Bush established a Committee on Sensory Devices in 1944, largely for the purpose of rehabilitating blind soldiers. The other options for reading—braille and Talking Books—were relatively scarce and had a high cost of production. Reading machines promised to give blind readers access to magazines and ephemeral print (recipes, signs, mail), which was arguably more important than access to books.

Piechowski, wearing a suit, scans the pen of the A-2 reader over a document.

Joe Piechowski with the A-2 reader. Courtesy of Rob Flory.

At RCA (Radio Corporation of America), the television innovator Vladimir Zworykin became involved with this project. Zworykin had visited Fournier d’Albe in London in the 19-teens and seen a demonstration of the optophone. Working with Les Flory and Winthrop Pike, Zworykin built an initial machine known as the A-2 that operated on the same principles, but used a different mechanism for scanning—an electric stylus, which was publicized as “the first pen that reads.” Following the trail of citations for RCA’s “Reading Aid for the Blind” patent (US 2420716A, filed 1944), it is clear that the “pen” became an aid in domains far afield from blindness. It was repurposed as an optical probe for measuring the oxygen content of blood (1958); an “optical system for facsimile scanners” (1972); and, in a patent awarded to Burroughs Corporation in 1964, a light gun. This gun, in turn, found its way into the handheld controls for the first home video game system, produced by Sanders Associates.

The A-2 optophone was tested on three blind research subjects, including ham radio enthusiast Joe Piechowski, who was more of a technical collaborator. According to the reports RCA submitted to the CSD, these readers were able to correlate the “chirping” or “tweeting” sounds of the machine with letters “at random with about eighty percent accuracy” after 60 hours of practice. Close spacing on a printed page made it difficult to differentiate between letters; readers also had difficulty moving the stylus at a steady pace and in a straight line. Piechowski achieved reading speeds of 20 words per minute, which RCA deemed too slow.

Attempts were made to incorporate "human factors" and create a more efficient tonal code, to reduce reading time as well as learning time and confusion between letters. One alternate auditory display was known as the compressed optophone. Rather than generate multiple tones or chords for a single printed letter, which was highly redundant and confusing to the ear, the compressed version identified only certain features of a printed letter, such as the presence of an ascender or descender. Below is a comparison between the tones of the original optophone and the compressed version, recorded by physicist Patrick Nye in 1965. The following eight lower case letters make up the source material: f, i, k, j, p, q, r, z.

Original record in the author’s possession. With thanks to Elaine Nye, who generously tracked down two of her personal copies at the author’s request. The second copy is now held at Haskins Laboratories.

An image of the letter r as scanned by the optophone and compressed optophone.

From Patrick Nye, “An Investigation of Audio Outputs for a Reading Machine,” AFB Research Bulletin (July 1965): 30.
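The feature-coding idea behind the compressed optophone can be sketched in a few lines. The mapping below is my own illustration of the principle, not Nye's actual code: each letter sounds one tone for a coarse shape class (ascender, descender, or neither) instead of a chord per scanned column.

```python
# Illustrative sketch of "compressed" coding: one tone per coarse
# letter feature instead of a chord per scanned column. The feature
# labels and frequencies are assumptions, not the historical code.
FEATURE_TONES = {"ascender": 880.0, "x_height": 440.0, "descender": 220.0}

# Hand-labelled shape features for the eight test letters above.
LETTER_FEATURES = {
    "f": "ascender", "i": "x_height", "k": "ascender", "j": "descender",
    "p": "descender", "q": "descender", "r": "x_height", "z": "x_height",
}

def letter_to_tone(letter):
    """Map one printed letter to the frequency coding its shape class."""
    return FEATURE_TONES[LETTER_FEATURES[letter]]

print([letter_to_tone(c) for c in "fikjpqrz"])
```

The trade-off is plain even in this toy version: the code is far less redundant, but several letters collapse onto the same tone, so context has to do more work.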


Because of the seeming limitations of tonal reading, RCA engineers re-directed their research to add character recognition to the scanning process. This was controversial: direct translators like the optophone were perceived as too difficult because they required blind people to do something akin to learning to read print—learning a symbolic tonal or tactile code. At an earlier moment, braille had been critiqued on similar grounds; many in the blind community have argued that mainstream anxieties about braille sprang from its symbolic difference. Speed, moreover, is relative. Reading machine users protested that direct translators like the optophone were inexpensive to build and already available—why wait for the refinement of OCR and synthetic speech? Nevertheless, between November 1946 and May 1947, Zworykin, Flory, and Pike worked on a prototype "letter reading machine," today widely considered to be the first successful example of optical character recognition (OCR). Before reliable synthetic speech, this device spelled out words letter by letter using tape recordings. The Letter-Reader was too massive and expensive for personal use, however. It also had an operating speed of 20 words per minute—thus it was hardly an improvement over the A-2 translator.

Haskins Laboratories, another affiliate of the Committee on Sensory Devices, began working on the reading machine problem around the same time, ultimately completing an enormous amount of research into synthetic speech and—as argued by Donald Shankweiler and Carol Fowler—the “speech code” itself. In the 1940s, before workable text-to-speech, researchers at Haskins wanted to determine whether tones or artificial phonemes (“speech-like speech”) were easier to read by ear. They developed a “machine dialect of English,” named wuhzi: “a transliteration of written English which preserved the phonetic patterns of the words.” An example can be played below. The eight source words are: With, Will, Were, From, Been, Have, This, That.

Original record in the author’s possession. From Patrick Nye, “An Investigation of Audio Outputs for a Reading Machine” (1965). With thanks to Elaine Nye.

Based on the results of tests with several human subjects, the Haskins researchers concluded that aural reading via speech-like sounds was necessarily faster than reading musical tones. Like the RCA engineers, they felt that a requirement of these machines should be a fast rate of reading. Minimally, they felt that reading speed should keep pace with rapid speech, at about 200 words per minute.

Funded by the Veterans Administration, members of Mauch Laboratories in Ohio worked on both musical optophones and spelled-speech recognition machines from the 1950s into the 1970s. One of their many devices, the Visotactor, was a direct-translator with vibro-tactile output for four fingers. Another, the Visotoner, was a portable nine-channel optophone. All of the Mauch machines were tested by Harvey Lauer, a technology transfer specialist for the Veterans Administration for over thirty years, himself blind. Below is an excerpt from a Visotoner demonstration, recorded by Lauer in 1971.

Visotoner demonstration. Original 7” open reel tape in author’s possession. With thanks to Harvey Lauer for sharing items from his impressive collection and for collaborating with the author over many years.

Lauer's fingers are pictured in the finger-rests of the Visotactor, scanning a document.

Harvey Lauer reading with the Visotactor, a text-to-tactile translator, 1977.

Later on the same tape, Lauer discusses using the Visotoner to read mail, identify currency, check over his own typing, and read printed charts or graphics. He achieved reading speeds of 40 words per minute with the device. Lauer has also told me that he prefers the sound of the Visotoner to that of other optophone models—he compares its sound to Debussy, or the music for dream sequences in films.

Mauch also developed a spelled speech OCR machine called the Cognodictor, which was similar to the RCA model but made use of synthetic speech. In the recording below, Lauer demonstrates this device by reading a print-out about IBM fonts. He simultaneously reads the document with the Visotoner, which reveals glitches in the Cognodictor’s spelling.

Original 7” open reel tape in the author’s possession. With thanks to Harvey Lauer.

A hand uses the metal probe of the Cognodictor to scan a typed document.

The Cognodictor. Glendon Smith and Hans Mauch, “Research and Development in the Field of Reading Machines for the Blind,” Bulletin of Prosthetics Research (Spring 1977): 65.

In 1972, at the request of Lauer and other blind reading machine users, Mauch assembled a stereo-optophone with ten channels, called the Stereotoner. This device was distributed through the VA but never marketed, and most of the documentation exists in audio format, specifically in sets of training tapes that were made for blinded veterans who were the test subjects. Some promotional materials, such as the short video below, were recorded for sighted audiences—presumably teachers, rehabilitation specialists, or funding agencies.

Mauch Stereo Toner from Sounding Out! on Vimeo.

Video courtesy of Harvey Lauer.

Mary Jameson corresponded with Lauer about the Stereotoner, via tape and braille, in the 1970s. In the braille letter pictured below she comments, "I think that stereotoner signals are the clearest I have heard."

Scan of a braille letter from Jameson to Lauer.

Letter courtesy of Harvey Lauer. Transcribed by Shafeka Hashash.

In 1973, with the marketing of the Kurzweil Reader, funding for direct translation optophones ceased. The Kurzweil Reader was advertised as the first machine capable of multi-font OCR; it was made up of a digital computer and flatbed scanner and it could recognize a relatively large number of typefaces. Kurzweil recalls in his book The Age of Spiritual Machines that this technology quickly transferred to Lexis-Nexis as a way to retrieve information from scanned documents. As Lauer explained to me, the abandonment of optophones was a serious problem for people with print disabilities: the Kurzweil Readers were expensive ($10,000-$50,000 each); early models were not portable and were mostly purchased by libraries. Despite being advertised as omnifont readers, they could not in fact recognize most printed material. The very fact of captchas speaks to the continued failures of perfect character recognition by machines. And, as the “familiarization tapes” distributed to blind readers indicate, the early synthetic speech interface was not transparent—training was required to use the Kurzweil machines.

Original cassette in the author’s possession. 

A young Kurzweil stands by his reading machine, demonstrated by Jernigan, who is seated.

Raymond Kurzweil and Kenneth Jernigan with the Kurzweil Reading Machine (NFB, 1977). Courtesy National Federation of the Blind.

Lauer always felt that the ideal reading machine should have both talking OCR and direct-translation capabilities, the latter being used to get a sense of the non-text items on a printed page, or to “preview material and read unusual and degraded print.” Yet the long history of the optophone demonstrates that certain styles of decoding have been more easily naturalized than others—and symbols have increasingly been favored if they bear a close relation to conventional print or speech. Finally, as computers became widely available, the focus for blind readers shifted, as Lauer puts it, “from reading print to gaining access to computers.” Today, many electronic documents continue to be produced without OCR, and thus cannot be translated by screen readers; graphical displays and videos are largely inaccessible; and portable scanners are far from universal, leaving most “ephemeral” print still unreadable.

Mara Mills is an Assistant Professor of Media, Culture, and Communication at New York University, working at the intersection of disability studies and media studies. She is currently completing a book titled On the Phone: Deafness and Communication Engineering. Articles from this project can be found in Social Text, differences, the IEEE Annals of the History of Computing, and The Oxford Handbook of Sound Studies. Her second book project, Print Disability and New Reading Formats, examines the reformatting of print over the course of the past century by blind and other print disabled readers, with a focus on Talking Books and electronic reading machines. This research is supported by NSF Award #1354297.

Sounds of Science: The Mystique of Sonification


Welcome to the final installment of Hearing the UnHeard, Sounding Out!'s series on what we don't hear and how this unheard world affects us. The series started out with my post on hearing, large and small, continued with a piece by China Blue on the sounds of catastrophic impacts, and Milton Garcés's piece on the infrasonic world of volcanoes. To cap it all off, we introduce The Sounds of Science by professor, cellist and interactive media expert, Margaret Schedel.

Dr. Schedel is an Associate Professor of Composition and Computer Music at Stony Brook University. Through her work, she explores the relatively new field of Data Sonification, generating new ways to perceive and interact with information through the use of sound. While everyone is familiar with infographics, graphs, and images used to convey complex information, her work explores how we can expand our understanding of even complex scientific information by using our fastest and most emotionally compelling sense, hearing.

– Guest Editor Seth Horowitz

With the invention of digital sound, the number of scientific experiments using sound has skyrocketed in the 21st century, and as Sounding Out! readers know, sonification has started to enter the public consciousness as a new and refreshing alternative modality for exploring and understanding many kinds of datasets emerging from research into everything from deep space to the underground. We seem to be in a moment in which “science that sounds” has a special magic, a mystique that relies to some extent on misunderstandings in popular awareness about the processes and potentials of that alternative modality.

For one thing, using sound to understand scientific phenomena is not actually new. Diarist Samuel Pepys wrote about meeting scientist Robert Hooke in 1666 that "he is able to tell how many strokes a fly makes with her wings (those flies that hum in their flying) by the note that it answers to in musique during their flying." Unfortunately Hooke never published his findings, leading researchers to speculate on his methods. One popular theory is that he tied strings of varying lengths between a fly and an ear trumpet, recognizing that sympathetic resonance would cause the correct length string to vibrate, thus allowing him to calculate the frequency. Even Galileo used sound, demonstrating the constant acceleration of a ball due to gravity by using an inclined plane with thin moveable frets. By moving the placement of the frets until the clicks created an even tempo, he was able to come up with a mathematical equation to describe how time and distance relate when an object falls.
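Galileo's trick can be checked with a few lines of arithmetic. This is my own illustration of the reasoning, not a historical reconstruction: if the clicks at the frets arrive at an even tempo, the fret distances must grow as the squares of the click count, which is the d ∝ t² signature of constant acceleration.

```python
# If clicks occur at equal intervals t, 2t, 3t, ..., constant
# acceleration (d = a * t**2 / 2) forces fret distances of 1, 4, 9, ...
ticks = [1, 2, 3, 4, 5]               # click times, in units of t
distances = [t**2 for t in ticks]     # implied fret positions: 1, 4, 9, 16, 25
gaps = [b - a for a, b in zip([0] + distances, distances)]
print(gaps)                           # successive gaps 1, 3, 5, 7, 9: the odd numbers
```

The odd-number spacing of the gaps is exactly what Galileo reported for a ball rolling down an incline, and the ear's sensitivity to an uneven tempo is what made the measurement possible without a precise clock.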


Illustration from Robert Hooke’s Micrographia (1665)

There have also been other scientific advances using sound in the more recent past. The stethoscope was invented in 1816 for auscultation, listening to the sounds of the body. It was later applied to machines—listening for the operation of the technological gear. Underwater sonar was patented in 1913 and is still used to navigate and communicate using hydroacoustic phenomena. The Geiger counter was developed in 1928 using principles discovered in 1908; it is unclear exactly when the distinctive sound was added. These are all examples of auditory display [AD]; sonification, generating or manipulating sound by using data, is a subset of AD. As the foreword to The Sonification Handbook states, "[Since 1992] Technologies that support AD have matured. AD has been integrated into significant (read "funded" and "respectable") research initiatives. Some forward thinking universities and research centers have established ongoing AD programs. And the great need to involve the entire human perceptual system in understanding complex data, monitoring processes, and providing effective interfaces has persisted and increased" (Thomas Hermann, Andy Hunt, John G. Neuhoff, Sonification Handbook, iii).

Sonification clearly enables scientists, musicians and the public to interact with data in a very different way, particularly compared to the more numerous techniques involving vision. Indeed, because hearing functions quite differently than vision, sonification offers an alternative kind of understanding of data (sometimes more accurate), which would not be possible using eyes alone. Hearing is multi-directional—our ears don’t have to be pointing at a sound source in order to sense it. Furthermore, the frequency response of our hearing is thousands of times more accurate than our vision. In order to reproduce a moving image the sampling rate (called frame-rate) for film is 24 frames per second, while audio has to be sampled at 44,100 frames per second in order to accurately reproduce sound. In addition, aural perception works on simultaneous time scales—we can take in multiple streams of audio data at once at many different dynamics, while our pupils dilate and contract, limiting how much visual data we can absorb at a single time. Our ears are also amazing at detecting regular patterns over time in data; we hear these patterns as frequency, harmonic relationships, and timbre.

Image credit: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.


But hearing isn’t simple, either. In the current fascination with sonification, the fact that aesthetic decisions must be made in order to translate data into the auditory domain can be obscured. Headlines such as “Here’s What the Higgs Boson Sounds Like” are much sexier than headlines such as “Here is What One Possible Mapping of Some of the Data We Have Collected from a Scientific Measuring Instrument (which itself has inaccuracies) Into Sound.” To illustrate the complexity of these aesthetic decisions, which are always interior to the sonification process, I focus here on how my collaborators and I have been using sound to understand many kinds of scientific data.

My husband, Kevin Yager, a staff scientist at Brookhaven National Laboratory, works at the Center for Functional Nanomaterials using scattering data from x-rays to probe the structure of matter. One night I asked him how exactly the science of x-ray scattering works. He explained that X-rays “scatter” off of all the atoms/particles in the sample and the intensity is measured by a detector. He can then calculate the structure of the material, using the Fast Fourier Transform (FFT) algorithm. He started to explain FFT to me, but I interrupted him because I use FFT all the time in computer music. The same algorithm he uses to determine the structure of matter, musicians use to separate frequency content from time. When I was researching this post, I found a site for computer music which actually discusses x-ray scattering as a precursor for FFT used in sonic applications.
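The overlap is easy to demonstrate. The short sketch below (my example, not the lab's code) uses the same numpy FFT a computer musician would apply to audio in order to pull two known frequencies back out of a synthetic waveform; pointed at a real-space density profile instead of a sound wave, the identical call yields the spatial frequencies that scattering detectors measure.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr                       # one second of "time"
# A test signal with two known components: 440 Hz and 880 Hz.
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

spectrum = np.fft.rfft(signal)               # complex amplitude per frequency bin
freqs = np.fft.rfftfreq(len(signal), 1 / sr) # bin index -> Hz
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peaks))                         # ~[440.0, 880.0]
```

Swap the audio waveform for a one-dimensional density profile and `freqs` becomes spatial frequency (inverse distance) rather than Hz; the algorithm is indifferent to what the samples mean.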

To date, most sonifications have used data which changes over time – a fly's wings flapping, a heartbeat, a radiation signature. Except in special cases, Kevin's data does not exist in time – it is a single snapshot. But because data from x-ray scattering is a Fourier Transform of the real-space density distribution, we could use additive synthesis, using multiple simultaneous sine waves, to represent different spatial modes. Using this method, we swept through his data radially, like a clock hand, making timbre-based sonifications from the data by synthesizing sine waves with the loudness based on the intensity of the scattering data and the frequency based on the position.

We played a lot with the settings of the additive synthesis, including the length of the sound, the highest frequency, and even the number of frequency bins (going back to the clock metaphor – pretend the clock hand is a ruler – the number of frequency bins would be the number of demarcations on the ruler), arriving eventually at a set of optimized variables.
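Here is a minimal sketch, under stated assumptions, of that radial sweep: the detector image is read along a rotating "clock hand," each radial demarcation gets its own sine wave, intensity maps to loudness, and radial position maps to frequency. The frequency range, step duration, and normalization below are illustrative choices, not the settings we actually used.

```python
import numpy as np

SR = 44100

def sonify_radial(image, n_bins=50, n_steps=360, step_dur=0.01,
                  f_lo=100.0, f_hi=8000.0):
    """Sweep a 2-D intensity image like a clock hand; one sine per radial bin."""
    h, w = image.shape
    cy, cx = h / 2, w / 2
    freqs = np.geomspace(f_lo, f_hi, n_bins)      # radial position -> frequency
    radii = np.linspace(0, min(cy, cx) - 1, n_bins)
    t = np.arange(int(SR * step_dur)) / SR
    frames = []
    for k in range(n_steps):                      # one angular step per frame
        theta = 2 * np.pi * k / n_steps
        ys = (cy + radii * np.sin(theta)).astype(int)
        xs = (cx + radii * np.cos(theta)).astype(int)
        amps = image[ys, xs]                      # intensity -> loudness
        frames.append((amps[:, None] *
                       np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0))
    audio = np.concatenate(frames)
    return audio / (np.abs(audio).max() + 1e-9)   # normalize to [-1, 1]

# e.g. compare n_bins=10 vs n_bins=2000 on a dummy detector image:
demo = sonify_radial(np.random.rand(256, 256), n_bins=50)
```

Varying `n_bins` here is the "ruler demarcation" slider described above, which is what distinguishes the 10-, 50-, and 2000-bin tracks that follow.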

Here is one version of the track we created using 10 frequency bins:


Here is one we created using 2000:


And here is one we created using 50 frequency bins, which we settled on:


On a software synthesizer this would be like the default setting. In the future we hope to have an interactive graphic user interface where sliders control these variables, just like a musician tweaks the sound of a synth, so scientists can bring out, or mask, aspects of the data.

To hear what that would be like, here are a few tracks that vary length:




Finally, here is a track we created using different mappings of frequency and intensity:


Having these sliders would reinforce to the scientists that we are not creating “the sound of a metallic alloy,” we are creating one sonic representation of the data from the metallic alloy.

It is interesting that such a representation can be vital to scientists. At first, my husband went along with this sonification project as more of a thought experiment than something he thought would actually be useful in the lab, until he heard something distinct about one of those sounds, suggesting that there was a misaligned sample. Once Kevin heard that glitched sound (you can hear it in the video above), he was convinced that sonification was a useful tool for his lab. He and his colleagues are dealing with measurements 1/25,000th the width of a human hair, aiming an X-ray through twenty pieces of equipment to get the beam focused just right. If any piece of equipment is out of kilter, the data can't be collected. This is where our ears' non-directionality is useful. The scientist can be working on his/her computer and, using ambient sound, know when a sample is misaligned.


It remains to be seen/heard if the sonifications will be useful to actually understand the material structures. We are currently running an experiment using Mechanical Turk to determine whether this kind of multi-modal display (using vision and audio) is actually helpful. Basically, we are training one group of people on just the images of the scattering data and testing how well they do, and training another group on the images plus the sonification and testing how well they do.

I’m also working with collaborators at Stony Brook University on sonification of data. In one experiment we are using ambisonic (3-dimensional) sound to create a sonic map of the brain to understand drug addiction. Standing in the middle of the ambisonic cube, we hope to find relationships between voxels, a cube of brain tissue—analogous to pixels. When neurons fire in areas of the brain simultaneously there is most likely a causal relationship which can help scientists decode the brain activity of addiction. Computer vision researchers have been searching for these relationships unsuccessfully; we hope that our sonification will allow us to hear associations in distinct parts of the brain which are not easily recognized with sight. We are hoping to leverage the temporal pattern recognition of our auditory system, but we have been running into problems doing the sonification; each slice of data from the FMRI has about 300,000 data points. We have it working with 3,000 data points, but either our programming needs to get more efficient, or we have to get a much more powerful computer in order to work with all of the data.

On another project we are hoping to sonify gait data using smartphones. I'm working with some of my music students and a professor of Physical Therapy, Lisa Muratori, who works on understanding the underlying mechanisms of mobility problems in Parkinson's Disease (PD). The physical therapy lab has a digital motion-capture system and a split-belt treadmill for asymmetric stepping—the patients are supported by a harness so they don't fall. PD is a progressive nervous system disorder characterized by slow movement, rigidity, tremor, and postural instability. Because of degeneration of specific areas of the brain, individuals with PD have difficulty using internally driven cues to initiate and drive movement. However, many studies have demonstrated an almost normal movement pattern when persons with PD are provided external cues, including significant improvements in gait with rhythmic auditory cueing. So far the research with PD and sound has been unidirectional – the patients listen to sound and try to match their gait to the external rhythms from the auditory cues. In our system we will use biofeedback to sonify data from sensors the patients will wear and feed error messages back to the patient through music. Eventually we hope that patients will be able to adjust their gait by listening to self-generated musical distortions on a smartphone.
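As a sketch of what such self-generated musical distortion might look like: left/right step times from the wearable sensors are compared, and a larger asymmetry detunes a reference dyad more sourly. The error metric and the detune range here are my assumptions for illustration, not the system's actual design.

```python
import numpy as np

SR, BASE_FREQ = 44100, 440.0

def gait_error(left_steps, right_steps):
    """Relative asymmetry between mean left and right step durations (s)."""
    l, r = np.mean(left_steps), np.mean(right_steps)
    return abs(l - r) / ((l + r) / 2)

def feedback_tone(error, dur=0.5, max_detune=0.12):
    """Larger asymmetry -> a more sour, detuned dyad; zero error -> unison."""
    t = np.arange(int(SR * dur)) / SR
    detune = 1.0 + min(error, 1.0) * max_detune
    tone = (np.sin(2 * np.pi * BASE_FREQ * t) +
            np.sin(2 * np.pi * BASE_FREQ * detune * t))
    return tone / 2

# e.g. an asymmetric gait (left ~0.60 s, right ~0.71 s per step):
audio = feedback_tone(gait_error([0.61, 0.60], [0.72, 0.70]))
```

The design intuition is that beating between the two sines grows with the error, so the patient hears the music "settle" as their stride evens out, turning the correction into something felt rather than counted.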

As sonification becomes more prevalent, it is important to understand that aesthetic decisions are inevitable and even essential in every kind of data representation. We are so accustomed to looking at visual representations of information—from maps to pie charts—that we may forget that these are also arbitrary transcodings. Even a photograph is not an unambiguous record of reality; the mechanics of the camera and artistic choices of the photographer control the representation. So too, in sonification, do we have considerable latitude. Rather than view these ambiguities as a nuisance, we should embrace them as a freedom that allows us to highlight salient features, or uncover previously invisible patterns.


Margaret Anne Schedel is a composer and cellist specializing in the creation and performance of ferociously interactive media. She holds a certificate in Deep Listening with Pauline Oliveros and has studied composition with Mara Helmuth, Cort Lippe and McGregor Boyle. She sits on the boards of 60×60 Dance, the BEAM Foundation, Devotion Gallery, the International Computer Music Association, and Organised Sound. She contributed a chapter to the Cambridge Companion to Electronic Music, and is a joint author of Electronic Music published by Cambridge University Press. She recently edited an issue of Organised Sound on sonification. Her research focuses on gesture in music, and the sustainability of technology in art. She ran SUNY’s first Coursera Massive Open Online Course (MOOC) in 2013. As an Associate Professor of Music at Stony Brook University, she serves as Co-Director of Computer Music and is a core faculty member of cDACT, the consortium for digital art, culture and technology.

Featured Image: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.

Research carried out at the Center for Functional Nanomaterials, Brookhaven National Laboratory, is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-AC02-98CH10886.

REWIND! . . . If you liked this post, you might also like:

The Noises of Finance–Nicholas Knouf

Revising the Future of Music Technology–Aaron Trammell

A Brief History of Auto-Tune–Owen Marshall
