
Vocal Gender and the Gendered Soundscape: At the Intersection of Gender Studies and Sound Studies

Editor’s Note: Welcome to Sounding Out!’s annual February forum! This month, we’re wondering: what ideas regarding gender and sound do voices call forth? To think through this question, we’ve recruited several great writers who will be covering different aspects of gender and sound. Regular writer Regina Bradley will look at how music is gendered in Shonda Rhimes’ hit show Scandal. A.O. Roberts will discuss synthesized voices and gender. Art Blake will share with us his reflections on how his experience shifting his voice from feminine to masculine as a transgender man intersects with his work on John Cage. Robin James will return to SO! with an analysis of how ideas of what women should sound like have roots in Greek philosophy. Me? I’ll share a personal essay/analysis of what it means to be called a “loud woman.”

Today we start our February forum on gender and sound with Christine Ehrick‘s selections from her forthcoming book Radio and the Gendered Soundscape in Latin America. Below, she introduces us to the idea of the gendered soundscape, which she uses in her analysis of women’s radio speech from the 1930s to the 1950s. She will make you think twice about the voices you hear on the radio, in podcasts, over the phone…

In the meantime, lean in, close your eyes, and let the voices whisk you away.–Liana M. Silva, Managing Editor

Several years ago, while aboard a commercial airliner awaiting takeoff, I heard the expected sound of a voice emerging from the cockpit, transmitted via the plane’s P.A. system. The voice gave passengers the usual greeting and general information about weather conditions, flight time, etc. What was unusual, and what caught the otherwise distracted passengers’ attention, was the fact that the voice speaking was female. People looked up from their magazines and devices not because of the “message” but because of the “medium”: a voice that deviated from the standard soundscape of commercial aviation, a field composed mostly of men.

For this historian, interested in vocal gender and the female voice in particular, the incident was a fascinating demonstration of both the voice as performance of the gendered body, and the fact that the human voice can and often does communicate beyond (and sometimes despite) the words being spoken. In this essay I want to briefly discuss some of the ideas I explore more fully in my forthcoming book, a study of women/gender and golden age radio titled Radio and the Gendered Soundscape in Latin America: Women and Broadcasting in Argentina and Uruguay, 1930-1950 (forthcoming, Cambridge 2015). In this book, I use the stories of five women and one radio station to explore the possibilities and limits for women’s radio speech, and to pose some larger questions about vocal gender and the gendered soundscape. For this post, I present the conceptual framework that I use to understand how gender is constructed through the voice.

“DSC00814” by Flickr user jordan weaver, CC BY-NC-ND 2.0

Gender and sound have both been explored as categories of historical analysis, but largely in isolation from one another. The historiographical impact of gender analysis is almost too obvious to mention; suffice to say that attention to gender has altered the very questions historians ask of the past and the way we understand structures of power and historical change. More recently, historians have begun to incorporate R. Murray Schafer’s concept of the soundscape and what Jonathan Sterne has called “sonic thinking” into their analysis of the past (The Sound Studies Reader, 3). But not enough consideration has been given within the field of history to the ways sound may be gendered and gender sounded.

I bring these three threads together – gender, sound, and history – via the concept of the gendered soundscape. Helmi Järviluoma, Pirkko Moisala and Anni Vilkko introduce the term in their book Gender and Qualitative Methods (2004), which asks readers to contemplate the way gender – and gendered hierarchies – may be projected and/or heard in sound environments. We not only “learn gender through the total sensorium,” as they put it; gender is also represented, contested and reinforced through the aural (85). Thinking historically about gendered soundscapes can help us conceptualize sound as a space where categories of “male” and “female” are constituted within the context of particular events over time, and by extension the ways that power, inequality and agency might be expressed in the sonic realm—in other words, tuning in to sound as a signifier of power. Although many of us have been well-trained to look for gender, I consider what it means to listen for it.

“Untitled” by Flickr user Observe The Banana, CC BY-NC 2.0

The soundscape, of course, is not only gendered; other aspects of social hierarchy, such as race, class and sexuality, are also performed and perceived in the aural realm. Greg Goodale’s analysis of “the race of sound” in Sonic Persuasion: Reading Sound in the Recorded Age (2011), which argues that sound constructs rather than simply reiterates race, provides a useful framework for understanding both what we might call the gender of sound and the ways gender and race might intersect in the soundscape (76-105). As we learn to become more “ear-oriented” scholars, in other words, we come to perceive power, oppression, and agency in entirely new ways.

One of the most immediately gendered sound categories is the human voice, a richly historical convergence of human biology, technology and culture. We can and do hear gender in most human vocalizations; linguists seem to agree that, when listening to adult (non-elderly) voices speaking above a whisper, “gender determination is usually a simple task” (see, for example, David Puts, Steven Gaulin and Katherine Verdolini in “Dominance and the Evolution of Sexual Dimorphism in Human Voice Pitch” and Michael Jessen in “Speaker Classification in Forensic Phonetics and Acoustics”). When we hear a voice without a visual referent, as in the airplane example above or when listening to the radio, we immediately tend to classify it as “male” or “female.”

“my vocal cords.” by Flickr user Dan Simpson, CC BY-NC 2.0

Voice differences have roots in biological sex difference. With the onset of puberty, the larynx enlarges and the vocal folds increase in length and thickness, decreasing the frequency (Hz) of vocal fold vibration and thus lowering voice pitch. But while bodies classified as biologically female experience about a half-octave average drop in voice pitch with puberty, biological males tend to experience a full-octave average drop, so that adult male voices tend to operate within a lower frequency range than female voices. However, gendered constructions of the human voice vary widely over time and place.
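
Since pitch perception is logarithmic, these octave figures translate directly into frequency ratios: dropping n octaves multiplies a frequency by 2^(−n). Here is a minimal sketch of that arithmetic; the 260 Hz starting pitch is my own illustrative value, not a figure from the book:

```python
# Octave arithmetic: lowering a pitch by n octaves multiplies its
# frequency by 2**-n. The 260 Hz starting value is a hypothetical
# pre-pubertal speaking pitch, chosen only to illustrate the
# half-octave vs. full-octave difference described above.
def drop_octaves(freq_hz: float, octaves: float) -> float:
    """Return the frequency after lowering a pitch by the given number of octaves."""
    return freq_hz * 2 ** -octaves

full_drop = drop_octaves(260.0, 1.0)   # full-octave drop (typical male puberty)
half_drop = drop_octaves(260.0, 0.5)   # half-octave drop (typical female puberty)
print(round(full_drop, 1), round(half_drop, 1))  # 130.0 183.8
```

The point of the sketch is simply that a full-octave drop halves the frequency, while a half-octave drop only reduces it by a factor of about 0.71, which is why adult male and female speaking ranges overlap less than the raw "octave" language might suggest.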

Biology (body size, hormonal secretions, age, and other physiological factors) is in no way destiny when it comes to the human voice. Linguists distinguish between “anatomical voice quality features,” which in essence set the parameters of comfortable pitch range given a person’s vocal anatomy (the range outside of which it is difficult to maintain one’s speaking voice), and “voice quality settings,” which refer to where someone places their voice within that range (see Monique Adriana Johanna Biemans’ thesis, Gender variation in voice quality). Bound to some degree by these physiological parameters, humans can and do place their voices in ways that are consistent with the performative aspects of gender, and voice pitch is both highly variable and subject to cultural/historical framing and self-fashioning (for more on this subject, see Anne Karpf, The Human Voice: How this Extraordinary Instrument Reveals Essential Clues About Who We Are, 2006). Thus, like other aspects of gender, voice is culturally and historically constructed and performative.

Conceptualizing the voice as a sonic expression of the gendered body requires revisiting both the tendency of feminist scholars to equate “women’s voice” with writing or discourse, and the tendency of some media scholars to refer to voices without immediate visual referent (in film, radio) as “disembodied.” In their Introduction to Embodied Voices: Representing Female Vocality in Western Culture (1997), Leslie C. Dunn and Nancy A. Jones concisely articulate the challenge for scholars interested in the sonic/acoustic dimensions of women’s voices:

Feminists have used the word “voice” to refer to a wide range of aspirations: cultural agency, political enfranchisement, sexual autonomy, and expressive freedom, all of which have been historically denied to women. In this context, “voice” has become a metaphor for textual authority…This metaphor has become so pervasive, so intrinsic to feminist discourse that it makes us too easily forget (or repress) the concrete physical dimensions of the female voice upon which this metaphor was based. (1)

Thinking about voice in terms of vocal gender brings us to the complex relationship between voice and body. The concept of disembodiment conveys the sometimes uncanny effect of hearing (especially female) voices without an immediately discernible source. It also underscores the destabilizing effect of these unseen female voices liberated thus from patriarchy’s specular regime. Yet to refer to voices from an unseen source as “disembodied” is to suggest that the voice is somehow separate from the body, a problematic formulation.

“Untitled” by Flickr user Luci Correia, CC BY 2.0

Simply: if the voice is not the body, what is it? Even when it travels over long distances (via telephone or radio, for example) and/or if its source remains out of sight, the body is there, present via the sound vibrations it produces. Stepping away from concepts like disembodiment frees us to explore the nuances of the relationship between the voice and the body, and the presence of gendered bodies in the soundscape, particularly with regard to the vertiginous relationships between bodies and voices that are gendered female.

Gender and history impact how we read the tone, velocity and pitch of the voice, but they also shape the parameters of where and when particular voices are invited to speak or expected to remain silent. And here, of course, we encounter the ways gender hierarchy – alongside racial categorization – is expressed and constructed in the acoustic/vocal arena. Kathleen Hall Jamieson puts it succinctly in Eloquence in an Electronic Age: The Transformation of Political Speechmaking (1990): “History has many themes. One of them is that women should be quiet” (67). While by no means absent, women’s voices have remained largely outside of the realm of what Schafer calls “signal”: sounds listened to consciously, which often convey messages and/or authority. Just as other aspects of gender inequality become naturalized, patriarchy tunes our ears to listen to certain voices differently. In these formulations, women’s voices are thus subject to categorization as “noise” or “unwanted sound” (see Mike Goldsmith, Discord: The Story of Noise) and therefore dissonant, disruptive, and potentially dangerous.

The discomfort (or dissonance) with women’s voices, especially women’s voices speaking publicly and/or with authority, carried over into and shaped the history of radio, making early and golden age broadcasting an ideal venue for an historical exploration of gender and voice. What did it mean to hear women’s voices on the radio? How did radio rework the gendered dimensions of public and private space, and by extension the place of the female voice in the public sphere?

The emergence of radio in the early twentieth century was part of a larger revolution in human communication that Walter Ong, in Orality and Literacy: The Technologizing of the Word (1982), termed a “secondary orality”: an historical moment that reawakened older oral traditions and communal listening in a very different historical and technological context (3). It also reawakened a focus on the human voice, with all of its implications for the gendered soundscape.

“Jane Hoffman, Tobey Weinberg, Ruth Goodman, and Amelia Romano read for a radio broadcast about the Triangle Fire” by Flickr user Kheel Center, CC BY 2.0

In many parts of the world, the rise of radio also coincided with an upsurge in feminist politics and discourses calling for women’s full citizenship and related rights. As Kate Lacey notes in Feminine Frequencies: Gender, German Radio and the Public Sphere 1923-1945 (1997), “the arrival of radio heralded the modern era of mass communication, while women’s enfranchisement confirmed the onset of mass politics in the twentieth century.” Researching the history of women and radio – and particularly the sometimes hostile reactions to women’s radio voices – led me to appreciate the ways gender is performed and perceived via the voice, and from there to larger questions about the way social hierarchies – of gender, but also of race/ethnicity, class and sexuality – are reproduced and challenged within the sonic realm.

In this way we can better begin to contemplate the historical significance of women’s radio speech for understanding the sonic construction of gender. Depending on content and context, these voices carried the potential not only to challenge taboos on women’s oratory, but to insert the female body into spaces from which it had previously been excluded—like the cockpits (can’t help but note the name here) of commercial airliners.

Featured image: “ateliers claus – 140522 – monophonic – Radio Femmes Fatales” by Flickr user fabonthemoon, CC BY-NC-SA 2.0

Christine Ehrick is an Associate Professor in the Department of History at the University of Louisville. Her second book, Radio and the Gendered Soundscape in Latin America: Women and Broadcasting in Argentina and Uruguay, 1930-1950, will be published by Cambridge University Press in Fall 2015. The book explores women’s presence – and especially their voices – on the airwaves in the two leading South American radio markets of Buenos Aires and Montevideo. Her current work looks at comedy, gender and voice, with a focus on mid-twentieth century Argentine comedians Niní Marshall and Tomás Simari.

Thanks to Cambridge UP for allowing me to use some excerpts from the forthcoming book in this essay.

REWIND! . . . If you liked this post, you may also dig:

Look Who’s Talking, Y’all: Dr. Phil, Vocal Accent and the Politics of Sounding White– Christie Zwahlen

On Sound and Pleasure: Meditations on the Human Voice– Yvon Bonenfant

Heard Any Good Games Recently?: Listening to the Sportscape–Kaj Ahlsved

Optophones and Musical Print

The word type, as scanned by the optophone.

From E.E. Fournier d’Albe, The Moon Element (New York: D. Appleton & Company, 1924), 141.

In 1912, British physicist Edmund Fournier d’Albe built a device that he called the optophone, which converted light into tones. The first model—“the exploring optophone”—was meant to be a travel aid; it converted light into a sound of analogous intensity. A subsequent model, “the reading optophone,” scanned print using lamp-light separated into beams by a perforated disk. The pattern of light reflected back from a given character triggered a corresponding set of tones in a telephone receiver. d’Albe initially worked with 8 beams, producing 8 tones based on a diatonic scale. He settled on 5 notes: lower G, and then middle C, D, E and G. (Sol, do, re, mi, sol.) The optophone became known as a “musical print” machine. It was popularized by Mary Jameson, a blind student who achieved reading speeds of 60 words per minute.
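
The beam-to-note mapping can be caricatured in a few lines of code. This is a toy digital sketch only: d’Albe’s machine was analog, the exact pitch values here are standard modern tunings rather than his, and the convention that dark print (rather than white paper) sounds the notes is a simplification of the device’s actual behavior.

```python
# Toy sketch of the reading optophone's five-beam mapping (assumption:
# frequencies and the "dark print sounds the note" convention are
# illustrative, not drawn from d'Albe's analog design).
# The five notes, bottom beam to top: lower G, middle C, D, E, G
# (sol, do, re, mi, sol).
NOTE_HZ = {"G3": 196.0, "C4": 261.6, "D4": 293.7, "E4": 329.6, "G4": 392.0}
BEAM_NOTES = ["G3", "C4", "D4", "E4", "G4"]

def column_to_chord(column):
    """Map one scanned column (True = print interrupts that beam)
    to the list of note names that would sound."""
    return [note for note, dark in zip(BEAM_NOTES, column) if dark]

# A column crossing a full vertical stroke darkens every beam:
print(column_to_chord([True, True, True, True, True]))
# A column touching only the x-height region darkens the middle beams:
print(column_to_chord([False, True, True, False, False]))
```

As a letter passes under the beams, the sequence of such chords forms the characteristic "tune" of that letter, which is what readers like Mary Jameson learned to recognize at speed.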

Photograph of the optophone, an early scanner with a rounded glass bookrest.

Reading Optophone, held at Blind Veterans UK (formerly St. Dunstan’s). Photographed by the author. With thanks to Robert Baker for helping me search through the storeroom to locate this item, which was previously uncatalogued and “lost” for many years.

Scientific illustration of the optophone, showing a book on the bookrest and a pair of headphones for listening to the tonal output.

Schematic of optophone from Vetenskapen och livet (1922)

In the field of media studies, the optophone has become renowned through its imaginary repurposings by a number of modernist artists. For one thing, the optophone finds brief mention in Finnegans Wake. In turn, Marshall McLuhan credited James Joyce’s novel with being a new medium that turned text into sound. In “New Media as Political Forms,” McLuhan says that Joyce’s own “optophone principle” releases us from “the metallic and rectilinear embrace of the printed page.” More familiar within media studies today, Dada artist Raoul Hausmann patented (London 1935), but did not successfully build, an optophone presumably inspired by d’Albe’s model, which he hoped would be employed in audiovisual performances. This optophone was meant to convert sound into light as well as the reverse. It was part of a broader contemporary impulse to produce color music and synaesthetic art. Hausmann also wrote optophonetic poetry, based on the sounds and rhythms of “pure phonemes” and non-linguistic noises. In response, Francis Picabia painted two optophone portraits in 1921 and 1922. Optophone I, below, is composed of lines that might be sound waves, with a pattern that disorders vision.

Francis Picabia's Optophone I, a series of concentric black circles with a female figure at the center.

Francis Picabia, Optophone I (1922)

Theorists have repeatedly located Hausmann’s device at the origin of new media. Authors in the Audiovisuology, Media Archaeology, and Beyond Art: A Third Culture anthologies credit Hausmann’s optophone with bringing-into-being cybernetics, digitization, the CD-ROM, audiovisual experiments in video art, and “primitive computers.” It seems to have escaped notice that d’Albe also used the optophone to create electrical music. In his book, The Moon Element, he writes:

Needless to say, any succession or combination of musical notes can be picked out by properly arranged transparencies, and I have succeeded in transcribing a number of musical compositions in this manner, which are, of course, only audible in the telephone. These notes, in the absence of all other sounding mechanism, are particularly pure and free from overtones. Indeed, a musical optophone worked by this intermittent light, has been arranged by means of a simple keyboard, and some very pleasant effects may thus be obtained, more especially as the loudness and duration of the different notes is under very complete and separate control.

E.E. Fournier d’Albe, The Moon Element (New York: D. Appleton & Company, 1924), 107.

d’Albe’s device is typically portrayed as a historical cul-de-sac, with few users and no real technical influence. Yet optophones continued to be designed for blind people throughout the twentieth century; at least one model has users even today. Musical print machines, or “direct translators,” co-existed with more complex OCR devices—optical character recognizers that converted printed words into synthetic speech. Both types of reading machine contributed to today’s procedures for scanning and document digitization. Arguably, reading optophones intervened more profoundly in the order of print than did Hausmann’s synaesthetic machine: they not only translated between the senses, they introduced a new symbolic system by which to read. Like braille, later vibrating models proposed that the skin could also read.

In December 1922, the Optophone was brought to the United States from the United Kingdom for a demonstration before a number of educators who worked with blind children; only two schools ordered the device. Reading machine development accelerated in the U.S. around World War II. In his position as chair of the National Defense Research Committee, Vannevar Bush established a Committee on Sensory Devices in 1944, largely for the purpose of rehabilitating blind soldiers. The other options for reading—braille and Talking Books—were relatively scarce and had a high cost of production. Reading machines promised to give blind readers access to magazines and ephemeral print (recipes, signs, mail), which was arguably more important than access to books.

Piechowski, wearing a suit, scans the pen of the A-2 reader over a document.

Joe Piechowski with the A-2 reader. Courtesy of Rob Flory.

At RCA (Radio Corporation of America), the television innovator Vladimir Zworykin became involved with this project. Zworykin had visited Fournier d’Albe in London in the 19-teens and seen a demonstration of the optophone. Working with Les Flory and Winthrop Pike, Zworykin built an initial machine known as the A-2 that operated on the same principles, but used a different mechanism for scanning—an electric stylus, which was publicized as “the first pen that reads.” Following the trail of citations for RCA’s “Reading Aid for the Blind” patent (US 2420716A, filed 1944), it is clear that the “pen” became an aid in domains far afield from blindness. It was repurposed as an optical probe for measuring the oxygen content of blood (1958); an “optical system for facsimile scanners” (1972); and, in a patent awarded to Burroughs Corporation in 1964, a light gun. This gun, in turn, found its way into the handheld controls for the first home video game system, produced by Sanders Associates.

The A-2 optophone was tested on three blind research subjects, including ham radio enthusiast Joe Piechowski, who was more of a technical collaborator. According to the reports RCA submitted to the CSD, these readers were able to correlate the “chirping” or “tweeting” sounds of the machine with letters “at random with about eighty percent accuracy” after 60 hours of practice. Close spacing on a printed page made it difficult to differentiate between letters; readers also had difficulty moving the stylus at a steady pace and in a straight line. Piechowski achieved reading speeds of 20 words per minute, which RCA deemed too slow.

Attempts were made to incorporate “human factors” and create a more efficient tonal code, to reduce reading time as well as learning time and confusion between letters. One alternate auditory display was known as the compressed optophone. Rather than generate multiple tones or chords for a single printed letter, which was highly redundant and confusing to the ear, the compressed version identified only certain features of a printed letter, such as the presence of an ascender or descender. Below is a comparison between the tones of the original optophone and the compressed version, recorded by physicist Patrick Nye in 1965. The following eight lowercase letters make up the source material: f, i, k, j, p, q, r, z.

Original record in the author’s possession. With thanks to Elaine Nye, who generously tracked down two of her personal copies at the author’s request. The second copy is now held at Haskins Laboratories.

An image of the letter r as scanned by the optophone and compressed optophone.

From Patrick Nye, “An Investigation of Audio Outputs for a Reading Machine,” AFB Research Bulletin (July 1965): 30.
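
The feature-extraction idea behind the compressed optophone can be sketched as follows. The feature table is my own rough classification for illustration, not Nye’s actual code, which distinguished more features than these two.

```python
# Sketch of the compressed optophone's principle: signal only coarse
# letter features (here, just ascender/descender presence) instead of
# a full tone pattern per column. The classification below is my own
# illustration, not Nye's actual encoding.
ASCENDERS = set("bdfhklt")
DESCENDERS = set("gjpqy")

def letter_features(ch):
    """Return the coarse features a compressed display might report for a letter."""
    return {"ascender": ch in ASCENDERS, "descender": ch in DESCENDERS}

# The eight source letters from Nye's recording:
for ch in "fikjpqrz":
    print(ch, letter_features(ch))
```

The design trade-off is clear even in this toy version: letters like r and z collapse onto the same feature code, so the compressed display buys a simpler, faster-to-learn sound at the cost of ambiguity that the reader must resolve from context.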

Because of the seeming limitations of tonal reading, RCA engineers re-directed their research to add character recognition to the scanning process. This was controversial: direct translators like the optophone were perceived as too difficult because they required blind people to do something akin to learning to read print—learning a symbolic tonal or tactile code. At an earlier moment, braille had been critiqued on similar grounds; many in the blind community have argued that mainstream anxieties about braille sprang from its symbolic difference. Speed, moreover, is relative. Reading machine users protested that direct translators like the optophone were inexpensive to build and already available—why wait for the refinement of OCR and synthetic speech? Nevertheless, between November 1946 and May 1947, Zworykin, Flory, and Pike worked on a prototype “letter reading machine,” today widely considered to be the first successful example of optical character recognition (OCR). Before reliable synthetic speech, this device spelled out words letter by letter using tape recordings. The Letter-Reader was too massive and expensive for personal use, however. It also had an operating speed of 20 words per minute—thus it was hardly an improvement over the A-2 translator.

Haskins Laboratories, another affiliate of the Committee on Sensory Devices, began working on the reading machine problem around the same time, ultimately completing an enormous amount of research into synthetic speech and—as argued by Donald Shankweiler and Carol Fowler—the “speech code” itself. In the 1940s, before workable text-to-speech, researchers at Haskins wanted to determine whether tones or artificial phonemes (“speech-like speech”) were easier to read by ear. They developed a “machine dialect of English,” named wuhzi: “a transliteration of written English which preserved the phonetic patterns of the words.” An example can be played below. The eight source words are: With, Will, Were, From, Been, Have, This, That.

Original record in the author’s possession. From Patrick Nye, “An Investigation of Audio Outputs for a Reading Machine” (1965). With thanks to Elaine Nye.

Based on the results of tests with several human subjects, the Haskins researchers concluded that aural reading via speech-like sounds was necessarily faster than reading musical tones. Like the RCA engineers, they felt that a requirement of these machines should be a fast rate of reading. Minimally, they felt that reading speed should keep pace with rapid speech, at about 200 words per minute.

Funded by the Veterans Administration, members of Mauch Laboratories in Ohio worked on both musical optophones and spelled-speech recognition machines from the 1950s into the 1970s. One of their many devices, the Visotactor, was a direct-translator with vibro-tactile output for four fingers. Another, the Visotoner, was a portable nine-channel optophone. All of the Mauch machines were tested by Harvey Lauer, a technology transfer specialist for the Veterans Administration for over thirty years, himself blind. Below is an excerpt from a Visotoner demonstration, recorded by Lauer in 1971.

Visotoner demonstration. Original 7” open reel tape in author’s possession. With thanks to Harvey Lauer for sharing items from his impressive collection and for collaborating with the author over many years.

Lauer's fingers are pictured in the finger-rests of the Visotactor, scanning a document.

Harvey Lauer reading with the Visotactor, a text-to-tactile translator, 1977.

Later on the same tape, Lauer discusses using the Visotoner to read mail, identify currency, check over his own typing, and read printed charts or graphics. He achieved reading speeds of 40 words per minute with the device. Lauer has also told me that he prefers the sound of the Visotoner to that of other optophone models—he compares its sound to Debussy, or the music for dream sequences in films.

Mauch also developed a spelled speech OCR machine called the Cognodictor, which was similar to the RCA model but made use of synthetic speech. In the recording below, Lauer demonstrates this device by reading a print-out about IBM fonts. He simultaneously reads the document with the Visotoner, which reveals glitches in the Cognodictor’s spelling.

Original 7” open reel tape in the author’s possession. With thanks to Harvey Lauer.

A hand uses the metal probe of the Cognodictor to scan a typed document.

The Cognodictor. Glendon Smith and Hans Mauch, “Research and Development in the Field of Reading Machines for the Blind,” Bulletin of Prosthetics Research (Spring 1977): 65.

In 1972, at the request of Lauer and other blind reading machine users, Mauch assembled a stereo-optophone with ten channels, called the Stereotoner. This device was distributed through the VA but never marketed, and most of the documentation exists in audio format, specifically in sets of training tapes that were made for blinded veterans who were the test subjects. Some promotional materials, such as the short video below, were recorded for sighted audiences—presumably teachers, rehabilitation specialists, or funding agencies.

Mauch Stereo Toner from Sounding Out! on Vimeo.

Video courtesy of Harvey Lauer.

Mary Jameson corresponded with Lauer about the Stereotoner, via tape and braille, in the 1970s. In the braille letter pictured below she comments, “I think that stereotoner signals are the clearest I have heard.”

Scan of a braille letter from Jameson to Lauer.

Letter courtesy of Harvey Lauer. Transcribed by Shafeka Hashash.

In 1973, with the marketing of the Kurzweil Reader, funding for direct translation optophones ceased. The Kurzweil Reader was advertised as the first machine capable of multi-font OCR; it was made up of a digital computer and flatbed scanner and it could recognize a relatively large number of typefaces. Kurzweil recalls in his book The Age of Spiritual Machines that this technology quickly transferred to Lexis-Nexis as a way to retrieve information from scanned documents. As Lauer explained to me, the abandonment of optophones was a serious problem for people with print disabilities: the Kurzweil Readers were expensive ($10,000-$50,000 each); early models were not portable and were mostly purchased by libraries. Despite being advertised as omnifont readers, they could not in fact recognize most printed material. The very fact of captchas speaks to the continued failures of perfect character recognition by machines. And, as the “familiarization tapes” distributed to blind readers indicate, the early synthetic speech interface was not transparent—training was required to use the Kurzweil machines.

Original cassette in the author’s possession. 

A young Kurzweil stands by his reading machine, demonstrated by Jernigan, who is seated.

Raymond Kurzweil and Kenneth Jernigan with the Kurzweil Reading Machine (NFB, 1977). Courtesy National Federation of the Blind.

Lauer always felt that the ideal reading machine should have both talking OCR and direct-translation capabilities, the latter being used to get a sense of the non-text items on a printed page, or to “preview material and read unusual and degraded print.” Yet the long history of the optophone demonstrates that certain styles of decoding have been more easily naturalized than others—and symbols have increasingly been favored if they bear a close relation to conventional print or speech. Finally, as computers became widely available, the focus for blind readers shifted, as Lauer puts it, “from reading print to gaining access to computers.” Today, many electronic documents continue to be produced without OCR, and thus cannot be translated by screen readers; graphical displays and videos are largely inaccessible; and portable scanners are far from universal, leaving most “ephemeral” print still unreadable.

Mara Mills is an Assistant Professor of Media, Culture, and Communication at New York University, working at the intersection of disability studies and media studies. She is currently completing a book titled On the Phone: Deafness and Communication Engineering. Articles from this project can be found in Social Text, differences, the IEEE Annals of the History of Computing, and The Oxford Handbook of Sound Studies. Her second book project, Print Disability and New Reading Formats, examines the reformatting of print over the course of the past century by blind and other print disabled readers, with a focus on Talking Books and electronic reading machines. This research is supported by NSF Award #1354297.