
Optophones and Musical Print

The word type, as scanned by the optophone.

From E.E. Fournier d’Albe, The Moon Element (New York: D. Appleton & Company, 1924), 141.

 

In 1912, British physicist Edmund Fournier d’Albe built a device that he called the optophone, which converted light into tones. The first model—“the exploring optophone”—was meant to be a travel aid; it converted light into a sound of analogous intensity. A subsequent model, “the reading optophone,” scanned print using lamp-light separated into beams by a perforated disk. The pattern of light reflected back from a given character triggered a corresponding set of tones in a telephone receiver. d’Albe initially worked with 8 beams, producing 8 tones based on a diatonic scale. He settled on 5 notes: lower G, and then middle C, D, E and G. (Sol, do, re, mi, sol.) The optophone became known as a “musical print” machine. It was popularized by Mary Jameson, a blind student who achieved reading speeds of 60 words per minute.
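The reading principle can be sketched in miniature: five beams scan a vertical slice of a printed letter, and each beam interrupted by ink sounds its note, so every slice yields a chord. The frequencies and the toy bitmap below are assumptions for illustration, not a reconstruction of d’Albe’s circuitry.

```python
# Illustrative sketch of the reading optophone's principle (assumed
# frequencies; d'Albe's machine was electromechanical, not digital).
# Five beams -> five notes: lower G, middle C, D, E, G (sol, do, re, mi, sol).
NOTE_FREQS = [196.00, 261.63, 293.66, 329.63, 392.00]

def column_to_chord(column):
    """Map one 5-cell column scan (True = ink present) to sounding frequencies."""
    if len(column) != len(NOTE_FREQS):
        raise ValueError("expected one cell per beam")
    return [freq for cell, freq in zip(column, NOTE_FREQS) if cell]

# A crude 5x3 bitmap of a letter shape, scanned column by column:
letter = [
    [True,  False, True ],
    [True,  False, True ],
    [False, True,  False],
    [False, False, False],
    [False, False, False],
]
columns = list(zip(*letter))  # transpose rows into scan columns
chords = [column_to_chord(list(col)) for col in columns]
```

Read left to right, the letter becomes a short sequence of chords — the "musical print" that readers like Mary Jameson learned to decode.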

Photograph of the optophone, an early scanner with a rounded glass bookrest.

Reading Optophone, held at Blind Veterans UK (formerly St. Dunstan’s). Photographed by the author. With thanks to Robert Baker for helping me search through the storeroom to locate this item, which was previously uncatalogued and “lost” for many years.

 

Scientific illustration of the optophone, showing a book on the bookrest and a pair of headphones for listening to the tonal output.

Schematic of optophone from Vetenskapen och livet (1922)

In the field of media studies, the optophone has become renowned through its imaginary repurposings by a number of modernist artists. For one thing, the optophone finds brief mention in Finnegans Wake. In turn, Marshall McLuhan credited James Joyce’s novel with being a new medium, turning text into sound. In “New Media as Political Forms,” McLuhan says that Joyce’s own “optophone principle” releases us from “the metallic and rectilinear embrace of the printed page.” More familiar within media studies today, Dada artist Raoul Hausmann patented (London, 1935), but did not successfully build, an optophone presumably inspired by d’Albe’s model, which he hoped would be employed in audiovisual performances. This optophone was meant to convert sound into light as well as the reverse. It was part of a broader contemporary impulse to produce color music and synaesthetic art. Hausmann also wrote optophonetic poetry, based on the sounds and rhythms of “pure phonemes” and non-linguistic noises. In response, Francis Picabia painted two optophone portraits, in 1921 and 1922. Optophone I, below, is composed of lines that might be sound waves, with a pattern that disorders vision.

Francis Picabia's Optophone I, a series of concentric black circles with a female figure at the center.

Francis Picabia, Optophone I (1922)

Theorists have repeatedly located Hausmann’s device at the origin of new media. Authors in the Audiovisuology, Media Archaeology, and Beyond Art: A Third Culture anthologies credit Hausmann’s optophone with bringing-into-being cybernetics, digitization, the CD-ROM, audiovisual experiments in video art, and “primitive computers.” It seems to have escaped notice that d’Albe also used the optophone to create electrical music. In his book, The Moon Element, he writes:

Needless to say, any succession or combination of musical notes can be picked out by properly arranged transparencies, and I have succeeded in transcribing a number of musical compositions in this manner, which are, of course, only audible in the telephone. These notes, in the absence of all other sounding mechanism, are particularly pure and free from overtones. Indeed, a musical optophone worked by this intermittent light, has been arranged by means of a simple keyboard, and some very pleasant effects may thus be obtained, more especially as the loudness and duration of the different notes is under very complete and separate control.

E.E. Fournier d’Albe, The Moon Element (New York: D. Appleton & Company, 1924), 107.

d’Albe’s device is typically portrayed as a historical cul-de-sac, with few users and no real technical influence. Yet optophones continued to be designed for blind people throughout the twentieth century; at least one model has users even today. Musical print machines, or “direct translators,” co-existed with more complex OCR-devices—optical character recognizers that converted printed words into synthetic speech. Both types of reading machine contributed to today’s procedures for scanning and document digitization. Arguably, reading optophones intervened more profoundly into the order of print than did Hausmann’s synaesthetic machine: they not only translated between the senses, they introduced a new symbolic system by which to read. Later vibrating models proposed, as braille had, that the skin could also read.

In December 1922, the Optophone was brought to the United States from the United Kingdom for a demonstration before a number of educators who worked with blind children; only two schools ordered the device. Reading machine development accelerated in the U.S. around World War II. In his position as chair of the National Defense Research Committee, Vannevar Bush established a Committee on Sensory Devices in 1944, largely for the purpose of rehabilitating blind soldiers. The other options for reading—braille and Talking Books—were relatively scarce and had a high cost of production. Reading machines promised to give blind readers access to magazines and ephemeral print (recipes, signs, mail), which was arguably more important than access to books.

Piechowski, wearing a suit, scans the pen of the A-2 reader over a document.

Joe Piechowski with the A-2 reader. Courtesy of Rob Flory.

At RCA (Radio Corporation of America), the television innovator Vladimir Zworykin became involved with this project. Zworykin had visited Fournier d’Albe in London in the 1910s and seen a demonstration of the optophone. Working with Les Flory and Winthrop Pike, Zworykin built an initial machine known as the A-2 that operated on the same principles, but used a different mechanism for scanning—an electric stylus, which was publicized as “the first pen that reads.” Following the trail of citations for RCA’s “Reading Aid for the Blind” patent (US 2420716A, filed 1944), it is clear that the “pen” became an aid in domains far afield from blindness. It was repurposed as an optical probe for measuring the oxygen content of blood (1958); an “optical system for facsimile scanners” (1972); and, in a patent awarded to Burroughs Corporation in 1964, a light gun. This gun, in turn, found its way into the handheld controls for the first home video game system, produced by Sanders Associates.

The A-2 optophone was tested on three blind research subjects, including ham radio enthusiast Joe Piechowski, who was more of a technical collaborator. According to the reports RCA submitted to the CSD, these readers were able to correlate the “chirping” or “tweeting” sounds of the machine with letters “at random with about eighty percent accuracy” after 60 hours of practice. Close spacing on a printed page made it difficult to differentiate between letters; readers also had difficulty moving the stylus at a steady pace and in a straight line. Piechowski achieved reading speeds of 20 words per minute, which RCA deemed too slow.

Attempts were made to incorporate “human factors” and create a more efficient tonal code, to reduce reading time as well as learning time and confusion between letters. One alternate auditory display was known as the compressed optophone. Rather than generate multiple tones or chords for a single printed letter, which was highly redundant and confusing to the ear, the compressed version identified only certain features of a printed letter, such as the presence of an ascender or descender. Below is a comparison between the tones of the original optophone and the compressed version, recorded by physicist Patrick Nye in 1965. The following eight lower case letters make up the source material: f, i, k, j, p, q, r, z.

Original record in the author’s possession. With thanks to Elaine Nye, who generously tracked down two of her personal copies at the author’s request. The second copy is now held at Haskins Laboratories.

An image of the letter r as scanned by the optophone and compressed optophone.

From Patrick Nye, “An Investigation of Audio Outputs for a Reading Machine,” AFB Research Bulletin (July 1965): 30.
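The contrast between the two codes can be sketched as follows: where the original optophone sounds a chord for every ink-filled cell, a compressed code signals only coarse letter features. The feature scheme, row thresholds, and toy bitmaps below are invented for illustration.

```python
# Hypothetical sketch of the "compressed" idea: signal only coarse
# features of a letter, such as an ascender (ink above the x-height)
# or a descender (ink below the baseline).

def letter_features(rows, x_height_row=2, baseline_row=4):
    """rows: equal-length tuples of 0/1, top to bottom.
    Returns the set of coarse features a compressed code might sound."""
    features = set()
    if any(any(r) for r in rows[:x_height_row]):      # ink above x-height
        features.add("ascender")
    if any(any(r) for r in rows[baseline_row + 1:]):  # ink below baseline
        features.add("descender")
    return features

# Toy 6-row bitmaps: 'f' has an ascender; 'p' has a descender.
f_bitmap = [(0, 1), (1, 1), (1, 0), (1, 0), (1, 0), (0, 0)]
p_bitmap = [(0, 0), (0, 0), (1, 1), (1, 1), (1, 0), (1, 0)]
```

A handful of such features distinguishes far fewer letters than the full tonal code, but produces a far sparser, less confusing stream of sound — the trade-off the compressed optophone made.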

 

Because of the seeming limitations of tonal reading, RCA engineers re-directed their research to add character recognition to the scanning process. This was controversial: direct translators like the optophone were perceived as too difficult because they required blind people to do something akin to learning to read print—learning a symbolic tonal or tactile code. At an earlier moment, braille had been critiqued on similar grounds; many in the blind community have argued that mainstream anxieties about braille sprang from its symbolic difference. Speed, moreover, is relative. Reading machine users protested that direct translators like the optophone were inexpensive to build and already available—why wait for the refinement of OCR and synthetic speech? Nevertheless, between November 1946 and May 1947, Zworykin, Flory, and Pike worked on a prototype “letter reading machine,” today widely considered to be the first successful example of optical character recognition (OCR). Before reliable synthetic speech, this device spelled out words letter by letter using tape recordings. The Letter-Reader was too massive and expensive for personal use, however. It also had an operating speed of 20 words per minute—thus it was hardly an improvement over the A-2 translator.

Haskins Laboratories, another affiliate of the Committee on Sensory Devices, began working on the reading machine problem around the same time, ultimately completing an enormous amount of research into synthetic speech and—as argued by Donald Shankweiler and Carol Fowler—the “speech code” itself. In the 1940s, before workable text-to-speech, researchers at Haskins wanted to determine whether tones or artificial phonemes (“speech-like speech”) were easier to read by ear. They developed a “machine dialect of English,” named wuhzi: “a transliteration of written English which preserved the phonetic patterns of the words.” An example can be played below. The eight source words are: With, Will, Were, From, Been, Have, This, That.

Original record in the author’s possession. From Patrick Nye, “An Investigation of Audio Outputs for a Reading Machine” (1965). With thanks to Elaine Nye.

Based on the results of tests with several human subjects, the Haskins researchers concluded that aural reading via speech-like sounds was necessarily faster than reading musical tones. Like the RCA engineers, they felt that a requirement of these machines should be a fast rate of reading. Minimally, they felt that reading speed should keep pace with rapid speech, at about 200 words per minute.

Funded by the Veterans Administration, members of Mauch Laboratories in Ohio worked on both musical optophones and spelled-speech recognition machines from the 1950s into the 1970s. One of their many devices, the Visotactor, was a direct-translator with vibro-tactile output for four fingers. Another, the Visotoner, was a portable nine-channel optophone. All of the Mauch machines were tested by Harvey Lauer, a technology transfer specialist for the Veterans Administration for over thirty years, himself blind. Below is an excerpt from a Visotoner demonstration, recorded by Lauer in 1971.

Visotoner demonstration. Original 7” open reel tape in author’s possession. With thanks to Harvey Lauer for sharing items from his impressive collection and for collaborating with the author over many years.

Lauer's fingers are pictured in the finger-rests of the Visotactor, scanning a document.

Harvey Lauer reading with the Visotactor, a text-to-tactile translator, 1977.

Later on the same tape, Lauer discusses using the Visotoner to read mail, identify currency, check over his own typing, and read printed charts or graphics. He achieved reading speeds of 40 words per minute with the device. Lauer has also told me that he prefers the sound of the Visotoner to that of other optophone models—he compares its sound to Debussy, or the music for dream sequences in films.

Mauch also developed a spelled speech OCR machine called the Cognodictor, which was similar to the RCA model but made use of synthetic speech. In the recording below, Lauer demonstrates this device by reading a print-out about IBM fonts. He simultaneously reads the document with the Visotoner, which reveals glitches in the Cognodictor’s spelling.

Original 7” open reel tape in the author’s possession. With thanks to Harvey Lauer.

A hand uses the metal probe of the Cognodictor to scan a typed document.

The Cognodictor. Glendon Smith and Hans Mauch, “Research and Development in the Field of Reading Machines for the Blind,” Bulletin of Prosthetics Research (Spring 1977): 65.

In 1972, at the request of Lauer and other blind reading machine users, Mauch assembled a stereo-optophone with ten channels, called the Stereotoner. This device was distributed through the VA but never marketed, and most of the documentation exists in audio format, specifically in sets of training tapes that were made for blinded veterans who were the test subjects. Some promotional materials, such as the short video below, were recorded for sighted audiences—presumably teachers, rehabilitation specialists, or funding agencies.

Mauch Stereo Toner from Sounding Out! on Vimeo.

Video courtesy of Harvey Lauer.

Mary Jameson corresponded with Lauer about the Stereotoner, via tape and braille, in the 1970s. In the braille letter pictured below she comments, “I think that stereotoner signals are the clearest I have heard.”

Scan of a braille letter from Jameson to Lauer.

Letter courtesy of Harvey Lauer. Transcribed by Shafeka Hashash.

In 1973, with the marketing of the Kurzweil Reader, funding for direct translation optophones ceased. The Kurzweil Reader was advertised as the first machine capable of multi-font OCR; it was made up of a digital computer and flatbed scanner and it could recognize a relatively large number of typefaces. Kurzweil recalls in his book The Age of Spiritual Machines that this technology quickly transferred to Lexis-Nexis as a way to retrieve information from scanned documents. As Lauer explained to me, the abandonment of optophones was a serious problem for people with print disabilities: the Kurzweil Readers were expensive ($10,000-$50,000 each); early models were not portable and were mostly purchased by libraries. Despite being advertised as omnifont readers, they could not in fact recognize most printed material. The very fact of captchas speaks to the continued failure of machines to achieve perfect character recognition. And, as the “familiarization tapes” distributed to blind readers indicate, the early synthetic speech interface was not transparent—training was required to use the Kurzweil machines.

Original cassette in the author’s possession. 

A young Kurzweil stands by his reading machine, demonstrated by Jernigan, who is seated.

Raymond Kurzweil and Kenneth Jernigan with the Kurzweil Reading Machine (NFB, 1977). Courtesy National Federation of the Blind.

Lauer always felt that the ideal reading machine should have both talking OCR and direct-translation capabilities, the latter being used to get a sense of the non-text items on a printed page, or to “preview material and read unusual and degraded print.” Yet the long history of the optophone demonstrates that certain styles of decoding have been more easily naturalized than others—and symbols have increasingly been favored if they bear a close relation to conventional print or speech. Finally, as computers became widely available, the focus for blind readers shifted, as Lauer puts it, “from reading print to gaining access to computers.” Today, many electronic documents continue to be produced without OCR, and thus cannot be translated by screen readers; graphical displays and videos are largely inaccessible; and portable scanners are far from universal, leaving most “ephemeral” print still unreadable.

Mara Mills is an Assistant Professor of Media, Culture, and Communication at New York University, working at the intersection of disability studies and media studies. She is currently completing a book titled On the Phone: Deafness and Communication Engineering. Articles from this project can be found in Social Text, differences, the IEEE Annals of the History of Computing, and The Oxford Handbook of Sound Studies. Her second book project, Print Disability and New Reading Formats, examines the reformatting of print over the course of the past century by blind and other print disabled readers, with a focus on Talking Books and electronic reading machines. This research is supported by NSF Award #1354297.

SO! Amplifies: Ian Rawes and the London Sound Survey

SO! Amplifies…a highly-curated, rolling mini-post series by which we editors hip you to cultural makers and organizations doing work we really really dig. You’re welcome!


The London Sound Survey website went online in 2009 with a couple of hundred recordings I’d made over the previous year. For a long time I’d wanted to make a website about London but couldn’t think of a good angle. When I got a job as a storeman in the British Library’s sound archive I became interested in field recording. There were the chance discoveries in the crates I hauled around of LPs like Murray Schafer’s The Vancouver Soundscape and the Time of Bells series by the anthropologist Steven Feld. I realised that sound could be the way to know my home city better and to present my experience of it.

Fast forward to last week: It is a warm June afternoon and the marsh is alive with the hum of the Waltham Cross electricity substation. I am a few miles to the northeast of London in the shallow crease of the Lea Valley. It’s a part of the extra-urban mosaic of reservoirs, quarries, industrial brownfield sites, grazing lands, nature reserves and outdoor leisure centres which has been usefully named “Edgelands” by the environmentalist Marion Shoard.

To make the recording, I’m wearing two mics strapped to each side of my head. The grey acrylic fur windcovers enveloping each mic might, from a distance, look like small woodland animals. It’s as well that not many people come here.
This is how I’m spending the summer, gathering the raw material for a new section on the London Sound Survey website. The London Sound Survey is a growing collection of Creative Commons-licensed sound recordings of places, events and wildlife in the British capital. Historical references too are gathered to find out how London’s sounds have changed. The new section is partly an experiment in depicting the sounds of places as diagrams and collages rather than literal-minded maps. But it’s also a nice indulgence after quitting a job where I spent the last three years in a windowless room.
Content of the daytime sound grid recordings depicted in graphical form. The louder the sound, the darker the icon. More than one icon of the same kind means that sound takes up more of the recording. The London Sound Survey® 2014


Listening as a topic of scholarly interest has grown in popularity recently. I was interested too, and thought the best way forward was to find some expert listeners – blind people – and ask for their opinions. I soon learned there were differences in perceptions between those born blind, and those with age-related visual impairments. The former are more likely to have detailed mental maps of their surroundings based on listening to reverberation, from which they learn about features like the width of streets and the height of buildings.
I’m grateful to have Andre Louis, a blind musician and field recordist, begin to add his recordings and commentary to the LSS website. I’m always struck by the precision with which Andre pays attention to what he hears around him.

Hear the city’s busy thoroughfares and quieter corners through the ears of musician and recordist Andre Louis. His thoughts on why he records are rendered in braille to form the basis for a new London sound map. The London Sound Survey® 2014

Other work is to be done. The Museum of London has offered to archive the site’s recordings and I have to ferret out all the original uncompressed sound files for them. Also, new batches of recordings have to be made for another site project, the 12 Tones of London. Here I’ve used census data and a statistical method called cluster analysis to sort neighbourhoods into 12 groups, and identify in each group the most demographically ‘typical’ neighbourhood to record in.
12 Tones of London uses a statistical analysis to select 12 out of London's 623 council wards (not counting the City of London) in the hope that their sound profiles can be generalised across relatively large swathes of the capital. It makes central to the investigation demographic factors such as class, ethnicity and age.

12 Tones of London uses a statistical analysis to select 12 out of London’s 623 council wards (not counting the City of London) in the hope that their sound profiles can be generalised across relatively large swathes of the capital. The London Sound Survey® 2014

This way, the primary social facts of class and ethnicity are put into the foreground of the project by determining where recordings are made. It’s a small start in moving away from the tropes of unusual or disappearing sounds, and towards how new ways of living sound in a city reproducing itself through great flows of capital and labour.
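The selection step behind 12 Tones of London can be sketched in miniature: once wards are grouped on census variables, record in the ward nearest its cluster’s centroid — the most demographically “typical” one. The ward names and figures below are invented for illustration; the real project uses full census data and 623 wards.

```python
# Toy version of picking the most "typical" ward in one cluster:
# the ward whose demographic feature vector lies nearest the
# cluster's centroid (mean of all member vectors).
import math

def centroid(points):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(xs) / len(points) for xs in zip(*points)]

def most_typical(cluster):
    """cluster: dict of ward name -> demographic feature vector."""
    c = centroid(list(cluster.values()))
    return min(cluster, key=lambda ward: math.dist(cluster[ward], c))

# Invented figures, e.g. (% in social housing, % under 30):
cluster = {
    "Ward A": [0.30, 0.20],
    "Ward B": [0.32, 0.22],
    "Ward C": [0.50, 0.40],
}
```

Repeating this over twelve clusters yields twelve recording sites, each standing in for a larger demographic swathe of the city.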
The London Sound Survey belongs to the tradition of enthusiasts’ websites which strive to amass as much information as they can about their chosen subjects. It has an open-ended design since the boundaries of what can be learned about city life and history through sound have hardly been tested, far less determined. It’s probably benefited from how the internet has expanded people’s access to music and other media, and from that a greater willingness to experiment in what they choose to listen to.
Ian Rawes was born in 1965 and grew up in London where he’s spent most of his life. Since leaving school he’s worked as a printer, book designer, market stallholder, concert promoter and sound archivist. He now runs the London Sound Survey full-time and lives in a suburb of south-east London.

Papa Sangre and the Construction of Immersion in Audio Games

Editor’s Note: Welcome to Sounding Out!‘s fall forum titled “Sound and Play,” where we ask how sound studies, as a discipline, can help us to think through several canonical perspectives on play. While Johan Huizinga once argued that play is the primeval foundation from which all culture has sprung, it is important to ask where sound fits into this construction of culture; does it too have the potential to liberate or re-entrench our social worlds? SO!’s new regular contributor Enongo Lumumba-Kasongo notes how audio games, like Papa Sangre, often use sound as a gimmick to engage players, and considers the politics of this feint. For whom are audio games immersive, and how does the experience serve to further marginalize certain people or disadvantaged groups?–AT

Immersion is a problem at the heart of sound studies. As Frances Dyson (2009) suggests in Sounding New Media, “Sound is the immersive medium par excellence. Three dimensional, interactive and synesthetic, perceived in the here and now of an embodied space, sound returns to the listener the very same qualities that media mediates…Sound surrounds” (4). Alternatively, in the context of games studies (a field that is increasingly engaged with sound studies), issues of sound and immersion have most recently been addressed in terms of instrumental potentialities, historical developments, and technical constraints. Some notable examples include Sander Huiberts’ (2010) M.A. thesis entitled “Captivating Sound: The Role of Audio Immersion for Computer Games,” in which he details technical and philosophical frames of immersion as they relate to the audio of a variety of computer games, and an article by Aaron Oldenburg (2013) entitled “Sonic Mechanics: Audio as Gameplay,” in which he situates the immersive aspects of audio-gameplay within contemporaneous experimental art movements. This research provokes the question: How do those who develop these games construct the idea of immersion through game design and what does this mean for users who challenge this construct? Specifically I would like to challenge Dyson’s claim that sound really is “the immersive medium par excellence” by considering how the concept of immersion in audio-based gameplay can be tied to privileged notions of character and game development.

In order to investigate this problem, I decided to play an audio game and document my daily experiences on a WordPress blog. Based on its simulation of 3D audio, Papa Sangre was the first game that came to mind. I also selected the game because of its accessibility; unlike the audio game Deep Sea, which is celebrated for its immersive capacities but is only playable by request at The Museum of Art and Digital Entertainment, Papa Sangre is purchasable as an app for $2.99 and can be played on an iPhone, iPad or iPod. Papa Sangre helps us to consider new possibilities for what is meant by virtual space and it serves as a useful tool for pushing back against essentialisms of “immersion” when talking about sound and virtual space.

Papa Sangre comprises 25 levels, the completion of which leads the player incrementally closer to the palace of Papa Sangre, a man who has kidnapped a close friend of the protagonist. The game boasts real time binaural audio, meaning that the game’s diegetic sounds (sounds that the character in the game world can “hear”) pan across the player’s headphones in relation to the movement of the game’s protagonist. The objective of each level is to locate and collect musical notes that are scattered through the game’s many topographies while avoiding any number of enemies and obstacles, of course.
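The panning principle can be sketched crudely: a diegetic sound’s left and right gains follow the angle between the protagonist’s facing direction and the sound source. This is an illustrative constant-power pan under invented names and coordinates, not Somethin’ Else’s actual binaural engine (true binaural rendering also models timing and spectral cues, not just loudness).

```python
# Illustrative constant-power stereo pan driven by the angle between
# the listener's facing direction and a sound source.
import math

def stereo_gains(listener_pos, facing_deg, source_pos):
    """Return (left_gain, right_gain) for a source in the 2D game world."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    # Bearing of the source relative to where the listener is facing.
    bearing = math.degrees(math.atan2(dx, dy)) - facing_deg
    # Map bearing to a pan position in [-1 (hard left), +1 (hard right)].
    pan = max(-1.0, min(1.0, math.sin(math.radians(bearing))))
    theta = (pan + 1) * math.pi / 4  # sweep 0..pi/2 across the stereo field
    return math.cos(theta), math.sin(theta)

# A source directly ahead is heard equally in both ears...
ahead = stereo_gains((0, 0), 0.0, (0, 5))
# ...while a source to the right is much louder in the right channel.
right = stereo_gains((0, 0), 0.0, (5, 0))
```

Recomputing the gains every frame as the protagonist turns is what lets a player localize the scattered musical notes by ear alone.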


A commercial success, Papa Sangre has been named “Game of the Week” by Apple, received a 9/10 rating from IGN, a top review from 148apps, and many positive reviews from fans. Gamezebo concludes an extremely positive review of Papa Sangre by calling it “a completely unique experience. It’s tense and horrifying and never lets you relax. By focusing on one aspect of the game so thoroughly, the developers have managed to create something that does one thing really, really well…Just make sure to play with the lights on.” This commercial attention has yielded academic feedback as well. In a paper entitled “Towards an analysis of Papa Sangre, an audio-only game for the iPhone/iPad,” Andrew Hugill (2012) celebrates games like Papa Sangre for providing “an excellent opportunity for the development of a new framework for electroacoustic music analysis.” Despite such attention–and perhaps because of it–I argue that Papa Sangre deserves a critical second listen.

Between February and April of 2012, I played Papa Sangre several times a day and detailed the auditory environments of the game in my blog posts. However, by the time I reached the final level, I still wasn’t sure how to answer my initial question. Had Papa Sangre really engendered a novel experience, or could it simply be thought of as a video game with no video? I noted in my final post:

I am realizing that what makes the audio gaming experience seem so different from the experience of playing video games is the perception that the virtual space, the game itself, only exists through me. The “space” filled by the levels and characters within the game only exists between my ears after it is projected through the headphones and then I extend this world through my limbs to my extremities, which feeds back into the game through the touch screen interface, moving in a loop like an electric current…Headphones are truly a necessity in order to beat the game, and in putting them on, the user becomes the engine through which the game comes to life…When I play video games, even the ones that utilize a first-person perspective, I feel like the game space exists outside of me, or rather ahead of me, and it is through the controller that I am able to project my limbs forward into the game world, which in turn structures how I orient my body. Video game spaces of course, do not exist outside of me, as I need my eyes and ears to interpret the light waves and sound waves that travel back from the screen, but I suppose what matters here is not what is actually happening, but how what is happening is perceived by the user. Audio games have the potential to engender completely different gaming experiences because they make the user feel like he or she is the platform through which the game-space is actualized.

Upon further reflection, however, I recognize that Papa Sangre creates an environment designed to be immersive only to certain kinds of users. A close reading of Papa Sangre reveals bias against both female and disabled players.

Take Papa Sangre’s problematic relationship with blindness. The protagonist is not a visually impaired individual operating in a horrifying new world, but rather a sighted individual who is thrust into a world that is horrifying by virtue of its darkness. The first level of the game is simply entitled “In the Dark.” When the female guide first appears to the protagonist in that same level, she states:

Here you are in the land of the dead, the realm ruled by Papa Sangre…In this underworld it is pitch dark. You cannot see a thing; you can’t even see me, a fluttery watery thing here to help you. But you can listen and move…You must learn how to see with your ears. You will need these powers to save the soul in peril and make your way to the light.

Note the conversation between 3:19 and 3:56.

The game envisions an audience who find blindness to be necessarily terrifying. By equating an inability to see with death and fear, developers are intensifying popular horror genre tropes that diminish the lived experiences of those with visual impairments and unquestioningly present blindness as a problem to overcome. Rather than challenging the relationship between blindness and vulnerability that horror-game developers fetishize, Papa Sangre misses the opportunity to present a visually impaired protagonist who is not crippled by his or her disability.


Disconcertingly, audio games have been tied to game accessibility efforts by developers and players alike for many years. In a 2008 interview, Kenji Eno, founder of WARP (a company that specialized in audio games in the late 90s), claimed that his interactions with visually impaired gamers yielded a desire to produce audio games. Similarly, forums like audiogames.net showcase users and developers interested in games that cater to gamers with impaired vision.

In terms of its actual game-play, Papa Sangre is navigable without visual cues. After playing the game for just two weeks I was able to explore each level with my eyes closed. Still, the ease with which gamers can play the game without looking at the screen does not negate the tension caused by recycled depictions of disability that are in many ways built into the storyline’s foundation.

The game also fails to engage gender with any complexity. Although the main character’s appearance is never shown, the protagonist is aurally gendered male. Most notable are the deep grunting noises made when he falls to the ground. For me, this acted as a barrier to imagining a fully embodied virtual experience. Those deep grunts revealed many assumptions the designers must have made about the imagined, and perhaps intended, audience of the game. While lack of diversity is certainly an issue at the heart of all entertainment media, Papa Sangre‘s oversight directly contradicts the message of the game, wherein the putative goal is to experience an environment that enhances one’s sense of self within the virtual space.

On October 31st, 2013, Somethin’ Else will release Papa Sangre II. A quick look at the trailer suggests that the developers have not changed the formula. The 46-second clip warns that the game is “powered by your fear” after noting, “This Halloween, you are dead.”


It appears that an inability to see is still deeply connected with notions of fear and death in the game’s sequel. This does not have to be the case. Why not design a game where impairment is not framed as a hindrance or source of fear? Why not build a game with the option to choose between different-sounding voice actors and actresses? Despite its popularity, however, Papa Sangre is by no means representative of general trends across the spectrum of audio-based game design. Oldenburg (2013) points out that over the past decade many independent game developers have been designing experimental “blind games” that eschew themes and representations found in popular video games in favor of the abstract relationships between diegetic sound and in-game movement.

Whether or not they eventually consider the social politics of gaming, Papa Sangre’s developers already send a clear message to all gamers by hardwiring disability and gender into both versions of the game while promoting a limited image of “immersion.” Hopefully, as game designers Somethin’ Else grow in popularity and prestige, future developers who use the “Papa Engine” will be more cognizant of the privilege and discrimination embedded in the sonic cues of its framework. Until then, if you are not a sighted male gamer, you must prepare yourself to be immersed in constant aural cues that this experience, like so many others, was not designed with you in mind.

Enongo Lumumba-Kasongo is a PhD student in the Department of Science and Technology Studies at Cornell University. Since completing a senior thesis on digital music software, tacit knowledge, and gender under the guidance of Trevor Pinch, she has become interested in pursuing research in the emergent field of sound studies. She hopes to combine her passion for music with her academic interests in technological systems, bodies, politics and practices that construct and are constructed by sound. More specifically she would like to examine the politics surrounding low-income community studios, as well as the uses of sound in (or as) electronic games.  In her free time she produces hip hop beats and raps under the moniker Sammus (based on the video game character, Samus Aran, from the popular Metroid franchise).

REWIND! . . . If you liked this post, you may also dig:

Goalball: Sport, Silence, and Spectatorship— Melissa Helquist

Playing with Bits, Pieces, and Lightning Bolts: An Interview With Sound Artist Andrea Parkins— Maile Colbert

Video Gaming and the Sonic Feedback of Surveillance: Bastion and The Stanley Parable— Aaron Trammell

Goalball: Sport, Silence, and Spectatorship

Editor’s Note: Today, SO! kicks off our summer series on “Sound and Sport,” an interrogation of the roles that sound and listening play in the interconnected aspects of many sports: athletic skill, spectatorial experience, laws of physics challenged and exploited, and politics expressed and created. Often, the true play in sports involves power–and sound is a key venue to help us understand its flows and snags, and parse out the actual winners and losers. And, perhaps more directly than other venues, sports is a heightened arena that helps us understand just how important sound is in our everyday lives, even if (and especially because) we take it for granted.

One of my favorite personal “sports metaphors” for sound’s unacknowledged centrality involves the precise 3.5 seconds I snowboarded with an iPod before I hit an ice patch at full speed and had one of the worst falls I have suffered in my 15+ years of the sport. While I was lying on the hard-packed snow gasping for breath and trying to piece together what happened, I realized exactly how much I depended on my listening to provide me with crucial, even life-saving, information. With my ears overwhelmed with treble-y punk, I had charged straight into an ice patch that I would have deftly avoided as soon as I heard the inevitable and unmistakable scratching sound signalling its location. That kinesthetic lesson has continued to inform my everyday, every day since it happened, and has led me to ever deeper understandings about sound’s power and the various forms of power that it clarifies–and that are clarified by it in turn. I hope that this series will do the same for you, but without the blood and the bruises, even as some of our writers will remind you about the complex and dubious relationship sports can draw between “pain” and “gain.”

Batting first in our line-up is Melissa Helquist, who describes how the Paralympic sport Goalball challenges the norms of the spectator/athlete relationship. Look for a post on Muhammad Ali in June (Tara Betts), skateboarding in July (Josh Ottum), and an all-out Olympic extravaganza in August, including a podcast discussing the sonic transformations of Brazil’s favelas in anticipation of the world’s ears in 2016 (Andrea Medrado). This summer, it is Sounding Out! FTW. –J. Stoever-Ackerman, Editor-in-Chief

it’s oh so quiet

it’s oh so still

you’re all alone

and so peaceful until

you ring the bell

bim bam

you shout and yell

hi ho ho

you broke the spell

–Bjork, “It’s Oh So Quiet”

During the London 2012 Olympics, the Copper Box venue, which hosted handball, was dubbed “the box that rocks.” The moniker was also, perhaps, a way to drum up interest in handball, a still obscure sport. And indeed, raucous spectators and the pulse and bounce of balls and shoes created a sonic spectacle.


When the Paralympics began a few weeks later, much was made of the transformation of “the box that rocks” into “the box that rocks you to sleep.” The game that supposedly will rock you to sleep is Goalball.

Goalball is a 3-a-side game, played in two 12-minute halves. The offensive team rockets the ball down the court (it must be rolling by mid-court), aiming for the low, wide net on the other side. As the ball rolls toward the net, the defending players lie on their sides in front of the net, blocking the ball with their bodies. As the ball is pummeled back and forth across the court, it jingles—a simple, clear bell emanating from eight holes in the ball’s surface. In this sport, sound is quite literally a game changer.

Goalball team practice at the ParalympicsGB Training Camp, Image by Flickr User The Department for Culture, Media and Sport


All goalball players are blind or low vision and wear blackout eyeshades to equalize the playing field and to ensure that no residual vision gets in the way of the gameplay’s sound. A pre-game equipment check includes referees ensuring that eyeshades don’t let in any light. Penalties include touching one’s eyeshades or making noises that disrupt the other team’s ability to hear the movement of the ball.


Goalball, a game first invented as a way to rehabilitate soldiers blinded in WWII, has been a Paralympic sport since 1976. It is one of the few sports designed specifically for blind athletes, rather than adapted from an existing sport. It is a game of sound and touch, a contrast to the visual perception typically associated with team-based ball games. The game, like many others in the Paralympics, expands the sensory experience of sport. The court’s borders are demarcated with tape-covered twine. Sonically, players orient themselves by calling out the position of the oncoming ball, by rapping knuckles on the floor, and, of course, by the jingle of the ball.

 “Goalball” Courtesy of Perkins School for the Blind Archives

Image by Flickr User The Department for Culture, Media and Sport


The game is high impact; at the Paralympic level, the ball is thrown at speeds up to 60mph. Players launch the ball with high-velocity spins, something like a cross between a discus throw and bowling. Defenders block the ball with their entire bodies—hands, feet, torso. The game is intense, but it is also quiet.

The players’ reliance on sound demands new expectations of the audience. As spectators, we are watchers, but we are also noisemakers—shouters, shriekers, trashtalkers. Goalball spectators can (and do) cheer when a goal is scored, but when gameplay begins, silence is the rule. Before gameplay begins, referees demand, “Quiet Please!”

The Copper Box “was quite specifically designed to achieve a low background noise level so the blind athletes could play” (Soundscape, Issue 3, 38), enabling the venue to be transformed into an atypical space for the sound of sport and spectatorship, a place to challenge our assumptions and expectations.

Goalball? A description? from Daphnee Denis on Vimeo.

Goalball spectatorship doesn’t demand pure silence, but it is not the unfettered cacophony that we often expect in sports spectatorship. At the London 2012 Paralympics, Bjork’s “It’s Oh So Quiet” was played during breaks in gameplay to remind spectators of their obligation. The song’s ebb and flow of silence and exuberance captures the sonic rhythm of Goalball, its pulsing cadence of silent attention and energetic eruption.


The nickname “the box that rocks you to sleep” captures the discomfort spectators may have with Goalball’s soundscape. The silence during play is tense, but the scoring of a goal offers a sudden release. Spectators do not create a persistent cacophony, but rather a pulse, a constant pattern of lull and explosion. The silence of Goalball can feel disconcerting for spectators unfamiliar with the game. The sound of the crowd—the cheers, the clapping, the screams, the groans, the chants—often seems fundamental to the experience of sport.

Silent spectatorship, of course, is an expectation for other sporting events such as golf and tennis. Here, the demand for silence is linked ostensibly to concentration, but also invokes questions of tradition, class, and (in the case of tennis) gender. These sports are individualized demonstrations of skill, and we are admirers, observers.

Team sports invite identification from the crowd. We extend ourselves, making ourselves part of the team, often through sonic exuberance. Speaking about the crowd sounds of the London Olympics, Mike Goldsmith, author of Discord, notes:

Individual athletes commented that the crowd sound was a great source of energy to them, but could also distract. Their comments suggested that some of them used the crowd sound as a resource, that they could tap into or not as the moment demanded (Soundscape, Issue 3, 36).

Sound can feel like participation. Through sound, we make ourselves part of the game, cheering to support our team, hollering to distract a shot. Sound can make sport feel communal, and thus silence can feel like separation, a wall between spectator and team.

But the seeming silence of Goalball spectatorship offers an opportunity to pay attention to the sound of play, sounds that often get subsumed by the roar of the crowd. The meeting of disability and sport offers a “prime space to reread and rewrite culture’s makings” (Tanya Titchkosky, Reading and Writing Disability Differently, 2007), a space to hear sport differently. The culture of sport is rife with ableist assumptions about how we move, how we watch, how we play. Even when sound is part of sport, it is often an afterthought, an addition—sound might be distracting, it may affect play, but the central sensory preoccupation of sport is frequently visual. We watch, we spectate, we keep our eye on the ball. Sport is sight, but it is also sound. Spectatorship is raucous, but it is also silent.

zing boom shhhh.

Featured Image, “Goal Ball” by Flickr User BLac

Melissa Helquist is an Associate Professor of English at Salt Lake Community College and a PhD Candidate in Technical Communication and Rhetoric at Texas Tech University. Her dissertation research focuses on digital literacy and blindness and explores the use of sound to read, write, and interpret images. She is a 2012-13 HASTAC scholar and the recipient of a 2012-2013 CCCC Research Initiative Grant. She lives in Salt Lake City, where she hikes, camps, and canoes with her husband and daughter. 


REWIND! . . . If you liked this post, you may also dig:

Eye Candy: The Absence of the Female Voice in Sports Talk Radio–Liana Silva

The Plasticity of Listening: Deafness and Sound Studies–Steph Ceraso

Taking Me Out of the Ball Game: Advertising’s Acoustic Pitch–Osvaldo Oyola
