For many, the audiobook is a source of pleasure and distraction, a way to get through the To Read Pile while washing dishes or commuting. Audiobooks have a stealthy way of rendering invisible the labor of creating this aural experience: the writer, the narrator, the producer, the technology…here at Sounding Out! we want to render that labor visible and, moreover, think of the sound as a focus of analysis in itself.
Over the next few weeks, we will host several authors who will make all of us think differently about the audiobook selections on our phones, in our cars, and on our radios. Today we start things off with a close listen of the 1982 audiobook edition of James Joyce’s Ulysses. Watch out for the hoooooooooooooonk of the SO! train pulling into the station!
—Managing Editor Liana Silva
To think about James Joyce’s Ulysses is to think about the first instant when it truly seized your ears. Accordingly, my Ulysses begins in its final episode, “Penelope”: Molly Bloom is lying down or sitting up next to a passed-out Leopold Bloom when she hears the “frseeeeeeeefronnnng train somewhere whistling.” Her train does not go chug, choo, or chuff, but it rhymes with her “Loves old sweeeetsonnnng” (1669) with an infectious insouciance for the codes of language. Let us call this the Ulysses of 1922 (though the definitive edition of James Joyce’s book whose page numbers are cited here was produced in 1984 by Hans Walter Gabler).
The Ulysses of 1922 is what Jacques Derrida called gramophonic. It plays back to us something recorded without filtering out the noise and is to be heard more than it is to be read. We listen to the book, but we are second in line. The first listener is the book itself, which listens to Dublin and records everything with an odd sonic democracy, discriminating little amid its recording of all sounds vivid or vapid, giving equal importance to cats, carts, bells, machines, laughter, coughs, and language. The book saunters about the city, listening and recording, and we listen to the book as we would to a scratchy, static-filled recording of a concert the morning after. It is a reminder of something Michel Serres once said in The Five Senses: “Meaning trails this long comet tail behind it. A certain kind of æsthetics… take as their object this brilliant trail” (120). Ulysses’ elusive modern city glows in this comet tail of noise and background static more than it pivots around conventionally meaningful language. Eventually, industrial and technological modernity catches up with artistic modernism: in 1924, Joyce reads and records parts of the “Aeolus” episode of Ulysses, and in 1929 he records a section of Finnegans Wake. Many years later, in 1982 – the centenary of Joyce’s birth – Ulysses comes home to Dublin and is recorded in full by Irish national radio.
The 1982 Ulysses Broadcast was an uninterrupted twenty-nine-and-a-half-hour reading of the entire unabridged text on Ireland’s RTÉ Radio on 16 June – Bloomsday – produced by Micheál Ó hAodha. Among the text’s many adaptations – the two film versions of 1967 and 2003, recordings such as the ones by LibriVox volunteers, and a more recent production by BBC Radio 4 – the 1982 Ulysses Broadcast was the first complete recording of the text. Director William Styles called upon voice actors from the Radio Éireann Players to dramatize and act Ulysses out.
My Ulysses of 1982 seizes me differently from the book. From the first seconds of the 1982 Broadcast, I reacted to Buck Mulligan stepping down the stairs inside the Martello Tower with surprise, because the reading is copiously accompanied; the sounds of loud waves outside the walls of the seaside tower were part of the soundscape I was thrown into:
Immersion was of the essence. Not that the Ulysses of 1922 is by any means a silent text, but this accompaniment was a simultaneous roar. Sounds in the written text take up space, and as these sounds are being “played” in the book, there is a length of text where nothing else is happening. Think, for instance, of the machinery in the “Aeolus” episode: “Almost human the way it sllt to call attention” (251). As the “sllt” is recorded by the book, it is not over or behind any other sound or voice. It takes up its own space, unlike in the Broadcast.
The layering of Buck Mulligan’s voice over the sounds of the sea becomes possible in the move from the spatial-visual of the page to the temporal-aural of a recording. However, listening to the Broadcast prompts me to ask: Is the sonic democracy of recording the soundscape still there?
Most critical work on the audiobook focuses on readerly reception and pleasure, almost indicating that we can hear the Ulysses of 1922 but we must read the Broadcast of 1982; the book provides for more direct sensory engagement while with the Broadcast, we must focus on analyzing the mechanics of our reception. We also get terms like Reinhart Meyer-Kalkus’ “hear-reading” (179) or Matthew Rubery’s “ear contact” (72) which are concerned with the link between the playback of the recorded text and the reading ear. We hear-read when we listen to the voice in our heads recite aloud to us what we are reading, and we establish ear contact, much like eye contact, when we find our ears bound to voices instead of people. Both these concepts are concerned with reception. If we steer clear of our listening of the Broadcast and turn the focus to the Broadcast’s listening of Ulysses, what we find is a rich sonic world, but it is one which takes us away from the linguistic play of the text.
For instance, the book gives cues for the ambient sounds of Dublin clamor surrounding any voice which might be speaking at that moment. “Stream of life” (327) signals in the Broadcast the coming alive of the city soundscape. What is described as a “sudden screech of laughter” (255) in the book is layered upon loud laughter in the Broadcast, as is “a loud cough” (281) upon a loud cough, and a telephone which “whirred” (283) upon the sound of an actual ringing telephone. Later, in the “Circe” episode, a mention of whistling (1169) is also whistled out.
Trams, the clatter of plates and glasses, desks being rapped, coins and bells ringing and jingling, cannon-firing, all these sounds are played as accompaniments again and again as their descriptions are being voiced in the Broadcast. As in bedtime storytelling, says Brigitte Ouvry-Vial, sound effects as uncomplicated accompaniments are never in conflict with the voiced text. Think of pictures and illustrations alongside words in children’s literature (185). The background sound effects of the Broadcast add nothing to the sonic democracy of the book even if they do not detract from it.
The Ulysses of 1922 is also rife with non-lexical, unpronounceable sounds, like the ones Bloom’s cat makes. The many different cat sounds, for example “Mkgnao!” and “Mrkgnao!” and “Mrkrgnao!” (107-8), are not voiced at all in the Broadcast, and are instead replaced by the mimicked sounds of a cat meowing, almost exactly the same each time:
“Miaow!” (133) and “Prr” (107), which are Bloom’s responses to his cat, are voiced by him. When the “door of Ruttledge’s office whispered: ee: cree” (243), there is no voicing – only the sound of a creaking door. Yet when we are in Bloom’s thoughts, as when he remembers a glorious gust of wind which blew up Molly’s skirt, he voices the gust of wind in the Broadcast going “Brrfoo!” (329), pronouncing the non-lexical word in close approximation. Would not the non-lexical sounds in his head suggest that he is thinking in sound rather than in language, much like many of us who can hear sounds in our heads? Often, but not always, environmental sounds are retained as actual sounds while the sounds in Bloom’s head are sublimated into pronounceable, phonetic language. But mostly there is an insistence on adding sound effects wherever possible.
Whether the book describes the sound or sounds it with a non-lexical string of words, the Broadcast attaches its effects. If we look at the book as a recorder, its movements are staggeringly complex as it moves in and out of multiple spaces. When it is in Bloom’s head, the environment is muted, and when it is inside a carriage, unless it is poked out an open window, it does not record the street. Save for a few instances, the Broadcast’s insistence on effects attests to its rich production, but not to its vitality. It therefore stands as an accompaniment to the book, not as a text in its own right, given its compositional inconsistencies. So the several variations on Bloom’s flatulence – “Rrrrrr” (625), “Fff. Oo. Rrpr,” and “Pprrpffrrppfff” – are all erased, and fart sounds are recorded instead.
On the same page, when Bloom tries to mask his own sounds of bodily release under the din of the passing tram, the “Krandlkrakran” (629) is both voiced by Bloom and recorded as the sound of a noisily ringing tram in the background. But only an actual train whistles in “Penelope,” with no voice in the Broadcast attempting to say “frseeeeeeeefronnnng” (1669).
For Charles Bernstein, the sound of a work of literature, much like the shape of poetry on the page, might be an element which is “extralexical but… not extrasemantic” (5). It is different from the written word but it is not a meaningless ornament. For the Broadcast, however, it might as well be the case that sound is made irrelevant to meaning. Or, we can argue that the meaning being made is in the realm of performance studies and not literature. The pure temporality of the Broadcast helps. We can stop reading the book to look, but we cannot stop the Broadcast and still listen. Moreover, when the Broadcast records, it is listening to the book’s listening of Dublin, removed by another degree from the soundscape of Dublin.
The Broadcast is not, however, without value. Bernstein echoes Serres when he aggrandizes the “sheer noise of language” (22), which must take precedence over the impulse to decode everything. The Broadcast answers this need not to immediately rationalize and sublimate in analysis everything that is heard, but rather to hear without listening. Cue the poet Robert Carlton Brown, who once said that writing since the very beginning has been “bottled up” inside of books (23). And in 1982, the stopper on Joyce’s spuming prose was popped.
Featured Image: “telemachus: the tower, 8 a.m., theology, white/gold, heir, narrative (young)” by Flickr user brad lindert, CC-BY-2.0
Shantam Goyal studies English Literature at the State University of New York at Buffalo for his PhD. He completed his M.Phil in 2018 from the University of Delhi with a dissertation titled “Listen Ulysses: Joyce and Sound.” He hopes to continue this thread for his doctoral research on Finnegans Wake and mishearing. Besides Joyce Studies and Sound Studies, he works on Poetics and Jazz Studies, and is also attempting to translate parts of Ulysses into Hindi as a personal project. His reviews, articles, and creative work have appeared in The Print, The Hindu Business Line, Vayavya, ColdNoon, Daath Voyage, and Café Dissensus among other publications. He prefers that any appellations for him such as academic, poet, or person be prefaced with “Delhi-based.”
REWIND! . . .If you liked this post, you may also dig:
In 1912, British physicist Edmund Fournier d’Albe built a device that he called the optophone, which converted light into tones. The first model—“the exploring optophone”—was meant to be a travel aid; it converted light into a sound of analogous intensity. A subsequent model, “the reading optophone,” scanned print using lamp-light separated into beams by a perforated disk. The pattern of light reflected back from a given character triggered a corresponding set of tones in a telephone receiver. d’Albe initially worked with 8 beams, producing 8 tones based on a diatonic scale. He settled on 5 notes: lower G, and then middle C, D, E and G. (Sol, do, re, mi, sol.) The optophone became known as a “musical print” machine. It was popularized by Mary Jameson, a blind student who achieved reading speeds of 60 words per minute.
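The beam-to-tone mapping described above can be sketched in a few lines of code: each vertical slice of a printed character interrupts some of the five scanning beams, and each triggered beam sounds its assigned note, so a character becomes a short sequence of chords. This is an illustrative simplification only — the frequencies, the 5×3 letter raster, and the "ink triggers the tone" convention are assumptions, not d’Albe’s actual engineering.

```python
# Illustrative sketch of the reading optophone's light-to-tone mapping.
# The five notes (lower G, middle C, D, E, G) follow the description above;
# the exact frequencies and the ink-sounding convention are assumptions.

NOTES = {  # approximate equal-temperament frequencies in Hz
    "G3": 196.0, "C4": 261.6, "D4": 293.7, "E4": 329.6, "G4": 392.0,
}
BEAMS = ["G3", "C4", "D4", "E4", "G4"]  # one note per scanning beam, top to bottom

def column_to_chord(column):
    """Map one vertical slice of a character to its sounding frequencies.

    `column` is a 5-element sequence of booleans, True where the beam
    hits ink (assumed "black-sounding" convention: ink triggers the tone).
    """
    return [NOTES[b] for b, inked in zip(BEAMS, column) if inked]

# A crude 5x3 raster of the letter "L": a left vertical stroke plus a base.
letter_L = [
    (True, False, False),
    (True, False, False),
    (True, False, False),
    (True, False, False),
    (True, True, True),
]

# Scan left to right: transpose rows into columns, one chord per slice.
for column in zip(*letter_L):
    print(column_to_chord(column))
```

The first slice (the full stroke) sounds all five notes at once; the remaining slices sound only the bottom note, which is the kind of letter-specific tonal "signature" Mary Jameson learned to read at speed.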
In the field of media studies, the optophone has become renowned through its imaginary repurposings by a number of modernist artists. For one thing, the optophone finds brief mention in Finnegans Wake. In turn, Marshall McLuhan credited James Joyce’s novel for being a new medium, turning text into sound. In “New Media as Political Forms,” McLuhan says that Joyce’s own “optophone principle” releases us from “the metallic and rectilinear embrace of the printed page.” More familiar within media studies today, Dada artist Raoul Hausmann patented (London 1935), but did not successfully build, an optophone presumably inspired by d’Albe’s model, which he hoped would be employed in audiovisual performances. This optophone was meant to convert sound into light as well as the reverse. It was part of a broader contemporary impulse to produce color music and synaesthetic art. Hausmann also wrote optophonetic poetry, based on the sounds and rhythms of “pure phonemes” and non-linguistic noises. In response, Francis Picabia painted two optophone portraits in 1921 and 1922. Optophone I, below, is composed of lines that might be sound waves, with a pattern that disorders vision.
Theorists have repeatedly located Hausmann’s device at the origin of new media. Authors in the Audiovisuology, Media Archaeology, and Beyond Art: A Third Culture anthologies credit Hausmann’s optophone with bringing-into-being cybernetics, digitization, the CD-ROM, audiovisual experiments in video art, and “primitive computers.” It seems to have escaped notice that d’Albe also used the optophone to create electrical music. In his book, The Moon Element, he writes:
d’Albe’s device is typically portrayed as a historical cul-de-sac, with few users and no real technical influence. Yet optophones continued to be designed for blind people throughout the twentieth century; at least one model has users even today. Musical print machines, or “direct translators,” co-existed with more complex OCR-devices—optical character recognizers that converted printed words into synthetic speech. Both types of reading machine contributed to today’s procedures for scanning and document digitization. Arguably, reading optophones intervened more profoundly into the order of print than did Hausmann’s synaesthetic machine: they not only translated between the senses, they introduced a new symbolic system by which to read. Like braille, later vibrating models proposed that the skin could also read.
In December 1922, the Optophone was brought to the United States from the United Kingdom for a demonstration before a number of educators who worked with blind children; only two schools ordered the device. Reading machine development accelerated in the U.S. around World War II. In his position as chair of the National Defense Research Committee, Vannevar Bush established a Committee on Sensory Devices in 1944, largely for the purpose of rehabilitating blind soldiers. The other options for reading—braille and Talking Books—were relatively scarce and had a high cost of production. Reading machines promised to give blind readers access to magazines and ephemeral print (recipes, signs, mail), which was arguably more important than access to books.
At RCA (Radio Corporation of America), the television innovator Vladimir Zworykin became involved with this project. Zworykin had visited Fournier d’Albe in London in the 19-teens and seen a demonstration of the optophone. Working with Les Flory and Winthrop Pike, Zworykin built an initial machine known as the A-2 that operated on the same principles, but used a different mechanism for scanning—an electric stylus, which was publicized as “the first pen that reads.” Following the trail of citations for RCA’s “Reading Aid for the Blind” patent (US 2420716A, filed 1944), it is clear that the “pen” became an aid in domains far afield from blindness. It was repurposed as an optical probe for measuring the oxygen content of blood (1958); an “optical system for facsimile scanners” (1972); and, in a patent awarded to Burroughs Corporation in 1964, a light gun. This gun, in turn, found its way into the handheld controls for the first home video game system, produced by Sanders Associates.
The A-2 optophone was tested on three blind research subjects, including ham radio enthusiast Joe Piechowski, who was more of a technical collaborator. According to the reports RCA submitted to the CSD, these readers were able to correlate the “chirping” or “tweeting” sounds of the machine with letters “at random with about eighty percent accuracy” after 60 hours of practice. Close spacing on a printed page made it difficult to differentiate between letters; readers also had difficulty moving the stylus at a steady pace and in a straight line. Piechowski achieved reading speeds of 20 words per minute, which RCA deemed too slow.
Attempts were made to incorporate “human factors” and create a more efficient tonal code, to reduce reading time as well as learning time and confusion between letters. One alternate auditory display was known as the compressed optophone. Rather than generate multiple tones or chords for a single printed letter, which was highly redundant and confusing to the ear, the compressed version identified only certain features of a printed letter, such as the presence of an ascender or descender. Below is a comparison between the tones of the original optophone and the compressed version, recorded by physicist Patrick Nye in 1965. The following eight lower-case letters make up the source material: f, i, k, j, p, q, r, z.
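The idea behind the compressed display can be sketched as a feature classifier: instead of one chord per column, a letter is reduced to a coarse shape description and a single signal. The feature inventory and tone labels below are assumptions for illustration, not Nye’s actual code.

```python
# Sketch of the compressed optophone's principle: signal only gross
# letter features (ascender/descender) rather than a chord per column.
# The feature sets and tone labels are illustrative assumptions.

ASCENDERS = set("bdfhklt")   # lower-case letters with an ascending stroke
DESCENDERS = set("gjpqy")    # lower-case letters with a descending stroke

def compressed_signal(letter):
    """Return a short tone label for a lower-case letter's gross shape."""
    up = letter in ASCENDERS
    down = letter in DESCENDERS
    if up and down:
        return "high+low tone"
    if up:
        return "high tone"
    if down:
        return "low tone"
    return "mid tone"

# The eight test letters from Nye's 1965 recording:
for ch in "fikjpqrz":
    print(ch, compressed_signal(ch))
```

The trade-off is audible in the recording: the compressed code is far less redundant, but several letters (here i, r, and z) collapse onto the same signal and must be disambiguated by context.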
Original record in the author’s possession. With thanks to Elaine Nye, who generously tracked down two of her personal copies at the author’s request. The second copy is now held at Haskins Laboratories.
Because of the seeming limitations of tonal reading, RCA engineers re-directed their research to add character recognition to the scanning process. This was controversial: direct translators like the optophone were perceived as too difficult because they required blind people to do something akin to learning to read print—learning a symbolic tonal or tactile code. At an earlier moment, braille had been critiqued on similar grounds; many in the blind community have argued that mainstream anxieties about braille sprang from its symbolic difference. Speed, moreover, is relative. Reading machine users protested that direct translators like the optophone were inexpensive to build and already available—why wait for the refinement of OCR and synthetic speech? Nevertheless, between November 1946 and May 1947, Zworykin, Flory, and Pike worked on a prototype “letter reading machine,” today widely considered to be the first successful example of optical character recognition (OCR). Before reliable synthetic speech, this device spelled out words letter by letter using tape recordings. The Letter-Reader was too massive and expensive for personal use, however. It also had an operating speed of 20 words per minute—thus it was hardly an improvement over the A-2 translator.
Haskins Laboratories, another affiliate of the Committee on Sensory Devices, began working on the reading machine problem around the same time, ultimately completing an enormous amount of research into synthetic speech and—as argued by Donald Shankweiler and Carol Fowler—the “speech code” itself. In the 1940s, before workable text-to-speech, researchers at Haskins wanted to determine whether tones or artificial phonemes (“speech-like speech”) were easier to read by ear. They developed a “machine dialect of English,” named wuhzi: “a transliteration of written English which preserved the phonetic patterns of the words.” An example can be played below. The eight source words are: With, Will, Were, From, Been, Have, This, That.
Original record in the author’s possession. From Patrick Nye, “An Investigation of Audio Outputs for a Reading Machine” (1965). With thanks to Elaine Nye.
Based on the results of tests with several human subjects, the Haskins researchers concluded that aural reading via speech-like sounds was necessarily faster than reading musical tones. Like the RCA engineers, they felt that a requirement of these machines should be a fast rate of reading. Minimally, they felt that reading speed should keep pace with rapid speech, at about 200 words per minute.
Funded by the Veterans Administration, members of Mauch Laboratories in Ohio worked on both musical optophones and spelled-speech recognition machines from the 1950s into the 1970s. One of their many devices, the Visotactor, was a direct-translator with vibro-tactile output for four fingers. Another, the Visotoner, was a portable nine-channel optophone. All of the Mauch machines were tested by Harvey Lauer, a technology transfer specialist for the Veterans Administration for over thirty years, himself blind. Below is an excerpt from a Visotoner demonstration, recorded by Lauer in 1971.
Visotoner demonstration. Original 7” open reel tape in author’s possession. With thanks to Harvey Lauer for sharing items from his impressive collection and for collaborating with the author over many years.
Later on the same tape, Lauer discusses using the Visotoner to read mail, identify currency, check over his own typing, and read printed charts or graphics. He achieved reading speeds of 40 words per minute with the device. Lauer has also told me that he prefers the sound of the Visotoner to that of other optophone models—he compares its sound to Debussy, or the music for dream sequences in films.
Mauch also developed a spelled speech OCR machine called the Cognodictor, which was similar to the RCA model but made use of synthetic speech. In the recording below, Lauer demonstrates this device by reading a print-out about IBM fonts. He simultaneously reads the document with the Visotoner, which reveals glitches in the Cognodictor’s spelling.
Original 7” open reel tape in the author’s possession. With thanks to Harvey Lauer.
In 1972, at the request of Lauer and other blind reading machine users, Mauch assembled a stereo-optophone with ten channels, called the Stereotoner. This device was distributed through the VA but never marketed, and most of the documentation exists in audio format, specifically in sets of training tapes that were made for blinded veterans who were the test subjects. Some promotional materials, such as the short video below, were recorded for sighted audiences—presumably teachers, rehabilitation specialists, or funding agencies.
Video courtesy of Harvey Lauer.
Mary Jameson corresponded with Lauer about the Stereotoner, via tape and braille, in the 1970s. In the braille letter pictured below she comments, “I think that stereotoner signals are the clearest I have heard.”
In 1973, with the marketing of the Kurzweil Reader, funding for direct translation optophones ceased. The Kurzweil Reader was advertised as the first machine capable of multi-font OCR; it was made up of a digital computer and flatbed scanner and it could recognize a relatively large number of typefaces. Kurzweil recalls in his book The Age of Spiritual Machines that this technology quickly transferred to Lexis-Nexis as a way to retrieve information from scanned documents. As Lauer explained to me, the abandonment of optophones was a serious problem for people with print disabilities: the Kurzweil Readers were expensive ($10,000-$50,000 each); early models were not portable and were mostly purchased by libraries. Despite being advertised as omnifont readers, they could not in fact recognize most printed material. The very fact of captchas speaks to the continued failures of perfect character recognition by machines. And, as the “familiarization tapes” distributed to blind readers indicate, the early synthetic speech interface was not transparent—training was required to use the Kurzweil machines.
Original cassette in the author’s possession.
Lauer always felt that the ideal reading machine should have both talking OCR and direct-translation capabilities, the latter being used to get a sense of the non-text items on a printed page, or to “preview material and read unusual and degraded print.” Yet the long history of the optophone demonstrates that certain styles of decoding have been more easily naturalized than others—and symbols have increasingly been favored if they bear a close relation to conventional print or speech. Finally, as computers became widely available, the focus for blind readers shifted, as Lauer puts it, “from reading print to gaining access to computers.” Today, many electronic documents continue to be produced without OCR, and thus cannot be translated by screen readers; graphical displays and videos are largely inaccessible; and portable scanners are far from universal, leaving most “ephemeral” print still unreadable.
Mara Mills is an Assistant Professor of Media, Culture, and Communication at New York University, working at the intersection of disability studies and media studies. She is currently completing a book titled On the Phone: Deafness and Communication Engineering. Articles from this project can be found in Social Text, differences, the IEEE Annals of the History of Computing, and The Oxford Handbook of Sound Studies. Her second book project, Print Disability and New Reading Formats, examines the reformatting of print over the course of the past century by blind and other print disabled readers, with a focus on Talking Books and electronic reading machines. This research is supported by NSF Award #1354297.
Today’s post from Damien Keane marks the first in our recurrent series, “Live from the SHC,” which will broadcast the research of this year’s Fellows at Cornell University’s Society for the Humanities, directed by Timothy Murray, whose theme for 2011-2012 is “Sound: Culture, Theory, Practice, Politics.” The Society’s Fellows study a diverse range of sound studies topics: the relationship between sound and urban space, the moment when musical sound enters (and exits) the body, the sound of popular culture in interwar Egypt, the role of sound in constructing pious Muslim communities in secular European countries, and the manipulation of sound by “circuit benders”–to name just a few of the projects being researched up at the A.D. White House (To see the full list of Fellows, click here. To see the slate of SHC courses being taught at Cornell this year, click here). To allow our readers the opportunity to listen in to the goings on at the SHC, Sounding Out!‘s correspondent on the inside, Editor-in-Chief and SHC Fellow Jennifer Stoever-Ackerman, will be curating the online conversation amongst the fellows about their current research (and, of course, their sonic guilty pleasures)–so look (and listen) for it on selected Mondays from now through May 2012. –JSA
I began asking my students how many of them had turntables almost nine years ago, when, as a graduate student teaching a course on the twentieth century, I found myself seventy pages into Dracula and having to explain what a phonograph is: then, only one or two hands went up, or half-up, in response to the question. A year ago, when I put this question to students in a course on modernism, fully two-thirds of the thirty-five students in the room raised their hands – although more than a few of them also believed that Steve Jobs had invented the mp3. Even allowing for self-selection and/or misrepresentation on their part, and for a soft form of administrative research and/or miscounting on my part, this swing is quite dramatic.
But what I find intriguing about the uptick is how it tracks with an equally dramatic rise in the use of a kind of allusion in commentaries about the effects of digital media on education. I am referring to those parenthetical asides that so often follow mention of residual technologies such as turntables and albums (e.g. “anyone remember those?”). While these asides are delivered in the manner of a stand-up comic doing material about blind dates or the post office, they serve to euphemize powerful attitudes about technology and what it is to be modern that are even less funny than such shtick, perhaps most directly by assuming the unqualified and unimpeachable newness of the present. Asking if anyone remembers the record player – or the chalkboard or the library or the secretarial staff – takes for granted that you’ve already taken it to the next level, having left off your turntable with parietals and rumble seats.
Which brings me to the gramophone recording that James Joyce made of the ending of Anna Livia Plurabelle, an episode of what would eventually become Finnegans Wake. Joyce had begun publishing pieces of his work in progress in 1924, and the Anna Livia Plurabelle section was by far the most visible of these, having appeared in several variant forms by the turn of the decade. Among these were its publication as a stand-alone, limited edition book in 1928, and the release of the recording that had been made at C.K. Ogden’s Orthological Institute in Cambridge, as a double-sided twelve-inch gramophone disc in 1929. Each did surprisingly well, and in tandem they helped to secure acceptance of Joyce’s formally challenging aesthetic procedures among readers. Indeed, some in the literary field, such as T.S. Eliot, hoped that recordings of authors’ readings would soon supplant, or at least augment, the market for limited and deluxe editions.
The fact that recordings did not replace books would be of little interest now, were it not for the way that their relationship in 1929 neatly conjures the supposedly defining trait of our own moment: the demise of one media epoch and the emergence of another (and with economic catastrophe at hand, too). Yet it is the gramophone recording – the “new media” format of that past moment – that is considered thoroughly obsolete today. What then can be said about the “listening history” of the recording? How has the sound of Joyce’s recorded reading interacted with perceptions of the readerly difficulty of the text, which, whether rightly or wrongly, are the pre-eminent “mode” through which readers approach Finnegans Wake? Put simply, if reading the text is hard, is listening to a recorded reading of that text hard?
In thinking about these questions, what catches my attention is the slippage between different senses of “difficulty,” between the positive inflections of literary difficulty and the negative connotations of sonic interference. Contemporary listeners tend to emphasize the crackles and pops that compete with the sound of Joyce’s voice and its language, and thereby locate the “difficulty” of the recorded reading in the sound of the recording itself. It is as though the recording medium actually impedes access to Joyce’s language or somehow imposes itself between the listener and his voice – despite the fact that the medium enables this access in the first place. This is to say, the “low” fidelity of the recording of Joyce’s voice simply cannot reproduce the literary experience of Joyce’s prose, that most elusive, or rarified, object of speculation. (Is it the singularity of literature or is it Memorex?) Today’s default reaction to the recorded reading might be juxtaposed to a much earlier response, Hazel Felman’s setting of the very end of the passage for piano and voice, published in 1935. Here is her explanation of the genesis of her composition:
The emotional reaction I experienced on hearing a recording which James Joyce made of the last pages of ANNA LIVIA PLURABELLE was profound. As I listened to the record time and again, the conviction grew upon me that if I could re-create in music the mood that the author creates when he reads the closing chapters of ANNA LIVIA PLURABELLE, I would be justified in translating into a musical idiom what already seemed to be sheer music.
Felman’s response pivots on the aural clarity of the Anna Livia Plurabelle recording – she notes, for example, that D is used as a pedal point in the setting because Joyce’s voice modulates around a pitch of D in the recording – and the “sheer music” she heard there was mediated by that very recording.
Almost eighty years after Felman, today’s hi-fi listeners more often hear in the recording only a poor realization of Joyce’s text, a judgment that has nothing to do with Joyce’s performance and everything to do with the grossly uneven staking of the recording’s semiotic content against its materiality. Yet we do well to recall that surface noise is just as “modernist” as Joyce’s innovative prose. Why do we always forget that? To ask this question brings us back to turntables and students. (Anyone remember them?)
Damien Keane is an assistant professor in the English department at the State University of New York at Buffalo. He is nearing the completion of a manuscript entitled Ireland and the Problem of Information.