Future Memory: Womb Sound As Shared Experience Crossing Time and Space
This month will feature a two-part post by SO! regular writer Maile Colbert. Look for Part Two on Monday, January 19th.
—
I was a child obsessed with time travel. Beyond favorites such as A Wrinkle in Time and Time Bandits, I perpetually daydreamed of the ability to pause, reverse, and fast-forward my life. I had a book on the “olden days” and it amazed me that my great-grandparents, whom I had the fortune to know, had lived them. I wanted to fast-forward and see myself at their current age, telling stories to the next generations of a good life lived. I used to entertain the thought that if I let my breath go and let myself sink to the bottom of a body of water, I could pause time, or at least slow it down, as the sound of the fluid world around me seemed to suggest. Whenever my family moved, I made a time capsule, and I always scanned the ocean for long-lost bottled messages. These were the beginnings of my future in time-based media–both image and sound–my love for found footage, and my recent research and writing on sound back in time.
Now as a new mother, I am beginning to think about the future in a way I hadn’t before. I see my mother in my daughter, and I see her mother, and my partner’s mother. I recognize my grandfather’s eyebrow when furrowed, and her grandfather’s nose. My mouth when smiling, my partner’s mouth when in concentration.
And our ears. . .our very sensitive hearing, almost like a punch line. Our daughter is truly the daughter of sound artists. In this first post of a two-part series on humans’ earliest interactions with sound, I document our work sounding and listening together, which began in a future-oriented past I am still learning about.
Womb
There was a study in which doctors gave babies only a day old pacifiers connected to tape recorders. Depending on the pattern of each newborn’s sucking, the tape recorder would switch on either the sound of the mother’s voice or a stranger’s.
“Within 10 to 20 minutes, the babies learned to adjust their sucking rate on the pacifier to turn on their own mother’s voice,” says the study’s coauthor William Fifer, Ph.D., an associate professor of psychiatry and pediatrics at Columbia University’s College of Physicians and Surgeons. “This not only points out a newborn’s innate love for his mother’s voice but also a baby’s unique ability to learn quickly.”
– “What Babies Learn in the Womb,” Lara Flynn Maccarthy, Parenting, 2014
My daughter Odette knew my voice the moment she was born. In a strange, bright, cold new world, it seemed one constant she could rely upon. When she was first placed upon my chest, I started to sing to her, and she calmed, staring at me as much as her newborn eyes would let her, with an expression of surprised recognition, as this familiar voice sang a familiar song, one I had sung to her often in the womb. One I knew by heart because my mother would sing it to me when I was a child.
Are you going to Scarborough Fair
Parsley, sage, rosemary and thyme
Remember me to the one who lives there
She once was a true love of mine. . .
The mother’s voice comes to the fetus not solely as ambient sound through the abdomen, as other external sounds and voices do, but also through the internal vibration of the vocal cords. There is a direct connection, a shared space. As early as the seventh month, a fetal heartbeat will slow and calm to the sound of the mother’s voice, and research has shown newborns even prefer a version of their mother’s voice filtered to sound as it did in the womb, muffled and low. When Odette suffered colic in her early months, one sure way to comfort her was to sing to her while she was on my chest. Aside from the close contact of skin, the familiar smell, and the warmth, it could be that hearing my voice through the chest also mimicked the womb filter.
In the tape recorder study, researchers also noted that newborns would suck more intensely to recordings of people speaking in the language of their mothers, most likely picking up on the melody and rhythm. We are beginning to understand that learning starts in the womb.
Fetal Soap Addiction
Carmen Bank found her 1985 pregnancy rather boring. So, to pass the time, she started doing something she would never have dreamed of: watching a soap opera.
Unexpectedly, she found herself hooked. And so she spent almost every morning in front of her television set, ready for the familiar theme of “Ryan’s Hope.” After Melissa was born that October, Bank bought a videocassette recorder so she could tape the show when she was too busy to watch.
Bank isn’t sure when she discovered the behavior, but, shortly after Melissa was born, Bank realized that the baby seemed to recognize the “Ryan’s Hope” theme and would stop fussing when the program began.
“She’d just sit there and watch the whole introduction and then she would start imitating what they do on the show,” Bank said. “This has been going on forever.”
-“The Very Young and Restless: Do Soaps Hook the Unborn?” Allan Parachini, The New York Times, June 28, 1988
My third trimester was a rough one. I was a walking swimming pool of about forty pounds of baby and amniotic fluid. My pelvis had gone completely out of line, making even that pregnancy waddle slow and difficult. Needless to say, I was less and less mobile. I was lucky that much of my remaining work was writing and studio based, but I often found myself having to take mental breaks as well. My body/mind chemistry was working overtime. Something that happens in pregnancy, as you prepare mentally for your new, shared life, is that you think a lot about your own childhood. I was lucky to have a happy one, and so strong nostalgic feelings and memories would come up, particularly around the television show Dr. Who. In the 80’s, I used to spend a happy hour with my father once a week watching reruns from the 70’s.
Dr. Who returned to broadcast in the 2000s, in a few successful new regenerations. The new iteration uses many of the classic themes and characters, and even remixes and re-masters the original opening score, written by Ron Grainer and realized by the great Delia Derbyshire for the BBC Radiophonic Workshop in 1963. The Dr. Who theme was one of the very first signature electronic music tunes, performed well before commercial synthesizers were even available. Derbyshire used musique concrète techniques, cutting each note individually on analogue tape, speeding up and slowing down recordings of a single plucked string, white noise, and the simple harmonic waveforms of test-tone oscillators to create the notes. (Grainer, after hearing Derbyshire’s magic, famously asked, “Did I write that?” Derbyshire replied, “Most of it.” The BBC, which kept members of the Radiophonic Workshop anonymous, prevented Grainer from giving Derbyshire a co-composer credit and a share of the royalties.)
It is a really, really catchy tune:
While Odette was in the womb, I watched all of those decades addictively, one after another. When I came across the soap opera study after she was born, I decided my obsessive Who-watching had set up a perfect laboratory to try it out myself. We started in 1963 and moved through time with the Doctor. Odette looked up in surprise and her brow furrowed in concentration. She looked around slowly at first, then faster and faster. She smiled; she cooed; she laughed. She started to flap her arms.
When I finally turned it off, she stopped everything and looked concerned. I turned it on again and we danced together in clear recognition of this already-shared future past sonic moment, one I had with my father and now with her. Now I understood that as I consumed Dr. Who, Odette was not only hearing, she was learning, and beginning the act of listening.
Sounds have a surprising impact upon the fetal heart rate: a five-second stimulus can cause changes in heart rate and movement which last up to an hour. Some musical sounds can cause changes in metabolism. “Brahms’ Lullaby,” for example, played six times a day for five minutes in a premature baby nursery, produced faster weight gain than voice sounds played on the same schedule (Chapman, 1975).
-The Fetal Sense, A Classical View, David B. Chamberlain, Birth Psychology
Wombscapes
Odette’s very first movements, her first “quickening,” were in response to David Bowie’s “Starman.” This was around 16 weeks, often the time for first movements in the fetus, and interestingly also the time when hearing has developed. The fetus floats in a rich and complex soundscape; it is anything but quiet. The womb filter–amniotic fluid, embryonic membranes, the uterus, the maternal abdomen–passes mostly low frequencies: blood whooshing in the veins, the mother’s voice, body noises such as hiccups and the gurgles of digestion, and of course the heartbeat. The mother’s heartbeat can be as loud as a vacuum cleaner, and ultrasounds as loud as a subway car arriving in a train station. We can try to mimic the womb-scape, imagining sounds being filtered through the body. We can use a hydrophone–a pressure microphone designed to be sensitive to sound waves through fluid matter–on the abdomen to get an idea and a sample for our womb-scape.
Perhaps it would sound something like this…
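In digital terms, that muffled quality can be roughly approximated by running any recording through a simple low-pass filter. The sketch below is a minimal illustration, not a model of the body: the one-pole filter and the 400 Hz cutoff are assumptions (the tissues attenuate high frequencies far more irregularly than any single circuit), but the effect–low tones pass, high tones fade–is the same one the fetus hears.

```python
import math

def womb_filter(samples, sample_rate=44100, cutoff_hz=400.0):
    """One-pole low-pass filter: a crude stand-in for the 'womb filter'.

    The cutoff is an assumption for illustration; the body is not a
    simple circuit, but a low cutoff gives a similar muffled quality.
    """
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)  # smoothing factor in (0, 1)
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)  # move a fraction toward each sample
        out.append(prev)
    return out

# A tone well above the cutoff comes out much quieter than one below it.
sr = 44100
tone = lambda f: [math.sin(2 * math.pi * f * n / sr) for n in range(sr // 10)]
loud = max(abs(s) for s in womb_filter(tone(100), sr)[sr // 100:])
soft = max(abs(s) for s in womb_filter(tone(1000), sr)[sr // 100:])
print(loud, soft)
```

Applied to a voice recording, the same filter leaves the melody and rhythm intact while blurring the consonants–consistent with the finding above that newborns pick up on the melody and rhythm of their mother’s language.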
…reactive listening begins eight weeks before the ear is structurally complete at about 24 weeks. These findings indicate the complexity of hearing, lending support to the idea that receptive hearing begins with the skin and skeletal framework, skin being a multireceptor organ integrating input from vibrations, thermoreceptors, and pain receptors. This primal listening system is then amplified with vestibular and cochlear information as it becomes available. With responsive listening proven at 16 weeks, hearing is clearly a major information channel operating for about 24 weeks before birth.
-The Fetal Sense, A Classical View
Sound artist and acoustic ecologist Andrea Williams has recently been working on a composition for Bellybuds, for her yet-to-be-born nephew. Bellybuds are “a specialized speaker system that gently adheres to your belly & safely plays memory-shaping sound directly to the womb.” Much of her work is composed with space in mind, using room sounds in a live performance situation. Williams told me it was interesting to think about the womb as a new “venue,” with her little developing nephew as her audience. “What is he hearing?” she asked. “Will he recognize me right away upon meeting him for the first time if he only hears the sound of my voice through the Bellybuds while he is a fetus?” I love the idea that she could send a “hello” from one place to her nephew in the womb in another.
The more we understand about fetal hearing and the processing of sound, the more we realize how fetuses can detect subtle changes and process complex information. Memory starts to form around 30 weeks, and it is possible that early sound interventions at this time could help babies with detected abnormal development. Speaking and singing to the unborn fetus, allowing them to experience different soundscapes while still in the womb, helps shape their brains. This is probably why the urge to do so is there.
. . .Odette’s first dance. Odette’s first songs. . . transcending time and space.
—
dedicated to Odette Helen, and to the family, daughter, and memory of Steven Miller
—
Featured Image: Odette’s Birth Cry, photo credit Rui Costa
The album Future Memory, for Odette will be released in 2015 through Wild Silence. A dedication album to a newborn daughter…a mix of her parents’ recorded and shared sounds, memories, hopes, and dreams towards a future with her. Sounds of her womb-scape, birth, and first year…music in collaboration with friends and family across oceans and land…an album of lullabies for Odette.
—
Maile Colbert is a multi-media artist with a concentration on sound and video who relocated from Los Angeles, US to Lisbon, Portugal. She is a regular writer for Sounding Out!
—
REWIND! . . .If you liked this post, you may also dig:
On Sound and Pleasure: Meditations on the Human Voice– Yvon Bonenfant
This Is Your Body on the Velvet Underground– Jacob Smith
Sound Designing Motherhood: Irene Lusztig & Maile Colbert Open The Motherhood Archives– Maile Colbert
Optophones and Musical Print
In 1912, British physicist Edmund Fournier d’Albe built a device that he called the optophone, which converted light into tones. The first model—“the exploring optophone”—was meant to be a travel aid; it converted light into a sound of analogous intensity. A subsequent model, “the reading optophone,” scanned print using lamp-light separated into beams by a perforated disk. The pattern of light reflected back from a given character triggered a corresponding set of tones in a telephone receiver. d’Albe initially worked with 8 beams, producing 8 tones based on a diatonic scale. He settled on 5 notes: lower G, and then middle C, D, E and G. (Sol, do, re, mi, sol.) The optophone became known as a “musical print” machine. It was popularized by Mary Jameson, a blind student who achieved reading speeds of 60 words per minute.
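The reading principle can be sketched in a few lines of code: treat each vertical slice of a character as five stacked cells, one per light beam, and sound the corresponding note wherever a beam meets ink. The note frequencies below follow the five-note set named above (lower G, then middle C, D, E, and G); the top-to-bottom ordering of the beams and the toy letter bitmap are illustrative assumptions, not d’Albe’s actual encoding.

```python
# Frequencies (Hz, equal temperament) for the optophone's five notes:
# lower G, then middle C, D, E, G -- sol, do, re, mi, sol.
NOTES = {"G3": 196.00, "C4": 261.63, "D4": 293.66, "E4": 329.63, "G4": 392.00}

# One note per scanning beam, top row first. This ordering is an
# assumption for illustration; sources differ on the exact mapping.
BEAM_ORDER = ["G4", "E4", "D4", "C4", "G3"]

# A toy 5-row bitmap for a letter shape ("1" = ink under the beam).
LETTER_T = [
    "11111",
    "00100",
    "00100",
    "00100",
    "00100",
]

def column_chords(glyph):
    """Scan the glyph left to right; for each column, return the chord
    (list of note names) produced by the beams that hit ink."""
    chords = []
    for col in range(len(glyph[0])):
        chord = [BEAM_ORDER[row] for row in range(5) if glyph[row][col] == "1"]
        chords.append(chord)
    return chords

chords = column_chords(LETTER_T)
print(chords)
```

Each printed character thus becomes a short left-to-right sequence of chords, which is why contemporaries heard the device as playing “musical print”: a reader like Mary Jameson learned the characteristic tune of each letter rather than its shape.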

Reading Optophone, held at Blind Veterans UK (formerly St. Dunstan’s). Photographed by the author. With thanks to Robert Baker for helping me search through the storeroom to locate this item, which was previously uncatalogued and “lost” for many years.

Schematic of optophone from Vetenskapen och livet (1922)
In the field of media studies, the optophone has become renowned through its imaginary repurposings by a number of modernist artists. For one thing, the optophone finds brief mention in Finnegans Wake. In turn, Marshall McLuhan credited James Joyce’s novel with being a new medium, one that turned text into sound. In “New Media as Political Forms,” McLuhan says that Joyce’s own “optophone principle” releases us from “the metallic and rectilinear embrace of the printed page.” More familiar within media studies today, the Dada artist Raoul Hausmann patented (London 1935), but did not successfully build, an optophone presumably inspired by d’Albe’s model, which he hoped would be employed in audiovisual performances. This optophone was meant to convert sound into light as well as the reverse. It was part of a broader contemporary impulse to produce color music and synaesthetic art. Hausmann also wrote optophonetic poetry, based on the sounds and rhythms of “pure phonemes” and non-linguistic noises. In response, Francis Picabia painted two optophone portraits, in 1921 and 1922. Optophone I, below, is composed of lines that might be sound waves, with a pattern that disorders vision.
Theorists have repeatedly located Hausmann’s device at the origin of new media. Authors in the Audiovisuology, Media Archaeology, and Beyond Art: A Third Culture anthologies credit Hausmann’s optophone with bringing-into-being cybernetics, digitization, the CD-ROM, audiovisual experiments in video art, and “primitive computers.” It seems to have escaped notice that d’Albe also used the optophone to create electrical music. In his book, The Moon Element, he writes:

E.E. Fournier d’Albe, The Moon Element (New York: D. Appleton & Company, 1924), 107.
d’Albe’s device is typically portrayed as a historical cul-de-sac, with few users and no real technical influence. Yet optophones continued to be designed for blind people throughout the twentieth century; at least one model has users even today. Musical print machines, or “direct translators,” co-existed with more complex OCR devices—optical character recognizers that converted printed words into synthetic speech. Both types of reading machine contributed to today’s procedures for scanning and document digitization. Arguably, reading optophones intervened more profoundly into the order of print than did Hausmann’s synaesthetic machine: they not only translated between the senses, they introduced a new symbolic system by which to read. Like braille, later vibrating models proposed that the skin could also read.
In December 1922, the Optophone was brought to the United States from the United Kingdom for a demonstration before a number of educators who worked with blind children; only two schools ordered the device. Reading machine development accelerated in the U.S. around World War II. In his position as chair of the National Defense Research Committee, Vannevar Bush established a Committee on Sensory Devices in 1944, largely for the purpose of rehabilitating blind soldiers. The other options for reading—braille and Talking Books—were relatively scarce and had a high cost of production. Reading machines promised to give blind readers access to magazines and ephemeral print (recipes, signs, mail), which was arguably more important than access to books.

Joe Piechowski with the A-2 reader. Courtesy of Rob Flory.
At RCA (Radio Corporation of America), the television innovator Vladimir Zworykin became involved with this project. Zworykin had visited Fournier d’Albe in London in the 1910s and seen a demonstration of the optophone. Working with Les Flory and Winthrop Pike, Zworykin built an initial machine known as the A-2 that operated on the same principles, but used a different mechanism for scanning—an electric stylus, which was publicized as “the first pen that reads.” Following the trail of citations for RCA’s “Reading Aid for the Blind” patent (US 2420716A, filed 1944), it is clear that the “pen” became an aid in domains far afield from blindness. It was repurposed as an optical probe for measuring the oxygen content of blood (1958); an “optical system for facsimile scanners” (1972); and, in a patent awarded to Burroughs Corporation in 1964, a light gun. This gun, in turn, found its way into the handheld controls for the first home video game system, produced by Sanders Associates.
The A-2 optophone was tested on three blind research subjects, including ham radio enthusiast Joe Piechowski, who was more of a technical collaborator. According to the reports RCA submitted to the CSD, these readers were able to correlate the “chirping” or “tweeting” sounds of the machine with letters “at random with about eighty percent accuracy” after 60 hours of practice. Close spacing on a printed page made it difficult to differentiate between letters; readers also had difficulty moving the stylus at a steady pace and in a straight line. Piechowski achieved reading speeds of 20 words per minute, which RCA deemed too slow.
Attempts were made to incorporate “human factors” and create a more efficient tonal code, to reduce reading time as well as learning time and confusion between letters. One alternate auditory display was known as the compressed optophone. Rather than generate multiple tones or chords for a single printed letter, which was highly redundant and confusing to the ear, the compressed version identified only certain features of a printed letter: such as the presence of an ascender or descender. Below is a comparison between the tones of the original optophone and the compressed version, recorded by physicist Patrick Nye in 1965. The following eight lower case letters make up the source material: f, i, k, j, p, q, r, z.
Original record in the author’s possession. With thanks to Elaine Nye, who generously tracked down two of her personal copies at the author’s request. The second copy is now held at Haskins Laboratories.

From Patrick Nye, “An Investigation of Audio Outputs for a Reading Machine,” AFB Research Bulletin (July 1965): 30.
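The logic of the compressed display can be illustrated with a small sketch: instead of sounding a chord for every column of ink, classify each letter by a handful of gross features, here just “has an ascender” and “has a descender,” and emit one signal per feature. The two-feature scheme and the tone assignments below are assumptions kept deliberately simple; Nye’s actual auditory display distinguished more features than this.

```python
# The eight lower-case source letters from Nye's 1965 recording,
# classified by two gross typographic features. The feature set and
# the tone labels are simplified assumptions for illustration.
ASCENDERS = set("fk")    # ink rising above x-height
DESCENDERS = set("jpq")  # ink dropping below the baseline

def compressed_code(letter):
    """Return the feature tones for one letter: a high blip for an
    ascender, a low blip for a descender, a mid tone for neither."""
    code = []
    if letter in ASCENDERS:
        code.append("high")
    if letter in DESCENDERS:
        code.append("low")
    if not code:
        code.append("mid")  # x-height-only letters such as i, r, z
    return code

for letter in "fikjpqrz":
    print(letter, compressed_code(letter))
```

The trade-off is audible in the recording above: the compressed code is far less redundant and easier on the ear, but many letters collapse onto the same signal, so context has to do more of the work of reading.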
Because of the seeming limitations of tonal reading, RCA engineers re-directed their research to add character recognition to the scanning process. This was controversial: direct translators like the optophone were perceived as too difficult because they required blind people to do something akin to learning to read print—learning a symbolic tonal or tactile code. At an earlier moment, braille had been critiqued on similar grounds; many in the blind community have argued that mainstream anxieties about braille sprang from its symbolic difference. Speed, moreover, is relative. Reading machine users protested that direct translators like the optophone were inexpensive to build and already available—why wait for the refinement of OCR and synthetic speech? Nevertheless, between November 1946 and May 1947, Zworykin, Flory, and Pike worked on a prototype “letter reading machine,” today widely considered to be the first successful example of optical character recognition (OCR). Before reliable synthetic speech, this device spelled out words letter by letter using tape recordings. The Letter-Reader was too massive and expensive for personal use, however. It also had an operating speed of 20 words per minute—hardly an improvement over the A-2 translator.
Haskins Laboratories, another affiliate of the Committee on Sensory Devices, began working on the reading machine problem around the same time, ultimately completing an enormous amount of research into synthetic speech and—as argued by Donald Shankweiler and Carol Fowler—the “speech code” itself. In the 1940s, before workable text-to-speech, researchers at Haskins wanted to determine whether tones or artificial phonemes (“speech-like speech”) were easier to read by ear. They developed a “machine dialect of English,” named wuhzi: “a transliteration of written English which preserved the phonetic patterns of the words.” An example can be played below. The eight source words are: With, Will, Were, From, Been, Have, This, That.
Original record in the author’s possession. From Patrick Nye, “An Investigation of Audio Outputs for a Reading Machine” (1965). With thanks to Elaine Nye.
Based on the results of tests with several human subjects, the Haskins researchers concluded that aural reading via speech-like sounds was necessarily faster than reading musical tones. Like the RCA engineers, they felt that a requirement of these machines should be a fast rate of reading. Minimally, they felt that reading speed should keep pace with rapid speech, at about 200 words per minute.
Funded by the Veterans Administration, members of Mauch Laboratories in Ohio worked on both musical optophones and spelled-speech recognition machines from the 1950s into the 1970s. One of their many devices, the Visotactor, was a direct-translator with vibro-tactile output for four fingers. Another, the Visotoner, was a portable nine-channel optophone. All of the Mauch machines were tested by Harvey Lauer, a technology transfer specialist for the Veterans Administration for over thirty years, himself blind. Below is an excerpt from a Visotoner demonstration, recorded by Lauer in 1971.
Visotoner demonstration. Original 7” open reel tape in author’s possession. With thanks to Harvey Lauer for sharing items from his impressive collection and for collaborating with the author over many years.
Later on the same tape, Lauer discusses using the Visotoner to read mail, identify currency, check over his own typing, and read printed charts or graphics. He achieved reading speeds of 40 words per minute with the device. Lauer has also told me that he prefers the sound of the Visotoner to that of other optophone models—he compares its sound to Debussy, or the music for dream sequences in films.
Mauch also developed a spelled speech OCR machine called the Cognodictor, which was similar to the RCA model but made use of synthetic speech. In the recording below, Lauer demonstrates this device by reading a print-out about IBM fonts. He simultaneously reads the document with the Visotoner, which reveals glitches in the Cognodictor’s spelling.
Original 7” open reel tape in the author’s possession. With thanks to Harvey Lauer.

The Cognodictor. Glendon Smith and Hans Mauch, “Research and Development in the Field of Reading Machines for the Blind,” Bulletin of Prosthetics Research (Spring 1977): 65.
In 1972, at the request of Lauer and other blind reading machine users, Mauch assembled a stereo-optophone with ten channels, called the Stereotoner. This device was distributed through the VA but never marketed, and most of the documentation exists in audio format, specifically in sets of training tapes that were made for blinded veterans who were the test subjects. Some promotional materials, such as the short video below, were recorded for sighted audiences—presumably teachers, rehabilitation specialists, or funding agencies.
Mauch Stereo Toner from Sounding Out! on Vimeo.
Video courtesy of Harvey Lauer.
Mary Jameson corresponded with Lauer about the stereotoner, via tape and braille, in the 1970s. In the braille letter pictured below she comments, “I think that stereotoner signals are the clearest I have heard.”
In 1973, with the marketing of the Kurzweil Reader, funding for direct translation optophones ceased. The Kurzweil Reader was advertised as the first machine capable of multi-font OCR; it was made up of a digital computer and flatbed scanner and it could recognize a relatively large number of typefaces. Kurzweil recalls in his book The Age of Spiritual Machines that this technology quickly transferred to Lexis-Nexis as a way to retrieve information from scanned documents. As Lauer explained to me, the abandonment of optophones was a serious problem for people with print disabilities: the Kurzweil Readers were expensive ($10,000-$50,000 each); early models were not portable and were mostly purchased by libraries. Despite being advertised as omnifont readers, they could not in fact recognize most printed material. The very fact of captchas speaks to the continued failures of perfect character recognition by machines. And, as the “familiarization tapes” distributed to blind readers indicate, the early synthetic speech interface was not transparent—training was required to use the Kurzweil machines.
Original cassette in the author’s possession.

Raymond Kurzweil and Kenneth Jernigan with the Kurzweil Reading Machine (NFB, 1977). Courtesy National Federation of the Blind.
Lauer always felt that the ideal reading machine should have both talking OCR and direct-translation capabilities, the latter being used to get a sense of the non-text items on a printed page, or to “preview material and read unusual and degraded print.” Yet the long history of the optophone demonstrates that certain styles of decoding have been more easily naturalized than others—and symbols have increasingly been favored if they bear a close relation to conventional print or speech. Finally, as computers became widely available, the focus for blind readers shifted, as Lauer puts it, “from reading print to gaining access to computers.” Today, many electronic documents continue to be produced without OCR, and thus cannot be translated by screen readers; graphical displays and videos are largely inaccessible; and portable scanners are far from universal, leaving most “ephemeral” print still unreadable.
—
Mara Mills is an Assistant Professor of Media, Culture, and Communication at New York University, working at the intersection of disability studies and media studies. She is currently completing a book titled On the Phone: Deafness and Communication Engineering. Articles from this project can be found in Social Text, differences, the IEEE Annals of the History of Computing, and The Oxford Handbook of Sound Studies. Her second book project, Print Disability and New Reading Formats, examines the reformatting of print over the course of the past century by blind and other print disabled readers, with a focus on Talking Books and electronic reading machines. This research is supported by NSF Award #1354297.