Tag Archive | hearing

The Better to Hear You With, My Dear: Size and the Acoustic World

Today the SO! Thursday stream inaugurates a four-part series entitled Hearing the UnHeard, which promises to blow your mind by way of your ears. Our Guest Editor is Seth Horowitz, a neuroscientist at NeuroPop and author of The Universal Sense: How Hearing Shapes the Mind (Bloomsbury, 2012), whose insightful work brings us directly to the intersection of the sciences and the arts of sound.

That’s where he’ll be taking us in the coming weeks. Check out his general introduction just below, and his own contribution for the first piece in the series. — NV

Welcome to Hearing the UnHeard, a new series of articles on the world of sound beyond human hearing. We are embedded in a world of sound and vibration, but the limits of human hearing let us perceive only a small piece of it. The quiet library screams with the ultrasonic pulsations of fluorescent lights and computer monitors. The soothing waves of a Hawaiian beach are drowned out by the thrumming infrasound of underground seismic activity near "dormant" volcanoes. Time, distance, and luck (and occasionally really good vibration isolation) separate us from the explosive sounds of world-changing impacts between celestial bodies. And vast amounts of information, ranging from the songs of auroras to the sounds of dying neurons, can be made accessible and understandable by translating them into human-perceivable sounds through data sonification.

Four articles will examine how this "unheard world" affects us. My first post below explores how our environment and evolution have constrained what is audible, and what tools we use to bring the unheard into our perceptual realm. In a few weeks, sound artist China Blue will talk about her experiences recording the Vertical Gun, a NASA asteroid-impact simulator that helps scientists understand how big collisions have shaped our planet (and that is very hard on audio gear). Next, Milton A. Garcés, founder and director of the Infrasound Laboratory at the University of Hawaii at Manoa, will talk about volcano infrasound and how acoustic surveillance is used to warn about hazardous eruptions. And finally, Margaret A. Schedel, composer and Associate Professor of Music at Stony Brook University, will help readers explore the world of data sonification, letting us listen in and gain greater intellectual and emotional understanding of the world of information by converting it to sound.

— Guest Editor Seth Horowitz

Although light moves much faster than sound, hearing is your fastest sense, operating about 20 times faster than vision. Studies have shown that we think at the same "frame rate" as we see, about 1-4 events per second. But the real world moves much faster than this, and doesn't always place things important for survival conveniently within your field of view. Think about the last time you were driving and suddenly heard the blast of a horn from a previously unseen truck in your blind spot.

Hearing also occurs prior to thinking, with the ear itself pre-processing sound. Your inner ear responds to changes in pressure that directly move tiny hair cells, organized by frequency, which send signals about what frequency was detected (and at what amplitude) toward your brainstem, where things like location, amplitude, and even how important the sound may be to you are processed, long before they reach the cortex where you can think about them. And since hearing sets the tone for all later perceptions, our world is shaped by what we hear (Horowitz, 2012).

But we can't hear everything. Rather, what we hear is constrained by our biology, our psychology, and our position in space and time. Sound is really about how the interaction between energy and matter fills space with vibrations. This makes size, of the sender, of the listener, and of the environment, one of the primary features that define your acoustic world.

You've heard about how much better your dog's hearing is than yours. I'm sure you got a slight thrill when you thought you could actually hear the "ultrasonic" dog-training whistles that are supposed to be inaudible to humans (sorry, but every one I've tested puts out at least some energy in the upper range of human hearing, even if it does sound pretty thin). But it's not that dogs hear better. Dogs and humans actually show about the same sensitivity to sound in terms of sound pressure, with humans' most sensitive region running from 1-4 kHz and dogs' from about 2-8 kHz. The difference is a question of range, and that is tied closely to size.

Most dogs, even big ones, are smaller than most humans, and their auditory systems are scaled similarly. A big dog is about 100 pounds, smaller than most adult humans. And since body parts tend to scale in a coordinated fashion, one of the first places to search for a link between size and frequency is the tympanum or eardrum, the earliest structure that responds to pressure information. An average dog's eardrum is about 50 mm2, whereas an average human's is about 60 mm2. In addition, while a human's cochlea is a spiral of 2.5 turns that holds about 3,500 inner hair cells, your dog's has 3.25 turns and about the same number of hair cells. In short: dogs probably have better high-frequency hearing because their eardrums are better tuned to shorter-wavelength sounds and their sensory hair cells are spread out over a longer distance, giving them a wider range.

Interest in how hearing works in animals goes back centuries. Classical image of comparative ear anatomy from 1789 by Andreae Comparetti.

Then again, if hearing were just about the size of the ear components, you'd expect that yappy 5-pound Chihuahua to hear much higher frequencies than the lumbering 100-pound St. Bernard. Yet hearing sensitivity at the two ends of the dog spectrum doesn't vary by much. This is because there's a big difference between what the ear can mechanically detect and what the animal actually hears. Chihuahuas and St. Bernards are both breeds derived from a common wolf-like ancestor that probably didn't have as much variability as we've imposed on the domesticated dog, so their brains are still largely tuned to hear what a medium-to-large pseudo-wolf-like animal should hear (Heffner, 1983).

But hearing is more than just the detection of sound. It's also important to figure out where a sound is coming from. A sound's location is calculated in the superior olive, nuclei in the brainstem that compare the difference in time of arrival of low-frequency sounds at your two ears and, for higher-frequency sounds, the difference in amplitude between your ears (because your head gets in the way, casting a sound "shadow" on the side furthest from the source). This means that animals with very large heads, like elephants, can figure out the location of longer-wavelength (lower-pitched) sounds, but will probably have problems localizing high-pitched sounds, because the shorter wavelengths will not even reach the other side of their heads at a useful level. On the other hand, smaller animals, which often have large external ears, are under greater selective pressure to localize higher-pitched sounds, but have heads too small to pick up the very low infrasonic sounds that elephants use.
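To put rough numbers on those two cues, here is a minimal sketch in Python using Woodworth's spherical-head approximation; the head widths are illustrative guesses for this sketch, not measurements from the studies cited on this page.

```python
import math

C = 343.0  # speed of sound in air (m/s) at roughly 20 degrees C

def max_itd(head_width_m):
    """Largest interaural time difference, for a sound arriving from the
    side, under Woodworth's spherical-head model: (r/c) * (pi/2 + 1)."""
    r = head_width_m / 2
    return (r / C) * (math.pi / 2 + 1)

def shadow_onset(head_width_m):
    """Rough frequency above which the head casts a useful sound shadow:
    wavelengths shorter than the head are attenuated on the far side."""
    return C / head_width_m

# Head widths below are hypothetical, for illustration only.
for name, width in [("mouse", 0.015), ("human", 0.18), ("elephant", 0.5)]:
    print(f"{name:9s} max time difference ~{max_itd(width) * 1e6:5.0f} us; "
          f"level cue useful above ~{shadow_onset(width):6.0f} Hz")
```

Run as-is, the sketch suggests that a mouse-sized head only shadows sounds above roughly 20 kHz, while an elephant-sized one already shadows sounds of a few hundred hertz, which is why the usable localization cues shift with body size.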

Audiograms (auditory sensitivity in air, measured in dB SPL) by frequency for animals of different sizes, showing the shift of maximum sensitivity to lower frequencies with increased size. Data replotted from: Sivian, L.J., & White, S.D. (1933). "On minimum audible sound fields." Journal of the Acoustical Society of America, 4: 288-321; ISO (1961); Heffner, H., & Masterton, B. (1980). "Hearing in glires: Domestic rabbit, cotton rat, feral house mouse, and kangaroo rat." Journal of the Acoustical Society of America, 68: 1584-1599; Heffner, R.S., & Heffner, H.E. (1982). "Hearing in the elephant: Absolute sensitivity, frequency discrimination, and sound localization." Journal of Comparative and Physiological Psychology, 96: 926-944; Heffner, H.E. (1983). "Hearing in large and small dogs: Absolute thresholds and size of the tympanic membrane." Behavioral Neuroscience, 97: 310-318; Jackson, L.L., et al. (1999). "Free-field audiogram of the Japanese macaque (Macaca fuscata)." Journal of the Acoustical Society of America, 106: 3017-3023.

But you as a human are a fairly big mammal. If you look up "Body Size Species Richness Distribution," which shows the relative sizes of animals living in a given area, you'll find that humans are among the largest animals in North America (Brown and Nicoletto, 1991). And your hearing abilities scale well with those of other terrestrial mammals, so you can stop feeling bad about your dog hearing "better." But what if, by comic-book science or alternate evolution, you were much bigger or smaller? What would the world sound like? Imagine you were suddenly mouse-sized, scrambling along the floor of an office. While the usual chatter of humans would be almost completely inaudible, the world would be filled with a cacophony of ultrasonics. Fluorescent lights and computer monitors would scream in the 30-50 kHz range. Ultrasonic eddies would hiss loudly from air-conditioning vents. Smartphones would not play music, but rather hum and squeal as their displays changed.

And if you were larger? For a human scaled up to elephantine dimensions, the sounds of the world would shift downward. While you could still hear (and possibly understand) human speech and music, the fine nuances of the upper frequency ranges would be lost, voices audible but mumbled and hard to localize. But you would gain the infrasonic world, the low rumbles of traffic noise and the thrumming of heavy machinery taking on pitch, color, and meaning. The seismic world of earthquakes and volcanoes would become part of your auditory tapestry. And you would hear over greater distances, as the long wavelengths of low-frequency sounds wrap around everything but the largest obstructions, letting you hear foghorns miles distant as if they were birdcalls nearby.

But these sounds are still in the realm of biological listeners, and the universe operates on scales far beyond that. Objects large and small have their own acoustic worlds, many beyond our ability to detect with the equipment evolution has provided. Weather phenomena, from gentle breezes to devastating tornadoes, blast throughout the infrasonic and ultrasonic ranges. Meteorites create infrasonic signatures in the upper atmosphere, trackable using a system devised to detect incoming ICBMs. Geophones, specialized low-frequency microphones, pick up extremely low-frequency signals foretelling volcanic eruptions and earthquakes. Beyond the Earth, we translate electromagnetic frequencies into the audible range, letting us listen to the whistlers and hoppers that signal the flow of charged particles and lightning in the atmospheres of Earth and Jupiter and to the microwave remains of the Big Bang, and we send listening devices on our spacecraft to let us hear the winds on Titan.

Here is a recording of whistlers captured by the Van Allen Probes, currently orbiting high above Earth's atmosphere:

When the computer freezes or the phone battery dies, we complain about how much technology frustrates us and complicates our lives. But our audio technology is also a source of wonder, not only letting us talk to a friend around the world or listen to a podcast from astronauts orbiting the Earth, but letting us listen in on unheard worlds. Ultrasonic microphones let us eavesdrop on bat echolocation and mouse songs; geophones let us wonder at elephants using infrasonic rumbles to communicate over long distances and find water. And scientific translation tools let us shift the vibrations of the solar wind and aurora, or even the patterns of pure math, into human-scaled songs of the greater universe. We are no longer constrained (or protected) by the ears that evolution has given us. Our auditory world has expanded into an acoustic ecology that contains the entire universe, and the implications of that remain wonderfully unclear.
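To make that translation step concrete, here is a minimal parameter-mapping sonification sketch in Python. The stand-in data series, the 200-2000 Hz mapping, and the output filename are all assumptions for illustration; none of the projects mentioned above necessarily work this way.

```python
import math
import struct
import wave

RATE = 44100                                      # CD-quality sample rate
data = [math.sin(i / 8.0) for i in range(120)]    # stand-in data series

# Map each data point linearly onto a pitch between 200 and 2000 Hz.
lo, hi = min(data), max(data)
freqs = [200.0 + 1800.0 * (v - lo) / (hi - lo) for v in data]

# Render 50 ms of sine tone per data point, keeping the phase continuous
# so the pitch glides from point to point instead of clicking.
samples, phase = [], 0.0
for f in freqs:
    for _ in range(RATE // 20):
        phase += 2.0 * math.pi * f / RATE
        samples.append(int(20000 * math.sin(phase)))

with wave.open("sonified.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)    # 16-bit mono
    w.setframerate(RATE)
    w.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```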

__

Exhibit: Home Office

This is a recording of my home office, made with standard stereo microphones. Aside from the usual typing, mouse clicking, and computer sounds, there are a couple of 3D printers running and some music playing: largely an environment you don't pay much attention to while you're working in it, yet one that is acoustically very rich if you do.


This sample was made by pitch-shifting the frequencies of sonicoffice.wav down, so that the ultrasonic range moves into the normal human range and cuts off at about 1-2 kHz, as if you were hearing with mouse ears. Sounds normally inaudible, like the squealing of the computer monitor cycling on and the high-pitched whine of the 3D printers' stepper motors, suddenly become much louder, while the familiar sounds are mostly gone.
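For readers who want to try something similar at home, one crude approach is to rewrite the file with a slower clock rate, which divides every frequency by the same factor (a 40 kHz squeal divided by 16 lands at 2.5 kHz) but, unlike whatever processing produced the sample above, also stretches the recording in time. The factor and output filename here are illustrative.

```python
import wave

FACTOR = 16  # shift everything down four octaves (illustrative choice)

# Read the ultrasonic-capable recording...
with wave.open("sonicoffice.wav", "rb") as src:
    params = src.getparams()
    frames = src.readframes(src.getnframes())

# ...then write the identical samples back out at 1/16 the sample rate,
# so playback runs slower and every pitch drops by a factor of 16.
with wave.open("mouse_ears.wav", "wb") as dst:
    dst.setparams(params._replace(framerate=params.framerate // FACTOR))
    dst.writeframes(frames)
```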


This recording of the office was made with a Clarke Geophone, a seismic microphone used by geologists to pick up underground vibration. Its primary sensitivity is around 80 Hz, although its range runs from 0.1 Hz up to about 2 kHz. All you hear in this recording are very low-frequency sounds and impacts (footsteps, keyboard strikes, vibration from the printers, some fan vibration) that you usually ignore, since your ears are not very well tuned to frequencies under 100 Hz.


Finally, this sample was made by pitch-shifting the frequencies of infrasonicoffice.wav up, as if you had grown to elephantine proportions. Footsteps and computer fan noise (usually almost undetectable at 60 Hz) become loud and tonal, and the normal pitch of the music and the computer typing has disappeared, aside from the bass. (WARNING: The fan noise is really annoying.)


The point is: a space can sound radically different depending on the frequency ranges you hear. Different elements of the acoustic environment pop up depending on the type of recording instrument you use (ultrasonic microphone, regular microphone, or geophone) or on the size and sensitivity of your ears.

Spectrograms (plots of acoustic energy [color] over time [horizontal axis] by frequency band [vertical axis]) from a 90-second recording in the author's home office, covering the auditory range from the ultrasonic (>20 kHz, top) through the sonic (20 Hz-20 kHz, middle) to the low-frequency and infrasonic (<20 Hz, bottom).
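A spectrogram like the ones described takes only a few lines with NumPy, SciPy, and Matplotlib; the filename and FFT settings below are assumptions, not the author's actual analysis parameters.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("office.wav")  # hypothetical recording
if audio.ndim > 1:
    audio = audio.mean(axis=1)            # mix stereo down to mono

# A long FFT window (4096 samples) gives good low-frequency resolution.
freqs, times, power = spectrogram(audio, fs=rate, nperseg=4096)

plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="auto")
plt.yscale("symlog")  # spread the infrasonic through ultrasonic bands
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.colorbar(label="Power (dB)")
plt.show()
```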

Featured image by Flickr User Jaime Wong.

Seth S. Horowitz, Ph.D. is a neuroscientist whose work in comparative and human hearing, balance and sleep research has been funded by the National Institutes of Health, National Science Foundation, and NASA. He has taught classes in animal behavior, neuroethology, brain development, the biology of hearing, and the musical mind. As chief neuroscientist at NeuroPop, Inc., he applies basic research to real world auditory applications and works extensively on educational outreach with The Engine Institute, a non-profit devoted to exploring the intersection between science and the arts. His book The Universal Sense: How Hearing Shapes the Mind was released by Bloomsbury in September 2012.


REWIND! If you liked this post, check out …

Reproducing Traces of War: Listening to Gas Shell Bombardment, 1918– Brian Hanrahan

Learning to Listen Beyond Our Ears– Owen Marshall

This is Your Body on the Velvet Underground– Jacob Smith

I Can’t Hear You Now, I’m Too Busy Listening: Social Conventions and Isolated Listening

Editor's Note: I hate to interrupt our busy readers, but I just wanted to mention that today's post by Osvaldo Oyola marks our last entry in SO!'s July Forum on Listening. For the full introduction to the World Listening Month! series click here. To peep the previous posts, click here. Also, look for our #Blog-O-Versary 3.0 post coming up on July 27th, a multimedia celebration of three years of Sounding Out! awesomeness (complete with a free, downloadable soundtrack compiled by our editors and writers for your listening pleasure). Now for some pure, uninterrupted reading (we hope!). –JSA

—-

In calling attention to listening as an activity, July 18th's World Listening Day made me think about our social conventions around listening. While it is not uncommon for folks to pay lip service to listening's value, that lip service ignores the variety of ways listening is actually socially prioritized (and the multiple meanings housed in the term "listening"). Case in point: the officiant at my recent wedding exhorted my about-to-be-wife and me to listen to each other: "listen for what is consistent and familiar, but also for what is new, emergent, even sweetly radical in your partner." Used in this sense, listening refers to focused attention to the meaning of sound, particularly language. His words suggest that our relationship would be strengthened by listening's ability to convey interpersonal knowledge.

While listening is certainly crucial to social bonds, my own experience as a careful and engaged listener of music suggests that some of the most crucial listening we do happens as an isolated (and isolating) experience, especially when listening involves recorded sound. However, its importance to our individual well-being often seems directly inverse to the seriousness other people give it. Not my now-wife, of course, but uninterrupted musical listening was not an official part of our vows, either. There is an inherent tension between social and isolated forms of listening.

Sign o’ the Times,  still my fave 25 years later.

As a teenager, for example, whatever my arguments with my mom might have really been about, a frequent instigator of a blow-up was her reaction to my annoyance when she'd interrupt my listening at her whim. I'd be sitting in my room listening in anticipation of what I have often called my favorite recorded human sound: that moment in Prince's "Adore" on Sign o' the Times around 2:55 (music-nerd correction: on the album version it is actually at 2:48) when Prince makes a little moan before the second time he sings "crucial." Then mom would burst into the room to ask me a question, giving no heed to the stereo. I often responded the same way: "If I were reading or watching TV, you'd say 'excuse me' to get my attention, just like you always taught me a polite person should do. But when it is music you just go ahead and interrupt, as if I weren't doing anything. But I am doing something. I'm listening to music. It's an activity." (Of course, you have to imagine that response laden with all the snottiness only a teenager could muster.) You would've thought she'd understand, since my obsessive love of music was influenced in no small part by her huge collection of salsa records, but my mom's listening is mostly predicated on embodying the music through dance. This kind of listening is not so much about close attention to the details of the sound as about a visceral reception of its physicality. Again, like listening to speech, the form of listening given to dance commonly reinforces social bonds: between dance partners, among dancers in a crowd, between dancers and DJ or band.

The kind of listening I am describing cuts us off from the immediate social world. It requires that people who want your attention either rudely interrupt your listening pleasure or ask forgiveness for the interruption. Theoretically, they could wait patiently, but this rarely happens, so the listener often feels forced to downplay the annoyance that comes with interruption, lest they break a social bond and/or belie how important this kind of listening really is to them.

“Tuning Out” by Flickr User CarbonNYC

Of course, the ubiquity of headphones suggests that many people want to be focused enough on their listening to avoid interruption. (Though that may be a chicken-and-egg situation, as I can't help but wonder to what degree the headphones become an excuse for social disengagement.) Either way, it is noteworthy that wearing headphones has become a visual cue of a desire to be isolated in the listening practice, even in an otherwise public environment. If you are going to ask a stranger on the subway for directions, you are less likely to choose the person with headphones on, and if you do choose to ask them, the headphones direct the form of social action required to get their attention. It calls for a visual signal, like a gesture to remove the headphones, or even polite physical contact, like a tap on the shoulder; you certainly would not pull the headphones off their ears and just start talking at them, as you might talk at someone listening to music through speakers if you happened to walk into the room. The invention of things like the Doffing Headphone handle, which allows headphone listeners to greet others by "doffing" their headphones the way one used to do with a hat, arises from the need of isolated listeners to interact with the social world even while enmeshed in their portable bubble of personal space. Be that as it may, the handles have not exactly caught on.

Doffing Headphones

Perhaps headphones are just the logical evolution of crafting a listening space. They are certainly much more feasible than the "Yogi enclosure" Keir Keightley discusses in his article "'Turn It Down!' She Shrieked: Gender, Domestic Space, and High Fidelity, 1948-59." The "Yogi enclosure" was High Fidelity magazine's tongue-in-cheek (and highly gendered) 1954 solution to a man's inability to enjoy his hi-fi in a space where, the article suggests, he is likely to be harangued by his wife and annoyed by his children. This masculinizing of listening speaks to the social contours of what is ostensibly an individual practice. In the case of my teenaged self and my mother, I wanted my 1,000th listen to Dark Side of the Moon to dictate her behavior in the way that other individual activities in a shared space dictate behavior through social conventions. Looking back, I was also trying to claim space in her home. I never considered how, as a mom, she was expected to always be available, never free from interruption no matter what she was doing. Keightley's article demonstrates this by explaining the construction of listening technologies as a domain of men, one that requires women and children to be quiet in order to allow a man the pleasure of his equipment. I could imagine my right to be uninterrupted, my right to have my listening taken seriously and considered a productive activity, by virtue of my gender and my youth. Now that I think of it, even the majority of my mom's record-listening and salsa dancing accompanied household chores, as fierce adherence to gender roles claimed time she might have preferred to dedicate to listening alone.

Listening by Flickr User Alessandra Luvisotto

While gender politics have changed significantly since 1954, careful music listeners of any gender still seek to define the use of space through the use of sound, intentionally or unintentionally. There is a satisfaction that comes with filling a space with sound that I feel cannot be matched by even the highest-quality noise-canceling headphones. Sound emerging from speakers and moving through the air creates a presence. It demands attention. It dictates behavior. It is a kind of power.

Image by Flickr user Ken Schwatz

Another case in point: I can remember my college roommate and me (the same fellow who'd end up being the officiant at my wedding, coincidentally enough) traveling from store to store to try out different stereo speakers, carrying a CD copy of This Mortal Coil's Filigree & Shadow and getting salesmen to play the soft sounds on tracks like "Thaïs (II)" as a test. These were the days before online comparison shopping, so in order to achieve this idealized listening experience (which for us meant that the loudest and softest sounds were equally clear) we had to annoy salesmen with our self-important discussion of minuscule differences in sound quality and our failure to actually purchase the costly speakers we were trying.


What I am trying to convey with this anecdote is that, while the idealized listening experience we imagined was an isolated one (probably something involving staring at the glow-in-the-dark star stickers on the ceiling of our darkened dorm room), it was born of the sociality and power I mentioned above. We were exercising a form of privilege (or at least practicing for an imagined future masculine power over the domestic sphere). This imagined idealized listening not only required a developed understanding of what we were listening for, but also a shared sense of the ideal circumstances for those focused, uninterrupted, close listening sessions. And those ideal circumstances required a freedom from the responsibilities of social bonds that we, as young men, never doubted we could access. There is no part of listening (as opposed to merely hearing) that isn't social, and isolated and more explicitly interpersonal forms of listening feed each other, but only when both are valued, nurtured, and made possible.

I thought that by exploring these isolated listening experiences I might come closer to understanding the primacy of the visual in the social etiquette of interruption, but I am no closer. Instead, I am left to consider the dynamics of power that (dis)allow space for close listening. All I have learned about the matter since those teenaged arguments with my mom is that, if I plan to do some real listening, either I need to be alone in the house or the onus is on me, the listener, to make an announcement: "I will be listening to music now." Still, more often than not, I put on my headphones. The fact remains that without the visual signals that let others know listening is occurring (headphones, dancing), listening as a solo activity is all too often devalued and interrupted. Sound alone is not enough.

Now if you'll excuse me, I just got Jonathan Lethem's book on Fear of Music, and I plan on closely listening to each track of the Talking Heads' record before and after reading the associated chapter in Lethem's book. Let's hope I won't be interrupted.

Osvaldo Oyola is a regular contributor to Sounding Out! and ABD in English at Binghamton University.