Tag Archive | Music

Sounds of Science: The Mystique of Sonification

Welcome to the final installment of Hearing the UnHeard, Sounding Out!'s series on what we don't hear and how this unheard world affects us. The series started out with my post on hearing, large and small, continued with a piece by China Blue on the sounds of catastrophic impacts, and concluded with Milton Garcés's piece on the infrasonic world of volcanoes. To cap it all off, we introduce The Sounds of Science by professor, cellist, and interactive media expert Margaret Schedel.

Dr. Schedel is an Associate Professor of Composition and Computer Music at Stony Brook University. Through her work, she explores the relatively new field of data sonification, generating new ways to perceive and interact with information through the use of sound. While everyone is familiar with the graphs and images used to convey complex information, her work explores how we can expand our understanding of even complex scientific information by using our fastest and most emotionally compelling sense: hearing.

– Guest Editor Seth Horowitz

With the invention of digital sound, the number of scientific experiments using sound has skyrocketed in the 21st century, and as Sounding Out! readers know, sonification has started to enter the public consciousness as a new and refreshing alternative modality for exploring and understanding many kinds of datasets emerging from research into everything from deep space to the underground. We seem to be in a moment in which "science that sounds" has a special magic, a mystique that relies to some extent on popular misunderstandings about the processes and potentials of that alternative modality.

For one thing, using sound to understand scientific phenomena is not actually new. Diarist Samuel Pepys wrote about meeting scientist Robert Hooke in 1666 that "he is able to tell how many strokes a fly makes with her wings (those flies that hum in their flying) by the note that it answers to in musique during their flying." Unfortunately, Hooke never published his findings, leaving researchers to speculate on his methods. One popular theory is that he tied strings of varying lengths between a fly and an ear trumpet, recognizing that sympathetic resonance would cause the correct length of string to vibrate, thus allowing him to calculate the frequency of the wingbeats. Even Galileo used sound, demonstrating the constant acceleration of a ball due to gravity by using an inclined plane with thin moveable frets. By adjusting the placement of the frets until the clicks created an even tempo, he was able to derive a mathematical equation describing how time and distance relate when an object falls.
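In modern notation, that even tempo is the signature of uniform acceleration: if the clicks arrive at equal time intervals t = 1, 2, 3, ..., the frets must sit at distances in the ratio 1 : 4 : 9 : ..., which is the law of fall Galileo recovered:

```latex
d(t) = \tfrac{1}{2} g t^{2}, \qquad d(1) : d(2) : d(3) : \cdots = 1 : 4 : 9 : \cdots
```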

Illustration from Robert Hooke’s Micrographia (1665)

There have also been other scientific advances using sound in the more recent past. The stethoscope was invented in 1816 for auscultation, listening to the sounds of the body. It was later applied to machines, listening for trouble in the operation of technological gear. Underwater sonar was patented in 1913 and is still used to navigate and communicate using hydroacoustic phenomena. The Geiger counter was developed in 1928 using principles discovered in 1908; it is unclear exactly when the distinctive sound was added. These are all examples of auditory display (AD); sonification, the generation or manipulation of sound using data, is a subset of AD. As the foreword to The Sonification Handbook states, "[Since 1992] Technologies that support AD have matured. AD has been integrated into significant (read 'funded' and 'respectable') research initiatives. Some forward thinking universities and research centers have established ongoing AD programs. And the great need to involve the entire human perceptual system in understanding complex data, monitoring processes, and providing effective interfaces has persisted and increased" (Thomas Hermann, Andy Hunt, and John G. Neuhoff, eds., The Sonification Handbook, iii).

Sonification clearly enables scientists, musicians, and the public to interact with data in a very different way, particularly compared to the far more numerous techniques involving vision. Indeed, because hearing functions quite differently than vision, sonification offers an alternative kind of understanding of data (sometimes a more accurate one), which would not be possible using eyes alone. Hearing is omni-directional: our ears don't have to be pointing at a sound source in order to sense it. Furthermore, the temporal resolution of our hearing is thousands of times finer than that of our vision. To reproduce a moving image, film needs a sampling rate (the frame rate) of only 24 frames per second, while audio has to be sampled 44,100 times per second to be reproduced accurately. In addition, aural perception works on simultaneous time scales: we can take in multiple streams of audio data at once at many different dynamics, while our pupils dilate and contract, limiting how much visual data we can absorb at a single time. Our ears are also amazing at detecting regular patterns over time in data; we hear these patterns as frequency, harmonic relationships, and timbre.

Image credit: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.

But hearing isn't simple, either. The current fascination with sonification can obscure the fact that aesthetic decisions must be made in order to translate data into the auditory domain. Headlines such as "Here's What the Higgs Boson Sounds Like" are much sexier than headlines such as "Here is What One Possible Mapping of Some of the Data We Have Collected from a Scientific Measuring Instrument (which itself has inaccuracies) Into Sound." To illustrate the complexity of these aesthetic decisions, which are built into every sonification, I focus here on how my collaborators and I have been using sound to understand many kinds of scientific data.

My husband, Kevin Yager, a staff scientist at Brookhaven National Laboratory, works at the Center for Functional Nanomaterials using x-ray scattering data to probe the structure of matter. One night I asked him how exactly the science of x-ray scattering works. He explained that x-rays "scatter" off of all the atoms and particles in the sample, and the intensity is measured by a detector. He can then calculate the structure of the material using the Fast Fourier Transform (FFT) algorithm. He started to explain the FFT to me, but I interrupted him, because I use FFTs all the time in computer music. The same algorithm he uses to determine the structure of matter, musicians use to separate frequency content from time. When I was researching this post, I even found a computer music site that discusses x-ray scattering as a precursor of the FFT's use in sonic applications.
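To make that shared machinery concrete, here is a minimal sketch (mine, for illustration; not Kevin's analysis code) of the FFT doing its musical job, pulling the frequency content out of a sampled tone. The same transform is what relates a scattering pattern to real-space structure:

```python
import numpy as np

# One second of a 440 Hz tone sampled at 44,100 Hz.
sr = 44100
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440 * t)

# The FFT separates frequency content from time.
spectrum = np.fft.rfft(signal)                  # time -> frequency
freqs = np.fft.rfftfreq(len(signal), d=1 / sr)  # frequency of each bin

peak = freqs[np.argmax(np.abs(spectrum))]
print(f"Dominant frequency: {peak:.0f} Hz")     # prints ~440 Hz
```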

To date, most sonifications have used data which changes over time: a fly's wings flapping, a heartbeat, a radiation signature. Except in special cases, Kevin's data does not exist in time; it is a single snapshot. But because data from x-ray scattering is a Fourier transform of the real-space density distribution, we could use additive synthesis (multiple simultaneous sine waves) to represent different spatial modes. Using this method, we swept through his data radially, like a clock hand, making timbre-based sonifications from the data by synthesizing sine waves with loudness based on the intensity of the scattering data and frequency based on the position.

We played a lot with the settings of the additive synthesis, including the length of the sound, the highest frequency, and even the number of frequency bins (going back to the clock metaphor, pretend the clock hand is a ruler; the number of frequency bins would be the number of demarcations on the ruler), eventually arriving at a set of optimized variables.
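A rough sketch of this kind of mapping might look like the following. It is illustrative only, with assumed names and mappings rather than our actual code: one sine wave per radial bin, frequency set by position along the "ruler," loudness set by the scattering intensity under the sweeping "clock hand."

```python
import numpy as np

def sonify_scattering(image, n_bins=50, duration=5.0,
                      f_min=100.0, f_max=4000.0, sr=44100):
    """Additive-synthesis sketch for a 2D scattering image (illustrative)."""
    cy, cx = image.shape[0] / 2.0, image.shape[1] / 2.0
    radii = np.linspace(1, min(cy, cx) - 1, n_bins)  # "ruler" demarcations
    freqs = np.linspace(f_min, f_max, n_bins)        # one sine per bin
    n = int(duration * sr)
    t = np.arange(n) / sr
    angles = np.linspace(0, 2 * np.pi, n)            # the sweeping "clock hand"
    out = np.zeros(n)
    for r, f in zip(radii, freqs):
        # Intensity along this ring, read off as the hand sweeps,
        # becomes the amplitude envelope of this bin's sine wave.
        ys = (cy + r * np.sin(angles)).astype(int)
        xs = (cx + r * np.cos(angles)).astype(int)
        out += image[ys, xs] * np.sin(2 * np.pi * f * t)
    return out / (np.max(np.abs(out)) + 1e-12)       # normalize to [-1, 1]

# e.g. audio = sonify_scattering(np.random.rand(512, 512), n_bins=50)
```

Changing n_bins, duration, and f_max here is exactly the kind of tweaking described above.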

Here is one version of the track we created using 10 frequency bins:

[audio]

Here is one we created using 2000:

[audio]

And here is one we created using 50 frequency bins, which we settled on:

[audio]

On a software synthesizer, this 50-bin version would be like the default setting. In the future we hope to have an interactive graphical user interface where sliders control these variables, just as a musician tweaks the sound of a synth, so scientists can bring out or mask aspects of the data.

To hear what that would be like, here are a few tracks that vary in length:

[audio]

[audio]

[audio]

Finally, here is a track we created using different mappings of frequency and intensity:

[audio]

Having these sliders would reinforce to the scientists that we are not creating "the sound of a metallic alloy"; we are creating one sonic representation of the data from the metallic alloy.

It is interesting that such a representation can be vital to scientists. At first, my husband went along with this sonification project as more of a thought experiment than something he thought would actually be useful in the lab, until he heard something distinct in one of the sounds, suggesting that a sample was misaligned. Once Kevin heard that glitched sound (you can hear it in the video above), he was convinced that sonification was a useful tool for his lab. He and his colleagues are dealing with measurements 1/25,000th the width of a human hair, aiming an x-ray through twenty pieces of equipment to get the beam focused just right. If any piece of equipment is out of kilter, the data can't be collected. This is where our ears' omni-directionality is useful: scientists can be working at their computers and, from the ambient sound alone, know when a sample is misaligned.

It remains to be seen (or heard) whether the sonifications will be useful for actually understanding the material structures. We are currently running an experiment using Mechanical Turk to determine whether this kind of multi-modal display (using vision and audio) is actually helpful. Basically, we are training one group of people on just the images of the scattering data and testing how well they do, and training another group on the images plus the sonification and testing how well they do.

I'm also working with collaborators at Stony Brook University on sonification of data. In one experiment we are using ambisonic (3-dimensional) sound to create a sonic map of the brain in order to understand drug addiction. Standing in the middle of the ambisonic cube, we hope to find relationships between voxels, cubes of brain tissue analogous to pixels. When neurons fire simultaneously in different areas of the brain, there is most likely a causal relationship, which can help scientists decode the brain activity of addiction. Computer vision researchers have been searching for these relationships unsuccessfully; we hope that our sonification will allow us to hear associations between distinct parts of the brain which are not easily recognized with sight. We are hoping to leverage the temporal pattern recognition of our auditory system, but we have been running into problems doing the sonification: each slice of data from the fMRI has about 300,000 data points. We have it working with 3,000 data points, but either our programming needs to get more efficient, or we need a much more powerful computer in order to work with all of the data.
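A back-of-the-envelope calculation shows why the jump from 3,000 to 300,000 points hurts. If each data point drives its own sine oscillator (one plausible design, assumed here purely for illustration), the work grows linearly with the number of points:

```python
# Rough cost estimate: one sine oscillator per data point at 44.1 kHz.
sr = 44100
for n_points in (3_000, 300_000):
    evals = n_points * sr  # sine evaluations per second of audio
    print(f"{n_points:>7} points -> {evals / 1e9:.1f} billion sine "
          f"evaluations per second of audio")
# 3,000 points:   ~0.1 billion per second -- manageable
# 300,000 points: ~13.2 billion per second -- hence the bottleneck
```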

On another project we are hoping to sonify gait data using smartphones. I'm working with some of my music students and a professor of Physical Therapy, Lisa Muratori, who works on understanding the underlying mechanisms of mobility problems in Parkinson's Disease (PD). The physical therapy lab has a digital motion-capture system and a split-belt treadmill for asymmetric stepping; the patients are supported by a harness so they don't fall. PD is a progressive nervous system disorder characterized by slow movement, rigidity, tremor, and postural instability. Because of degeneration of specific areas of the brain, individuals with PD have difficulty using internally driven cues to initiate and drive movement. However, many studies have demonstrated an almost normal movement pattern when persons with PD are provided external cues, including significant improvements in gait with rhythmic auditory cueing. So far the research with PD and sound has been unidirectional: the patients listen to sound and try to match their gait to the external rhythms of the auditory cues. In our system we will use biofeedback to sonify data from sensors the patients will wear and feed error messages back to the patient through music. Eventually we hope that patients will be able to adjust their gait by listening to self-generated musical distortions on a smartphone.
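One candidate mapping, sketched below purely as an illustration (the function name, sensor values, and mapping are assumptions, not the project's actual design), is to detune the music in proportion to left/right step-time asymmetry, so that a symmetric gait sounds "in tune":

```python
def asymmetry_to_detune(left_step_ms, right_step_ms, max_detune_cents=100.0):
    """Map left/right step-time asymmetry to a detuning amount in cents.

    A perfectly symmetric gait returns 0 (no distortion); larger
    asymmetries push the musical cue further out of tune."""
    asymmetry = abs(left_step_ms - right_step_ms) / max(left_step_ms,
                                                        right_step_ms)
    return asymmetry * max_detune_cents

print(asymmetry_to_detune(620, 640))  # nearly symmetric gait -> ~3 cents
print(asymmetry_to_detune(500, 800))  # strongly asymmetric  -> ~38 cents
```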

As sonification becomes more prevalent, it is important to understand that aesthetic decisions are inevitable and even essential in every kind of data representation. We are so accustomed to looking at visual representations of information—from maps to pie charts—that we may forget that these are also arbitrary transcodings. Even a photograph is not an unambiguous record of reality; the mechanics of the camera and artistic choices of the photographer control the representation. So too, in sonification, do we have considerable latitude. Rather than view these ambiguities as a nuisance, we should embrace them as a freedom that allows us to highlight salient features, or uncover previously invisible patterns.

__

Margaret Anne Schedel is a composer and cellist specializing in the creation and performance of ferociously interactive media. She holds a certificate in Deep Listening with Pauline Oliveros and has studied composition with Mara Helmuth, Cort Lippe and McGregor Boyle. She sits on the boards of 60×60 Dance, the BEAM Foundation, Devotion Gallery, the International Computer Music Association, and Organised Sound. She contributed a chapter to the Cambridge Companion to Electronic Music, and is a joint author of Electronic Music published by Cambridge University Press. She recently edited an issue of Organised Sound on sonification. Her research focuses on gesture in music, and the sustainability of technology in art. She ran SUNY’s first Coursera Massive Open Online Course (MOOC) in 2013. As an Associate Professor of Music at Stony Brook University, she serves as Co-Director of Computer Music and is a core faculty member of cDACT, the consortium for digital art, culture and technology.

Featured Image: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.

Research carried out at the Center for Functional Nanomaterials, Brookhaven National Laboratory, is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-AC02-98CH10886.

REWIND! ….. If you liked this post, you might also like:

The Noises of Finance–Nicholas Knouf

Revising the Future of Music Technology–Aaron Trammell

A Brief History of Auto-Tune–Owen Marshall

This is Your Body on the Velvet Underground

“Everyone who bought one of those 30,000 copies started a band.” Brian Eno’s remark about the Velvet Underground’s brilliant but commercially lackluster 1967 debut album was re-circulated widely last October, when fans and critics mourned the passing of Lou Reed, lead songwriter for the band, and a key cultural figure of the last fifty years, by any metric.

The remark has become trite through overuse, but not the sentiment it captures. A band that has, since even before Andy Warhol's Factory, been linked to an aesthetic of menace, hysteria, and psychosis didn't just "inspire" or "provoke" much of the music, art, and sensibilities of the post-1960s era. It extruded that era.

At Sounding Out!, we decided that in order to come to grips with Reed's work in general (and the Velvet Underground in particular) from a Sound Studies perspective, we'd have to adopt that spirit of provocation. I asked two prominent writers in the field for articles about how this band changed, and continues to change, the experience and history of sound, in a short series, Start A Band: Lou Reed and Sound Studies. I'm thrilled to present the first of our articles from returning author Jacob Smith of Northwestern University, a musician and accomplished author of several distinguished books on sound and media history. Stay tuned next week for a second installment from Tim Anderson of Old Dominion University, an award-winning writer and co-chair of the Sound Studies SIG at the Society for Cinema and Media Studies.

–NV

Lou Reed's recent death has inspired many critics to return to his groundbreaking work with the Velvet Underground (VU). Albums such as "The Velvet Underground and Nico" (1967), "White Light/White Heat" (1968), and "The Velvet Underground" (1969) have the reputation of influencing everyone from David Bowie, Iggy Pop, and Roxy Music to the Sex Pistols, Talking Heads, REM, and Nirvana. Many recent obituaries describe VU in literary terms, citing Reed's "lyrical honesty" and "rock and roll poetry," and touting his songs as "serious writing" and even a kind of "Great American Novel." Much is missed by taking such a decidedly literary approach to sound recordings, and thanks to the emergence of Sound Studies as a vibrant academic field, there is an alternative. Can what Jonathan Sterne has called the "interdisciplinary ferment" of Sound Studies help us to re-think the work of this seminal rock band (The Sound Studies Reader, 2)? I think it can.

For a start, Sound Studies emboldens us to base our analysis on VU’s records, which have often been oddly upstaged by other aspects of their career: Reed’s street-level lyrics to be sure; but also the group’s role as background music to Andy Warhol’s Factory; or their use of drones and feedback, which makes them a footnote in the history of avant-garde music; or their influence on the glam and punk explosions of the 1970s. Sound Studies encourages us to start with VU’s records, but the next step is not necessarily a formal musicological analysis.

For some of its proponents, including Steven Connor, Sound Studies is best understood as part of a broader investigation of the "fertility of the relations" between the senses, and VU's albums turn out to be an excellent place to begin an exploration of the multisensory experience of recorded sound ("Edison's Teeth: Touching Hearing," in Hearing Cultures, ed. Veit Erlmann, 54). This essay explores the tactile experience of VU's records, inspired by work on the tactile dimension of the cinema. I borrow the organizational structure of Jennifer M. Barker's book The Tactile Eye, which moves from a discussion of sensations at the surface of the body, to muscular responses, and lastly, to "the murky recesses of the body, where heart, lungs, pulsing fluids, and firing synapses receive, respond to, and reenact the rhythms of cinema" (2-3). Think of this essay as a body-scan of the VU listening experience ("this is your body on VU") that follows a similar path from the skin, to the musculature, and finally, to the viscera.

Downy Sins

Writing about the tactile experience of movies concerns modes of looking that resemble touching, a "haptic visuality" that attends to textures and surfaces and moves over the image like a caress (see Laura U. Marks, The Skin of the Film, 183, and Touch, 117). A form of "haptic listening" has also been commonplace in the culture of popular sound recordings. Much of the recorded popular music of the past century has invested meaning in what Theodore Gracyk calls "very specific sound qualities and their textural combination" (Rhythm and Noise, 61). From Bo Diddley to Aphex Twin, pop recordings have tended to stress evocative timbres, idiosyncratic voices, and signature sounds over structural or lyrical complexity.

VU’s records exemplify that tendency because their complexity can be found more on the level of timbre than in musical structure or instrumentation. Moreover, Reed’s lyrics often encourage a blurring of listening and touching. On “Venus in Furs,” listeners are prompted to hear John Cale’s viola stabs as the licks and bites of a mistress’s whip. “Sister Ray” cues haptic listening by chugging resolutely on a single chord for seventeen minutes, while a plasmatic organ performance mutates from elegant bass arpeggios to shimmering waves of icy noise. As with “Venus in Furs,” Reed’s lyrics tie the textural complexity of “Sister Ray” to the surface of the body, through his descriptions of searching the skin of his arm for a “mainline” vein, and a first-person account of receiving oral sex. These examples demonstrate that the poetry of Reed’s lyrics is intimately bound up with the sonic texture of VU’s recordings, and moreover, that he was adept at liberating the erotic potential of haptic listening.

Run Run Run

Films can produce an empathetic muscular response in the viewer’s body, as when we flinch in response to a horror film, or clench our fists while watching a thrilling action movie (Barker, The Tactile Eye 94, 83, 72). Listeners can have similarly empathetic relationships with recorded sound when they move along with the rhythms of a dance record, synchronize their workout or commute to a carefully designed playlist, or embody a recorded performance by miming an air guitar or air drums.

The members of VU had distinctive styles of instrumental performance that made their records evoke powerful muscular responses in listeners. Consider the piano track on “Waiting For the Man,” which loses any semblance of melodic content to become the sheer act of pounding on the keyboard by the end of the song. Reed is usually regarded as a lyricist, but he is just as influential as a muscular rhythm guitar player. “What Goes On” has a minimalist arrangement that eschews structural development to become a showcase for Reed’s vigorous strumming. The second half of the track lacks vocals or a conventional solo instrument, and so feels like a diagram of the human body that reveals the pulsating musculature beneath the skin.

Reed’s jangly rhythm guitar could dominate the mix of VU tracks like “What Goes On” because it occupies a sonic niche that, in a more typical rock arrangement, would be filled by the hi-hat or ride cymbal. In fact, it is Maureen Tucker’s distinctive drumming that is the main source of muscle power on VU’s records. Standing behind a spare kit consisting of little more than a snare and a bass drum turned on its side, Tucker attacked her instrument with relentless intensity, raising her mallet over her head with each bone-cracking snare hit. A review of a live appearance in 1968 observed that Tucker “beats the shit” out of her drums so that the sound “slams into your bowels and crawls out your asshole” (See Clinton Heylin, All Yesterdays’ Parties, 64). Hear (and feel) for yourself, on the VU track “Foggy Notion,” a seven-minute drum and guitar workout.

Tucker was a pioneer female instrumentalist in the male-dominated world of rock. “I didn’t want to be the one to blow it,” she said in an interview. “I wasn’t gonna say, ‘Well, they’ll say she’s a girl, she can’t do it.’ So I was determined, I wasn’t gonna stop” (Albin Zak III, The Velvet Underground Companion, 1965). Ironically, her uncompromising and supremely physical performances were so minimal and precise that she was sometimes compared to a machine. A Verve Records press release from 1968 referred to the fact that Tucker had briefly held a job at IBM, and wrote that “her symphonic simplicity is like that of a human computer.” One trajectory of VU’s influence leads to the electronic austerity of bands like Kraftwerk, but an attention to the tactile dimension of the band’s records prevents Tucker, one of rock’s most muscular drummers, from disappearing into the circuitry.

The Body Lies Bare

A tactile analysis of VU records can go deeper still, to document their relation to the body’s viscera. The experience of the inner body is usually hidden from us, and gains our attention only when organ systems produce an overall effect like nausea (Barker, The Tactile Eye, 125). We lack direct conscious control over most of our visceral responses, but we can stimulate them through the ingestion of drugs, which of course, is the topic of many of VU’s most famous tracks. But where other rock bands of the 1960s associated the drug experience with whimsical flights of the imagination, VU’s drug references are bluntly visceral.

A still from a 1966 film of the Velvet Underground rehearsing by Rosalind Stevenson

“Heroin” is a sonic re-enactment of the physical effects of the eponymous drug, conveyed not only via Reed’s lyrics, but in the backing track’s fluctuations between dreamy bliss and frantic rush. “White Light/White Heat” fuses two sensory metaphors, one visual and one tactile, in order to point to an embodied experience beyond them both. Listen to how the track ends, with surging cymbals and a distorted bass figure whose spasmodic rhythm suggests the dilation of blood vessels, the firing of synapses, and the tightening and release of internal organs that have been kicked into amphetamine overdrive.

The mysterious visceral body can also emerge into our consciousness in moments when the internal rhythms of the heart or lungs are destabilized, as in a sudden heart palpitation or violent case of the hiccups (Barker, The Tactile Eye, 128-29). “Lady Godiva’s Operation” provides a vivid demonstration. The first half of the track is run-of-the-mill hippy exotica, with John Cale’s lead vocal given the conventional placement in the center of the stereo picture. This calm sonic surface is unsettled when Cale’s voice is decentered, shifting first to the left and then the right speaker. The lead vocal fractures even further when Reed begins to finish each of Cale’s lines:

Cale: ‘Doctor is coming,’ the nurse thinks…

Reed: … sweetly.

Cale: Turning on the machines that…

Reed: … neatly pump air.

Cale: The body lies bare.

By integrating these fragmented lines, we learn that a body is lying on an operating table. Listeners are encouraged to inhabit this body through the placement of voices around and above us, as well as the sounds of heartbeats and breathing that enter the mix but are jarringly out of rhythm with the existing backing track. Reed sings that the doctor is making his first incision into the body, and the backing track vanishes, leaving only the heartbeat, breathing, and an eerie whirring vocalization that sonifies some nameless physical process. The scene ends with a dark twist, suitable as a shock tactic from an exploitation film: the anesthetic has malfunctioned, and the patient has regained consciousness in the midst of the procedure.

The track’s arrhythmic sound effects overwhelm the coherent flow of the standard musical mix, working in tandem with the lyric’s account of the body made manifest in a moment of dysfunction. The fact that VU’s “White Light/White Heat” LP contains “Lady Godiva’s Operation,” as well as the title track and “Sister Ray,” makes it a tour de force of tactile phonography. Reed may have been a rock poet, but he and his collaborators were also acoustic engineers who were adept at sonifying tactile experience, producing music worth feeling with our whole bodies.

Featured Image: "A Drop of Warhol" by Flickr User Celeste RC

Jacob Smith is Associate Professor in the Radio-Television-Film Department at Northwestern University. He has written several books on sound (Vocal Tracks: Performance and Sound Media [2008], and Spoken Word: Postwar American Phonograph Cultures [2011], both from the University of California Press), and published articles on media history, sound, and performance.

REWIND!…If you liked this post, you may also dig:

“Devil’s Symphony: Orson Welles’s ‘Hell on Ice’ as Eco-Sonic Critique”–Jacob Smith

“One Nation Under a Groove?: Music, Sonic Borders, and the Politics of Vibration”–Marcus Boon

Music Meant to Make You Move: Considering the Aural Kinesthetic-Imani Kai Johnson

Revising the Future of Music Technology

This is the opening salvo in Sounding Out!'s April Forum on "Sound and Technology." Every Monday this month, you'll be hearing new insights on this age-old pairing from the likes of Sounding Out! veterano Primus Luta, along with new voices Andrew Salvati and Owen Marshall. These fast-forward folks will share their thinking about everything from Auto-Tune to productivity algorithms. So, program your presets for Sounding Out! and enjoy today's exhilarating opening think piece from SO! Multimedia Editor Aaron Trammell. –JS, Editor-in-Chief

We drafted a manifesto.

Microsoft Research’s New England Division, a collective of top researchers working in and around new media, hosted a one-day symposium on music technology. Organizers Nancy Baym and Jonathan Sterne invited top scholars from a plethora of interdisciplinary fields to discuss the value, affordances, problems, joys, curiosities, pasts, presents, and futures of Music Technology. It was a formal debrief of the weekend’s Music Tech Fest, a celebration of innovative technology in music. Our hosts christened the day, “What’s Music Tech For?” and told us to make bold, brave statements. A kaleidoscope of kinetic energy and ideas followed. And, at 6PM we crumpled into exhausted chatter over sangria, cocktails, and imported beer at a local tapas restaurant.

The day began with Annette Markham, our timekeeper, offering us some tips on how to best think through what a manifesto is. She went down the list: manifestos are primal, they terminate the past, create new worlds, trigger communities, define us, antagonize others, inspire being, provoke action, crave presence. In short, manifestos are a sort of intellectual world building. They provide a road map toward an imagined future, but in doing so they also work to produce this very future. Annette’s list made manifestos seem to be a very focused thing, and perhaps they usually are. But, having now worked through the process of creating a manifesto with a collective, I would add one more point – manifestos are sloppy.

Our draft manifesto is a collective vision about what the blind-spots of music technology are, at present, and what we want the future of music technology to look like. And although there is general synergy around all of the points within it, that synergy is somewhat addled by the polyphonic nature of the contributors. There were a number of discussions over the course of the day that were squelched by the incommensurable perspectives of one or two of the participants. For instance, two scholars argued about whether or not technical platforms have politics. These moments of disagreement, however, only added a brilliant contour to our group jam. Like the distortion cooked into a Replacements single, it only serves to highlight how superb the moments of harmony and agreement are in contrast. This brilliant and ambivalent fuzziness speaks perfectly to the value of radical interdisciplinarity.

These disagreements were exactly the point. Why else would twenty academics from a variety of interdisciplinary fields have been invited to participate? Like a political summit, there were delegates from Biology, Anthropology, Computer Science, Musicology, Science and Technology Studies, and more. Rotating through the room, we did our introductions (see the complete list of participants at the bottom of this paper). Our interests were genuine and stated with earnestness. Nancy Baym declared emphatically that music is, “a productive site for radical interdisciplinarity,” while Andrew Dubber, the director of Music Tech Fest, noted the centrality of culture to the dialogue. Both music and technology are culture, he argued. The precarity of musical occupations, the gender divide, and the relationship between algorithm and consumer, all had to take a central role in our conversation, an inspired Georgina Born demanded. Bryan Pardo, a computer scientist, announced that he was listening with an open mind for tips on how to best design the platforms of tomorrow. Though collegial, our introductory remarks were all political, loaded with our ambitions and biases.

The day was an amazing, free-form, brainstorm. An hour and a half long each, the sessions challenged us to answer a big question – first, what are the problems of music technology, then what are some actions and possibilities for its future. Every fifteen or twenty minutes an alarm would ring and tables would exchange members, the new member sharing ideas from the table they came from. At one point I came to a new table telling stories about how music had the power to sculpt social relations, and was immediately confronted with a dialogue about problems of integration in the STEM fields.

In short, the brainstorms were a hodgepodge of ideas. Some spoke about the centrality of music to many cultural practices. Noting the ways in which humans respond to their environments through music, they questioned whether tonal schema were ultimately a rationalization of the world. Though music was theorized as a means of social control, many questions remained about whether it could or should be operationalized as such. Others considered different conversations entirely, jocking sustainability and transduction as key factors in an ideal interdisciplinarity and shunning models that either tried to put one discipline in service of another or simply tried to stack and combine ideas.

Borrowed from Margaret Atwater.

Some of the most productive debates centered around the nature of "open" technology. Engineers were challenged on their claim that "open source technology" is an unproblematic good by Cultural Studies scholars, who argued that barriers to access still fall along the invisible lines of race, class, and gender. If open source technology is to be the future of music technology, they argued, much work must still be done to foster a dialogue where many voices can take part in that space.

We also did our best to think up actionable solutions to these problems, but for many it was difficult to dream big when their means were small in comparison. One group wrote, “we demand money,” on a whiteboard in capital letters and blue marker. Funding is a recurrent and difficult problem for many scholars in the United States and other, similar, locations, where funding for the arts is particularly scarce. On points like this, we all agreed.

We even considered what new spaces of interactivity should look like. Fostering spaces of interaction with public works of art, music, performance and more, could go a long way in convincing policy makers that these fields are, in fact, worthy of greater funding. Could a university be designed so as to prioritize this public mode of performance and interactivity? Would it have to abandon the cloistered office systems, which often prohibit the serendipitous occasion of interdisciplinary discussion around the arts?

Borrowed from bfishadow @Flickr.

There are still many problems with the dream of our manifesto. To start, although we shared many ideas, the vision of the manifesto is, if anything, disheveled and uneven. And though the radical interdisciplinarity we epitomized as a group led to a million excellent conversations, it is difficult, still, to get a sense of who "we" really are. If anything, our manifesto will be the embodiment of a collective that existed only for a moment and then dispersed, complete with jagged edges and inconsistencies. This gumbo of ideas, for me, is beautiful. Each and every voice included adds a little extra to the overall idea.

Ultimately, “What’s Music Tech For?” really got me thinking. Although I remain skeptical about the United States seeing funding for the arts as a worthy endeavor anytime soon, I left the event with a number of provocative questions. Am I, as a scholar, too critical about the value of technology, and blind to the ways it does often function to provoke a social good? How can technological development be set apart from the demands of the market, and then used to kindle social progress? How is music itself a technology, and when is it used as a tool of social coercion? And finally, what should a radical mode of listening be? And how can future listeners be empowered to see themselves in new and exciting ways?

What do you think?

Our team, by order of introduction:
Mary Gray (Microsoft Research), Blake Durham (University of Oxford), Mack Hagood (Miami University), Nick Seaver (University of California, Irvine), Tarleton Gillespie (Cornell University), Trevor Pinch (Cornell University), Jeremy Morris (University of Wisconsin-Madison), Deirdre Loughridge (University of California, Berkeley), Georgina Born (University of Oxford), Aaron Trammell (Rutgers University), Jessa Lingel (Microsoft Research), Victoria Simon (McGill University), Aram Sinnreich (Rutgers University), Andrew Dubber (Birmingham City University), Norbert Schnell (IRCAM – Centre Pompidou), Bryan Pardo (Northwestern University), Josh McDermott (MIT), Jonathan Sterne (McGill University), Matt Stahl (Western University), Nancy Baym (Microsoft Research), Annette Markham (Aarhus University), and Michela Magas (Music Tech Fest Founder).

Read the Manifesto here and sign on if you dig it. . . http://www.musictechifesto.org/

Aaron Trammell is co-founder and Multimedia Editor of Sounding Out! He is also a Media Studies PhD candidate at Rutgers University. His dissertation explores the fanzines and politics of underground wargame communities in Cold War America. You can learn more about his work at aarontrammell.com.

REWIND! . . . If you liked this post, you may also dig:

Listening to Tinnitus: Roles of Media when Hearing Breaks Down– Mack Hagood

Sounding Out! Podcast #15: Listening to the Tuned City of Brussels, The First Night– Felicity Ford and Valeria Merlini

“I’m on my New York s**t”: Jean Grae’s Sonic Claims on the City– Liana Silva-Ford

Sounding Out! Podcast #27: Interview with Jonathan Sterne

CLICK HERE TO DOWNLOAD: Interview with Jonathan Sterne

SUBSCRIBE TO THE SERIES VIA ITUNES

ADD OUR PODCASTS TO YOUR STITCHER FAVORITES PLAYLIST

This podcast provokes Jonathan Sterne to jam on the history of Sound Studies, critique the soundscape, and talk about MP3s. That said, it was really just a way to talk about his super-cool music projects (really, check them out!). Aaron Trammell interviews Jonathan Sterne, and digs deep into the questions at the core of our discipline.

Jonathan Sterne teaches in the Department of Art History and Communication Studies and the History and Philosophy of Science Program at McGill University. He is the author of The Audible Past: Cultural Origins of Sound Reproduction (Duke, 2003) and MP3: The Meaning of a Format (Duke, 2012), as well as numerous articles on media, technologies, and the politics of culture. He is also the editor of The Sound Studies Reader (Routledge, 2012). Visit his website at http://sterneworks.org.

REWIND! . . . If you liked this post, you may also dig:

À qui la rue? : On Mégaphone and Montreal’s Noisy Public Sphere– Lilian Radovac

SO! Reads: Jonathan Sterne’s MP3: The Meaning of a Format– Aaron Trammell

Quebec’s #casseroles: on participation, percussion, and protest– Jonathan Sterne
