Tag Archive | Music

Sounding Out! Podcast #49: Yoshiwara Soundwalk: Taking the Underground to the Floating World


 

CLICK HERE TO DOWNLOAD: Yoshiwara Soundwalk: Taking the Underground to the Floating World

SUBSCRIBE TO THE SERIES VIA ITUNES

ADD OUR PODCASTS TO YOUR STITCHER FAVORITES PLAYLIST

Join Gretchen Jude as she performs a soundwalk of the Yoshiwara district in Tokyo. Throughout this soundwalk, Jude offers her thoughts on the history, materiality, and culture of the Yoshiwara, Tokyo’s red-light district. An itinerary is provided below for the curious, as well as a translation of the hauta song “Ume wa saita ka,” intended to help orient listeners to the history of the Yoshiwara. What stories do the sounds of this district help to tell, and can they help us to navigate its sordid history?

 

Itinerary:

From the outer regions of the capital’s northwestern suburban sprawl to Ikebukuro Station (Tobu Tojo Line), transferring from Ikebukuro to Iidabashi (Yurakucho Line), from Iidabashi to Ueno-okachimachi (Oedo Line), from Naka-okachimachi to Minowa (Hibiya Line), then by foot from Minowa to Asakusa:

Due east, past Tōsen Elementary School

South on a nameless narrow lane parallel to Edomachi Street, with a short stop at Yoshiwara Park

Turning from Edomachi Street west onto Nakanomachi Street

Curving around to the south, just past the Kuritsu-taito Hospital and Senzoku Nursery School, with a long stop at Benzaiten Yoshiwara Shrine

Due south toward the throngs of tourists at the Sensō-ji Temple grounds, then across the Sumida River to my hauta teacher’s studio in a quiet residential neighborhood south of the Tokyo Sky Tree


Gretchen Jude is a PhD candidate in Performance Studies at the University of California Davis and a performing artist/composer based in the San Francisco Bay Area. Her doctoral research explores the intersections of voice and electronics in transcultural performance contexts, delving into such topics as presence and embodiment in computer music, language and cultural difference in vocal genres, and collaborative electroacoustic improvisation. Interaction with her immediate environment forms the core of Gretchen’s musical practice. Gretchen has been studying Japanese music since 2001 and holds multiple certifications in koto performance from the Sawai Koto Institute in Tokyo, as well as an MFA in Electronic Music and Recording Media from Mills College in Oakland, California. In the spring of 2015, a generous grant from the Pacific Rim Research Program supported Gretchen’s intensive study of hauta and jiuta singing styles in Tokyo. This podcast (as well as a chapter of her dissertation) is a direct result of that support. Infinite thanks also for the gracious and generous assistance of Shibahime-sensei, Mako-chan, and my many other friends and teachers in Japan.

All images used with permission by the author.

REWIND! . . . If you liked this post, you may also dig:

Park Sounds: A Kansas City Soundwalk for Fall – Liana M. Silva

Sounding Out! Podcast #46: Ruptures in the Soundscape of Disneyland – Cynthia Wang

Sounding Out! Podcast #37: The Edison Soundwalk – Frank Bridges

Culture Jamming and Game Sound: An Interview with foci + loci


Guest Editors’ Note: Welcome to Sounding Out!‘s December forum entitled “Sound, Improvisation and New Media Art.” This series explores the nature of improvisation and its relationship to appropriative play cultures within new media art and contemporary sound practice. Here, we engage directly with practitioners, who either deploy or facilitate play and improvisation through their work in sonic new media cultures.

For our second piece in the series, we have interviewed New York City-based performance duo foci + loci (Chris Burke and Tamara Yadao). Treating the map editors in video games as virtual sound stages, foci + loci design immersive electroacoustic spaces that can be “played” as instruments. Chris and Tamara bring an interdisciplinary lens to their work, having worked in various sonic and game-related cultures including popular, electroacoustic, and new music; chiptune; machinima (filmmaking using video game engines); and more.

As curators, we have worked with foci + loci several times over the past few years, and have been fascinated with their treatment of popular video game environments as tools for visual and sonic exploration. Their work is highly referential, drawing on artistic legacies of the Futurists, the Surrealists, and the Situationists, among others. In this interview, we discuss the nature of their practice(s) and its relationship to play, improvisation, and the co-constitutive nature of their work in relation to capital and proprietary technologies.

— Guest Editors Skot Deeming and Martin Zeilinger

1. Can you take a moment to describe your practice to our readers? What kind of work do you produce, what kind of technologies are involved, and what is your creative process?

foci + loci mostly produce sonic and visual video game environments that are played in live performance. We have been using Little Big Planet (LBP) on the Playstation 3 for about 6 years.

When we perform, we normally have two PS3s running the game with a different map in each. We have experimented with other platforms such as Minecraft and we sometimes incorporate spoken word, guitars, effects pedals, multiple game controllers (more than 1 each) and Game Boys.

Our creative process proceeds from discussions about the ontological differences between digital space and cinematic space, as well as the freeform or experimental creation of music and sound art that uses game spaces as its medium. When we are in “Create Mode” in LBP, these concepts guide our construction of virtual machines, instruments and performance systems.

[Editor’s Note: Little Big Planet has several game modes. Create Mode is the space within the game where users can create their own LBP levels and environments. As players progress through LBP’s Story Mode, they unlock an increasing number of game assets, which can be used in Create Mode.]

2. Tell us about your background in music? Can you situate your current work in relation to the musical traditions and communities that you were previously a part of?

CB: I have composed for film, TV, video games and several albums (sample based, collage and electronic). Since 2001 I’ve been active in the chipmusic scene, under the name glomag. Around the same time I discovered machinima and you could say that my part in foci + loci is the marriage of these two interests – music and the visual. Chipmusic tends to be high energy and the draw centers around exciting live performances. It’s immensely fun and rewarding, but I felt a need to step back and make work that drew from more cerebral pursuits. foci + loci is more about those pursuits for me: both my love of media theory and working with space and time.

TY: I’m an interdisciplinary artist and composer. I studied classical piano and percussion during my childhood years. I went on to study photography, film, video, sound, digital media and guitar in college and after. I’ve primarily been involved with the electroacoustic improv and chipmusic scenes, both in NYC. I’ve been improvising since 2005, and I’ve been writing chipmusic since 2011 under the moniker Corset Lore.

My work in foci + loci evolved out of the performance experience I garnered in the electroacoustic improv scene. My PS3 replaced my laptop. LBP replaced Ableton Live and VDMX. I think I felt LBP had more potential as a sonic medium because an interface could be created from scratch. Eventually, the game’s plasticity and setting helped to underscore its audiovisual aspect by revealing different relationships between sound and image.

3. Would you describe your work as a musical practice or an audio-visual performance practice?

FL: We have always felt that in game space, it is more interesting to show the mechanism that makes the sound as well as the image. These aspects are programmed, of course, but we try to avoid things happening “magically,” and instead like to give our process some transparency. So, while it is often musical, sound and image are inextricably linked. And, in certain cases, the use of a musical score (including game controller mappings) has been important to how our performance unfolds either through improvisation or timed audiovisual events. The environment is the musical instrument, so using the game controller is like playing a piano and wielding a construction tool at the same time. It has also been important in some contexts to perform in ‘Create Mode’ in order to simply give the audience visual access to  LBP‘s programming backend. In this way, causal relationships between play and sound may be more firmly demonstrated.

4. There are many communities of practice that have adopted obsolete or contemporary technologies to create new, appropriative works and forms. Often, these communities recontextualize our/their relationships to technologies they employ. To what extent do you see you work in relation to communities of appropriation-based creative expression?

CB: In the 80s-90s I was an active “culture jammer,” making politically motivated sound montage works for radio and performance and even dabbling in billboard alterations. Our corporate targets were selling chemical weapons and funding foreign wars while our media targets were apologists for state-sanctioned murder. Appropriating their communications (sound bites, video clips, broadcasting, billboards) was an effort to use their own tools against them. In the case of video game publishers and console manufacturers, there is much to criticize: sexist tropes in game narratives, skewed geo-political subtexts, anti-competitive policies, and more. Despite these troubling themes, the publishers (usually encouraged by the game developers) have occasionally supported the “pro-sumer” by opening up their game environments to modding and other creative uses. This is a very positive shift from, say, the position of the RIAA or the MPAA, where derivative works are much more frequently shut down. My previous game-related series, This Spartan Life, was more suited to tackling these issues. As for foci + loci, it’s hard to position work that uses extensively developed in-game tools as being “appropriative,” but I do think using a game engine to explore situationist ideas or the ontology of game space, as we do in our work, is a somewhat radical stance on art. We hope that it encourages more players to creatively express their ideas in similar ways.

TY: Currently, the ‘us vs. them’ attitude that characterized the 80s and 90s is no longer as relevant as it once was because corporations are now giving artists technology for their own creative use. However, they undermine this sense of benevolence when their marketing claims that consumers could be the next Picasso if they buy said piece of technology—as if the tool were more important than the artist/artwork. Little Big Planet is marketed this way. On the whole, I think these issues complicate artists’ relationships with their media.

Often our work tends to be included in hacker community events, most recently the ‘Music Games Hackathon’ at Spotify (NYC), because, while we don’t necessarily hack the hardware or software, our approach is a conceptual hack or subversion. At this event, there were a variety of conceptual connections made between music, hacks and games; Double Dutch, John Zorn’s Game Pieces, Fluxus, Xenakis and Stockhausen were all compared to one another. I gave a talk at the Hackers on Planet Earth Conference in 2011 about John Cage, Marcel Duchamp, Richard Stallman and the free software movement. In Stallman’s essay ‘On Hacking,’ he cited John Cage’s ‘4’33″‘ as an early example of a music hack. In my discussion, I pointed to Marcel Duchamp, a big influence on Cage, whose readymades were essentially hacked objects through their appropriation and re-contextualization. I think this conceptual approach informs foci + loci’s current work.

[Editors’ note: Recently celebrating its 10th anniversary, This Spartan Life is a machinima talk show that takes place within the multiplayer game space of the First Person Shooter game Halo. This Spartan Life was created by Chris Burke in 2005. The show has featured luminaries including Malcolm McLaren, Peggy Ahwesh, and many more.]

5. You mention the ontological differences between game spaces and cinematic spaces. Can you clarify what you mean by this? Why is this such an important distinction and how does it drive the work?

CB: We feel that there is a fundamental difference in the way space is represented in cinema through montage and the way it’s simulated in a video game engine. To use Eisenstein’s terms, film shots are “cells” which collide to synthesize an image in the viewer’s mind. Montage builds the filmic space shot by shot. Video game space, being a simulation, is coded mathematically and so has a certain facticity. We like the way the mechanized navigation of this continuous space can create a real-time composition. It’s what we call a “knowable” space.

6. Your practice is sound-based but relies heavily on the visual interface that you program in the gamespace. How do you view this relationship between the sonic and the visual in your work?

TY: LBP has more potential as a creative medium because it is audiovisual. The sound and image are inextricably linked in some cases, where one responds to the other. These aspects of interface function like the system of instruments we (or the game console) are driving. Since a camera movement can shape a sound within the space, the performance of an instrument can be codified to yield a certain effect. This goes back to our interest in the ontology of game space.

7. Sony (and other game developers) have been criticized for commodifying play as work – players produce and upload levels for free, and this free labour populates the Little Big Planet ecology. How would you position the way you use LBP in this power dynamic between player and IP owner?

CB: We are certainly more on the side of the makers than the publishers, but personally I think the “precarious labor” argument is a stretch with regard to LBP. Are jobs being replaced (International Labor Rights definition of precarious work)? Has a single modder or machinima maker suggested they should be compensated by the game developer or publisher for their work? Compensation actually does happen occasionally. This Spartan Life was, for a short time, employed by Microsoft to make episodes of the show for the developer’s Halo Waypoint portal. I have known a number of creators from the machinima community who were hired by Bioware, Blizzard, Bungie, 343 Industries and other developers. Then there’s the famous example of Minh Le and Jess Cliffe, who were hired by Valve to finish their Half-Life mod, Counter-Strike. However, compensating every modder and level maker would clearly not be a supportable model for developers or publishers.

Having said all that, I think our work does not exactly fit into Sony’s idea of what LBP users should be creating. We are resisting, in a sense, by providing a more art historical example of what gamers can do with this engine beyond making endless game remakes, side-scrollers and other overrepresented forms. We want players to open our levels and say “WTF is this? How do I play it?” Then we want them to go into create mode and author LBP levels that contain more of their own unique perspectives and less of the game.

[Corset Lore is Tamara Yadao’s chiptune project.]

8. What does it mean to improvise with new interfaces? Has anything ever gone horribly wrong during a moment of improvisation? Is there a tension between improvisation and culture jamming, or do the two fit naturally together?

CB: It’s clear that improvising with new interfaces is freer and sometimes this means our works in progress lack context and have to be honed to speak more clearly. This freedom encourages a spontaneous reaction to the systems we build that often provokes the exploitation of weaknesses and failure. Working within a paradigm of exploitation seems appropriate to us, considering our chosen medium. In play, there is always the possibility of failure, or in a sense, losing to the console. When we design interfaces within console and game parameters we build in fail-safes while also embracing mechanisms that encourage failure during our performance/play.

In an elemental way, culture jamming is a more targeted approach, whereas improvisation seems to operate with a looser agenda. Improvisation is already a critical approach to the structures of game narrative. Improvising with a video game opens up the definition of what a game space is, or can be.


All images used with permission by foci + loci.

foci + loci are Chris Burke and Tamara Yadao.

Chris Burke came to his interest in game art via his work as a composer, sound designer and filmmaker. As a sound designer and composer he has worked with, among others, William Pope L., Jeremy Blake, Don Was, Tom Morello and Björk. In 2005 he created This Spartan Life which transformed the video game Halo into a talk show. Within the virtual space of the game, he has interviewed McKenzie Wark, Katie Salen, Malcolm McLaren, the rock band OK Go and others. This and other work in game art began his interest in the unique treatment of space and time in video games. In 2012, he contributed the essay “Beyond Bullet Time” to the “Understanding Machinima” compendium (2013, Continuum).

Tamara Yadao is an interdisciplinary artist and composer who works with gaming technology, movement, sound, and video. In Fall 2009, at Diapason Gallery, she presented a lecture on “the glitch” called “Post-Digital Music: The Expansion of Artifacts in Microsound and the Aesthetics of Failure in Improvisation.” Current explorations include electro-acoustic composition in virtual space, 8-bit sound in antiquated game technologies (under the moniker Corset Lore), movement and radio transmission as a live performance tool and the spoken word. Her work has been performed and exhibited in Europe and North America, and in 2014, Tamara was the recipient of a commissioning grant by the Jerome Fund for New Music through the American Composers Forum.

REWIND! . . . If you liked this post, you may also dig:

Improvisation and Play in New Media, Games, and Experimental Sound Practices — Skot Deeming and Martin Zeilinger

Sounding Out! Podcast #41: Sound Art as Public Art — Salomé Voegelin

Sounding Boards and Sonic Styles — Josh Ottum

Sounding Out! Podcast #47: Finding the Lost Sounds of Kaibah


CLICK HERE TO DOWNLOAD: Finding the Lost Sounds of Kaibah

SUBSCRIBE TO THE SERIES VIA ITUNES

ADD OUR PODCASTS TO YOUR STITCHER FAVORITES PLAYLIST

In the early 1960s, Native American women had few opportunities and rights as citizens. During this politically charged era, a young Navajo woman, Kay Bennett, or “Kaibah,” defied those restrictions by recording and releasing her own albums. Almost fifty years later, we present this conversation with Rachael Nez, a Navajo scholar and filmmaker, whose research explores “Songs from the Navajo Nation” through Kaibah’s records. Kaibah self-published her own albums until she was signed by Canyon Records, wrote and published her own books, and traveled the world performing Navajo music everywhere from the Middle East to Europe. Rachael looks at how Kaibah’s music acts as a site for the circulation of Indigenous knowledge, oral history, and resistance.

In this podcast Marcella Ernest speaks with Rachael about the scarcity of materials relating to Kaibah’s history. Although there is no archive of her work, and no coherent trace of her story in one site, she explains how we can piece together a story of Kaibah based on her albums and songs. This dialogue considers the ways in which Indigenous erasure can be recuperated through sound. The project of finding the lost sounds of Kaibah is a fascinating story of how sound can be used to reconstitute Indigenous identity. What social and cultural norms conspire to obscure a Navajo woman of such prestige and talent? Finding the lost sounds of Kaibah is a conversation about (re)searching to find a lost sound.

Marcella Ernest is a Native American (Ojibwe) interdisciplinary video artist and scholar. Her work combines electronic media and sound design with film and photography in a variety of formats, using multi-media installations that incorporate large-scale projections and experimental film aesthetics. Currently living in California, Marcella is completing an interdisciplinary Ph.D. in American Studies at the University of New Mexico. Drawing upon a Critical Indigenous Studies framework to explore how “Indianness” and Indigeneity are represented in studies of American and Indigenous visual and popular culture, her primary research is an engagement with contemporary Native art to understand how members of colonized groups use a re-mix of experimental video and sound design as a means for cultural and political expressions of resistance.

www.marcellakwe.com

Featured image is used with permission by the author.

REWIND! . . . If you liked this post, you may also dig:

Sounding Out! Podcast #24: The Raitt Street Chronicles: A Survivor’s History – Sharon Sekhon and Manuel “Manny” Escamilla

Sounding Out! Podcast #20: The Sound of Rio’s Favelas: Echoes of Social Inequality in an Olympic City— Andrea Medrado

The “Tribal Drum” of Radio: Gathering Together the Archive of American Indian Radio–Josh Garrett Davis

Sounds of Science: The Mystique of Sonification


Welcome to the final installment of Hearing the UnHeard, Sounding Out!‘s series on what we don’t hear and how this unheard world affects us. The series started out with my post on hearing, large and small, continued with a piece by China Blue on the sounds of catastrophic impacts, and Milton Garcés’s piece on the infrasonic world of volcanoes. To cap it all off, we introduce The Sounds of Science by professor, cellist, and interactive media expert Margaret Schedel.

Dr. Schedel is an Associate Professor of Composition and Computer Music at Stony Brook University. Through her work, she explores the relatively new field of Data Sonification, generating new ways to perceive and interact with information through the use of sound. While everyone is familiar with informatics, graphs and images used to convey complex information, her work explores how we can expand our understanding of even complex scientific information by using our fastest and most emotionally compelling sense, hearing.

– Guest Editor Seth Horowitz

With the invention of digital sound, the number of scientific experiments using sound has skyrocketed in the 21st century, and as Sounding Out! readers know, sonification has started to enter the public consciousness as a new and refreshing alternative modality for exploring and understanding many kinds of datasets emerging from research into everything from deep space to the underground. We seem to be in a moment in which “science that sounds” has a special magic, a mystique that relies to some extent on misunderstandings in popular awareness about the processes and potentials of that alternative modality.

For one thing, using sound to understand scientific phenomena is not actually new. Diarist Samuel Pepys wrote about meeting scientist Robert Hooke in 1666 that “he is able to tell how many strokes a fly makes with her wings (those flies that hum in their flying) by the note that it answers to in musique during their flying.” Unfortunately Hooke never published his findings, leading researchers to speculate on his methods. One popular theory is that he tied strings of varying lengths between a fly and an ear trumpet, recognizing that sympathetic resonance would cause the correct length string to vibrate, thus allowing him to calculate the frequency. Even Galileo used sound, showing the constant acceleration of a ball due to gravity by using an inclined plane with thin moveable frets. By moving the placement of the frets until the clicks created an even tempo he was able to come up with a mathematical equation to describe how time and distance relate when an object falls.


Illustration from Robert Hooke’s Micrographia (1665)

There have also been other scientific advances using sound in the more recent past. The stethoscope was invented in 1816 for auscultation, listening to the sounds of the body. It was later applied to machines—listening for the operation of the technological gear. Underwater sonar was patented in 1913 and is still used to navigate and communicate using hydroacoustic phenomena. The Geiger Counter was developed in 1928 using principles discovered in 1908; it is unclear exactly when the distinctive sound was added. These are all examples of auditory display [AD]; sonification (generating or manipulating sound by using data) is a subset of AD. As the foreword to The Sonification Handbook states, “[Since 1992] Technologies that support AD have matured. AD has been integrated into significant (read “funded” and “respectable”) research initiatives. Some forward thinking universities and research centers have established ongoing AD programs. And the great need to involve the entire human perceptual system in understanding complex data, monitoring processes, and providing effective interfaces has persisted and increased” (Thomas Hermann, Andy Hunt, John G. Neuhoff, The Sonification Handbook, iii).

Sonification clearly enables scientists, musicians and the public to interact with data in a very different way, particularly compared to the more numerous techniques involving vision. Indeed, because hearing functions quite differently than vision, sonification offers an alternative kind of understanding of data (sometimes more accurate), which would not be possible using eyes alone. Hearing is multi-directional—our ears don’t have to be pointing at a sound source in order to sense it. Furthermore, the frequency response of our hearing is thousands of times more accurate than our vision. In order to reproduce a moving image the sampling rate (called frame-rate) for film is 24 frames per second, while audio has to be sampled at 44,100 frames per second in order to accurately reproduce sound. In addition, aural perception works on simultaneous time scales—we can take in multiple streams of audio data at once at many different dynamics, while our pupils dilate and contract, limiting how much visual data we can absorb at a single time. Our ears are also amazing at detecting regular patterns over time in data; we hear these patterns as frequency, harmonic relationships, and timbre.

Image credit: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.


But hearing isn’t simple, either. In the current fascination with sonification, the fact that aesthetic decisions must be made in order to translate data into the auditory domain can be obscured. Headlines such as “Here’s What the Higgs Boson Sounds Like” are much sexier than headlines such as “Here is What One Possible Mapping of Some of the Data We Have Collected from a Scientific Measuring Instrument (which itself has inaccuracies) Into Sound.” To illustrate the complexity of these aesthetic decisions, which are always interior to the sonification process, I focus here on how my collaborators and I have been using sound to understand many kinds of scientific data.

My husband, Kevin Yager, a staff scientist at Brookhaven National Laboratory, works at the Center for Functional Nanomaterials using scattering data from x-rays to probe the structure of matter. One night I asked him how exactly the science of x-ray scattering works. He explained that X-rays “scatter” off of all the atoms/particles in the sample and the intensity is measured by a detector. He can then calculate the structure of the material, using the Fast Fourier Transform (FFT) algorithm. He started to explain FFT to me, but I interrupted him because I use FFT all the time in computer music. The same algorithm he uses to determine the structure of matter, musicians use to separate frequency content from time. When I was researching this post, I found a site for computer music which actually discusses x-ray scattering as a precursor for FFT used in sonic applications.
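To make that connection concrete, here is a minimal sketch in Python (my own illustration with a toy signal, not code from either the lab or a computer-music package) of the musician’s side of the shared algorithm: the FFT separating the frequency content of a one-second signal from its representation in time.

```python
import numpy as np

# A toy one-second "recording": two sine tones plus a little noise
sample_rate = 44100                          # audio samples per second
t = np.arange(0, 1.0, 1 / sample_rate)
signal = (0.8 * np.sin(2 * np.pi * 440 * t)      # A4
          + 0.3 * np.sin(2 * np.pi * 880 * t)    # A5
          + 0.05 * np.random.randn(t.size))

# The FFT separates frequency content from time
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, 1 / sample_rate)
magnitudes = np.abs(spectrum)

# The two strongest bins land near 440 Hz and 880 Hz
strongest = np.sort(freqs[np.argsort(magnitudes)[-2:]])
print(strongest.round())                     # -> [440. 880.]
```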

To date, most sonifications have used data which changes over time – a fly’s wings flapping, a heartbeat, a radiation signature. Except in special cases, Kevin’s data does not exist in time – it is a single snapshot. But because data from x-ray scattering is a Fourier Transform of the real-space density distribution, we could use additive synthesis, using multiple simultaneous sine waves, to represent different spatial modes. Using this method, we swept through his data radially, like a clock hand, making timbre-based sonifications from the data by synthesizing sine waves with loudness based on the intensity of the scattering data and frequency based on the position.

We played a lot with the settings of the additive synthesis, including the length of the sound, the highest frequency, and even the number of frequency bins (going back to the clock metaphor – pretend the clock hand is a ruler – the number of frequency bins would be the number of demarcations on the ruler), arriving eventually at a set of optimized variables.
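As a rough illustration of that mapping, here is a hedged Python sketch (my reconstruction of the idea, not the actual patch; the function name sonify_ring and all parameter values are invented). It turns one radial slice of scattering intensities, i.e. one clock-hand position, into a sum of sine waves, with the duration, highest frequency, and number of frequency bins exposed as exactly the kinds of variables described above.

```python
import numpy as np

def sonify_ring(intensities, duration=3.0, max_freq=4000.0,
                num_bins=50, sample_rate=44100):
    """Additive synthesis of one radial slice: each frequency bin is a sine
    wave whose loudness follows the scattering intensity at that position."""
    # Collapse the raw intensities into num_bins "ruler marks"
    binned = np.interp(np.linspace(0, 1, num_bins),
                       np.linspace(0, 1, len(intensities)),
                       intensities)
    binned = binned / max(binned.max(), 1e-9)        # normalize loudness

    t = np.arange(0, duration, 1 / sample_rate)
    freqs = np.linspace(100.0, max_freq, num_bins)   # position -> pitch
    audio = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(binned, freqs))
    return audio / num_bins                          # keep amplitude in range

# Fake radial slice with two bright scattering rings, purely for testing
r = np.arange(300)
fake_slice = np.exp(-(r - 80) ** 2 / 50.0) + 0.5 * np.exp(-(r - 220) ** 2 / 50.0)
track = sonify_ring(fake_slice, num_bins=50)         # one clock-hand position
```

Sweeping the clock hand would then just mean repeating this for successive angular slices and stringing the resulting grains together.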

Here is one version of the track we created using 10 frequency bins:

.

Here is one we created using 2000:

.

And here is one we created using 50 frequency bins, which we settled on:

.

On a software synthesizer this would be like the default setting. In the future we hope to have an interactive graphical user interface where sliders control these variables, just like a musician tweaks the sound of a synth, so scientists can bring out or mask aspects of the data.

To hear what that would be like, here are a few tracks that vary length:

.

.

.

Finally, here is a track we created using different mappings of frequency and intensity:

.

Having these sliders would reinforce to the scientists that we are not creating “the sound of a metallic alloy,” we are creating one sonic representation of the data from the metallic alloy.

It is interesting that such a representation can be vital to scientists. At first, my husband went along with this sonification project as more of a thought experiment than something he thought would actually be useful in the lab, until he heard something distinct about one of those sounds, suggesting that there was a misaligned sample. Once Kevin heard that glitched sound (you can hear it in the video above), he was convinced that sonification was a useful tool for his lab. He and his colleagues are dealing with measurements 1/25,000th the width of a human hair, aiming an X-ray through twenty pieces of equipment to get the beam focused just right. If any piece of equipment is out of kilter, the data can’t be collected. This is where our ears’ non-directionality is useful. The scientist can be working on his/her computer and, using ambient sound, know when a sample is misaligned.


It remains to be seen/heard if the sonifications will be useful to actually understand the material structures. We are currently running an experiment using Mechanical Turk to determine whether this kind of multi-modal display (using vision and audio) is actually helpful. Basically, we are training people on just the images of the scattering data and testing how well they do, and training another group of people on the images plus the sonification and testing how well they do.

I’m also working with collaborators at Stony Brook University on sonification of data. In one experiment we are using ambisonic (3-dimensional) sound to create a sonic map of the brain to understand drug addiction. Standing in the middle of the ambisonic cube, we hope to find relationships between voxels (cubes of brain tissue, analogous to pixels). When neurons fire in areas of the brain simultaneously, there is most likely a causal relationship which can help scientists decode the brain activity of addiction. Computer vision researchers have been searching for these relationships unsuccessfully; we hope that our sonification will allow us to hear associations in distinct parts of the brain which are not easily recognized with sight. We are hoping to leverage the temporal pattern recognition of our auditory system, but we have been running into problems doing the sonification; each slice of data from the fMRI has about 300,000 data points. We have it working with 3,000 data points, but either our programming needs to get more efficient, or we have to get a much more powerful computer in order to work with all of the data.
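To give a sense of where the computational bottleneck comes from, here is a small hedged sketch in Python (purely illustrative, with random stand-in data; not the lab’s code). Finding voxels that fire together means comparing every voxel’s time course with every other, and that pairwise matrix is what blows up between 3,000 and 300,000 voxels.

```python
import numpy as np

# Stand-in data: rows are voxels, columns are time points from the fMRI run
n_voxels, n_timepoints = 3000, 200    # 3,000 voxels is workable; 300,000 is not
voxel_series = np.random.randn(n_voxels, n_timepoints)

# Pairwise Pearson correlation of every voxel time course with every other.
# The result is an n_voxels x n_voxels matrix, so memory and compute grow
# quadratically with the number of voxels.
corr = np.corrcoef(voxel_series)

# Strongly co-activated pairs become candidates for sonification
i, j = np.triu_indices(n_voxels, k=1)
strong_pairs = np.count_nonzero(np.abs(corr[i, j]) > 0.8)
print(f"{strong_pairs} voxel pairs above threshold")
```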

On another project we are hoping to sonify gait data using smartphones. I’m working with some of my music students and a professor of Physical Therapy, Lisa Muratori, who works on understanding the underlying mechanisms of mobility problems in Parkinson’s Disease (PD). The physical therapy lab has a digital motion-capture system and a split-belt treadmill for asymmetric stepping—the patients are supported by a harness so they don’t fall. PD is a progressive nervous system disorder characterized by slow movement, rigidity, tremor, and postural instability. Because of degeneration of specific areas of the brain, individuals with PD have difficulty using internally driven cues to initiate and drive movement. However, many studies have demonstrated an almost normal movement pattern when persons with PD are provided external cues, including significant improvements in gait with rhythmic auditory cueing. So far the research with PD and sound has been unidirectional – the patients listen to sound and try to match their gait to the external rhythms from the auditory cues. In our system we will use bio-feedback to sonify data from sensors the patients will wear and feed error messages back to the patient through music. Eventually we hope that patients will be able to adjust their gait by listening to self-generated musical distortions on a smartphone.
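As a sketch of what that closed loop might look like at its simplest (my illustration of the idea, not the actual system; the function names and the 100-cent ceiling are invented), the phone could compute a step-timing asymmetry from the wearable sensors and map it to an amount of musical distortion, so that symmetric walking sounds clean and asymmetric walking sounds audibly detuned.

```python
def asymmetry(left_step_s: float, right_step_s: float) -> float:
    """0.0 = perfectly symmetric step timing; values toward 1.0 = very asymmetric."""
    total = left_step_s + right_step_s
    return abs(left_step_s - right_step_s) / total if total else 0.0

def detune_cents(asym: float, max_cents: float = 100.0) -> float:
    """Map gait asymmetry to a pitch distortion applied to the patient's music."""
    return max_cents * min(asym, 1.0)

# Example: the right step takes noticeably longer than the left
print(detune_cents(asymmetry(0.55, 0.70)))   # -> 12.0 cents of detune
```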

As sonification becomes more prevalent, it is important to understand that aesthetic decisions are inevitable and even essential in every kind of data representation. We are so accustomed to looking at visual representations of information—from maps to pie charts—that we may forget that these are also arbitrary transcodings. Even a photograph is not an unambiguous record of reality; the mechanics of the camera and artistic choices of the photographer control the representation. So too, in sonification, do we have considerable latitude. Rather than view these ambiguities as a nuisance, we should embrace them as a freedom that allows us to highlight salient features, or uncover previously invisible patterns.

__

Margaret Anne Schedel is a composer and cellist specializing in the creation and performance of ferociously interactive media. She holds a certificate in Deep Listening with Pauline Oliveros and has studied composition with Mara Helmuth, Cort Lippe and McGregor Boyle. She sits on the boards of 60×60 Dance, the BEAM Foundation, Devotion Gallery, the International Computer Music Association, and Organised Sound. She contributed a chapter to the Cambridge Companion to Electronic Music, and is a joint author of Electronic Music published by Cambridge University Press. She recently edited an issue of Organised Sound on sonification. Her research focuses on gesture in music, and the sustainability of technology in art. She ran SUNY’s first Coursera Massive Open Online Course (MOOC) in 2013. As an Associate Professor of Music at Stony Brook University, she serves as Co-Director of Computer Music and is a core faculty member of cDACT, the consortium for digital art, culture and technology.

Featured Image: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.

Research carried out at the Center for Functional Nanomaterials, Brookhaven National Laboratory, is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-AC02-98CH10886.

REWIND! . . . If you liked this post, you might also like:

The Noises of Finance–Nicholas Knouf

Revising the Future of Music Technology–Aaron Trammell

A Brief History of Auto-Tune–Owen Marshall
