Tag Archive | Data Sonification

Critical bandwidths: hearing #metoo and the construction of a listening public on the web

“A focus on listening [with technology] shifts the idea of freedom of speech from having a platform of expression to having the possibility of communication” (K. Lacey)

One of the biggest social media events of the past decade, #metoo stands out as a pivotal shift in the future of gender relations. Despite its persistence since October 2017, #metoo remains under-theorized, and because its permutations generate countless hashtag sub-categories each passing week, making sense of it presents a conceptual quagmire. Tracing its history, identifying key moments, and mapping its pro- and counter-currents present equally tough challenges to both data science and feminist scholars.

Meta-communication about #metoo abounds. Infographics and visualizations attempt to contain its organic growth into perceivable maps and charts; pop news media constantly report on its evolution in likes, counts, and retweets, as well as—and increasingly—in number of convictions, lawsuits, and reports. At the same time, #metoo has arguably created a discernible listening public in the way that Kate Lacey (2013) argues emerged with national radio: women’s stories have never been listened to with such wide reach and rapt attention.

How did #metoo create new listening publics? “#metoo” by Flickr User Prachatai (CC BY-NC-ND 2.0)

The project I discuss here takes ‘hearing’ #metoo a step further into the auditory realm in the form of data sonification, so as to re-imagine an audience compelled to earwitness not just the scope but the emotional impact of women’s stories. Data sonification is a growing field, which from its inception has crossed between art and science. It involves a conceptual or semantic translation of data into relevant sonic parameters in a way that utilizes perceptual gestalts to convey information through sound.

Brady Marks and I created the #metoo sonification you’ll hear below by drawing from a public dataset spanning October 2017 to the early spring of 2018, obtained from data.world. Individual tweets using the hashtag are sonified using female battle cries from video games; the number of retweets and followers forms a sort of swelling and contracting background vocal texture to represent the reach of each message. The dataset is then sped up anywhere from 10x to 1000x in order to represent perceivable ebbs and flows of the hashtag’s life over time. The deliberate aim in this design was to convey a different sensibility of social media content, one that demands emotional and intellectual attention over a duration of time. Given Twitter’s visual zeitgeist, whereby individual tweets are perceived at a glance and quickly become lost in the noise of the platform, the affective attitude towards “contagious” events becomes arguably impersonal. A sonification such as this asks the listener to spend 30 minutes listening to 1 month of #metoo: something impossible to achieve on the actual platform, or in a single visualization. The aim, then, is to interrupt social media’s habitual and disposable engagements with pressing civic debates.
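
For readers curious about the mechanics, a minimal sketch of this mapping might look like the following. This is not the project’s actual code, and the column names (“created_at”, “id”, “retweets”, “followers”) and sample-bank size are assumptions rather than the documented schema of the data.world dataset.

```python
# Sketch: one battle-cry event per tweet, reach scales a background swell,
# and the whole timeline is compressed so a month fits a short listening session.
import csv
import math
from datetime import datetime, timezone

COMPRESSION = 1000          # seconds of Twitter time per second of audio (10x-1000x in the piece)
N_SAMPLES = 12              # size of a hypothetical battle-cry sample bank

def parse_time(s):
    # Assumed timestamp format; adjust to the dataset's actual format.
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)

def build_score(rows, start):
    """Turn tweet rows into (onset_seconds, sample_index, gain) events."""
    score = []
    for i, row in enumerate(rows):
        t = (parse_time(row["created_at"]) - start).total_seconds()
        onset = t / COMPRESSION                      # time-compressed onset in the audio
        sample = i % N_SAMPLES                       # cycle through the battle-cry bank
        reach = int(row["retweets"]) + int(row["followers"])
        gain = min(1.0, math.log1p(reach) / 15.0)    # reach -> level of the background texture
        score.append((onset, sample, gain))
    return sorted(score)

if __name__ == "__main__":
    with open("metoo_tweets.csv", newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    start = min(parse_time(r["created_at"]) for r in rows)
    events = build_score(rows, start)
    print(f"{len(events)} events over {events[-1][0]:.1f} seconds of audio")
```

The resulting event list would then be rendered by triggering the corresponding samples at those onsets and gains in any audio environment.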

“MeToo: It’s About Power,” Morningside Center for Teaching Social Responsibility

A critique of big data visualization

To date, there have been more than 19 million #MeToo tweets from over 85 countries; on Facebook more than 24 million people participated in the conversation by posting, reacting, and commenting over 77 million times since October 15, 2017. In a global information society ‘big data’ is translated into creative infographics in order to simultaneously educate an overwhelmed public and elicit urgency and accord for political action. Yet ideological and political considerations around the design of visual information have lagged behind enthusiasm for making data ‘easy to understand’. At the other end of the spectrum, social media delivers personalized micro-trends directly and in real time to always-mobile users, reinforcing their information silos (Rambukkana 2015). Between these extremes, the mechanisms by which relevant local, marginalized or emergent issues come to be communicated to the wider public are constrained.

“A wordcloud featuring #metoo” by www.scootergenius.com (CC BY 2.0)

With this big idea in mind, the question we ask here is what would it mean to hear data?  Emergent work in sonification suggests that sound may afford a unique way to experience large-scale data suitable for raising public awareness of important current issues (Winters & Weinberg 2015). The uptake of sonification by the artistic community (see Rory Viner, Robert Alexander, among many others) signals its strengths in producing affective associations to data for non-specialized audiences, despite its shortcomings as a scientific analysis tool (Supper, 2018). Some of the more esoteric uses of sonification have been in the service of capturing what Supper calls ‘the sublime’ – as in Margaret Schedel’s “Sounds of Science: The Mystique of Sonification.”

Who’s listening on social media?

Within the Western canon of sound studies, “constitutive technicities” (Gallope 2011), or what Sterne calls “perceptual technics,” embody historically situated ways of listening that center technology as a co-defining factor in our relationship with sound. Within this frame, media sociologist Kate Lacey traces the emergence of the modern listening public through the history of radio. Using the metaphors of “listening in” and “listening out,” Lacey reframes media citizenship by pointing out that listening is a cultural as well as a perceptual act with defined political dimensions:

Listening out is the practice of being open to the multiplicity of texts and voices and thinking of texts in the context of and in relation to a difference and how they resonate across time and in different spaces. But at the same time, it is the practice and experience of living in a media age that produces and heightens the requirement, the context, the responsibilities and the possibilities of listening out (198)

According to Lacey, a focus on listening instead of spectatorship challenges the implicit active/passive dualism of civic participation in Western contexts. More importantly, she argues, we need to move away from the notion of “giving voice” and instead create meaningful possibilities to listen, in a political sense. Data sonification doesn’t so much ‘give voice to the voiceless’ but creates a novel relationship to perceiving larger patterns and movements.

Our interactions with media, therefore, are always already presumptive of particular dialogical relations. Every speech act, every message implies a listening audience that will resonate understanding. In other words, how are we already listening in to #metoo?  How and why might data sonification enable us to “listen out” for it instead? In order to get a different hearing, what should #metoo sound like?

What should #metoo sound like? “de #metoo à #wetogether” by Flickr user Jeanne Menjoulet, Paris 8 mars 2018 (CC BY 2.0)

Sonifying #metoo: the battle cries of gender-based violence

It is unrealistic to expect that the everyday person will read large archives of testimony on sexual harassment and gender-based violence. Because of their massive scale, archives of #metoo testimony pose a significant challenge to the possibility for meaningful communication around this issue. Essentially drowning each other out, individual voices remain unheard in the zeitgeist of media platforms that automate quantification while speeding up engagement with individual contributions. To reaffirm the importance of voice would mean to reaffirm inter-subjectivity and to recognize polyphony as an “existential position of humanity” (Ihde 2007, 178). This was the problem to sonify here: how to retain individual voices while creating the possibility for listening to the whole issue at hand. Inspired by the idea of listening out, artist collaborator Brady Marks and I set out to sonify #metoo as a way of eliciting the possibility for a new listening public.

Sophitia Alexandra of the Soul Calibur franchise, close up from Flickr User Ngo Quang Minh’s image from Soul Calibur III (CC BY-SA 2.0)

The #metoo sonification project intersected deeply with my work on the female voice in videogames. My choice to use a mixed selection of battle cry samples from Soul Calibur, an arcade fighting game, was intuitive. Battle cries are pre-recorded banks of combat sounds that video game characters perform in the course of the story. Instances of #metoo on Twitter presumably represent the experiences of individual women, pumping a virtual fist in the air, no longer silent about the realities of gender-based violence. So hearing #metoo posts as battle cries of powerful game heroines made sense to me. But it’s the meta layers of meaning that are even more intuitive: as I’ve discussed elsewhere, female battle cries are notoriously gendered and sexualized. Listening to a reel of sampled battle cries is almost indistinguishable from listening to a pornographic soundscape. Abstracted in this sonification, away from the cartoonish hyper-reality of a game world, these voices are even more eerie, giving almost physical substance to the subject matter of #metoo. Just as the female voice in media secretly fulfils the furtive desires of the “neglected erogenous zone” of the ear (Pettman 2017, 17), #metoo is an embodiment of the conflation of sex with consent: the basis of what we now call ‘rape culture.’

Sonifying real-time data such as Twitter presents not only semantic (what should it sound like?) but also time-scale challenges. If we are to sonify a month – e.g. the month of November 2017 (just weeks after the explosion of #metoo) – but we don’t want to spend a month listening, then that involves some conceptual time-scaling. Time-scaling means speeding up instances that already happen multiple times a second on a platform as instantaneous and global as Twitter. Below are samples of three different sonifications of #metoo data, following different moments in the initial explosion of the hashtag and rendered at different time compressions. Listen to them one at a time and note your sensual and emotive experience of tweets closer to real-time playback, compared to the audible patterns that emerge from compressing longer periods of time inside the same length audio file. You might find that the density is different. Closer to real-time, the battle cries are more distinctive, while at higher time compressions what emerges instead is an expanding and contracting polyphonic texture.
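
As a back-of-the-envelope illustration of that time-scaling, here is the arithmetic for fitting one month into a 30-minute listening session; the tweet rate used below is purely illustrative, not a measured figure.

```python
# Compressing roughly one month of #metoo activity into 30 minutes of audio.
SECONDS_PER_MONTH = 30 * 24 * 60 * 60    # ~November 2017
TARGET_DURATION = 30 * 60                # 30 minutes of listening, as in the piece

compression = SECONDS_PER_MONTH / TARGET_DURATION
print(f"compression factor: {compression:.0f}x")            # -> 1440x

# At an illustrative rate of 100 tweets per platform-minute, the listener
# hears this many battle cries per minute of audio:
tweets_per_platform_minute = 100
print(f"events per audio minute: {tweets_per_platform_minute * compression:.0f}")
```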

As vocalizations of female pleasure/affect, video game battle cries already have a special relationship to technologies of audio sampling and digital reproduction, as Corbett & Kapsalis describe in “Aural Sex: The Female Orgasm in Popular Sound.” This means that the perceptual technics involved in listening to recorded female voices are already coded with sexual connotations. Battle cries in games are purposely exaggerated so as to carry the bulk of emotional content in the game’s experiential matrix. Roland Barthes’ notion of the “grain of the voice”—the presence of the body in (singing) voice—is frequently evoked in describing the substantive role that game voices play in the construction of game world immersion and realism. In the #metoo sonification, I decontextualize the grain of the voice—there are no visual images, narrative, or gameplay; the battle cries are also acousmatic, in that there are no bodies visually represented from which these sounds emanate.

The battle cry in this #metoo sonification is the ultimate disembodied voice, resisting what Kaja Silverman (1988) calls the “norm of synchronization” with a female body in The Acoustic Mirror (83). As acousmatic voices, these battle cries could be said to exist on a different conceptual and perceptual plane, “disturbing the taxonomies upon which patriarchy depends,” to quote Dominic Pettman in Sonic Intimacy (22). In other words, the sounds exist in a boundary space between combat sounds and orgasmic sounds, highlighting for the listener the dissonance between the supposed empowerment of ‘speaking out’ and a culture that remains staunchly set up to sexualize women; something one can hardly ignore given the media’s reserved treatment of #metoo.

“Princess Zelda’s New Mouth” by Flickr User MouthGuy2013, Public Domain

Liberated from the game world, these voices now speak for themselves in the #metoo sonification, their sensuality all the more hyper-real. The player has no control here, as the battle cries are not linked to specific game actions; rather, they are synchronized autonomously to instances of #metoo confessionals. In fact, the density of the sonification as time speeds up will overwhelm listeners with its boundlessness, echoing how contemporary media treats the sounds of the female orgasm as a renewable and inexhaustible resource, even as reports of sexual harassment and gender-based violence continue to pile up in 2021. Yet we intend that the subject matter resists pleasure, rendering the sonic experience traumatic as the chilling realization sets in that listeners are hailed to accountability by #metoo. The experience should instead be unsettling, impactful, grotesque, and deeply embodied.

Concluding remarks

Listening, both metaphorically and literally, goes to the very heart of questions to do with the politics and experience of living and communicating in the media age. In “A Sonic Geography of Voice,” her paper on the sonic geographies of the voice, AM Kanngieser notes: “The voice, in its expression of affective and ethico-political forces, creates worlds” (337). It is not just in the grain but in the enunciation that battle cries find their political significance in this sonification. As the hyper-real gasps and moans of game heroines animate individual moments of #metoo, the codification of cartoonish voices resists being subconsciously “absorbed into the dialogic exchange” (342) of habitual media consumption. Listening to the sonification is instead an experience of re-coding the voice, reconfiguring the embedded meanings of game sound to a new and contradictory context: a space that challenges neoliberal appropriations of radical communication and discourse (348). This is not data sonification that delights the listener or simply grants them access to ‘information’ in a different format; rather, it calls on the listener to de-normalize their received technicity and perceptions and to connect to the emotional inter-subjectivity of this call to action.

Most importantly, the #metoo sonification invites the auditeur to listen in, to take an active role in the reconfiguration of meanings and absorb their political dimensions. These are the stories of #metoo; these are the voices of women, of men, of marginalized peoples, emerging from the zeitgeist of Twitter to ask us to earwitness gender-based violence. We are a new listening public, wanting and needing to create new worlds. In psychoacoustic terms, a critical bandwidth is the smallest perceivable unit of auditory change. This sonification raises the question: how many battle cries will it take for us to end gender-based violence by fostering equitable worlds?

Featured Image: “Listen to What You See” by Flickr User Hernán Piñera (CC BY-SA 2.0)

Milena Droumeva is an Assistant Professor and the Glenfraser Endowed Professor in Sound Studies at Simon Fraser University specializing in mobile media, sound studies, gender, and sensory ethnography. Milena has worked extensively in educational research on game-based learning and computational literacy, formerly as a post-doctoral fellow at the Institute for Research on Digital Learning at York University. Milena has a background in acoustic ecology and works across the fields of urban soundscape research, sonification for public engagement, as well as gender and sound in video games. Current research projects include sound ethnographies of the city (livable soundscapes), mobile curation, critical soundmapping, and sensory ethnography. Check out Milena’s Story Map, “Soundscapes of Productivity” about coffee shop soundscapes as the office ambience of the creative economy freelance workers. 

Milena is a former board member of the International Community on Auditory Displays, an alumna of the Institute for Research on Digital Learning at York University, and former Research Think-Tank and Academic Advisor in learning innovation for the social enterprise InWithForward. More recently, Milena serves on the boards of the Hush City Mobile Project, founded by Dr. Antonella Radicchi, and WISWOS, founded by Dr. Linda O Keeffe.


REWIND! . . . If you liked this post, you may also dig:

One Scream is All it Takes: Voice Activated Personal Safety, Audio Surveillance, and Gender Violence–María Edurne Zuazu

Gendered Sonic Violence, from the Waiting Room to the Locker Room–Rebecca Lentjes

SO! Amplifies: Mendi+Keith Obadike and Sounding Race in America

detritus 1 & 2 and V.F(i)n_1&2 : The Sounds and Images of Postnational Violence in Mexico–Luz María Sánchez

Hearing Eugenics–Vibrant Lives

Sounds of Science: The Mystique of Sonification

Hearing the Unheard II: Welcome to the final installment of Hearing the UnHeard, Sounding Out!’s series on what we don’t hear and how this unheard world affects us. The series started out with my post on hearing, large and small, continued with a piece by China Blue on the sounds of catastrophic impacts, and Milton Garcés’ piece on the infrasonic world of volcanoes. To cap it all off, we introduce The Sounds of Science by professor, cellist, and interactive media expert Margaret Schedel.

Dr. Schedel is an Associate Professor of Composition and Computer Music at Stony Brook University. Through her work, she explores the relatively new field of Data Sonification, generating new ways to perceive and interact with information through the use of sound. While everyone is familiar with infographics, graphs, and images used to convey complex information, her work explores how we can expand our understanding of even complex scientific information by using our fastest and most emotionally compelling sense: hearing.

– Guest Editor Seth Horowitz

With the invention of digital sound, the number of scientific experiments using sound has skyrocketed in the 21st century, and as Sounding Out! readers know, sonification has started to enter the public consciousness as a new and refreshing alternative modality for exploring and understanding many kinds of datasets emerging from research into everything from deep space to the underground. We seem to be in a moment in which “science that sounds” has a special magic, a mystique that relies to some extent on misunderstandings in popular awareness about the processes and potentials of that alternative modality.

For one thing, using sound to understand scientific phenomena is not actually new. Diarist Samuel Pepys wrote about meeting scientist Robert Hooke in 1666 that “he is able to tell how many strokes a fly makes with her wings (those flies that hum in their flying) by the note that it answers to in musique during their flying.” Unfortunately Hooke never published his findings, leading researchers to speculate on his methods. One popular theory is that he tied strings of varying lengths between a fly and an ear trumpet, recognizing that sympathetic resonance would cause the correct length string to vibrate, thus allowing him to calculate the frequency. Even Galileo used sound, showing the constant acceleration of a ball due to gravity by using an inclined plane with thin moveable frets. By moving the placement of the frets until the clicks created an even tempo he was able to come up with a mathematical equation to describe how time and distance relate when an object falls.

Illustration from Robert Hooke’s Micrographia (1665)

There have also been other scientific advances using sound in the more recent past. The stethoscope was invented in 1816 for auscultation, listening to the sounds of the body. It was later applied to machines—listening for the operation of the technological gear. Underwater sonar was patented in 1913 and is still used to navigate and communicate using hydroacoustic phenomena. The Geiger counter was developed in 1928 using principles discovered in 1908; it is unclear exactly when the distinctive sound was added. These are all examples of auditory display [AD]; sonification (generating or manipulating sound by using data) is a subset of AD. As the foreword to The Sonification Handbook states, “[Since 1992] Technologies that support AD have matured. AD has been integrated into significant (read “funded” and “respectable”) research initiatives. Some forward thinking universities and research centers have established ongoing AD programs. And the great need to involve the entire human perceptual system in understanding complex data, monitoring processes, and providing effective interfaces has persisted and increased” (Thomas Hermann, Andy Hunt, John G. Neuhoff, The Sonification Handbook, iii).

Sonification clearly enables scientists, musicians, and the public to interact with data in a very different way, particularly compared to the more numerous techniques involving vision. Indeed, because hearing functions quite differently than vision, sonification offers an alternative kind of understanding of data (sometimes more accurate), which would not be possible using eyes alone. Hearing is multi-directional—our ears don’t have to be pointing at a sound source in order to sense it. Furthermore, the frequency response of our hearing is thousands of times more accurate than our vision. In order to reproduce a moving image, the sampling rate (called frame rate) for film is 24 frames per second, while audio has to be sampled 44,100 times per second in order to accurately reproduce sound. In addition, aural perception works on simultaneous time scales—we can take in multiple streams of audio data at once at many different dynamics, while our pupils dilate and contract, limiting how much visual data we can absorb at a single time. Our ears are also amazing at detecting regular patterns over time in data; we hear these patterns as frequency, harmonic relationships, and timbre.

Image credit: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.

But hearing isn’t simple, either. In the current fascination with sonification, the fact that aesthetic decisions must be made in order to translate data into the auditory domain can be obscured. Headlines such as “Here’s What the Higgs Boson Sounds Like” are much sexier than headlines such as “Here is What One Possible Mapping of Some of the Data We Have Collected from a Scientific Measuring Instrument (which itself has inaccuracies) Into Sound.” To illustrate the complexity of these aesthetic decisions, which are always interior to the sonification process, I focus here on how my collaborators and I have been using sound to understand many kinds of scientific data.

My husband, Kevin Yager, a staff scientist at Brookhaven National Laboratory, works at the Center for Functional Nanomaterials using scattering data from x-rays to probe the structure of matter. One night I asked him how exactly the science of x-ray scattering works. He explained that X-rays “scatter” off of all the atoms/particles in the sample and the intensity is measured by a detector. He can then calculate the structure of the material, using the Fast Fourier Transform (FFT) algorithm. He started to explain FFT to me, but I interrupted him because I use FFT all the time in computer music. The same algorithm he uses to determine the structure of matter, musicians use to separate frequency content from time. When I was researching this post, I found a site for computer music which actually discusses x-ray scattering as a precursor for FFT used in sonic applications.
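
For readers who know the FFT from only one of these two worlds, a generic illustration of the overlap may help: the same transform that resolves spatial modes in scattering data pulls the frequency content out of an audio signal. This is a textbook numpy example, not code from the beamline or from any particular music system.

```python
import numpy as np

sr = 44100                                   # audio sample rate, Hz
t = np.arange(sr) / sr                       # one second of time
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

spectrum = np.abs(np.fft.rfft(signal))       # magnitude of each frequency bin
freqs = np.fft.rfftfreq(len(signal), 1 / sr)

for f in freqs[spectrum > 0.25 * spectrum.max()]:
    print(f"strong component near {f:.0f} Hz")   # -> ~440 Hz and ~880 Hz
```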

To date, most sonifications have used data which changes over time – a fly’s wings flapping, a heartbeat, a radiation signature. Except in special cases, Kevin’s data does not exist in time – it is a single snapshot. But because data from x-ray scattering is a Fourier Transform of the real-space density distribution, we could use additive synthesis, using multiple simultaneous sine waves, to represent different spatial modes. Using this method, we swept through his data radially, like a clock hand, making timbre-based sonifications from the data by synthesizing sine waves with the loudness based on the intensity of the scattering data and the frequency based on the position.
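
A minimal sketch of that radial sweep in Python, assuming a generic 2-D intensity array in place of real detector data; the bin count, frequency range, and sweep duration are illustrative placeholders, not the settings used in the project.

```python
import numpy as np

SR = 44100
DURATION = 10.0             # seconds of audio for one full sweep of the "clock hand"
N_ANGLES = 360              # how many angular steps the hand takes
N_BINS = 50                 # radial frequency bins (the post tried 10, 50, and 2000)
F_LO, F_HI = 100.0, 4000.0  # frequency range assigned to the bins

rng = np.random.default_rng(0)
image = rng.random((256, 256))              # placeholder for scattering intensity
cy, cx = np.array(image.shape) / 2.0

def radial_profile(img, angle, n_bins):
    """Sample intensity along one 'clock hand' at the given angle."""
    radii = np.linspace(1, min(cy, cx) - 1, n_bins)
    ys = (cy + radii * np.sin(angle)).astype(int)
    xs = (cx + radii * np.cos(angle)).astype(int)
    return img[ys, xs]

freqs = np.linspace(F_LO, F_HI, N_BINS)     # radial position -> frequency
samples_per_step = int(SR * DURATION / N_ANGLES)
t = np.arange(samples_per_step) / SR
audio = []

for k in range(N_ANGLES):
    intensities = radial_profile(image, 2 * np.pi * k / N_ANGLES, N_BINS)
    amps = intensities / intensities.max()  # scattering intensity -> loudness
    frame = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
    audio.append(frame / N_BINS)            # keep the mix in range

audio = np.concatenate(audio)               # write out with e.g. scipy.io.wavfile.write
print(audio.shape)
```

A real implementation would also smooth or crossfade between angular steps to avoid clicks; this sketch only shows the intensity-to-loudness and position-to-frequency mapping.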

We played a lot with the settings of the additive synthesis, including the length of the sound, the highest frequency, and even the number of frequency bins (going back to the clock metaphor – pretend the clock hand is a ruler – the number of frequency bins would be the number of demarcations on the ruler), arriving eventually at a set of optimized variables.

Here is one version of the track we created using 10 frequency bins:

[audio]

Here is one we created using 2000:

[audio]

And here is one we created using 50 frequency bins, which we settled on:

[audio]

On a software synthesizer this would be like the default setting. In the future we hope to have an interactive graphical user interface where sliders control these variables, just like a musician tweaks the sound of a synth, so scientists can bring out or mask aspects of the data.

To hear what that would be like, here are a few tracks that vary length:

[audio]

[audio]

[audio]

Finally, here is a track we created using different mappings of frequency and intensity:

[audio]

Having these sliders would reinforce to the scientists that we are not creating “the sound of a metallic alloy,” we are creating one sonic representation of the data from the metallic alloy.

It is interesting that such a representation can be vital to scientists. At first, my husband went along with this sonification project as more of a thought experiment than something he thought would actually be useful in the lab, until he heard something distinct in one of those sounds, suggesting that there was a misaligned sample. Once Kevin heard that glitched sound (you can hear it in the video above), he was convinced that sonification was a useful tool for his lab. He and his colleagues are dealing with measurements 1/25,000th the width of a human hair, aiming an X-ray through twenty pieces of equipment to get the beam focused just right. If any piece of equipment is out of kilter, the data can’t be collected. This is where our ears’ non-directionality is useful. The scientist can be working on his/her computer and, using ambient sound, know when a sample is misaligned.


It remains to be seen/heard if the sonifications will be useful to actually understand the material structures. We are currently running an experiment using Mechanical Turk to determine whether this kind of multi-modal display (using vision and audio) is actually helpful. Basically, we are training one group of people on just the images of the scattering data and testing how well they do, and training another group on the images plus the sonification and testing how well they do.

I’m also working with collaborators at Stony Brook University on sonification of data. In one experiment we are using ambisonic (3-dimensional) sound to create a sonic map of the brain to understand drug addiction. Standing in the middle of the ambisonic cube, we hope to find relationships between voxels, cubes of brain tissue analogous to pixels. When neurons fire in areas of the brain simultaneously, there is most likely a causal relationship, which can help scientists decode the brain activity of addiction. Computer vision researchers have been searching for these relationships unsuccessfully; we hope that our sonification will allow us to hear associations in distinct parts of the brain which are not easily recognized with sight. We are hoping to leverage the temporal pattern recognition of our auditory system, but we have been running into problems doing the sonification; each slice of data from the fMRI has about 300,000 data points. We have it working with 3,000 data points, but either our programming needs to get more efficient, or we have to get a much more powerful computer in order to work with all of the data.
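
As a toy illustration of the relationship-finding step, one plausible first pass (not necessarily the team’s actual pipeline) is to correlate voxel time series and keep the strongly co-active pairs as candidates for placement in the ambisonic cube. The random array below stands in for fMRI data, at sizes far below the 300,000 data points per slice mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_timepoints = 200, 120
series = rng.standard_normal((n_voxels, n_timepoints))   # placeholder fMRI time series

corr = np.corrcoef(series)                  # n_voxels x n_voxels correlation matrix
iu = np.triu_indices(n_voxels, k=1)         # each pair once, skipping self-correlation
pairs = np.column_stack((iu[0], iu[1], corr[iu]))

threshold = 0.3
strong = pairs[np.abs(pairs[:, 2]) > threshold]
print(f"{len(strong)} voxel pairs above |r| = {threshold}")
# Each strong pair could then be assigned a position and loudness in the
# ambisonic cube, so that co-active regions are heard from related directions.
```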

On another project we are hoping to sonify gait data using smartphones. I’m working with some of my music students and a professor of Physical Therapy, Lisa Muratori, who works on understanding the underlying mechanisms of mobility problems in Parkinson’s Disease (PD). The physical therapy lab has a digital motion-capture system and a split-belt treadmill for asymmetric stepping—the patients are supported by a harness so they don’t fall. PD is a progressive nervous system disorder characterized by slow movement, rigidity, tremor, and postural instability. Because of degeneration of specific areas of the brain, individuals with PD have difficulty using internally driven cues to initiate and drive movement. However, many studies have demonstrated an almost normal movement pattern when persons with PD are provided external cues, including significant improvements in gait with rhythmic auditory cueing. So far the research with PD and sound has been unidirectional – the patients listen to sound and try to match their gait to the external rhythms from the auditory cues. In our system we will use bio-feedback to sonify data from sensors the patients will wear and feed error messages back to the patient through music. Eventually we hope that patients will be able to adjust their gait by listening to self-generated musical distortions on a smartphone.
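
To make the biofeedback idea concrete, here is a hypothetical sketch of a sensor-to-sound error mapping: step-time asymmetry becomes a detuning of the auditory cue. The sensor format, thresholds, and the mapping itself are illustrative assumptions, not the project’s actual design.

```python
def asymmetry(left_step_s, right_step_s):
    """Symmetry index: 0.0 for perfectly even steps, larger = more asymmetric."""
    return abs(left_step_s - right_step_s) / ((left_step_s + right_step_s) / 2)

def error_to_detune(sym_index, max_detune_cents=200):
    """Map asymmetry onto a detuning of the cue tone, capped at two semitones."""
    return min(sym_index, 1.0) * max_detune_cents

# Example: a wearer stepping 0.62 s on the left and 0.75 s on the right
sym = asymmetry(0.62, 0.75)
print(f"asymmetry: {sym:.2f}, detune: {error_to_detune(sym):.0f} cents")
```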

As sonification becomes more prevalent, it is important to understand that aesthetic decisions are inevitable and even essential in every kind of data representation. We are so accustomed to looking at visual representations of information—from maps to pie charts—that we may forget that these are also arbitrary transcodings. Even a photograph is not an unambiguous record of reality; the mechanics of the camera and artistic choices of the photographer control the representation. So too, in sonification, do we have considerable latitude. Rather than view these ambiguities as a nuisance, we should embrace them as a freedom that allows us to highlight salient features, or uncover previously invisible patterns.

__

Margaret Anne Schedel is a composer and cellist specializing in the creation and performance of ferociously interactive media. She holds a certificate in Deep Listening with Pauline Oliveros and has studied composition with Mara Helmuth, Cort Lippe and McGregor Boyle. She sits on the boards of 60×60 Dance, the BEAM Foundation, Devotion Gallery, the International Computer Music Association, and Organised Sound. She contributed a chapter to the Cambridge Companion to Electronic Music, and is a joint author of Electronic Music published by Cambridge University Press. She recently edited an issue of Organised Sound on sonification. Her research focuses on gesture in music, and the sustainability of technology in art. She ran SUNY’s first Coursera Massive Open Online Course (MOOC) in 2013. As an Associate Professor of Music at Stony Brook University, she serves as Co-Director of Computer Music and is a core faculty member of cDACT, the consortium for digital art, culture and technology.

Featured Image: Dr. Kevin Yager, data measured at X9 beamline, Brookhaven National Lab.

Research carried out at the Center for Functional Nanomaterials, Brookhaven National Laboratory, is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-AC02-98CH10886.

REWIND! . . . If you liked this post, you might also like:

The Noises of Finance–N. Adriana Knouf

Revising the Future of Music Technology–Aaron Trammell

A Brief History of Auto-Tune–Owen Marshall
