
One Scream is All it Takes: Voice Activated Personal Safety, Audio Surveillance, and Gender Violence

Just a few days ago, Metropolitan Police officer Wayne Couzens pleaded guilty to the rape and murder of Sarah Everard, a 33-year-old woman he abducted while she walked home from a friend’s house. Since the news broke of her disappearance in March 2021, the UK has been going through a moment of national “soul-searching.” The reckoning has included a range of discussions–about casual and spectacular misogynistic violence, about a victim-blaming criminal justice system that fails to address said violence–and responses, including a vigil in south London that was met with aggressive policing and has itself entered into and furthered the UK’s soul-searching. There has also been a surge in the installation of personal safety apps on mobile phones; One Scream (OS), “voice activated personal safety,” is one of them.

Available for Android and iOS devices, OS claims to detect and be triggered by a woman’s (true) “panic scream”; after 20 seconds, and unless the alarm is cancelled, it sends both a text message to the user’s chosen contacts and an automated call with the user’s location to a nominated contact. The app is meant to help women in situations where dialing 999 (assumed to be the natural and preferred response to danger) is not viable, and, in the ideal embodiment, this nominated contact, “the helper,” is the police. OS did automatically contact police (and required a paid subscription) in 2016, but that did not work out well, and by 2018 it was declared a work in progress: “What we really want is for the app to dial 999 when it detects a panic scream, but first, we need to prove how accurate it is. That’s where you come in. . .” OS is currently in beta and free while it remains so. It is unclear whether the developers have given up on that fullest expression of OS.

OS is based on the premise that men fight and women scream—“It is an innate response for females in danger to scream for help”—and its correct functioning requires its users to be ready to do so, even if such an innate and instinctive response doesn’t come naturally to them: “If you do not scream, the app will not be able to detect you.” The app’s scream analysis, however, involves two kinds of discrimination: in how it listens for and to screams, and in how it fails to detect or respond to them. The first has to do with who can use the app (i.e., whose panicked screams are able to trigger it) in the first place. This is presented in terms of gender and age—for the moment, OS can listen to “girls aged 14+ and women under 60,” where cisgender, as in everything about OS, is taken for granted. It is, however, a matter of acoustic parameters set by the developers (notably, of reaching a certain high pitch and loudness threshold). That is why the app includes a “screamometer” that lets potential users scream, hard, and see whether they can reach “the intensity that is needed to set it off” (confetti means they do). The second discrimination separates true panicked screams from other types of screams (e.g., happiness, untrue panic). As presented by the developers, both discriminations are problematic and misleading, and so is “the science behind screaming” One Scream‘s website boasts of.
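
Since One Scream does not publish its detection code, the following is only a minimal, hypothetical sketch of what a screamometer-style gate built on “a certain high pitch and loudness threshold” might look like; the threshold values, the crude autocorrelation pitch estimate, and every function name are my own illustrative assumptions, not the app’s actual implementation.

```python
# Hypothetical sketch only: the thresholds and the crude autocorrelation pitch
# estimate are illustrative assumptions, not One Scream's (unpublished) method.
import numpy as np

def rms_db(frame: np.ndarray) -> float:
    """Loudness proxy: RMS level in decibels relative to full scale."""
    rms = np.sqrt(np.mean(frame ** 2)) + 1e-12
    return 20.0 * np.log10(rms)

def pitch_hz(frame: np.ndarray, sr: int) -> float:
    """Crude fundamental-frequency estimate via autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lags = np.arange(len(corr))
    # Search only lags corresponding to a plausible vocal range (80-2000 Hz).
    valid = (lags >= sr // 2000) & (lags <= sr // 80)
    if not valid.any() or corr[valid].max() <= 0:
        return 0.0
    best_lag = lags[valid][np.argmax(corr[valid])]
    return sr / best_lag

def crosses_scream_thresholds(frame, sr, min_pitch=600.0, min_level_db=-20.0):
    """Hypothetical gate: 'high enough and loud enough' to count toward a scream."""
    return pitch_hz(frame, sr) >= min_pitch and rms_db(frame) >= min_level_db

# Example: a loud 900 Hz tone clears the gate; a quiet 200 Hz tone does not.
sr = 16000
t = np.linspace(0, 0.05, int(sr * 0.05), endpoint=False)
print(crosses_scream_thresholds(0.8 * np.sin(2 * np.pi * 900 * t), sr))   # True
print(crosses_scream_thresholds(0.01 * np.sin(2 * np.pi * 200 * t), sr))  # False
```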

The app does not quite distinguish true from fake screams, nor joy from panic for that matter. Instead, One Scream listens for “roughness,” which a team of scream researchers—it truly is a “tiny science lesson”—has identified as the scream’s “privileged acoustic niche” for communicating alarm. According to this 2015 study in Current Biology, “roughness” is the distinctive quality of effective, compelling human screams (and of artificial alarms) in terms of their ability to trigger listeners and in terms of perceived urgency. Abrupt increases in loudness and pitch are not unique to screams; what sets screams apart, the study argues, is how rapidly their loudness fluctuates. The rougher the scream, then, the greater its perceived “alarmess” and its alarming effect. That’s why the developers say OS “hears real distress,” essentially “just as your own ear.” However, other studies suggest your own ears might not be so great at distinguishing happiness from fear, and scream research, particularly the specific “bit” OS builds on, by and large assumes, relies on, and furthers the irrelevance of “real” on the scream vocalizer’s end.

In OS’s pledge to its users, the app’s fine-tuning to its scream niche—i.e., to rough temporal modulations between 30 and 150 Hz—is as important as the developers’ (flawed) insistence on the irreducible uniqueness of true panic’s scream vocalizations, which they posit are instinctive and can’t be plotted or counterfeited: “Experience has shown that it is difficult for women to fake their scream.” Yet current scream analysis and research relies primarily on screams delivered on prompt by human research subjects (often university students, ideally drama students) and, especially, on screams extracted from commercial movies and sound-effect libraries. The same applies to the other types of vocalizations (e.g., neutral and valenced speech, screamed sentences, laughter) produced or retrieved in order to figure out what it is that makes a scream a scream, and how to translate that into a set of quantifiable parameters to capitalize on that knowledge, regardless of the agenda.
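
To give a sense of what “rough temporal modulations between 30 and 150 Hz” could mean in practice, the sketch below scores a sound by the share of its amplitude-envelope modulation energy that falls inside that band. It is a back-of-the-envelope illustration of the published notion of roughness under my own simplifying assumptions, not One Scream’s code.

```python
# Illustrative only: approximates "roughness" as the fraction of amplitude-
# modulation energy in the 30-150 Hz band named in the 2015 study; the method
# and names are assumptions for demonstration, not One Scream's implementation.
import numpy as np
from scipy.signal import hilbert

def roughness_share(x: np.ndarray, sr: int, band=(30.0, 150.0)) -> float:
    """Fraction of the amplitude envelope's modulation energy inside `band` (Hz)."""
    envelope = np.abs(hilbert(x))          # amplitude envelope of the signal
    envelope = envelope - envelope.mean()  # discard the DC component
    spectrum = np.abs(np.fft.rfft(envelope)) ** 2
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / sr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum[freqs > 0].sum() + 1e-12
    return float(spectrum[in_band].sum() / total)

# A 500 Hz tone modulated at 70 Hz (inside the "rough" band) scores much higher
# than the same tone modulated at 4 Hz, a rate typical of ordinary speech.
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
carrier = np.sin(2 * np.pi * 500 * t)
rough = (1 + 0.9 * np.sin(2 * np.pi * 70 * t)) * carrier
smooth = (1 + 0.9 * np.sin(2 * np.pi * 4 * t)) * carrier
print(roughness_share(rough, sr), roughness_share(smooth, sr))
```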

Because of their value for audio surveillance applications, screams are currently a contested object and a hot commodity. As is the case with other scream distinction and detection enterprises, the initial training of OS most likely involved that vast and available bank of crafted scream renditions—by professional actors, machines, and combinations of the two, by and for an industry otherwise partial to female non-speech sounds—conveniently the exact type of “thick with body” female voicings OS is also invested in. For some readers, myself included, this might come across as creepy and, science-wise, flimsy.

Screencap of ad for Chilla, a scream alert app developed in India

Scream research often relies on how human listeners recruited for the cause respond to audio samples. Apparently, whether the scream is “real,” acted, or post-produced is neither something study subjects necessarily distinguish nor a determining factor in how they rate and react. In terms of machines learning to scream-mine audio data, it is what it is: “natural corpora with extreme emotional manifestation and atypical sounds events for surveillance applications” are scarce, unreliable, and largely unavailable because of their private character. That is no longer the case for OS, which has been accruing, and machine-learning from, its beta users’ screams as well as how users themselves monitor and rate their screams and the app’s sensitivity. OS users’ screams might not be exactly ad lib, since users/vocalizers first practice with the “screamometer” to learn to scream for, and as a means of interfacing with, OS, but it is as natural a corpus as it gets, and it comes free to those making use of the screams. OS echoes not only “voice stress analysis” technologies invested in distinguishing true from fake or in ranking urgency but also, as part and parcel of a larger scream surveillance enterprise, public surveillance technologies such as ShotSpotter, all of which Lawrence Abu Hamdan has brilliantly dissected in his essay on the recording of the police gunshots that killed Michael Brown in Ferguson, Missouri in 2014.

Chilla is a strikingly similar app developed and available in India—although there’s a nuanced difference in the developer’s rationale for Chilla, which in its pursuit of scream-activated personal safety also aims to compensate for the fact that many girls and women don’t call “parents or police” for help when harassed or in danger. As presented, Chilla responds both to assaults and to women’s ambivalence towards their guardians. The latter, too, is a manifestation of the breadth of gender-based violence as a socio-cultural problem, one that Chilla is trained to fail to listen to and one that, because of OS’s particular niche user market, is simply out of the purview of its UK counterpart.

That problem–and that failure–is exclusive neither to India nor to scream-activated personal safety apps. Calling 999 in the UK, 911 in the US, or 091 in Spain, where I am writing, doesn’t come naturally to many targets of sexual and gender-based violence, because they don’t conceive of the police as a help or because they see them, plainly, as a risk—to themselves and/or to others. As Andrea Ritchie has copiously documented in Invisible No More: Police Violence Against Black Women and Women of Color, women of color, and Black women in particular, are at extremely high risk for rape and sexual abuse by police officers, as high as 1 in 5 women in New York City alone.

OS, then, is framed as a pragmatic, partial answer to a problem it doesn’t solve: “We should never have to dress in a certain way…but we do.” The specifics of how OS would actually “save” or even has saved its users in particular scenarios go unexplained, because OS is meant to help with feeling safe; getting into the details, and the what-ifs, compromises that service. This sense of safety has two components and rests on two promises: one, that OS will listen to your (panic) scream, and, two, that, as of now via the intermediation of your contacts, the police will come save you. The second component and its assumed self-evidence speak to the app’s whiteness and to its target market of white, securitized, cisgender female subjects.

Image of woman walking alone, entitled “Can You Hear Me Screaming?” by Flickr User Stefano Corso (CC BY-NC-ND 2.0)

Over and above its acoustic profiling, the app is simply not designed with every woman in mind. OS’s branding is about a certain lifestyle—of going for early runs and on dates with cis-men, of taking time for yourself because you’re super busy at your white-collar job, of going for night runs, of taking inspiration from “world” women, and of skipping if running isn’t for you. This lifestyle is also sold: sold as always under the threat of rape–despite its “rightfulness”–and sold in a way that animates the very feelings of insecurity and disempowerment that One Scream advertises itself as capable of reversing. Safety, then, is sold as retrievable with OS.

Wearable or otherwise portable technologies to keep women “safe,” specifically from sexual assault, are neither new nor uniform. They have been vigorously protested, particularly from feminist standpoints other than the white, securitized, capitalist brand OS professes—because, in (partly) delegating safety to technologies women then become personally responsible for, these technologies further “blame” women. For authorities and the patriarchy, this shift in blame is a relief. In discussing the racialized securitization of US university campuses, Kwame Holmes notes how, despite “reactionary attacks” on campus feminism (e.g., so-called “snowflakes” complaining about bad sex) and authorities’ effective reluctance to acknowledge and challenge rape culture, anti-sexual-assault technologies tend to be welcomed and accepted. As Holmes also notes, there’s no paradox in that. Those technologies flatten the discussion, deactivate more radical feminist critiques and potential strategies, and protect the status quo—not so much women, and not those who, whenever an alarm sounds and especially when security forces respond, readily become insecure.

For some readers, OS might have a dystopian sci-fi movie feel. Filmmakers have come up with more radical, yet lower-tech, “solutions” and uses of high-pitched triggers. In Born in Flames (Lizzie Borden, 1983), the Women’s Army bicycle brigade confronts rapists and sexual assaulters by blowing whistles. The WA members also confront sexual harassers on the New York City subway, which wasn’t imagined to be equipped with CCTV.

It is not a stretch to think that OS could amplify the insecurities of Black and brown people subject to white panic (screams) and to its violence, something other audio surveillance technologies are already contributing to; at the least, it’s not a greater stretch than entertaining scenarios in which police show up and save an OS user before it’s too late. Even if it’s never triggered, as the developers seem to assume will be the case for the majority of installed units—”Many people have never faced a situation where they have had to panic scream”—OS is trapped in a securitization logic that ultimately relies on masculine authority, one that calls for the expansion of CCTV cameras and within which women are never quite secure (see Sarah Everard’s vigil).

One Scream’s FAQs cover selected worries that users have, or that OS anticipates they might have. Among these are privacy concerns (i.e., does it listen to your conversations?) and the fear that the alarm will activate “when it shouldn’t.” In the App Store user reviews, there’s a more popular type of concern: OS not responding to users’ screams. In other words, there’s simultaneously a worry about OS listening and detecting too much and a worry about OS failing to listen “when it matters.” These anxieties around OS’s listening excesses and insufficiencies touch on (audio) surveillance’s paradoxical workings: does OS encroach on the everyday life of those within users’ cell phones’ earshot while not necessarily delivering on an otherwise modest promise of safety in highly specific scenarios? There’s a unified developer response to these concerns: OS “is trained to detect panic screams only.”

Featured Image: By Flickr User Dirk Haun. Image appears to be a woman screaming on a street corner, but is actually an advertisement on the window of a T-Mobile cell phone shop (CC BY 2.0)

María Edurne Zuazu works in music, sound, and media studies, and researches the intersections of material culture and sonic practices in relation to questions of cultural memory, social and environmental justice, and the production of knowledge (and of ignorance) in the West during the 20th and 21st centuries. María has presented on topics ranging from sound and multimedia art and obsolete musical instruments, to aircraft sound and popular music, and published articles on telenovela, weaponized uses of sound, music and historical memory, and music videos. She received her PhD in Music from The CUNY Graduate Center, and has been the recipient of Fulbright and Fundación La Caixa fellowships. She is a 2021-2022 Fellow at Cornell’s Society for the Humanities. 


REWIND! . . .If you liked this post, you may also dig:

Flâneuse>La caminanta–Amanda Gutierrez

Sounding Out! Podcast #63: The Sonic Landscapes of Unwelcome: Women of Color, Sonic Harassment, and Public Space

Echo and the Chorus of Female Machines–AO Roberts

Vocal Gender and the Gendered Soundscape: At the Intersection of Gender Studies and Sound Studies–Christine Ehrick

Listen to yourself!: Spotify, Ancestry DNA, and the Fortunes of Race Science in the Twenty-First Century

If you could listen to your DNA, what would it sound like? A few answers, at random: In 1986, the biologist and amateur musician Susumu Ohno assigned pitches to the nucleotides that make up the DNA sequence of the protein immunoglobulin, and played them in order. The gene, to his surprise, sounded like Chopin.

With the advent of personalized DNA sequencing, a British composition studio will do one better, offering a bespoke three-minute suite based on your DNA’s unique signature, recorded by professional soloists—for a 300 GBP basic package, or 399 GBP for a full orchestral arrangement.

But the most recent answer to this question comes from the genealogy website Ancestry.com, which in Fall 2018 partnered with Spotify to offer personalized playlists built from your DNA’s regional makeup. For a comparatively meager $99 (and a small bottle’s worth of saliva) you can now not only know your heritage, but, in the words of Ancestry executive Vineet Mehra, “experience” it. Music becomes you, and through music, you can become yourself.

screencap by SO! ed JS

As someone who researches the history of connections between music and genetics for a living, I am perhaps not the target audience for this collaboration. My instinct is to look past the ways it might seem innocuous, or even comical—especially when cast against the troubling history of the use of music in the rhetoric of American eugenics, and the darker ways that the specter of debunked race science has recently returned to influence our contemporary politics.

During the launch window of the Spotify collaboration, the purchase of a DNA kit was not required, so in the spirit of due diligence I handed over to Spotify what I know of my background: English, Scottish, a little Swedish, a color chart of whites of various shades. (This trial period has since ended, so I have not been able to replicate these results—however, some sample “regional” playlists can be found on the collaboration homepage).

screen capture by SO! editor JLS

While I mentally prepared myself to experience the sounds of my own extreme whiteness, Ancestry and Spotify avoid the trap of overtly racialized categories. In my playlist, Grime artist Wiley is accorded the same Englishness as the Cure. And ‘Scottish-Irish’, still often a lazy shorthand for ‘White’, boasted more artists of color than any other category. Following how the genetic tests themselves work, geography, rather than ethnicity, guides the algorithm’s hand.

As might be expected, the playlists lean toward Spotify’s most popular sounds: “song machine” pop, and hip-hop. But in smaller regions with less music in Spotify’s catalog, the results were more eclectic—one of the few entries of Swedish music in my playlist was an album of Duke Ellington covers from a Stockholm-based big band, hardly a Swedish “national sound.”  Instead, the music’s national identity is located outside of the sounding object, in the information surrounding it, namely the location tag associated with the recording. In other words: this is a nationalism of metadata.
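
To make the “nationalism of metadata” point concrete, here is a toy sketch of how a heritage playlist could be assembled from nothing but artist location tags and ancestry percentages, with no audio analysis at all; the catalog, the percentages, and the selection logic are invented for illustration and do not reflect Spotify’s or Ancestry’s actual systems.

```python
# Toy illustration: every name, number, and track below is invented; nothing
# here reflects Spotify's or Ancestry's real data or algorithms.
import random
from collections import defaultdict

catalog = [
    {"title": "Track A", "artist": "Artist 1", "location": "England"},
    {"title": "Track B", "artist": "Artist 2", "location": "England"},
    {"title": "Track C", "artist": "Artist 3", "location": "Scotland"},
    {"title": "Track D", "artist": "Artist 4", "location": "Sweden"},
]

ancestry_estimate = {"England": 0.6, "Scotland": 0.3, "Sweden": 0.1}

def heritage_playlist(catalog, ancestry, length=10, seed=0):
    """Pick tracks per region in proportion to the DNA estimate, using metadata only."""
    rng = random.Random(seed)
    by_region = defaultdict(list)
    for track in catalog:
        by_region[track["location"]].append(track)
    playlist = []
    for region, share in ancestry.items():
        pool = by_region.get(region, [])
        # Take roughly `share * length` tracks, bounded by what the region offers.
        k = min(len(pool), max(1, round(share * length)))
        playlist.extend(rng.sample(pool, k))
    return playlist

for track in heritage_playlist(catalog, ancestry_estimate):
    print(track["location"], "-", track["title"])
```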

One of the common responses to the Ancestry-Spotify partnership was, as succinctly expressed by Sarah Zhang at The Atlantic: ‘Your DNA is not your culture’. But because of the muting of musical sound in favor of metadata, we might go further: in Spotify’s catalog, your culture is not even your culture. The collaboration works because of two abstractions—the first from DNA to a statistical expression of probable geographic origin; the second from musical sound and style characteristics to metadata tags for a particular artist’s location. In both of these moves, traditional sites of social meaning—sounding music, and regional or familial cultural practice—are vacated.

Synthetic Memetic / Matthew Gardiner (AU): Gardiner composed a DNA sequence in such a way that the series of nucleotide bases in it correspond to the letters of the song title “Never Gonna Give You Up” by Rick Astley, and then integrated them symbolically into a pistol. Credit: Sergio Redruello / LABoral Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0)

There is a way in which this model could come across as subversive (something that has not gone unnoticed by Ancestry’s advertising team). Hijacking the presumed whiteness of a Scotland or a Sweden to introduce new music by communities previously barred from the possibility of ‘Scottishness’ or ‘Swedishness’ could be a tremendously powerful way of building empathy. It could rebut the very possibility of an ethno-state. But the history of music and genetics suggests we might have less cause for optimism.

In the 1860s, Francis Galton, coiner of the word ‘eugenics’, turned to music to back up his nascent theory of ‘hereditary genius’—the idea that artistic talent, alongside intelligence, madness, and other qualities, was inherited, not acquired. In Galton’s view, musical ability was the surest proof that talents were inherited rather than learned, for how else could child prodigies stir the soul in ways that seem beyond their years? The fact of music’s irreducibility, its romantic quality of transcendence, was for Galton what made it the surest form of scientific proof.

Galton’s ideas flourished in America in the first decades of the twentieth century. And while American eugenics is rightly remembered for its violence—from a sequence of forced sterilization laws beginning with Indiana in 1907, to ever-tightening restrictions on immigration, and scientific propaganda against “miscegenation” under Jim Crow—its impact was felt in every area of life, including music. The Eugenics Record Office, the country’s leading eugenic research institution, mounted multiple studies on the inheritance of musical talent, following Galton’s idea that musical ability offered an especially persuasive test case for the broader theory of heritability. For ten years the Eastman School of Music experimented on its newly admitted students using a newly developed kind of “musical IQ test,” psychologist Carl Seashore’s “Measures of Musical Talent,” and Seashore himself presented results from his tests at the Second International Congress of Eugenics in New York in 1921, the largest gathering of the global eugenics movement ever to take place. His conclusion: that musical ability was innate and inherited—and if this was true for music, why not for criminality, or degeneracy, or any other social ill?

From “The Measurement of Musical Talent,” Carl E. Seashore, The Musical Quarterly Vol. 1, No. 1 (Jan., 1915), p. 125.

Next to the tragedy of the early twentieth century, Spotify and Ancestry teaming up seems more like a farce. But scientific racism is making a comeback. Bell Curve author Charles Murray’s career is enjoying a second wind. Border patrol agents hunt “fraudulent families” based on DNA swabs, and the FBI searches consumer DNA databases without customers’ knowledge. ‘Unite the Right’ rally organizer Jason Kessler ranked races by IQ, live on NPR. And while Ancestry sells itself on liberal values, many white supremacists have gone looking for ‘scientific’ confirmation of their sense of superiority, and consumer DNA testing has given them the answers they sought (though often not the answers they wanted).

As consumer genetics gives new life to the assumptions of an earlier era of race science, the Spotify-Ancestry collaboration is at once a silly marketing trick, and a tie, whether witting or unwitting, to centuries of hereditarian thought. It reminds us that, where musical eugenics afforded a legitimizing glow to the violence of forced sterilization, the Immigration Acts, and Jim Crow, Spotify and Ancestry can be seen as sweeteners to modern-day race science:  to DNA tests at the border, to algorithmic policing, and to “race realists” in political office. That the appeal of these abstractions—from music to metadata, from culture to geography, from human beings to genetic material—is also their danger. And finally, that if we really want to hear our heritage, listening, rather than spitting in a bottle, might be the best place to start.

Featured Image:  “DNA MUSIC” Creative Commons Attribution-Share Alike 4.0 International

Alexander Cowan is a PhD candidate in Historical Musicology at Harvard University. He holds an MMus from King’s College, London, and a BA in Music from the University of Oxford. His dissertation, “Unsound: A Cultural History of Music and Eugenics,” explores how ideas about music and musicality were weaponized in British and US-American eugenics movements in the first half of the twentieth century, and how ideas from this period survive in both modern music science, and the rhetoric of the contemporary far right.

REWIND! . . .If you liked this post, you may also dig:

Hearing Eugenics–Vibrant Lives

In Search of Politics Itself, or What We Mean When We Say Music (and Music Writing) is “Too Political”–Elizabeth Newton

Poptimism and Popular Feminism–Robin James

Straight Leanin’: Sounding Black Life at the Intersection of Hip-hop and Big Pharma–Kemi Adeyemi
