
Mimicked Voices and Nonhuman Listening: AI Deepfakes, Speech, and Sonic Manipulation in the Digital War on Ukraine

The essays collected in this series (link to the Introduction) trace how nonhuman listening operates through sound, speech, and platformed media across distinct but interconnected domains. Across these accounts, listening no longer secures meaning or relation; it becomes a site of contestation, where sound is mobilized, processed, and weaponized within systems that privilege circulation, recognition, and response over truth. In this contribution, Olga Zaitseva-Herz examines how nonhuman listening operates under conditions of war, where AI-generated voices and deepfakes destabilize the very grounds of auditory trust. Through the case of Ukraine, she shows how platforms and political actors alike exploit algorithmic listening systems to amplify affect, circulate disinformation, and transform voice into a tool of psychological warfare. Listening, in this context, becomes not a means of understanding but a terrain of uncertainty. –Guest Editor Kathryn Huether

Russia’s full-scale invasion of Ukraine has unfolded as the most digitally mediated war to date, shaped not only by what circulates online but by how content is heard, interpreted, and amplified. Here, listening is not limited to human hearing: it also includes algorithmic systems that detect, rank, and amplify content, as well as political actors and online publics who interpret and recirculate it. Social media platforms—Telegram, Instagram, TikTok, Facebook—have become sites of psychological warfare where AI-generated audio, video, text, and image-based content are crafted to manipulate perception and provoke rapid emotional responses, often through algorithmic systems attuned to virality and affect. Ukrainian political authorities regularly caution users that anything one reads, hears, or sees could be a psychological weapon. This is not rhetorical. Content is often designed to produce outrage, shock, and despair—emotions that travel quickly across platforms and influence public mood.

AI is used to create fake news videos, synthetic voices, and deepfake conversations, complicating how authenticity is heard and assessed. Some recordings circulating on platforms such as Telegram, Instagram, and Facebook simulate “leaked” phone calls revealing political dissent or strategic plans. At the same time, because real voices can now be convincingly generated with AI, anyone can claim that a recording of their voice is an AI fabrication. A widely circulated case involved Russian music producer Iosif Prigozhin, whose alleged call criticizing the Kremlin provoked significant backlash. Soon after, he claimed the recording was an AI forgery – a statement whose truth remains unclear, but which strategically exploits growing public awareness of deepfakes as a means of discrediting or distancing oneself from damaging material. Deepfakes thus do not merely deceive; they destabilize the very conditions of listening and trust, turning listening itself into a site of strategic uncertainty in which any voice can be disavowed as synthetic. Against this backdrop, music and voice emerge as especially powerful media for manipulation, parody, retaliation, and symbolic struggle.

Graafika. Kuulaja. [Graphic. The Listener.] by Avo Keerend, 1980. Pärnu Museum, Estonia, via E-Varamu. CC0 1.0.

AI Songs as a Tool of Revenge

AI generative tools are also used for irony or parody, as in the viral remake “Samotni Moskali” [Lonely Muscovites], which mocks the Ukrainian pop star Ani Lorak, who moved to Russia. On November 13th, 2023, Ukrainian journalist and politician Anton Gerashchenko’s Telegram channel posted a video remake of Ani Lorak’s old song “Poludneva Speka” [Midday Heat], renamed “Samotni Moskali.” The video quickly went viral on social media. Her big hit from the ’00s was remade into strongly pro-Ukrainian content, featuring clips from current frontlines to illustrate new lyrics generated by an AI voice engineered to closely mimic Lorak’s vocal timbre and affect. The parody relies on listeners’ recognition of her voice and affective style, while the imitation introduces a sharp shift in content between the original and the synthetic lyrics.

This social media burst was a response to Ani Lorak’s claimed political neutrality amid Russia’s full-scale war against Ukraine, despite clear signs of her support for Russia. The remake seemed aimed both at revenge and at publicly dramatizing her break with her Ukrainian fan base, which felt betrayed by her choices. It spawned many satirical memes on social media, including further AI-generated songs tied to her stage persona. The satire also carried real stakes: under current Russian politics, Lorak could get into trouble there if the government took the promoted “support” for the Ukrainian army seriously. The group behind the parody went even further, creating a homepage called the “Ani Lorak Foundation,” dedicated entirely to fundraisers for the Ukrainian army and presented as Lorak’s own project showcasing her support of Ukrainian battalions. Some military drones deployed by the Ukrainian side even ended up bearing stickers with the name of the “Ani Lorak Foundation.” This case demonstrates how AI tools became instruments of public satire, sabotage, and protest in the context of the current full-scale war.

AI Songs as a Weapon

During the full-scale invasion, Russia has been using AI-generated music as a weapon for propaganda and disinformation. In 2023, multiple songs in Ukrainian were created to disrupt Ukraine’s military mobilization efforts and went viral. One of these, the song “Mamo, Ia Ukhyliant” [Mother, I am a Draft Dodger], became particularly popular in a multitude of variations. Their circulation shows how platforms “listen” to wartime content through metrics of repetition, provocation, and affective intensity, amplifying messages not because they are true, but because they are likely to generate reaction and spread. These songs were algorithmically promoted on TikTok and successfully sparked a viral challenge aimed at undermining Ukraine’s mobilization in 2024 by encouraging Ukrainian men to evade the draft, flee, and party abroad instead. In response, Ukrainian intelligence released an official statement identifying these songs as products of a Russian disinformation campaign.

This example shows how AI-generated songs are actively used as powerful tools of war, spreading political messages and influencing people’s political choices. The fact that all of these songs about draft evasion were released in Ukrainian highlights the goal of targeting Ukrainian men specifically, since Russian men usually don’t speak Ukrainian and therefore wouldn’t be affected by the content. Furthermore, the presence of a large number of these “draft dodger” songs at the same time created the impression of widespread societal acceptance through repetition and algorithmic amplification. In this way, repetition itself became a signal of apparent legitimacy: the more frequently such content circulated, the more easily platforms and audiences could register it as evidence of broader consensus around draft evasion among Ukrainians.

Photo by Jon Tyson on Unsplash

AI Pictures on Facebook Mimicking Sound and Sonic Affect

Visual disinformation follows similar viral patterns. There has been a surge of AI-generated images with war-related content that mimic sound to intensify emotional impact and prompt affective listening: a screaming child amid the rubble, or a crying soldier in a Ukrainian uniform, paired with a patriotic, pro-Ukrainian message that encourages interaction, such as a like or a comment. Even without actual sound, such images solicit a kind of affective listening in which suffering is not literally heard but imagined, projected, and emotionally registered through visual cues. Although this truth-blurring pattern attracted significant attention among Ukrainians, ironic counter-memes soon emerged, mocking its primitive approach.

According to warnings from the Ukrainian online security agency, these accounts aim to interact with pro-Ukrainian users, ultimately adding them as friends or followers. Then, once they have built a large enough audience, they shift the type of content they share to pro-Russian. The strategy relies on gathering an audience that is specifically pro-Ukrainian, as measured by its interactions with images of crying soldiers or the suffering of the Ukrainian people at the front. In this sense, the filtering process functions as a form of nonhuman listening at the level of audience formation: platforms and account managers learn which publics respond to particular emotional cues, cultivate those publics through repeated engagement, and later redirect them toward different ideological content. Through this mechanism, an initially pro-Ukrainian audience is gathered, profiled, and ideologically redirected, alienating loyal followers while pulling political opinion in a more pro-Russian direction.

Pro-Russian AI Songs in Germany to Weaken Support for Ukraine

In Germany, AI-generated songs are being utilized as propaganda tools to promote pro-Russian sentiment and anti-Ukrainian views. The right-wing party AfD has embraced AI songs as a potent tool in this regard. Multiple, mostly anonymous YouTube accounts have emerged spreading right-wing ideas, with songs that not only address German political issues but also openly support Russia. For instance, one song titled “Meine Stimme Habt ihr nicht” [You don’t get my vote] features an AI-created avatar of a tall, strong woman holding German and Russian flags. A version of the same song was also released in Russian. The lyrics criticize Germany’s political course, including military aid to Ukraine, and express a desire for friendship with Russia. Its circulation in both German and Russian suggests that listening is being calibrated for different national and linguistic publics, allowing similar political messages to be heard through distinct affective and ideological frames shaped by language, audience, and context.

Contemporary propaganda is increasingly shaped not just by human intent but by rapidly developing nonhuman listening systems—both in production and amplification. Algorithmic listening and perception are exploited to privilege what provokes, not what is true, complicating efforts to regulate digital hate, emotion, and influence. In this context, listening becomes not only a human practice of interpretation, but also a technical system of detection, ranking, and amplification—and, crucially, a site of failure where truth, trust, and perception can no longer be reliably aligned.

Featured Image: Photo by Stanislav Vlasov on Unsplash.

Olga Zaitseva-Herz is an ethnomusicologist working at the intersection of Ukrainian music, war, displacement, and digital culture. She is currently a postdoctoral researcher at the Kule Centre for Ukrainian and Canadian Folklore at the University of Alberta and a guest scholar at Think Space Ukraine at the University of Regensburg. Her research examines how song operates as a medium of political mediation, cultural diplomacy, and historical memory, with a particular focus on popular music and AI-generated sound during Russia’s full-scale invasion of Ukraine. Combining perspectives from ethnomusicology, sound studies, and media analysis, her work investigates how music shapes narratives of resistance, belonging, and global visibility, and how sonic practices illuminate the broader entanglements of culture, technology, and power.

REWIND! . . .If you liked this post, you may also dig:

Hate & Non-Human Listening, an Introduction–Kathryn Huether

Your Voice is (Not) Your Passport–Michelle Pfeifer

Mapping the Music in Ukraine’s Resistance to the 2022 Russian Invasion–Merje Laiapea

SO! Amplifies: An Interactive Map of Music as Ukrainian Resistance to the 2022 Russian Invasion–Merje Laiapea





They Can Hear Us: Surveillance and Race in “A Quiet Place”

The family in A Quiet Place (2018) lives a life marked by incessant trauma. The hunters who pursue them are far more powerful than they are but cannot see them, so the family remains safe from direct assault as long as it remains unheard. But that same invisibility means the everyday mundanities of life become a constant struggle marked by the terror of the horrific death that will claim them should they make an errant sound. A trip to the pharmacy could prove fatal; a hungry child could summon the hunters and endanger the entire family. When sketched out in these broad strokes, A Quiet Place, as Kathryn Adams Burton pointed out to me when we left the theater, summons terror from its viewers by depicting the kind of institutional surveillance and violence that endanger Black lives in the US, without one person of color in the entire movie. Thinking with Simone Browne’s Dark Matters (2015), Jennifer Stoever’s The Sonic Color Line (2016), and Jared Sexton’s Amalgamation Schemes (2008), I argue here that A Quiet Place places white characters in a non-white relationship with surveillance, which they overcome in a way that projects white ingenuity and strength and reinforces the centuries-old notion that those who live under the eye and ear of hyper-surveillance tactics do so because they deserve to and because they are not exceptional enough to evade those tactics.

 

The Quiet family’s invisibility is literal: the creatures who hunt them have no sense equivalent to human vision and instead track their prey using hyper-developed listening abilities. They remain vigilant for the audible traces of their victims; sound is what can put the family in danger. Simone Browne highlights in Dark Matters the significance of visibility and invisibility in the history of antiblack surveillance in the US. Lantern laws in 18th-century New York City stipulated that enslaved black and indigenous people must carry a lit lantern if they were in the streets after dark, a regulation that Browne understands as an act of “racializing surveillance,” a “form of knowledge production about the black, indigenous, and mixed-race subject” (79). Specifically, the knowledge created through the lantern laws marked bodies of color as “un-visible,” in need of illumination in order to be properly seen. And here “seen” slips into a couple of different meanings, encompassing not only the ocular but also the notion of “seeing” that connotes understanding and discernment.

The early technology of lantern surveillance, as well as the boundaries delineated by sundown towns, marked black, indigenous, and mixed-race bodies as untrustworthy, scheming, and therefore in need of ongoing surveillance that would make these bodies visible to the eye. At the heart of Dark Matters is Browne’s contention that the history and techniques of surveillance cannot be understood separate from their racializing work: “surveillance…is the fact of antiblackness” (10). So while the Quiet family is white, their relationship to the powerful beings that hunt them–an existence unseeable and unknowable apart from heightened measures of surveillance–appropriates signifiers of racialized surveillance in order to heighten the stakes of the movie’s characters.


The family walks on sand in order to muffle their footsteps.

While Browne focuses primarily on acts of looking as mechanisms for violently enforcing the color line in Dark Matters, Jennifer Stoever traces the history of that same color line through listening practices. Stoever isn’t explicitly engaging surveillance studies the way Browne is, but her theorization of the “listening ear”–the social and political norms that shape how we hear race–includes surveillance acts that, like lantern laws, mark voices perceived to be non-white as always already ready to be monitored, bounded, and eliminated should they exceed their boundaries (13). For both Browne and Stoever, the act of surveilling uncovers a racializing sleight of hand: non-Whiteness is held up as that which stands out, though this racialization is proven backwards if we look and listen a bit closer. US looking and listening norms condition people to organize blackness and brownness and noise as aberrations against natural, invisible, inaudible whiteness, but it takes a good deal of white supremacist work to create this illusion (by “white supremacy,” I mean the social and political practices and institutions that reify and reward whiteness). Looking through brighter lights and sharper camera lenses at non-White subjects and listening through amplification devices and ubiquitous bugs to non-White subjects are both ways of drawing attention away from whiteness–the racialized construct that fuels US social, legal, and political praxis–and toward non-whiteness.

Stoever opens The Sonic Color Line by considering the violence visited upon Jordan Davis, Sandra Bland, and a Spring Valley High School student when each was considered too loud and unruly by white listening ears trained to surveil blackness. The Quiet family is listened to in the same way Davis, Bland, and the Spring Valley student were, in the same way non-whiteness has been surveilled in the US: with dire consequences for being too loud. But, by erasing black and brown bodies and histories from the screen, A Quiet Place divorces these surveillance tactics from their real-world context, where they work as tools of white supremacist systems to “fix and frame blackness as an object of surveillance” (Browne 7). Part of the fantasy of A Quiet Place involves “fixing and framing” whiteness as the objects of sonic surveillance practices that have historically worked to preserve and reward whiteness, not target it.

While the Quiet family is subjected to antiblack surveillance techniques, they are otherwise marked as white–and not just based on what their skin color looks like. Farmers in a rural, hilly region of Upstate New York, the Quiet family navigates the apocalypse with a libertarian aplomb. They’re stocked and loaded when the government fails to protect its citizens, and they’re also aware of but not in collaboration with other survivors in the surrounding area. Operating outside the bustle of urban noise, which Stoever notes is marked as non-White by the listening ear, the Quiet family likely boasts generations of working class whites who benefited from the kind of social safety nets built by the New Deal, only to mistake the wealth those social programs built to be fully the fruits of their own hard work.


The father, played by John Krasinski, looks over their plot of land.

The independence and autonomy that the Quiet family demonstrates is not on its own a marker of whiteness, but the kind of wealth accumulation that makes non-collaborative survival possible is the kind that’s historically been more readily available to white folks in the US. It’s a history that is flattened, as is the history of the surveillance that shapes their lives. Their wealth simply exists, and viewers aren’t meant to wonder where it came from or at whose expense. Likewise, viewers learn very little about what the hunters are, where they came from, and why they’re here. The hunters just appear, terrifying sonic surveillers who carry signifiers of antiblack listening practices but who remain detached from the antiblack history of surveillance.

The racialized terror at the heart of A Quiet Place grows from the fear of being denied one’s whiteness, being subjected to the same controlling surveillance measures that have helped maintain the color line for centuries in the US. It’s a standard white sci-fi nightmare scenario where technologies spin out of control and subjugate all of humanity, white people included. It’s also a white exceptionalist fantasy, where whiteness–not just white people but the wealth and freedom created for white people by white supremacist systems–conquers the unconquerable. Jared Sexton’s Amalgamation Schemes can prove helpful here, as he outlines the way racial ideology has shifted in recent decades to permit multiculturalism so long as it preserves whiteness. While systems like slavery and segregation were buttressed by explicit white supremacy, where whiteness = good and non-whiteness = bad, contemporary racial hierarchies are maintained by conceding that multiculturalism = virtuous and race-based solidarity = problematic. Here, white supremacy cloaks itself in diversity, hybridity, mixedness and points to any group that coheres around racial identity as regressive.

Flattening history is crucial to that ideological shift. In order to maintain a racial hierarchy that tips in favor of whiteness, past violence and kleptocratic seizures of money, resources, and lives must be removed from the equation so that the kind of multiculturalism that Sexton critiques can proceed as if all who participate do so on a level playing field. Whiteness becomes “something equivalent to the…ethnicities and cultures of nonwhite immigrants and American Indians” (Sexton 66). The field, of course, isn’t level when white supremacy has funneled centuries of ill-gotten gains to whiteness, so this kind of multiculturalism is a way of gaming the system, mixing up racial signifiers so that white folks can take on just enough racial signifiers to blend into a racially diverse society without giving up the power and privilege that continues to give them a leg up.

A Quiet Place follows a calculus similar to the multiculturalism Sexton describes. First, the movie extracts emotional responses of terror and dread through a mixture of racial signifiers, subjecting white characters to forms of surveillance rooted in antiblackness. With no historical context to explain the forms of surveillance the hunters use or the characters’ previous relationships to surveillance, the Quiet family’s whiteness becomes just another ethnicity, a flattened way of being in the world divorced from the white supremacist context that funnels resources their way. Their privilege and power become as invisible to viewers as they are to the hunters. By masking that privilege, A Quiet Place clears space for a fantasy world where the white heroes have survived by virtue of being simply more clever, more resourceful, more brave, more everything than all the black and brown people who have, by implication of their absence from the film, been killed off by the hunters.

A Quiet Place, then, takes a family of multiculturally white characters and positions them in roles white characters have become accustomed to occupying: that of world saviors–some of them even martyrs. Here, hyper-surveillance is simply a fact of life, and those who are able to live life free of the dire consequences of that hyper-surveillance are able to do so because they are exceptional. By this logic, what protects you from the police is either your innocence or your guile, not your whiteness. What guarantees your safety when you publicly challenge government policies is the righteousness of your cause, not your whiteness. What allows you to move in the dark without a lantern or to listen to your music loudly in public spaces without being shot or to cross borders without fear is your inherent virtue, not your whiteness. And when surveillance is positioned as a fact of life, and when those who avoid the crushing consequences of surveillance are understood to do so because they are virtuously exceptional, then those who are targeted, hunted, and killed using hyper-surveillance tactics are understood to be deserving of their fate because they are not virtuous or exceptional enough to avoid it. This is the logic that frames slavery as a choice, that cages children at the border, that influences and fixes elections across the globe but takes umbrage when subjected to the same tactics.

 

One terrible irony of a movie like A Quiet Place is that its flattened hyper-surveillance context makes it incapable of seeing and hearing the deep and rich history of black and brown evasion of hyper-surveillance. There is an ingenuity coursing through these practices of evasion–“looking back,” marronage, and fugitivity chronicled by writers including Sylvia Wynter, Frantz Fanon, Katherine McKittrick, and Simone Browne, among others–an ingenuity that slips hyper-surveillance while exposing it as antiblack, arguing against the notion that it is simply a fact of life, and signaling avenues to freedom. Instead of those stories, though, the movie whispers to us a familiarly unsettling refrain: the white Quiet family, alone, can eradicate these terrors. The white Quiet family, alone, can fix this. The white Quiet family, alone, is exceptional.

Featured image, and all images in this post are screenshots from “A Quiet Place ALL TRAILERS – Emily Blunt & John Krasinski 2018 Horror Movie” by Youtube user Flicks And The City Clips.

Justin Adams Burton is Assistant Professor of Music at Rider University. His research revolves around critical race and gender theory in hip hop and pop, and his book, Posthuman Rap, is available now. He is also co-editing the forthcoming (2018) Oxford Handbook of Hip Hop Music Studies. You can catch him at justindburton.com and on Twitter @j_adams_burton. His favorite rapper is one or two of the Fat Boys.


REWIND! . . .If you liked this post, you may also dig:
 
Teach Me How To Dougie Like A Mediocre White Man–Justin Burton
 
Resounding Silence and Soundless Surveillance, From TMZ Elevator to Beyoncé and Back Again–Priscilla Peña Ovalle
 
Quiet on the Set?: The Artist and the Sound of a Silent Resurgence–April Miller