Mimicked Voices and Nonhuman Listening: AI Deepfakes, Speech, and Sonic Manipulation in the Digital War on Ukraine


The essays collected in this series (link to the Introduction) trace how nonhuman listening operates through sound, speech, and platformed media across distinct but interconnected domains. Across these accounts, listening no longer secures meaning or relation; it becomes a site of contestation, where sound is mobilized, processed, and weaponized within systems that privilege circulation, recognition, and response over truth. In this contribution, Olga Zaitseva-Herz examines how nonhuman listening operates under conditions of war, where AI-generated voices and deepfakes destabilize the very grounds of auditory trust. Through the case of Ukraine, she shows how platforms and political actors alike exploit algorithmic listening systems to amplify affect, circulate disinformation, and transform voice into a tool of psychological warfare. Listening, in this context, becomes not a means of understanding but a terrain of uncertainty. –Guest Editor Kathryn Huether
—
Russia’s full-scale invasion of Ukraine has unfolded as the most digitally mediated war to date, shaped not only by what circulates online but by how content is heard, interpreted, and amplified. Here, listening is not limited to human hearing: it also includes algorithmic systems that detect, rank, and amplify content, as well as political actors and online publics who interpret and recirculate it. Social media platforms—Telegram, Instagram, TikTok, Facebook—have become sites of psychological warfare where AI-generated audio, video, text, and image-based content are crafted to manipulate perception and provoke rapid emotional responses, often through algorithmic systems attuned to virality and affect. Ukrainian political authorities regularly caution users that everything one reads, hears, or sees could be a psychological weapon. This warning is not merely rhetorical. Content is often designed to produce outrage, shock, and despair—emotions that travel quickly across platforms and influence public mood.
AI is used to create fake news videos, synthetic voices, and deepfake conversations, complicating how authenticity is heard and assessed. Some recordings circulating on platforms such as Telegram, Instagram, and Facebook simulate “leaked” phone calls revealing political dissent or strategic plans. At the same time, because any voice can now be convincingly synthesized, a speaker can plausibly claim that a genuine recording of their voice is AI-generated. A widely circulated case involved Russian music producer Iosif Prigozhin, whose alleged call criticizing the Kremlin provoked significant backlash. Soon after, he claimed the recording was an AI forgery – a statement whose truth remains unclear, but which strategically exploits growing public awareness of deepfakes as a means of discrediting or distancing oneself from damaging material. Deepfakes thus do not merely deceive; they destabilize the very conditions of listening and trust, turning listening into a site of strategic uncertainty in which any voice can be disavowed as synthetic. Against this backdrop, music and voice emerge as especially powerful media for manipulation, parody, retaliation, and symbolic struggle.

AI Songs as a Tool of Revenge
AI generative tools are also used for irony or parody, as in the viral remake “Samotni Moskali” [Lonely Muscovites], which mocks the Ukrainian pop star Ani Lorak, who moved to Russia. On November 13th, 2023, Ukrainian journalist and politician Anton Gerashchenko’s Telegram channel posted a video remake of Ani Lorak’s old song “Poludneva Speka” [Midday Heat], renamed “Samotni Moskali.” The video quickly went viral on social media. Her big hit from the ’00s was remade into strongly pro-Ukrainian content, featuring clips from current frontlines to illustrate new lyrics generated by an AI voice engineered to closely mimic Lorak’s vocal timbre and affect. The parody relies on listeners’ recognition of her voice and affective style, while the imitation introduces a sharp shift in content between the original and synthetic lyrics.
This social media burst responded to Ani Lorak’s claimed political neutrality amid Russia’s full-scale war against Ukraine, despite clear signals of her support for Russia. The remake, and the many satirical memes and AI-generated songs tied to her stage persona that followed, served both as revenge and as a public breakup with her Ukrainian fan base, which felt betrayed by her choices. The satire also carried a barbed edge: under current Russian politics, Lorak could face trouble if the government took the ‘support’ for the Ukrainian army promoted in her name seriously. The group behind the campaign went further, creating a website called the “Ani Lorak Foundation,” dedicated entirely to fundraisers for the Ukrainian army and presented as Lorak’s own project showcasing her support of Ukrainian battalions. Some military drones deployed by the Ukrainian side even ended up bearing stickers with the name of the “Ani Lorak Foundation.” This case demonstrates how AI tools have become instruments of public satire, sabotage, and protest in the context of the current full-scale war.
AI Songs as a Weapon
During the full-scale invasion, Russia has been using AI-generated music as a weapon for propaganda and disinformation. In 2023, multiple songs in Ukrainian were created to disrupt Ukraine’s military mobilization efforts and went viral. One of these, the song “Mamo, Ia Ukhyliant” [Mother, I am a Draft Dodger], became particularly popular in a multitude of variations. Their circulation shows how platforms “listen” to wartime content through metrics of repetition, provocation, and affective intensity, amplifying messages not because they are true, but because they are likely to generate reaction and spread. These songs were algorithmically promoted on TikTok and successfully sparked a viral challenge aimed at undermining Ukraine’s mobilization in 2024 by encouraging Ukrainian men to evade the draft, flee, and party abroad instead. In response, Ukrainian intelligence released an official statement identifying these songs as products of a Russian disinformation campaign.
This example shows how AI-generated songs are actively used as powerful tools of war, spreading political messages and influencing people’s political choices. The fact that all these songs about draft evasion were released in Ukrainian highlights the goal of targeting Ukrainian men specifically, since Russian men usually don’t speak Ukrainian and therefore wouldn’t be affected by the content. Furthermore, the presence of a large number of these “draft dodger” songs at the same time created the impression of widespread societal acceptance through repetition and algorithmic amplification. In this way, repetition itself became a signal of apparent legitimacy: the more frequently such content circulated, the more easily platforms and audiences could register it as evidence of broader consensus around draft evasion among Ukrainians.

AI Pictures on Facebook Mimicking Sound and Sonic Affect
Visual disinformation follows similar viral patterns. There has been a surge of AI-generated images with war-related content that mimic sound to intensify emotional impact: a screaming child amid the rubble, or a crying soldier in a Ukrainian uniform, paired with a patriotic, pro-Ukrainian message that encourages interaction such as a like or comment. Even without actual sound, such images solicit a kind of affective listening in which suffering is not literally heard but imagined, projected, and emotionally registered through visual cues. Although this truth-blurring pattern attracted significant attention among many Ukrainians, ironic counter-memes soon emerged, mocking its crude approach.
According to warnings from the Ukrainian online security agency, these accounts aim to interact with pro-Ukrainian users, ultimately adding them as friends or followers. Once they have built a large enough audience, the accounts shift to sharing pro-Russian content. The strategy relies on gathering an audience that is specifically pro-Ukrainian, identified by its engagement with images of crying soldiers or the suffering of Ukrainians at the front. In this sense, the filtering process functions as a form of nonhuman listening at the level of audience formation: platforms and account managers learn which publics respond to particular emotional cues, cultivate those publics through repeated engagement, and later redirect them toward different ideological content. Through this filtering mechanism, an initially pro-Ukrainian audience is gathered, profiled, and ideologically redirected, alienating loyal followers while pulling political opinion in a more pro-Russian direction.
Pro-Russian AI Songs in Germany to Weaken Support for Ukraine
In Germany, AI-generated songs are being utilized as propaganda tools to promote pro-Russian sentiment and anti-Ukrainian views. The right-wing party AfD has embraced AI songs as a potent tool in this regard. Multiple, mostly anonymous YouTube accounts have emerged spreading right-wing ideas, with these songs not only addressing German political issues but also openly supporting Russia. For instance, one song titled “Meine Stimme Habt ihr nicht” [You don’t get my vote] features an AI-created avatar of a tall, strong woman holding German and Russian flags. A version of the same song was also released in Russian. The lyrics criticize Germany’s political course, including military aid to Ukraine, and express a desire to be friends with Russia. The song’s circulation in both German and Russian suggests that listening is being calibrated for different national and linguistic publics, allowing similar political messages to be heard through distinct affective and ideological frames shaped by language, audience, and context.
Contemporary propaganda is increasingly shaped not just by human intent but by rapidly developing nonhuman listening systems—both in production and amplification. Algorithmic listening and perception are exploited to privilege what provokes, not what is true, complicating efforts to regulate digital hate, emotion, and influence. In this context, listening becomes not only a human practice of interpretation, but also a technical system of detection, ranking, and amplification—and, crucially, a site of failure where truth, trust, and perception can no longer be reliably aligned.
—
Featured Image: Photo by Stanislav Vlasov on Unsplash.
—
Olga Zaitseva-Herz is an ethnomusicologist working at the intersection of Ukrainian music, war, displacement, and digital culture. She is currently a postdoctoral researcher at the Kule Centre for Ukrainian and Canadian Folklore at the University of Alberta and a guest scholar at Think Space Ukraine at the University of Regensburg. Her research examines how song operates as a medium of political mediation, cultural diplomacy, and historical memory, with a particular focus on popular music and AI-generated sound during Russia’s full-scale invasion of Ukraine. Combining perspectives from ethnomusicology, sound studies, and media analysis, her work investigates how music shapes narratives of resistance, belonging, and global visibility, and how sonic practices illuminate the broader entanglements of culture, technology, and power.
—

REWIND! . . .If you liked this post, you may also dig:
Hate & Non-Human Listening, an Introduction–Kathryn Huether
Your Voice is (Not) Your Passport—Michelle Pfeifer
Mapping the Music in Ukraine’s Resistance to the 2022 Russian Invasion—Merje Laiapea
SO! Amplifies: An Interactive Map of Music as Ukrainian Resistance to the 2022 Russian Invasion—Merje Laiapea
“This AI will heat up any club”: Reggaetón and the Rise of the Cyborg Genre


This series listens to the political, gendered, queer(ed), racial engagements and class entanglements involved in proclaiming out loud: La-TIN-x. ChI-ca-NA. La-TI-ne. ChI-ca-n-@. Xi-can-x. Funded by the Andrew W. Mellon Foundation as part of the Crossing Latinidades Humanities Research Initiative, the Latinx Sound Cultures Studies Working Group critically considers the role of sound and listening in our formation as political subjects. Through both a comparative and cross-regional lens, we invite Latinx Sound Scholars to join us as we dialogue about our place within the larger fields of Chicanx/Latinx Studies and Sound Studies. We are delighted to publish our initial musings with Sounding Out!, a forum that has long prioritized sound from a queered, racial, working-class and “always-from-below” epistemological standpoint. —Ed. Dolores Inés Casillas
—
Busco la colaboración universal donde todos los Benitos puedan llegar a ser Bad Bunny. –FlowGPT, TikTok
In November of 2023, the reggaetón song “DEMO #5: NostalgIA” went viral on various digital platforms, particularly TikTok. The track, posted by user FlowGPT, makes use of artificial intelligence (Inteligencia Artificial) to imitate the voices of Justin Bieber, Bad Bunny, and Daddy Yankee. The song begins with a melody reminiscent of Justin Bieber’s 2015 pop hit “Sorry.” Soon, reggaetón’s characteristic boom-ch-boom-chick drumbeat drops, and the voices of the three artists come together to form a carefully crafted, unprecedented crossover.
Bad Bunny’s catchy verse “sal que te paso a buscar” quickly inundated TikTok feeds as users began to post videos of themselves dancing or lip-syncing to the song. The song was not only catchy but also successfully replicated these artists: their voices, their style, their vibe. Soon, the song exited the bounds of the digital and began to be played in clubs across Latin America, marking a thought-provoking novelty in the usual repertoire of reggaetón hits. In line with current anxieties around generative AI, the song quickly generated public controversy. Only a few weeks after its release, “NostalgIA” was taken down from most digital platforms.

The mind behind FlowGPT is Chilean producer Maury Senpai, who in a series of TikTok responses explained his mission of creative democratization in a genre that has historically excluded certain creators. In one video, FlowGPT encourages listeners to contemplate the potential of this “algorithm” to allow songs by lesser-known artists and producers to reach the ears of many listeners by replicating the voices of well-known singers. Maury Senpai’s production process involved lyric writing, extensive study of the singers’ vocals, and the Kits.ai tool.
Therefore, contrary to FlowGPT’s robotic brand, “NostalgIA” was the product of careful collaboration between human and machine – or what Ross Cole calls “cyborg creativity.” This hybridization enmeshes the artist and the listener, allowing diverse creators to realize their creative desires. Cyborg creativity, of course, is not an inherent result of GenAI’s advent. Instead, I argue that reggaetón has long been embedded in a tradition of musical imitation and a deep reliance on technological tools, which in turn challenges popular concerns about machine-human artistic collaboration.
Many creators worry that GenAI will co-opt a practice that has long been regarded as strictly human. GenAI’s reliance on pre-existing data threatens to hide the labor of the artists who contributed to the model’s output, to which we may add the inherent biases present in training data. Pasquinelli and Joler propose that the question “Can AI be creative?” be reformulated as “Is machine learning able to create works that are not imitations of the past?” Machine learning models detect patterns and styles in training data and then generate “random improvisation” within this data. Therefore, GenAI tools are not autonomous creative actors but often operate with generous human intervention that trains, monitors, and disseminates the products of these models.
The inability to define GenAI tools as inherently creative on their own does not mean they can’t be valuable for artists seeking to experiment in their work. Hearkening back to Donna Haraway’s concept of the cyborg, Ross Cole argues that
Such [AI] music is in fact a species of hybrid creativity predicated on the enmeshing of people and computers (…) We might, then, begin to see AI not as a threat to subjective expression, but another facet of music’s inherent sociality.
Many authors agree that unoriginal content—works that are essentially reshufflings of existing material—cannot be considered legitimate art. However, an examination of the history of the reggaetón genre invites us to question this idea. In “From Música Negra to Reggaetón Latino,” Wayne Marshall explains how the genre emerged from simultaneous and mutually-reinforcing processes in Panamá, Puerto Rico, and New York, where artists brought together elements of dancehall, reggae, and American hip hop. Towards the turn of the millennium, the genre’s incorporation of diverse musical elements and the availability of digital tools for production favored its commercialization across Latin America and the United States.
The imitation of previous artists has been embedded in the fabric of reggaetón from a very early stage. Some of the earliest examples of reggaetón were in fact Spanish lyrics placed over Jamaican dancehall riddims— instrumental tracks with characteristic melodies. When Spanish-speaking artists began to draw from dancehall, they used these same riddims in their songs, and continue to do so today. A notable example of this pattern is the Bam Bam riddim, which is famously used in the song “Murder She Wrote” by Chaka Demus & Pliers (1992).
This riddim made its way into several reggaetón hits, such as “El Taxi” by Osmani García, Pitbull, and Sensato (2015).
We may also observe reggaetón’s tradition of imitation in frequent references to “old school” artists by the “new school,” through beat sampling, remixes, and features. We see this in Karol G’s recent hit “GATÚBELA,” where she collaborates with Maldy, former member of the iconic Plan B duo.
Reggaetón’s deeply rooted tradition of “tribute-paying” also ties into its differentiation from other genres. As the genre grew in commercial value, perhaps to avoid copyright issues, producers cut down on their direct references to dancehall and instead favored synthesized backings. Marshall quotes DJ El Niño in saying that around the mid-90s, people began to use the term reggaetón to refer to “original beats” that did not solely rely on riddims but also employed synthesizer and sequencer software. In particular, Fruity Loops, a program initially launched in 1997 with “preset” sounds and effects, provided producers with a wider set of possibilities for sonic innovation in the genre.
The influence of technology on music does not stop at its production but also seeps into its socialization. Today, listeners increasingly engage with music through AI-generated content. Ironically, following the release of Bad Bunny’s latest album, listeners expressed their discontent through AI-generated memes of his voice. One of the most viral ones consisted of Bad Bunny’s voice singing “en el McDonald’s no venden donas.”
The clip, originally sung by user Don Pollo, was modified using AI to sound like Bad Bunny, and then combined with reggaetón beats and the Bam Bam riddim. Many users referred to this sound as a representation of the light-heartedness they saw lacking in the artist’s new album. While Un Verano Sin Ti (2022) stood out as an upbeat summer album that addressed social issues such as U.S. imperialism and machismo, Nadie Sabe lo que va a Pasar Mañana (2023) consisted mostly of tiraderas or disses against other artists and left some listeners disappointed. In a 2018 post for SO!, Michael S. O’Brien speaks of this sonic meme phenomenon, where a sound and its repetition come to encapsulate collective discontent.
Another notable case of AI-generated covers targets recent phenomenon Young Miko. As one of the first openly queer artists to break into the urban Latin mainstream, Young Miko filled a long-standing gap in the genre: the need for lyrics sung by a woman to another woman. Her distinctive voice has also been used in viral AI covers of songs such as “La Jeepeta” and “LALA,” originally sung by male artists. To map Young Miko’s voice over reggaetón songs that advance hypermasculinity – through either a love for Jeeps or not-so-subtle oral sex – represents a creative reclamation of desire where the agent is no longer a man, but a woman. Jay Jolles writes of TikTok’s modifications to music production, namely the prioritization of viral success. The case of AI-generated reggaetón covers demonstrates how catchy reinterpretations of an artist’s work can offer listeners a chance to influence the music they enjoy, allowing them to shape it to their own tastes.
Examining the history of musical imitation and digital innovation in reggaetón expands the bounds of artistry as defined by GenAI theorists. Within the conventions of the TikTok platform, listeners have found a way to participate in the artistry of imitation that has long defined the genre. The case of FlowGPT, along with the overwhelmingly positive reception of “NostalgIA,” points toward a future where the boundaries between listener and artist are blurred, and where technology and digital spaces allow an enhanced cyborg creativity to take place.
—
Featured Image: Screenshot from “en el McDonald’s no venden donas.” Taken by SO!
—
Laurisa Sastoque is a Colombian scholar of digital humanities, history, and storytelling. She works as a Digital Preservation Training Officer at the University of Southampton, where she collaborates with the Digital Humanities Team to promote best practices in digital preservation across Galleries/Gardens, Libraries, Archives, and Museums (GLAM), and other sectors. She completed an MPhil in Digital Humanities from the University of Cambridge as a Gates Cambridge scholar. She holds a B.A. in History, Creative Writing, and Data Science (Minor) from Northwestern University.
—

REWIND!…If you liked this post, you may also dig:
Boom! Boom! Boom!: Banda, Dissident Vibrations, and Sonic Gentrification in Mazatlán—Kristie Valdez-Guillen
Listening to MAGA Politics within US/Mexico’s Lucha Libre –Esther Díaz Martín and Rebeca Rivas
Ronca Realness: Voices that Sound the Sucia Body—Cloe Gentile Reyes
Echoes in Transit: Loudly Waiting at the Paso del Norte Border Region—José Manuel Flores & Dolores Inés Casillas
Experiments in Agent-based Sonic Composition—Andreas Pape