CFP: “Siri, can Black vocalics get some 💖??: Racial Bias in Speech AI,” Due 1 Nov 2021
In summer 2021, sound artist, engineer, musician, and educator Johann Diedrick convened a panel at the intersection of racial bias, listening, and AI technology at Pioneer Works in Brooklyn, NY. Diedrick, a 2021 Mozilla Creative Media award recipient and creator of works such as Dark Matters (most recently at Squeaky Wheel in Buffalo, NY), is currently tracing the origins of racial bias in voice interface systems. Dark Matters, according to Squeaky Wheel, “exposes the absence of Black speech in the datasets used to train voice interface systems in consumer artificial intelligence products such as Alexa and Siri. Utilizing 3D modeling, sound, and storytelling, the project challenges our communities to grapple with racism and inequity through speech and the spoken word, and how AI systems underserve Black communities.”
The panel on racially biased voice interface systems featured Diedrick in conversation with artist James Allistair Sprang, linguist Kọ́lá Túbọ̀sún, and SO! editor and scholar Jennifer Lynn Stoever, and was one of the ONLY live events of Summer 2021 (at least it felt that way!). We at SO! wanted to share this work with our audience, time-shifting it into a brand-new forum coming at you in January and February 2022, but with a twist!
Would YOU like to join this panel?
Sounding Out! is sending out an open call for your art, writing, research, and/or an exciting convergence of all three to engage with the critical questions that Diedrick set out for the Pioneer Works panelists. This expanded forum, titled “Siri, can Black vocalics get some 💖??: Racial Bias in Speech AI” in a nod to Ikechúkwú Onyewueni’s March 2021 Tweet heard around the world, will be published in conversation with the OG panelists!
- What’s the history of the relationship between racial bias and Black speech, locally + globally? What is the relationship between these pre-existing histories and the development of speech artificial intelligence?
- How are these racially biased systems already in our daily lives? How do mis-recognition and mis-understanding impact Black diasporic speech communities?
- Are we training AI’s speech recognition or is AI attempting to re-train our speech?
- How are these racially biased systems related to other voice technologies we might already be familiar with? How do these systems function?
- How might we create more equitable futures for all speech communities with and via AI? How might we decolonize AI and the tech industry itself?
- How would a liberatory artificial intelligence act? What are the networks, communities, and infrastructures we need to build our tomorrows?
- Specifically for People of Color: What would we want to be made possible? How could we redesign these systems for us, by us? What would we hope for them to accomplish? How would we want to use these technologies for our own needs, wants, and desires?
Remember, your writing can be personal, critical, historical, academic-y, or an iconoclastic mix of all of those. We welcome research-based posts and posts examining aural experiences through a first-person narrative style; many of our posts mix both. We also welcome ideas for podcasts as well as artistic posts that use the blog format to create an original audio-visual experience.
Please submit your 200-word proposal to firstname.lastname@example.org. Please don’t forget to read our mission statement and submission guidelines (and peruse some posts) before sending us your stuff. Johann Diedrick will be curating posts for the conversation, along with SO! editor Jennifer Stoever.
Selections Announced and Contacted: October 26th, 2021
First Drafts Due: December 15th, 2021
Second Drafts Due: February 7th, 2022
Layout and Publication Begins

Note: even if your work isn’t selected for this specific forum, it doesn’t necessarily mean your piece won’t be published by SO! at a later point in time. Just. Send. Us. Your. Stuff!