person

Mutale Nkonde

Mutale Nkonde is a Non-Resident Fellow at the Digital Civil Society Lab.

She is currently the Executive Director of AI for the People, a nonprofit that seeks to use popular culture to educate Black audiences about the social justice implications of the use of AI technologies in public life. She is also a fellow at the Berkman Klein Center for Internet and Society at Harvard Law School. Prior to this she was an AI policy advisor and was part of a team that introduced the Algorithmic Accountability Act, the Deep Fakes Accountability Act, and the No Biometric Barriers Act to the US House of Representatives. Her work sits at the intersection of race and technology, and she is fascinated by how the ideas that uphold systemic racism in the analog world are advanced and reproduced through the design and deployment of advanced technical systems. Nkonde’s work has been featured in the New York Times, Fast Company, and Harvard Business Review, and she speaks widely on these issues.

Research Project

Disinformation Creep: How Breaking News Stories are used to Engage in Online Voter Suppression

Project Lead: Mutale Nkonde

OVERVIEW

This project is designed to test our hypothesis that Black disinformation agents are using breaking news stories to reduce Black voter turnout. These news stories include the racial impacts of the COVID-19 pandemic and discussions around systemic racism raised by Black Lives Matter protests. These protests spread across the nation in response to the murder of George Floyd, a 46-year-old African American man who was lynched by Officer Derek Chauvin in Minneapolis on May 25, 2020. Video of this death was widely uploaded to social media platforms and shared across the world.

This is a two-part research project to uncover how Black voters in the city of Philadelphia are being targeted by disinformation agents operating on the Twitter platform. In the first part of the project, we conduct a computational analysis of hashtags being used to encourage Black voter apathy, from four groups, who will be discussed in an upcoming paper. They are not named here because some of these accounts conduct targeted networked attacks on people who are critical of their methods. These attacks are well-documented, and, in order to protect our project team, we are engaging in strategic silence, an approach used by Jewish groups to de-platform the KKK, whom they targeted because of the KKK’s attacks on African American protesters during the Civil Rights Movement. We refer to these disinformation agents as “micro celebrities,” a phrase coined by Becca Lewis, a PhD student in the Communications Department at Stanford. “Micro celebrities” – more commonly known as “influencers” in the mainstream press – are social media users who are largely unknown to general audiences, yet extremely influential within niche social media communities.

Our project is based in the city of Philadelphia because it was the subject of an 1899 ethnographic study of Black life in the city conducted by W.E.B. Du Bois. During his field work, Du Bois lived in the city’s Seventh Ward and examined every aspect of Black life. Du Bois’ discovery that health disparities in tuberculosis infection and death rates were not due to race but to the social conditions in which Black Philadelphians lived is what brought us back to this locality, especially given its similarity to contemporary findings about the disparate impact of COVID-19 on Black Americans. He also describes how Black political power in the city was co-opted by the white political class. This mirrors the way disinformation agents are attempting to manipulate Black voter behavior in the 2020 general election. The major difference now is that these disinformation agents are also Black Americans. This project seeks to continue this ethnographic study of Philadelphia’s Black populations by adding digital analysis to this canon. Our project focuses on the unique position of Black, domestic, micro celebrities to suppress the Black vote using techniques that mirror not only targeted Black voter manipulation in Philadelphia, but Soviet disinformation playbooks as well.

APPROACH

This project is a collaboration with Dr. Maria Rodriguez, Assistant Professor at the State University of New York’s Buffalo campus. Her lab will use the REST, Streaming, and Premium Archive Twitter APIs to examine the content of the hashtags used within messages sent by the aforementioned micro celebrities. The aim of this timeline and hashtag scraping will be to infer themes within tweet content, as well as to visualize retweet patterns prior to the intervention in order to infer how COVID-19 and political disinformation spread before attempted mitigation. Then, our team will develop tweet inclusion and exclusion criteria in accordance with community ethical standards. We are not pursuing an IRB for this project because we are engaging in community-based journalism. However, we will use IRB standards as a guide to ensure we meet ethical standards when selecting tweets based on geolocation, use of one of the relevant hashtags, and/or following of one of the micro celebrities described above. Furthermore, we will not include any identifiable information in our final report or final data set, to ensure the privacy and confidentiality of unsuspecting Twitter users.
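The theme-inference step described above can be illustrated with a minimal sketch. The hashtags below (#DontVote, #Election2020, #COVID19) and the tweet texts are invented placeholders — the actual hashtags and accounts under study are deliberately withheld — and this standalone counting routine stands in for the full pipeline, which would pull real tweets through the Twitter APIs named above:

```python
from collections import Counter
from itertools import combinations
import re

def extract_hashtags(text):
    """Return the lowercase hashtags found in a tweet's text."""
    return [tag.lower() for tag in re.findall(r"#(\w+)", text)]

def hashtag_stats(tweets):
    """Count individual hashtags and within-tweet hashtag co-occurrence."""
    counts = Counter()
    pairs = Counter()
    for text in tweets:
        # Deduplicate and sort so each pair is counted in a canonical order.
        tags = sorted(set(extract_hashtags(text)))
        counts.update(tags)
        pairs.update(combinations(tags, 2))
    return counts, pairs

# Illustrative, invented tweet texts -- not real data from the study.
sample = [
    "Why bother? #DontVote #Election2020",
    "Stay home, stay safe #COVID19 #DontVote",
    "Register today! #Election2020 #Vote",
]
counts, pairs = hashtag_stats(sample)
print(counts.most_common(3))
print(pairs.most_common(1))
```

Frequency and co-occurrence counts like these are a common first pass before heavier theme modeling, since hashtags that repeatedly appear together hint at coordinated framing.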

PROCESS

Once we have a sound understanding of how these disinformation agents are entering into online discussions and creeping in information that suggests Black voters should not vote in 2020, we will design an intervention. We are not discussing the design of this intervention publicly. To support this work, our team secured a grant from the Independent Community Fund, supported by the Knight and Independent Public Media Foundations.

Learn more here.

Fellowship Impact

At the beginning of 2020, before the pandemic, Mutale Nkonde thought the organization she envisioned, AI for the People, would be a policy shop. As public, philanthropic, and media attention grew around both the movement for Black Lives and our digitally dependent, ever-surveillant world during pandemic conditions, Nkonde realized that the organization would best build not only on her policy expertise but on her experience as a former television journalist. Communications and narrative work precedes and informs policy, and that’s where AI for the People has been making a difference.

At a time when the racialized harms of technology are finally getting center-stage attention, Nkonde says people not only began to understand what it is that she and her colleagues in the fellowship do, but they became genuinely interested in it. During her fellowship, Nkonde’s research on the racialized nature of election disinformation and her work to ban facial recognition technology have garnered attention from companies, lawmakers, and the New York Emmy Awards. She also built strong relationships across academia, engaging with colleagues at Stanford Law, Stanford Engineering, Brown University, the Data Science Institute at Columbia University, and the Munk School of Global Affairs and Public Policy at the University of Toronto.

Products

Automated Anti-Blackness: Facial Recognition in Brownsville

Featured in the Harvard Kennedy School African American Policy Journal, p. 30

By Mutale Nkonde, Non-Resident Fellow, Digital Civil Society Lab, Stanford PACS

Read

New Positions & Appointments

Speeches & Presentations

Protecting the Black Vote During COVID-19

Watch

The Technology and Structural Inequality Series: Bias and Discrimination in AI

Watch

Race + Data Science Lecture Series: Mutale Nkonde, AI for the People

Learn more

How to get more truth from social media

Watch

How Big Tech and Racial Disparities Intersect

Listen