Big Tech S3E9: Mutale Nkonde On How Biased Tech Design and Racial Disparity Intersect

March 18, 2021


Listen to this week’s new episode of Big Tech, where Mutale Nkonde, founder of AI for the People, discusses how our technology carries inherent biases that align with those of its creators and disadvantage under-represented groups.


In this episode of Big Tech, Taylor Owen speaks with Mutale Nkonde, founder of AI for the People (AFP). She shares her experiences of discrimination and bias while working in journalism and at tech companies in Silicon Valley. Moving into government, academia and activism, Nkonde has brought to light the ways in which biases baked into technology’s design disproportionately affect racialized communities. For instance, during the 2020 US presidential campaign, her communications team detected and countered groups that were weaponizing misinformation in social media groups to target Black voters with the specific message not to vote. In her role with AFP, she works to produce content that empowers people to combat racial bias in tech. One example is the “ban the scan” advocacy campaign with Amnesty International, which seeks to ban the use of facial recognition technology by government agencies.

In their conversation, Mutale and Taylor discuss the many ways in which technology reflects and amplifies bias. Many of the issues begin when software tools are designed by development teams that lack diversity or actively practise forms of institutional racism, excluding or discouraging decision-making participation by members of minority ethnic groups. Another problem is the data sets used to train these systems; as Nkonde explains, “Here in the United States, if you’re a white person, 70 percent of white people don’t actually know a Black person. So, if I were to ask one of those people to bring me a hundred pictures from their social media, it’s going to be a bunch of white people.” When algorithms built on this biased data make it into products — for use in, say, law enforcement, health care and financial services — they begin to have serious impacts on people’s lives, most severely when law enforcement misidentifies a suspect. Among the cases coming to light, “in New Jersey, Nijeer Parks was not only misidentified by a facial recognition system, arrested, but could prove that he was 30 miles away at the time,” Nkonde recounts. “But, because of poverty, [Parks] ended up spending 10 days in jail, because he couldn’t make bail. And that story really shows how facial recognition kind of reinforces other elements of racialized violence by kind of doubling up these systems.” That is why Nkonde is working to ban the use of facial recognition technology, as well as fighting for other legislation in the United States that will go beyond protecting individual rights to improving core systems for the good of all.

Previous

Big Tech S3E10: Nicole Perlroth On the Cyber Weapons Arms Race

Next

Big Tech S3E8: Rod Sims On Australia’s New Law to Rebalance Media Power