How Generative AI Can Help Fight Bigotry | Opinion

As generative AI has come sweeping into America's consciousness this year, all sorts of concerns have arisen as well. One of the most prominent involves bias. These tools, designed by people and trained on massive historical datasets, can reinforce and even automate bigotry, making racist or sexist predictions (on everything from crime to employment to loans) or slanting the information they provide.

But generative AI technology can also be used to do just the opposite: to help fight against bigotry. We know this because we're working with businesses and organizations to do just that. In fact, we're bringing our efforts to an international anti-discrimination initiative.

Many businesses, research institutions, social campaigns, and other organizations say they're committed to weeding out all forms of bias. But to do this, they need a clear understanding of the challenges faced by members of marginalized groups. They need to see how people's real lives and experiences differ from stereotypes and false assumptions. They need to learn the structural and recurring problems that stand in the way of equal opportunity. And they need to hear from large numbers of people in these groups—people who may not have felt comfortable discussing such sensitive topics in public forums like social media platforms.

Generative AI offers an unprecedented opportunity to provide all this. By supplying a generative AI tool with additional, specific data, organizations can create studies and surveys with open-ended questions that are most likely to elicit thoughtful responses. These organizations can also find large numbers of people across all sorts of demographic groups and deliver these questions to them quickly. They can also verify that real people are filling out these surveys and reporting their own real experiences—avoiding the bots that often flood surveys with false, even biased data.

Large language models (LLMs)—used by people with expertise who provide these models with crucial context—can then read through responses and discover insights. They can highlight the common experiences, descriptions, themes, and pain points that these responses reveal. Perhaps most importantly, AI can understand people's natural language, recognizing the different meanings of words and knowing when different words describe the same thing. Empowered with these tools, today's machines can capture the clearest sense of what all these respondents are saying.

For example, we worked with a video game company that has been trying to combat bigotry. We helped the organization reach out to large numbers of women, BIPOC, and members of the LGBTQ community to learn about their experiences, and we used generative AI to analyze the responses. We found that more than 70 percent of people in these groups had witnessed toxic behavior while gaming or interacting within gaming communities, and more than 1 in 4 had stopped playing certain games as a result.

Because the technology can understand open-ended responses, we found unique insights for each group. Many BIPOC respondents described being called racial slurs and derogatory names. Members of the LGBTQ community were commonly targeted for their gender, sexual orientation, or race as well. Many women said they were fetishized, insulted, and criticized for playing certain games.

Learning How People Act Against Bigotry

There's another side to this effort as well. The many organizations focused on ending various forms of bigotry need information. They need to know how people in different demographic groups react when they hear or see an act of discrimination. In order to encourage people to take new, stronger actions, activists need to know what holds people back.


We're working with the Erase Indifference Challenge, started by the Auschwitz Pledge Foundation, to provide this kind of insight. Using generative AI, we performed a benchmark study to explore what people in different countries and of different generations do to stand up to antisemitism, racism, misogyny, and discrimination targeting LGBTQ people, migrants, and refugees. We analyzed the responses of 1,000 people, focusing first on millennials and Gen Z. The findings revealed a clear gap between intention and action, helping the campaign determine where to focus global efforts.

The discoveries made with the help of generative AI can have an exponential effect. These findings will become part of the data that trains AI tools, improving them. So rather than allowing bias in historical datasets to metastasize, people can use AI to collect crucial information and make these tools fairer—enabling people to help ensure that the arc of AI applications bends toward justice.

Adam Bai is chief strategy officer of Glimpse.

Neil Dixit is CEO of Glimpse.

The views expressed in this article are the writers' own.
