About this Event
We use AI every day, whether we realize it or not. From Google searches and Instagram feeds to Siri and job applications, AI shapes much of what we see and experience. But here’s the thing: AI isn’t neutral. It learns from data, and that data often reflects real-world biases. Because most AI models, especially large language models (LLMs), are trained on vast amounts of text scraped from the internet, they absorb patterns that include racism, sexism, and other harmful stereotypes. The tools we think of as "smart" can repeat, or even amplify, the very problems we already face in society.

This especially impacts people of color. Facial recognition technology misidentifies Black and Brown faces more often than white ones. Hiring algorithms can filter out applicants with names that sound "ethnic." Even LLMs can reinforce harmful narratives when answering questions or generating text.

But it’s not all bad news. The goal isn’t to get rid of AI; it’s to make it better. We can start by building more diverse and representative training data, auditing AI systems regularly, and making sure people from different backgrounds are part of the development and decision-making process. It’s also important to push for transparency: knowing how and why an AI system makes a decision is what lets us hold it accountable.

At the end of the day, AI should work for everyone, not just a few. That starts with calling out bias and working toward fairness in tech.
Presented by Jagrit Dhingra, Peer Educator, Unity Center.