Haley Pedersen '25

AI Series: Part 2

The Real Consequences of AI Discrimination

In my previous article for this series, I gave a general overview of what’s currently happening with AI, including what AI is and how it fits into the education system. Diving deeper, one of the most pressing questions concerns the potential for bias and racism to be inadvertently encoded into AI algorithms, which can have serious real-world consequences for individuals and communities.

AI systems are only as unbiased as the data used to train them. If that data reflects historical patterns of bias and discrimination, such as a bias against people of color, then the AI will likely reproduce those patterns, a phenomenon known as “algorithmic bias” or “data bias.” When AI systems are trained on incomplete or skewed data, the result is systematic discrimination and a less equitable society. Just as people make poor decisions when acting on incomplete information, AI systems can make the wrong choices if they don’t have all the necessary data.

Imagine you’re applying for a job and you find out that the company is using AI to sort through resumes. Sounds like a fair and unbiased process, right? Not necessarily. If the AI was trained on a limited set of resumes, it could be biased toward certain candidates and unfairly discriminate against others. For example, if the training data consisted mostly of resumes from men, the system may be more likely to favor male candidates and reject qualified female candidates. The company must then recognize that its system is biased and take steps to address it, such as collecting more diverse training data or adjusting the algorithm to account for factors such as gender and race. Because these biases are woven into the data itself, they are incredibly challenging to uncover, and once they work their way into AI systems they can be difficult to eradicate, since they sit at the core of how the AI makes decisions. Confronting these problems requires reducing the biases ingrained in data and algorithms, and AI companies must be transparent about which biases their systems may be susceptible to.
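To make this concrete, here is a small, hypothetical sketch, not drawn from any real hiring system, of how a model trained on skewed historical hiring data can end up scoring a male candidate above an equally qualified female candidate. Every number, feature name, and label below is made up purely for illustration.

```python
# Illustrative sketch only: a toy resume screener trained on synthetic,
# historically skewed data. Feature names and values are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic applicants: a qualification score and a gender flag (1 = male).
qualification = rng.normal(0, 1, n)
is_male = rng.integers(0, 2, n)

# Historical hiring labels that favored men regardless of qualification.
hired = ((qualification + 1.5 * is_male + rng.normal(0, 0.5, n)) > 1).astype(int)

# A model trained on these labels learns the historical preference.
X = np.column_stack([qualification, is_male])
model = LogisticRegression().fit(X, hired)

# Two equally qualified candidates who differ only in the gender flag.
equally_qualified = np.array([[1.0, 1], [1.0, 0]])
print(model.predict_proba(equally_qualified)[:, 1])  # male candidate scores higher
```

Nothing in this toy model was told to prefer men; it simply learned the pattern that was already in its training data, which is exactly how this kind of bias slips in unnoticed.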

The problem of biased AI also extends to fields such as criminal justice, where AI algorithms could decide bail, sentencing, and parole in trials. One assessment at Carlow University found that “Risk assessment tools are driven by algorithms informed by historical crime data, using statistical methods to find patterns and connections. Thus, it will detect patterns associated with crime, but patterns do not look at the root causes of crime. Often, these patterns represent existing issues in the justice system. As a result, data can reflect social inequities, even if variables such as gender, race or sexual orientation are removed.” This issue is especially prevalent in facial recognition systems, a significant tool law enforcement agencies use to identify suspects. The MIT Technology Review even argues that “These facial recognition practices have garnered well-deserved scrutiny for whether they, in fact, improve safety or simply perpetuate existing inequities. Researchers and civil rights advocates, for example, have repeatedly demonstrated that face recognition systems can fail spectacularly, particularly for dark-skinned individuals—even mistaking [28 different] members of Congress [both white and black] for convicted criminals.” AI bias within the justice system is particularly troubling when these systems are used to make decisions that impact people’s lives and freedoms, because it can lead to unfair treatment and discrimination against certain groups in a system built on the idea of fairness. It’s not just about the technology’s accuracy; biased AI can lead to real-world consequences like wrongful arrests or unfair hiring practices. Addressing these issues and ensuring that AI is developed and deployed justly is essential.
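One way researchers surface disparities like the ones described above is by auditing a system’s error rates separately for each demographic group. The sketch below is purely illustrative, with made-up decisions and group labels standing in for real audit data, but it shows the basic comparison such audits make: how often people in each group are wrongly flagged.

```python
# Illustrative sketch only: comparing false positive rates across groups.
# The arrays are hypothetical placeholders for real audit data.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of truly negative cases the system wrongly flags."""
    negatives = (y_true == 0)
    if negatives.sum() == 0:
        return float("nan")
    return float((y_pred[negatives] == 1).mean())

# Hypothetical audit data: ground truth, system decisions, and group labels.
y_true = np.array([0, 0, 1, 0, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1, 1, 1])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = (group == g)
    print(g, false_positive_rate(y_true[mask], y_pred[mask]))
```

If one group’s false positive rate is much higher than another’s, the system is making its most damaging mistakes, such as a wrongful match or a high risk score, disproportionately for that group, even when the overall accuracy looks acceptable.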

While AI bias is a complex issue, addressing it is paramount. We must approach it constructively and positively, focusing on proactive solutions that promote diversity and inclusivity. One approach is to ensure that the data sets used to train AI systems are representative of all groups. There also needs to be more transparency in the development and deployment of AI systems, with clear explanations of how decisions are made and how biases are mitigated. In conjunction with these efforts, we must also ensure that we have “explainable AI,” in which a program can explain why it reached the decisions it did. Explainable AI is crucial to building trust and transparency between humans and AI systems, allowing us to understand how AI algorithms make decisions and to identify potential biases. Finally, it is important to involve diverse stakeholders, including those from traditionally marginalized communities, in developing AI systems. By taking these steps, we can work toward creating AI systems that are fair, unbiased, and truly reflective of the diversity of our society.
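As a small illustration of what “explainable AI” can mean in practice, the sketch below trains a simple model on synthetic data and then prints the weight it assigns to each feature. The feature names and data are hypothetical; the point is that when a model’s reasoning is exposed this way, an outsized reliance on a sensitive attribute like a gender flag becomes visible instead of staying hidden.

```python
# Illustrative sketch only: exposing which features drive a model's decisions.
# All data and feature names are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
features = ["years_experience", "test_score", "gender_flag"]

X = np.column_stack([
    rng.uniform(0, 1, n),    # years_experience (scaled 0-1 for comparability)
    rng.uniform(0, 1, n),    # test_score (scaled 0-1 for comparability)
    rng.integers(0, 2, n),   # gender_flag (a sensitive attribute)
])
# Labels that secretly depend on the sensitive attribute.
y = ((0.3 * X[:, 1] + X[:, 2]) > 0.8).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# The learned weights show what the model actually relies on to decide.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

In this toy example the weight on the gender flag dwarfs the others, which is exactly the kind of red flag an explainability check is meant to reveal before a system is deployed.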
