Yaima Valdivia

AI Bias


Image generated with DALL-E by OpenAI

Artificial Intelligence systems are essential to the forward march of technology, yet they are not without imperfections. One of the most critical and pervasive issues is AI bias, which occurs when an AI system reflects or amplifies biases in its training data or algorithm design, leading to outcomes that can be discriminatory and perpetuate societal inequalities.


AI systems learn from vast datasets, and if those datasets contain biased historical data, the AI is likely to replicate the bias. Racial bias, for example, can emerge when facial recognition technologies are trained predominantly on datasets lacking diversity. This was evident in the case of Amazon's Rekognition system. In 2018, the American Civil Liberties Union (ACLU) tested Amazon's facial recognition technology and found that it incorrectly matched 28 members of the United States Congress with individuals in a database of mugshots. The mismatches disproportionately involved people of color, raising serious concerns about the reliability and fairness of facial recognition, especially in the context of law enforcement and surveillance. The incident was a stark demonstration of how racial bias in AI systems can arise from non-representative data: when a facial recognition model is trained on datasets dominated by faces from certain racial groups, it tends to be less accurate at identifying individuals from underrepresented groups. That inaccuracy is not merely a technical shortcoming; it raises concerns about discrimination and about how such technology is used and who bears the consequences of its errors.


Gender bias is another concern, as seen with Amazon's AI recruiting tool, which favored male candidates. In 2018, it emerged that the system did not evaluate candidates in a gender-neutral way. Amazon had trained the model on resumes submitted to the company over the previous ten years, and because the tech industry has historically been dominated by men, most of those resumes came from male candidates; the machine learning model inadvertently learned to prefer male candidates over female ones. Amazon found that the system penalized resumes containing the word "women's," as in "women's chess club captain," and downgraded graduates of women's colleges. Once the issue materialized, Amazon scrapped the project because it could not ensure the system's neutrality. The case became a textbook example of how machine learning algorithms can perpetuate and amplify societal biases unless they are carefully designed and continuously monitored, and it highlighted the importance of auditing AI systems for bias and ensuring that training data is as representative as possible.


In criminal justice, the COMPAS system, designed to predict recidivism, was also found to be racially biased. According to a 2016 ProPublica investigation, Black defendants were more likely than white defendants to be incorrectly labeled as high risk. Such biases in AI systems can perpetuate racial disparities in the criminal justice system, affecting sentencing and bail decisions.


Mitigating AI bias involves a combination of algorithmic strategies and robust ethical frameworks. Debiasing the data is a primary step: balancing datasets, reweighing instances, or generating synthetic data can all produce more equitable training sets. Algorithmically, fairness constraints or regularization terms can be applied during training, penalizing the model for biased predictions and steering the learning process toward more equitable outcomes. Another approach is adversarial debiasing, in which the primary model is trained alongside an adversary model that tries to infer a sensitive attribute (such as race or gender) from the primary model's predictions. The primary model's objective includes making the adversary's task difficult, which encourages it to learn representations that do not encode the sensitive attribute.
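To make the reweighing idea concrete, here is a minimal sketch in the style of Kamiran and Calders' reweighing, written in Python with scikit-learn. The synthetic dataset, the group-correlated `proxy` feature, and the logistic-regression model are all illustrative assumptions, not any of the systems discussed in this post. The weights make group membership and the historical label statistically independent in the training set, which reduces the model's incentive to rely on group-correlated features.

```python
# Minimal sketch of instance reweighing on synthetic, deliberately skewed data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)              # sensitive attribute: 0 = majority, 1 = minority
skill = rng.normal(size=n)                      # group-independent signal
proxy = group + rng.normal(scale=0.3, size=n)   # feature correlated with group (e.g., a zip-code proxy)
X = np.column_stack([skill, proxy])
# Historical labels are skewed against group 1 to mimic biased training data.
y = (skill - 1.0 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

def reweigh(group, y):
    """Weight each (group, label) cell so group and label become independent."""
    weights = np.empty_like(y, dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            mask = (group == g) & (y == label)
            expected = (group == g).mean() * (y == label).mean()  # frequency if independent
            observed = mask.mean()                                # actual frequency
            weights[mask] = expected / observed
    return weights

sample_weight = reweigh(group, y)

# Compare per-group positive-prediction rates with and without the weights.
for name, w in [("unweighted", None), ("reweighed", sample_weight)]:
    clf = LogisticRegression().fit(X, y, sample_weight=w)
    pred = clf.predict(X)
    rates = [pred[group == g].mean() for g in (0, 1)]
    print(f"{name}: positive rate group0={rates[0]:.3f}, group1={rates[1]:.3f}")
```

Running the sketch prints the positive-prediction rate for each group before and after reweighing; the gap narrows once the weights are applied. In practice, libraries such as AIF360 and Fairlearn provide maintained implementations of reweighing, fairness constraints, and adversarial debiasing.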


Addressing biases is a multifaceted challenge that necessitates a comprehensive approach to ethics. Ethical frameworks for AI must prioritize the identification and mitigation of biases. This involves a commitment to transparency, which entails meticulous documentation of AI decision-making processes, the data sources for training, and the known constraints of the systems.


Accountability plays a prominent role in managing AI biases. It involves delineating clear responsibilities for the outcomes of AI decisions, including establishing channels for redress when AI systems cause harm. Moreover, ensuring fairness is pivotal in battling biases, which means designing systems that operate impartially without unjustly advantaging or disadvantaging any particular demographic.


Ethical AI requires an ongoing dialogue that encompasses a broad spectrum of society, engaging not only technologists but also individuals from various sectors that AI decisions might influence. This inclusive conversation is crucial to address the dynamic nature of biases as they emerge in the evolving systems. The ethical oversight process is not a singular event but a continuous endeavor that demands vigilant observation and a proactive approach to refining systems based on empirical insights. As technological capabilities expand, so must our collective efforts to ensure these advancements are harnessed in ways that promote equitable treatment and opportunity for all.
