From personalized movie and music recommendations to the processing of medical data, artificial intelligence is revolutionizing many aspects of our lives. However, as AI systems become more integrated into our daily routines, AI bias has emerged as a significant concern.
In this article, we will delve into the concept of AI bias, exploring its common types and real-world examples. We will also discuss practical strategies for mitigating AI bias to ensure that AI technologies benefit everyone equally.
What is AI Bias?
In psychology, the term “bias” refers to a tendency, inclination, or prejudice toward or against something or someone, based on stereotypes and personal opinions rather than on facts and knowledge.
Analogously, AI bias (also known as machine learning bias or algorithm bias) refers to the systematic favoritism or discrimination exhibited by artificial intelligence systems. This bias arises when AI models produce results that reflect and perpetuate human biases and social inequalities present in the training data or in the design of the algorithm itself. It can manifest at various stages, including data collection, algorithm development, and prediction, leading to unfair treatment of certain groups or ideas.
The Most Common Types of AI Biases
AI biases can manifest in various forms, each with distinct implications. Understanding these common types of biases is crucial for developing fairer and more inclusive AI systems. Here are some of the most prevalent types:
Data bias - This type of bias arises from biased datasets used to train AI models. If the training data is not representative of the broader population or contains inherent prejudices, the AI will likely replicate and even exacerbate these biases (a simple check for this is sketched after this list).
Selection bias - Selection bias happens when the data collected for training the AI is not randomly selected but instead follows a biased sampling method, leading to an unrepresentative training set.
Measurement bias - This occurs when the variables used to measure and collect data are themselves biased. The tools or methods used for data collection might favor certain groups over others.
Implicit bias - This arises when individuals make assumptions based on their own mental models and personal experiences, which may not be universally applicable.
Confirmation bias - A common form of implicit bias, confirmation bias occurs when AI seeks out or gives more weight to information that confirms pre-existing beliefs or hypotheses while ignoring data that contradicts them.
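To make data bias and selection bias more concrete, here is a minimal Python sketch of a representation check. The field name "gender" and the 20 percent threshold are illustrative assumptions, not a standard; the idea is simply to measure each group's share of the training set before training.

```python
# A minimal sketch of a data-bias check: measure how often each
# demographic group appears in a training set and flag groups that
# fall below a chosen minimum share.
from collections import Counter

def group_shares(records, group_key):
    """Return each group's share of the dataset."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_underrepresented(records, group_key, min_share=0.2):
    """List groups whose share falls below the chosen minimum."""
    shares = group_shares(records, group_key)
    return [g for g, s in shares.items() if s < min_share]

# Toy example: a training set heavily skewed toward one group.
training_data = [{"gender": "male"}] * 90 + [{"gender": "female"}] * 10
print(group_shares(training_data, "gender"))           # {'male': 0.9, 'female': 0.1}
print(flag_underrepresented(training_data, "gender"))  # ['female']
```

A check like this catches only what you measure: if the dataset lacks demographic fields entirely, or the sampling method itself was skewed, the imbalance may be invisible to simple counting.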
At Which Points in the Process Can Bias Occur?
Bias often originates during the data collection phase. As mentioned earlier, data that is not diverse or representative can lead to biased output. Data labeling can also introduce bias if annotators interpret the same label differently. Next is the model training stage: if the training data is unbalanced or the model's architecture isn't equipped to handle diverse inputs, the model may generate biased results. Finally, bias can arise during deployment if the system isn't tested with varied inputs or monitored for bias after being put into use.
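One concrete way to catch the labeling problem is to measure how consistently annotators apply the same labels. Below is a minimal sketch of Cohen's kappa, a standard agreement statistic, for two annotators; the "toxic"/"ok" labels and the data are made up for illustration. A low score signals that annotators interpret the label definitions differently, which is one route for bias to enter the dataset.

```python
# Cohen's kappa: agreement between two annotators beyond what chance
# alone would produce. 1.0 is perfect agreement; values near 0 mean
# the annotators agree barely more often than random labeling would.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items labeled the same by both.
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_expected = sum(
        (freq_a[label] / n) * (freq_b[label] / n)
        for label in set(labels_a) | set(labels_b)
    )
    return (p_observed - p_expected) / (1 - p_expected)

annotator_1 = ["toxic", "ok", "ok", "toxic", "ok", "ok"]
annotator_2 = ["toxic", "ok", "toxic", "ok", "ok", "ok"]
print(round(cohens_kappa(annotator_1, annotator_2), 2))  # 0.25 - weak agreement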
Examples of AI Biases
Artificial intelligence has become an integral part of modern technology, influencing various aspects of daily life, from facial recognition to recruitment processes. However, despite its potential to enhance efficiency and accuracy, AI systems are not immune to biases. Let’s take a closer look at some of them.
Gender and Skin-type Bias in Commercial AI Systems
According to a paper by researchers from MIT and Stanford University, three commercially released AI systems for face analysis showed biases. Researcher Joy Buolamwini used the Fitzpatrick scale (a skin color scale that divides skin tones into six categories) and discovered that these systems performed significantly worse when recognizing darker-skinned individuals. Furthermore, facial recognition had higher error rates for women than for men.
In the researchers’ experiments, the three programs’ error rates in determining the gender of light-skinned men were never worse than 0.8 percent. For darker-skinned women, however, the error rates ballooned — to more than 20 percent in one case and more than 34 percent in the other two.
Source: MIT News
This discovery prompted IBM to create a new model that performed better after being trained on a more diverse dataset and built on a more robust underlying neural network.
Recruitment Algorithm Discriminating Against Women
AI solutions can be used during the recruitment process and are especially useful for corporations that receive hundreds of resumes and have an extensive base of past applicants to match to current job openings. Amazon is one of the top companies that can leverage automation to their advantage, but even the e-commerce giant has a failed project in its history.
Amazon’s team was working on a system to review job applicants’ resumes. Computer models were trained to evaluate applicants by analyzing patterns in resumes submitted to the company over a decade. The problem was that the majority of these resumes were from men, reflecting the male dominance prevalent in the tech industry. The system consistently gave lower scores to resumes from female applicants. Amazon decided to cancel the initiative in 2017.
The company managed to salvage some of what it learned from its failed AI experiment. It now uses a "much-watered down version" of the recruiting engine to help with some rudimentary chores, including culling duplicate candidate profiles from databases, one of the people familiar with the project said.
Source: Reuters
Unfiltered Input of Data Leads to Sharing Discriminatory Statements
The story of a Twitter bot (on the platform currently known as X) started innocently. Microsoft created Tay as an experiment. The goal was to teach the bot how to engage in casual and playful conversations, similar to those between users on social media. Anyone could chat with Tay, and that’s where the problems began.
Users started to feed the bot offensive content, and Tay began sharing those statements as its own in less than 24 hours. This incident demonstrates the risks of programming AI to interact in unrestricted human environments.
To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.
Source: Microsoft blog
How to Mitigate AI Bias?
To ensure AI technologies are fair, transparent, and equitable, it is essential to implement strategies that mitigate bias throughout the AI development lifecycle. Let’s delve into effective methods and best practices for reducing bias in AI.
Diverse and Representative Datasets
Machine learning is only as good as the data that trains it. The training data should include diverse and representative samples of the population. Think about collecting data from various demographics, including different ages, genders, ethnicities, and socioeconomic backgrounds.
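As a rough illustration of what "representative" can mean in practice, the sketch below downsamples over-represented groups so the training set matches chosen target proportions. The field name "age_band" and the 50/50 target shares are assumptions for the example; in real projects, collecting more data from under-represented groups is usually preferable to throwing data away.

```python
# A minimal sketch of rebalancing a skewed dataset toward target
# demographic shares by downsampling over-represented groups.
import random
from collections import defaultdict

def rebalance(records, group_key, target_shares, seed=0):
    """Downsample groups so their proportions match target_shares."""
    random.seed(seed)
    by_group = defaultdict(list)
    for r in records:
        by_group[r[group_key]].append(r)
    # The group that is scarcest relative to its target caps the total size.
    limit = min(len(by_group[g]) / share for g, share in target_shares.items())
    balanced = []
    for group, share in target_shares.items():
        balanced.extend(random.sample(by_group[group], int(limit * share)))
    return balanced

data = [{"age_band": "18-35"}] * 800 + [{"age_band": "65+"}] * 50
balanced = rebalance(data, "age_band", {"18-35": 0.5, "65+": 0.5})
print(len(balanced))  # 100 records, 50 from each age band
```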
Regular Audits and Monitoring
Even diverse datasets can still be prone to biases, so there should always be people or systems that review the model's outputs. Establish regular audit processes to review AI performance and detect biases. Regular testing and monitoring can prevent situations where the AI starts to provide biased responses.
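An audit can be as simple as recomputing per-group error rates on fresh data, much like the comparison in the MIT study above. Here is a minimal sketch; the records, field names, and the 5-point alert threshold are illustrative assumptions.

```python
# A minimal fairness audit sketch: compute the error rate for each
# demographic group on held-out data and flag large gaps between groups.
from collections import defaultdict

def error_rates_by_group(records, group_key):
    """records: dicts with a group label, a true label, and a prediction."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        errors[r[group_key]] += r["prediction"] != r["label"]
    return {g: errors[g] / totals[g] for g in totals}

def audit(records, group_key, max_gap=0.05):
    rates = error_rates_by_group(records, group_key)
    gap = max(rates.values()) - min(rates.values())
    if gap > max_gap:
        print(f"ALERT: error-rate gap of {gap:.1%} across groups: {rates}")
    return rates

results = (
    [{"group": "light-skinned", "label": 1, "prediction": 1}] * 99
    + [{"group": "light-skinned", "label": 1, "prediction": 0}] * 1
    + [{"group": "dark-skinned", "label": 1, "prediction": 1}] * 70
    + [{"group": "dark-skinned", "label": 1, "prediction": 0}] * 30
)
audit(results, "group")  # ALERT: error-rate gap of 29.0% across groups
```

Running a check like this on a schedule, and on data the model hasn't seen before, is what turns a one-off fairness test into monitoring.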
Inclusive AI Development Teams
Assemble diverse teams to develop AI systems. Ensure that AI development teams include members from various backgrounds and perspectives. Diverse teams are more likely to identify and address potential biases.
Ethical Guidelines and Policies
Establish and follow ethical guidelines for AI development and deployment. Develop and enforce policies that promote fairness, transparency, and accountability in AI. Follow industry best practices and adhere to regulatory standards such as the EU AI Act.
Consultation with a Specialist
An AI ethics specialist is responsible for ensuring that artificial intelligence technologies are developed and used in a manner that aligns with ethical principles and social responsibility.
At Primotly, we offer consultation with an AI ethics expert at any stage of the work. It is a good idea to analyze compliance needs even before implementing AI, during the planning phase. This helps avoid mistakes that can consume additional time and resources. We can talk about your concerns regarding the risk, prepare ethical and legal guidelines for implementing the service or product, and help you prepare for ethical implementation.
Ethical use consultations can also be conducted when an AI solution is already in place. We also audit processes and the datasets on which the AI was trained to prevent the risk of bias, and we suggest improvements and changes.
We offer professional advice and implementation of AI solutions in accordance with ethical guidelines, ensuring transparency and integrity. Contact us to discuss your project.
How to Avoid AI Bias in Your Project
Legal regulations are striving to keep up with the development of AI, which is being applied across many industries and allows companies to increase revenue and improve the quality of their services. Understanding and applying ethical principles in the context of AI will not only help you comply with tightening legal requirements but also help you avoid AI bias, which in the worst case can lead to the failure of a project. Paying attention to the ethical and fair use of AI in your projects lets you fully leverage the capabilities of artificial intelligence.
Sources:
https://itrexgroup.com/blog/ai-bias-definition-types-examples-debiasing-strategies
https://pub.towardsai.net/bias-vs-fairness-vs-explainability-in-ai
https://developers.google.com/machine-learning/crash-course/fairness/types-of-bias
https://www.ibm.com/resources/guides/predict/trustworthy-ai/avoid-bias/