How can we make AI more equitable and free of bias?
Humans are biased. While many of these biases are implicit or unconscious, they can be especially harmful when applied automatically. Like, say, when they’re programmed into artificial intelligence systems. Left unchecked, human developers can easily and unknowingly infuse algorithms with their own preferences.
The most troubling algorithmic biases aren’t just unfair; they can be deeply discriminatory. Can AI be de-biased? Not without acknowledging why the bias occurs and making a concerted effort to call out our own potential biases.
Yes – there’s a very real bias in AI
From discriminatory hiring practices to outright racial profiling, AI biases affect our entire society, not just the groups they target directly.
So how do AI systems develop prejudice in the first place?
“AI systems learn to make decisions based on training data, which can include biased human decisions or reflect historical or social inequities, even if sensitive variables such as gender, race or sexual orientation are removed,” according to the Harvard Business Review.
Flawed or limited data sampling is another entry point for biased AI. Here, certain individuals or groups are over- or under-represented in the datasets used to “train” the algorithms, which can seriously distort the output.
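To make that concrete, here is a minimal Python sketch of the kind of representation check one might run before training. The records, the “gender” attribute, and the labels are hypothetical placeholders, not a real dataset or any particular library’s API.

```python
from collections import Counter

# Hypothetical training records; a real dataset would have far more
# rows, and the sensitive attribute might be gender, race, age, etc.
training_records = [
    {"gender": "male", "label": "hired"},
    {"gender": "male", "label": "hired"},
    {"gender": "male", "label": "rejected"},
    {"gender": "female", "label": "rejected"},
]

def representation_report(records, attribute):
    """Print each group's share of the dataset, so skewed sampling
    is visible before any model is trained on it."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    for group, n in counts.items():
        print(f"{group}: {n} records ({n / total:.0%} of the data)")

representation_report(training_records, "gender")
# Output: male is 75% of this toy sample -- exactly the kind of
# lopsided split that can teach a model to favor one group.
```

A check this simple won’t catch subtler problems (proxy variables, label bias), but it illustrates the idea: measure who is in the data before trusting what comes out of it.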
You may remember when Amazon made headlines after scrapping a discriminatory hiring algorithm that favored men’s resumes over those of female applicants. Or perhaps you read the Pulitzer Prize-nominated 2016 report by ProPublica, which found that the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software falsely flagged Black defendants as likely to reoffend at nearly twice the rate of their white counterparts.
If baseless biases can overlook qualified applicants or single out the innocent based solely on their skin tone, imagine what else these misguided algorithms are capable of.
Is bias-free AI possible?
As it stands, the process of creating AI remains unfair at best and dangerously discriminatory at worst. Whether it stays that way depends on who you ask.
Microsoft Asia President Ralph Haupter says, “I don’t think AI will ever be free of bias, at least not as long as we stick to machine learning as we know it today. What’s crucially important, I believe, is to recognize that those biases exist, and that policymakers try to mitigate them.”
Optimists recognize the positive potential of AI and machine learning technologies, but they must also become changemakers if we’re to collectively sidestep the risk of this technology remaining grossly biased.
So what can we do?
AI is going mainstream at an unprecedented pace. In McKinsey’s 2020 report The State of AI, half of respondents said their organizations had adopted AI in at least one function.
As AI becomes more and more prevalent, a top-down approach could be the best attempt at striking a fairer balance. HBR suggests that leaders invested in this work be mindful to:
- Stay up-to-date on rapid AI developments.
- Invest in bias-mitigation processes, like third-party audits, before deploying AI company-wide (a minimal sketch of one such audit check follows this list).
- Prioritize “explainability techniques” that size up algorithmic results against differing human decisions to tease out potential biases.
- Have humans and machines collaborate to decrease potential bias, using “human-in-the-loop” processes.
- Invest more in bias research, and make more data available for multidisciplinary AI research.
- Diversify the AI community and engage different groups.
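As a deliberately simplified illustration of what one such audit check might look like, here is a minimal Python sketch of the “four-fifths rule” from U.S. employment-discrimination guidance: if one group’s selection rate falls below 80% of another’s, the batch is flagged for a human reviewer. The decisions, identifiers, and groups below are hypothetical placeholders; real audits are far more rigorous.

```python
def selection_rates(decisions, group_of, positive="approved"):
    """Compute the positive-outcome rate per group for a batch of
    model decisions."""
    totals, positives = {}, {}
    for person, outcome in decisions.items():
        g = group_of[person]
        totals[g] = totals.get(g, 0) + 1
        if outcome == positive:
            positives[g] = positives.get(g, 0) + 1
    return {g: positives.get(g, 0) / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest group's selection rate to the highest's.
    The 'four-fifths rule' treats values below 0.8 as a red flag."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs, keyed by applicant id.
decisions = {"a1": "approved", "a2": "rejected", "a3": "approved",
             "b1": "rejected", "b2": "rejected", "b3": "approved"}
group_of = {"a1": "group_a", "a2": "group_a", "a3": "group_a",
            "b1": "group_b", "b2": "group_b", "b3": "group_b"}

rates = selection_rates(decisions, group_of)
if disparate_impact(rates) < 0.8:
    # Route flagged batches to a human reviewer ("human-in-the-loop").
    print("Flagged for human review:", rates)
```

Routing flagged results to a person rather than auto-rejecting them is one practical way the “human-in-the-loop” idea above plays out in production.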
We all need to do our part to recognize and correct AI biases. No machine should reinforce discrimination or prejudice. To battle institutionalized inequality, we need to ensure that automated decision-making, unlike so much human decision-making, is as impartial as possible.