How can we trust AI to be unbiased?

You’ve heard it before: the use of AI is on the rise, and rising fast. We’re experiencing technological breakthrough after breakthrough, but as we integrate artificial intelligence into our business processes, we’ve also witnessed appalling biases in the software.

Lurking beneath the surface

Behind every AI system lies an algorithm. If you had to explain an algorithm to a five-year-old, you could roughly compare it to a calculator: the user puts in a set of data, and the calculator uses that information to generate an output based on a fixed set of rules and operations.
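As a minimal sketch of that idea, the toy function below applies one fixed, hand-written rule to its inputs and returns a result. The field names and weights are invented purely for illustration:

```python
# A toy "calculator" algorithm: data goes in, a fixed rule is applied,
# and a result comes out. The rule and its weights are invented here
# for illustration only.

def score_applicant(years_experience: int, certifications: int) -> float:
    """Apply one fixed, human-written rule to produce a score."""
    return 2.0 * years_experience + 1.5 * certifications

print(score_applicant(years_experience=4, certifications=2))  # 11.0
```

With machine learning, the crucial difference is that the rule is no longer written by a human; it is inferred from data, and it inherits whatever that data contains.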

Many times, these algorithms fail. In 2015, Google’s photo algorithm labelled photos of African-Americans as gorillas. Apart from being inexcusable and wrong, this obvious flaw in the software shows how severe problems can lurk deep within a machine-learning model.

How could these detrimental biases slip past developers? Algorithmic decisions are often made inside concealed “black boxes”, hiding the decision-making process even from the people who built the system. Developers and users are not only blind to how the AI reaches its conclusions; they may have no idea that biased decisions are being made at all.

Big data based on small minds

Recently, Amazon had to scrap its AI recruiting tool because it excluded female applicants based on historical data dating back to 1940. Basically, the software was designed to take a hundred resumés and present the five best candidates. However, the developers discovered that the AI based its assessments on fifty-year-old prejudices against women and presented male candidates only.
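To see how this failure mode arises, consider the entirely synthetic sketch below. This is not Amazon’s system or data; it assumes the scikit-learn library and invents a tiny dataset in which past hires were overwhelmingly male. The model duly learns to reward the gender feature itself rather than skill:

```python
# Hypothetical, synthetic sketch of how a model absorbs bias from
# historical data. None of this reflects any real company's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)   # 0 = female, 1 = male (synthetic)
skill = rng.normal(0, 1, n)      # skill is independent of gender here
# Biased historical labels: at equal skill, men were hired far more often.
hired = ((skill + 2.0 * gender + rng.normal(0, 0.5, n)) > 1.5).astype(int)

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)
print("learned weights [gender, skill]:", model.coef_[0])
# The large positive weight on `gender` shows the model reproducing the
# historical prejudice instead of measuring merit.
```

The model is not malicious; it is simply faithful to a skewed past, which is exactly why such behaviour is so easy to miss.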

Amazon decided to shut the project down, afraid of what other biases the software might hold. We can only speculate, but a system that deemed women unfit for leadership positions might just as well have held biases against minorities and ethnicities based on an applicant’s name or attached photo. Amazon recognised what biases its software may have developed and acted on it, but how many big tech companies can say they have done the same?

How can we make sure our calculators are fair?

Make no mistake, AI is the future. Too many groundbreaking advances are being made to jump ship on machine learning and artificial intelligence now. If we are to use these “calculators” to save time and reduce cost, we must ensure that the algorithms are fair and unbiased.

The responsibility lies with the developers. Not only must they create systems that do not allow the AI software to form biased decision-making processes; they must also build a robust quality-assurance strategy to reveal any discrimination as it forms.

Testing for inexcusable prejudices will not necessarily be easy, since many of these old and outdated biases derive from a long and bleak fight for equality. It requires developers to ask a series of uncomfortable “what ifs”, but it is essential to continually review and quality-test AI software to prevent unfair business practices.
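One concrete shape such a test can take, sketched here with invented data and function names, is an automated disparate-impact check run against a model’s outputs. The 0.8 threshold borrows from the “four-fifths” guideline used in US employment law; treat the rest as an assumption for illustration:

```python
# Hypothetical quality-assurance check: compare selection rates between
# two groups and flag the model if the ratio falls below a threshold.
# Data, names, and structure are invented for illustration.

def selection_rate(decisions: list[int]) -> float:
    return sum(decisions) / len(decisions)

def disparate_impact_check(group_a: list[int], group_b: list[int],
                           threshold: float = 0.8) -> bool:
    """Pass (True) if the lower selection rate is at least `threshold`
    times the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b) >= threshold

# 1 = presented to recruiters, 0 = filtered out by the model.
female_outcomes = [1, 0, 0, 0, 1, 0, 0, 0]  # 25% selected
male_outcomes = [1, 1, 0, 1, 1, 0, 1, 0]    # 62.5% selected
print(disparate_impact_check(female_outcomes, male_outcomes))  # False: fails
```

A check like this catches only one narrow kind of unfairness, which is precisely why it must be one test among many in a broader review process, not a substitute for it.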

Learn and evolve

For us at Gture, it’s crucial to use the lessons from our colleagues at Amazon and Google to create awareness around the challenges of machine learning. We understand the possibilities and challenges of working with AI. Together with our clients, we develop products and functions that deliver high yield at minimal risk. We follow our projects from start to finish, always striving for improvement and quality. This process enables us to catch undesired behaviour in our products and act fast, minimising the chances of our clients facing the same detrimental biases that Google and Amazon did.

Andreas Gravermoen
