Artificial intelligence (AI) is predicted to revolutionise life as we know it. Used in everything from cars to HR and recruitment processes, the technology's potential and promise are undeniable. But along with this revolution comes a great deal of risk: biased AI.

Here, we tell you everything you need to know about AI and machine learning bias.

WHAT IS AI BIAS?

AI or machine learning bias occurs when an algorithm produces results that are systematically prejudiced because of flawed assumptions made during the machine learning process. This happens because algorithms are created by humans, who have conscious or unconscious preferences that may go undiscovered until the algorithms are used.

High bias reflects problems in how data is gathered or used, leading systems to draw improper conclusions from data sets. This is very often due to human intervention, or to a lack of critical assessment by researchers.

Some of the cognitive biases that can be subconsciously built into such algorithms include stereotyping; the bandwagon effect, whereby people do something mainly because others are doing it, even against their personal principles; confirmation bias; and selective perception.

This matters because machine learning depends on the quality, size and objectivity of its training data sets. Ultimately, a lack of truly random or complete data can result in bias.
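To see how this plays out in practice, the hypothetical Python sketch below trains a simple model on data where one group is under-represented and then measures each group's error rate. The synthetic data, group labels and use of scikit-learn are illustrative assumptions, not taken from any study mentioned here.

```python
# Illustrative sketch: training on unrepresentative data can skew error rates.
# All data here is synthetic; group "B" is deliberately under-sampled.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-feature synthetic data; each group's signal sits in a different region.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced, held-out samples from each group.
for name, shift in [("A", 0.0), ("B", 2.0)]:
    X_test, y_test = make_group(500, shift)
    err = 1 - model.score(X_test, y_test)
    print(f"group {name}: error rate {err:.1%}")
```

Because group B contributes so few training examples, the model's decision boundary is dominated by group A, and group B's error rate comes out far higher: exactly the "lack of truly random or complete data" problem described above.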

WHY IS IT IMPORTANT TO ELIMINATE BIAS?

It is essential that machine learning bias is eliminated from algorithms because the technology is often applied to decisions with significant business implications, such as whom to hire for a job or whose loan application to approve. It can also have implications in medical environments, where any bias could affect diagnoses.

ADOPTING A MULTI-DISCIPLINARY APPROACH

One way that bias in machine learning and AI can be avoided is through the adoption of a multi-disciplinary approach. This involves including stakeholders from various fields across the business in discussions of what constitutes inclusive AI. While leaving these decisions solely to technology experts may seem like common sense, collaboration between people from different cultures and professions has proven to be an effective way of preventing bias.

SERIOUS MORAL IMPLICATIONS

There is also a serious argument that bias in AI can have very dangerous and life-altering consequences for the individuals it affects.

In 2016, it was reported that the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) program, which is utilised by judges in many states in the US to help determine parole and other sentencing conditions, was actually racially biased.

COMPAS uses machine learning and historical data to predict the probability that a violent criminal will reoffend. However, it overestimated the likelihood that black defendants would reoffend, while underestimating the likelihood that white defendants would.
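In evaluation terms, that asymmetry is a gap in false positive and false negative rates between groups. The toy sketch below shows how such rates are computed per group; the records are invented for illustration and are not COMPAS data.

```python
# Per-group false positive/negative rates on invented toy data.
# False positive: predicted to reoffend but did not; false negative: the reverse.
cases = [
    # (group, predicted_reoffend, actually_reoffended)
    ("black", True, False), ("black", True, True), ("black", True, False),
    ("black", False, False),
    ("white", False, True), ("white", False, False), ("white", True, True),
    ("white", False, True),
]

for group in ("black", "white"):
    rows = [(p, a) for g, p, a in cases if g == group]
    fp = sum(p and not a for p, a in rows)   # flagged, but did not reoffend
    fn = sum(a and not p for p, a in rows)   # not flagged, but did reoffend
    neg = sum(not a for _, a in rows)        # people who did not reoffend
    pos = sum(a for _, a in rows)            # people who did reoffend
    print(f"{group}: FPR {fp/neg:.0%}, FNR {fn/pos:.0%}")
```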

A separate study carried out by the Massachusetts Institute of Technology revealed that three commercially released facial-analysis programs from major technology companies demonstrated skin-type and gender biases. According to the findings, the three programs had markedly different error rates across demographic groups. For instance, in determining the gender of light-skinned men the error rate was 0.8%; for darker-skinned women, this figure jumped to 20% in one program and to more than 34% in the other two.
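Audits like this rest on disaggregated evaluation: computing accuracy separately for each subgroup rather than reporting a single overall figure. A minimal sketch with made-up gender-classification results (not the MIT data) might look like this:

```python
# Disaggregated evaluation: report error rates per subgroup, not just overall.
# Predictions, labels and subgroup tags below are made up for illustration.
from collections import defaultdict

records = [
    # (subgroup, true_label, predicted_label)
    ("light-skinned male", "male", "male"),
    ("light-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "female", "male"),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    errors[group] += (truth != pred)

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {errors[group]}/{totals[group]} wrong ({rate:.0%})")
```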

Perhaps unsurprisingly, the findings of such studies raised questions about how neutral the technology is. Notably, researchers at a major unnamed US technology company claimed that the accuracy rate for their face-recognition system was more than 97%; however, the research revealed that the data set used to assess its performance was more than 77% male and more than 83% white.

HOW CAN WE ENSURE DATA IS NOT BIASED?

Businesses need to ensure the data used to train machine-learning models is fair and free from bias. The data should be representative of different races, cultures, backgrounds and genders. It is essential that the data scientists developing the algorithms shape the data samples in a way that minimises any bias, while business owners should evaluate this before rolling out the technology.
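As a hypothetical starting point, the sketch below compares the make-up of a training set against a reference population and flags under-represented groups. The field names, reference shares and the 0.8 ratio threshold are all illustrative assumptions, not an established standard.

```python
# Simple representativeness check before training: compare the share of each
# group in the training data against a reference population.
from collections import Counter

def representation_report(samples, group_key, reference, min_ratio=0.8):
    counts = Counter(s[group_key] for s in samples)
    n = len(samples)
    for group, expected_share in reference.items():
        actual_share = counts.get(group, 0) / n
        ratio = actual_share / expected_share if expected_share else 0.0
        flag = "" if ratio >= min_ratio else "  <-- under-represented"
        print(f"{group}: {actual_share:.1%} of data vs {expected_share:.1%} expected{flag}")

# Hypothetical usage: a data set that skews heavily towards one gender.
samples = [{"gender": "male"}] * 830 + [{"gender": "female"}] * 170
representation_report(samples, "gender", {"male": 0.5, "female": 0.5})
```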

Contact Evaris today to talk about AI and machine learning. Our knowledgeable team are happy to guide you with all your IT requirements. Call us on 0330 124 1245, or email us at [email protected].
