Can an algorithm eradicate bias in our decision making?

It's tempting to assume that artificial intelligence and machine learning can ensure HR's decisions in key areas such as recruitment and performance management are completely unbiased. But there are still vulnerabilities, argues Jonathan Rennie - employment partner at UK law firm TLT.

The inclination to be prejudiced against certain groups of people, or to instinctively prefer a person who shares our own characteristics, has existed for as long as humans have formed themselves into social groups.

While this herd mentality might have been of evolutionary benefit, it becomes problematic when bias screens out individual merit in favour of unfair prejudice. This can have serious consequences in the workplace, where bias might determine whether a person is invited for a job interview or how they are rated at work.

It is easy to see the attraction of handing assessments over from human to machine. The natural assumption is that decision making based on algorithms or artificial intelligence (AI) not only improves efficiency, but also strips out prejudice and reduces human error, allowing organisations to zoom in on objective qualities.

Brave new world?

However, bias remains an issue for employers entering the brave new world of algorithmic decision making. It can relate to the reliability of the data itself, the way the data is assessed or acted on by humans, and the regulatory framework protecting individuals from unfair automated decision making.

How can problematic bias exist in systems which are making decisions based on cold, hard algorithms?

Some potential issues were described in the interim report on Bias in Algorithmic Decision Making published in July 2019 by the Centre for Data Ethics and Innovation (CDEI).

The report highlights that algorithm-driven decision making processes can share some of the same vulnerabilities as human decision making processes.

One issue is that the data or evidence on which decisions are based may itself be biased, or the people writing the algorithms may allow their own prejudices to creep into the system.

Another potential problem is that the complexities of algorithmic decision making can throw up unintended results.

For example, while an employer might remove details of ethnicity before conducting recruitment sifting by algorithm, the system may use other data as a proxy for those characteristics - for example, postcodes that correlate closely with race.

In this example, removing ethnicity data from the dataset can also make it impossible to evaluate whether indirect bias is taking place, because there is no longer a protected characteristic to compare outcomes against.

So, given that this technology is at an early stage of development (albeit growing rapidly), employers must ensure that their AI systems are based on reliable data. There may still need to be some human review to ensure that systems are not creating unintended results.
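To make the proxy problem more concrete, here is a minimal sketch of the kind of audit a human reviewer might run, assuming the organisation keeps ethnicity data in a separate audit extract that is never fed to the sifting model. The column names, the file name and the warning threshold are illustrative assumptions, not a prescribed method.

```python
# Illustrative audit for proxy bias (hypothetical column names and data).
# The protected attribute is held back for auditing only - it is never
# passed to the sifting model itself.
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(df: pd.DataFrame, feature: str, protected: str) -> float:
    """Strength of association (0 to 1) between a model feature and a
    protected characteristic, based on the chi-squared statistic."""
    table = pd.crosstab(df[feature], df[protected])
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    min_dim = min(table.shape) - 1
    return (chi2 / (n * min_dim)) ** 0.5

# Example: does the postcode area act as a stand-in for ethnicity?
applicants = pd.read_csv("applicants_audit_extract.csv")  # hypothetical file
score = cramers_v(applicants, feature="postcode_area", protected="ethnicity")
if score > 0.5:  # illustrative threshold, to be set by the employer's own review policy
    print(f"Warning: postcode_area is strongly associated with ethnicity (V={score:.2f})")
```

A high association score does not prove discrimination, but it signals that an apparently neutral feature may be standing in for a protected characteristic and deserves closer human scrutiny.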

Unintended consequences

Even assuming that the datasets on which AI decision making is based are reliable, being entirely blind to difference can result in unfairness.

This point is illustrated by a case brought against the Government Legal Service (GLS) by a job applicant with Asperger's Syndrome. Applicants were sifted using an automated decision making process involving multiple choice questions, and the applicant argued that she was disadvantaged because she was not allowed to provide short, narrative answers.

The Employment Appeal Tribunal agreed, and found that the application process was indirectly discriminatory and the GLS had failed to make reasonable adjustments (Government Legal Service v Brookes, 2017).

Employers therefore need to be alive to the fact that eliminating difference in treatment does not automatically eliminate unfairness. In some circumstances, difference in treatment is actually required in order to prevent unfairness and this must be reflected in the design and implementation of automated decision making systems.

What employers should do

So what are the rules and exceptions when using automated decision making?

In addition to their general duties to avoid discrimination under the Equality Act 2010, employers must comply with the specific regulatory framework around preventing bias in automated decision making and profiling.

Under the General Data Protection Regulation (GDPR), data subjects have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal or similarly significant effects. This includes e-recruiting practices which have no human intervention.

There are some limited exceptions (where the automated decision is necessary for entering into or performing a contract, is based on the data subject's explicit consent, or is authorised by law) and, in relation to the first two exceptions, the employer must have suitable measures in place to protect the data subject (a rough sketch of how these safeguards might be recorded follows the list below).

As a minimum, these protections must include the right of the data subject to:

  • obtain human intervention;
  • express their point of view; and
  • contest the decision.
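
The sketch below is illustrative only: it gates any automated rejection behind a named human reviewer and logs the candidate's representations and any challenge. The class and field names are hypothetical assumptions, not anything prescribed by the GDPR.

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionReview:
    """Minimal record of the safeguards listed above: human intervention,
    the data subject's point of view, and a route to contest the outcome.
    All names here are illustrative, not taken from the GDPR itself."""
    candidate_id: str
    automated_outcome: str              # e.g. "reject", as proposed by the sifting tool
    candidate_representation: str = ""  # the candidate's own comments
    contested: bool = False
    reviewer: str = ""                  # named human who confirms or overrides
    final_outcome: str = ""

    def record_representation(self, text: str, contest: bool = False) -> None:
        """Capture the candidate's point of view and whether they contest the decision."""
        self.candidate_representation = text
        self.contested = contest

    def human_decision(self, reviewer: str, outcome: str) -> None:
        """No automated outcome takes effect until a human signs it off."""
        self.reviewer = reviewer
        self.final_outcome = outcome

# Hypothetical usage
review = AutomatedDecisionReview(candidate_id="C-1042", automated_outcome="reject")
review.record_representation("My answers assumed a narrative format.", contest=True)
review.human_decision(reviewer="hr.reviewer@example.com", outcome="invite to interview")
```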

Keep data secure

Under the GDPR, employers are also subject to separate restrictions on "profiling" which, as far as is relevant to employers, includes analysing employees' performance at work, health, reliability or behaviour.

Employers undertaking profiling must protect data subjects from bias by ensuring that appropriate mathematical or statistical procedures are used and that data is kept secure.

Importantly, and linking back to the risks around blindly applying automated decisions, employers must ensure that profiling does not have a discriminatory effect on individuals on the basis of a range of protected characteristics, namely: racial or ethnic origin, political opinion, religion or belief, trade union membership, genetic or health status or sexual orientation.
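
One widely used statistical check of this kind is a simple comparison of outcome rates across a protected characteristic, sometimes expressed as the "four-fifths" adverse impact ratio. The sketch below is an illustrative version under assumed column names, a hypothetical data source and a 0.8 threshold; the four-fifths figure is a common screening heuristic rather than anything prescribed by the GDPR.

```python
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Favourable-outcome rate per group, divided by the rate of the
    best-performing group. Values below roughly 0.8 are commonly treated
    as a flag for further investigation, not as proof of discrimination."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical example: promotion recommendations produced by a profiling tool,
# where "recommended" is 1 for a favourable outcome and 0 otherwise.
appraisals = pd.read_csv("appraisal_outcomes.csv")  # illustrative audit extract
ratios = adverse_impact_ratio(appraisals, group_col="ethnicity", outcome_col="recommended")
print(ratios[ratios < 0.8])  # groups whose outcomes warrant human review
```

Falling below the threshold is a prompt for human investigation rather than an automatic finding of discrimination, and the same check can be repeated for each protected characteristic held for audit purposes.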

There is currently relatively little regulation specifically aimed at preventing bias in automated decision making, and we await further guidance from the CDEI, which has been tasked with identifying solutions and best practice for spotting and mitigating bias in recruitment decisions.

The CDEI's final report on bias in algorithmic decision making, with recommendations to government, will be submitted in December 2019, so further regulation may well be on its way next year.