Government automated decision-making

Robin Allen QC and Dee Masters, who run www.ai-lawhub.com and tweet at @AILawHub, summarise the main points from their recent report on artificial intelligence and automated decision-making in government, warning that some local authorities could be acting in a discriminatory way.

Over the summer, we were instructed by The Legal Education Foundation (TLEF) to consider the equality implications of AI and automated decision-making in government, in particular through consideration of the Settled Status scheme and the use of risk-based verification (RBV) systems.

The paper was finished in September 2019 and, ultimately, we concluded that there is a very real possibility that the current use of governmental automated decision-making is breaching the existing equality law framework in the UK. What is more, any such breaches are largely “hidden” from view due to the way in which the technology is being deployed.

The TLEF very recently decided to make public our opinion. A copy is available here.

Settled Status

The Settled Status scheme has been established by the Home Office, in light of Brexit, to regularise the immigration status of certain Europeans living in the UK. Settled Status is ordinarily awarded to individuals who have been living in the UK for a continuous five-year period over the relevant time frame. In order to determine whether an individual has been so resident, the Home Office uses an automated decision-making process to analyse data from the DWP and HMRC which is linked to an applicant via their national insurance number. It appears that a case worker is also involved in the decision-making process, but the government has not fully explained how its AI system works or how the human case worker can exercise discretion.
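By way of illustration only (the Home Office has not published its actual logic, and every rule and name below is our assumption), a residence check of this broad kind might be sketched in code as follows. What the sketch makes concrete is that any month without a record in the databases actually interrogated breaks the chain of "continuous" residence, whatever the applicant's true circumstances.

```python
from datetime import date

# Hypothetical, simplified illustration only: the Home Office has not published
# how its residence check actually works. This sketch assumes a rule requiring
# at least one linked DWP/HMRC record in every month of a rolling 60-month
# window ending at the assessment date.

def has_five_years_continuous_residence(activity_months: set, as_of: date) -> bool:
    """activity_months holds (year, month) tuples in which the applicant has
    at least one record in the databases that are actually interrogated."""
    year, month = as_of.year, as_of.month
    for _ in range(60):
        if (year, month) not in activity_months:
            # A single "silent" month fails the check, even if the applicant
            # was resident but their only records sit in an excluded database
            # (e.g. Child Benefit).
            return False
        month -= 1
        if month == 0:
            year, month = year - 1, 12
    return True
```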

Importantly, only some of the DWP’s databases are analysed when the Home Office’s automated decision-making process seeks to identify whether an applicant has been resident for a continuous five-year period. Data relating to Child Benefit and/or Child Tax Credit is not interrogated. This matters because the vast majority of Child Benefit recipients are women, and women are more likely to be in receipt of Child Tax Credit. In other words, women may be at a higher risk of being incorrectly deemed by the Home Office’s algorithm not to have the relevant period of continuous residency (which in turn will affect their immigration status) because the data being assessed does not fully reflect their circumstances. To date, the government has not provided a compelling explanation for omitting what would appear to be highly relevant information, particularly for female applicants.

We conclude in our opinion that this system could very well lead to indirect sex discrimination contrary to section 19 of the Equality Act 2010. This is because:

  • The algorithm at the heart of the automated decision-making process is a “provision, criterion or practice” (PCP).
  • The data set used to inform the algorithm is probably also a PCP.
  • These PCPs are applied neutrally to men and women.
  • But women may well find themselves at a “particular disadvantage” when compared with men, since highly relevant data relating to them is excluded, possibly leading to higher rates of inaccurate decision-making.

Whilst the Home Office would likely have a legitimate aim for its use of automated decision-making (e.g. speedy decision-making), it is arguable that the measure chosen to achieve that aim cannot be justified: it excludes relevant data, for no good reason, in a way that places women at a disadvantage and undermines the accuracy and effectiveness of the system.

There may well also be implications for disabled people since commentators have suggested that they and their carers will need to provide additional information as part of the Settled Status process.

Risk-based verification (RBV)

Local authorities are required under legislation to determine an individual’s eligibility for Housing Benefit and Council Tax Benefit. There is no fixed verification process, but local authorities can ask for documentation and information from any applicant “as may reasonably be required”.

Since 2012, the DWP has allowed local authorities to voluntarily adopt RBV systems as part of the verification process so as to identify fraudulent claims.

RBV works by assigning a risk rating to each applicant; the level of scrutiny applied to each application is then dictated by that risk rating.
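Purely for illustration (we have no knowledge of the models or thresholds actually in use, and the figures and categories below are invented), the tiering step might be sketched as follows. The structural point is that an opaque score determines how intrusively an application is verified.

```python
# Hypothetical sketch only: the scoring models and thresholds used in real RBV
# systems are not public. The point is structural: an opaque risk score drives
# how demanding the verification of an application will be.

def verification_tier(risk_score: float) -> str:
    """Map a model-produced risk score (0.0 to 1.0) to a scrutiny level."""
    if risk_score < 0.3:
        return "low"
    if risk_score < 0.7:
        return "medium"
    return "high"

def required_evidence(tier: str) -> list:
    """Evidence demands increase with the assigned tier (examples invented)."""
    return {
        "low": ["self-declaration"],
        "medium": ["recent bank statements", "payslips"],
        "high": ["original identity documents", "full bank statements",
                 "additional interviews or checks"],
    }[tier]
```

If the score feeding a step of this kind correlates with a protected characteristic, one group will systematically face the heavier evidential burden, which is the kind of “particular disadvantage” with which equality law is concerned.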

Some local authorities in the UK are using algorithmic software to determine this risk rating. However, there is no publicly available information which explains how such algorithms are being deployed or on what basis.

Whilst local authorities are undertaking Equality Impact Assessments, the ones which we have seen have tended to be very superficial. It is not fanciful to imagine that the RBV processes which are being deployed by local authorities might be acting in a discriminatory way. After all, there is some publicly available data which demonstrates that RBV schemes can act in surprising ways, for example, identifying high numbers of women as being at higher risk of committing fraud. Equally, the House of Commons Science and Technology Select Committee noted, as early as 2018, how machine learning algorithms can replicate discrimination.

Importantly, due to the complete lack of transparency as to how RBV machine learning algorithms work, applicants are not able to satisfy themselves that they are not being discriminated against. This is known as the “black box” problem and it is something which we discuss extensively in our opinion. Our view is that if there is some evidence that an individual has been discriminated against by an RBV system and this is coupled with a complete lack of transparency, then the burden of proof should shift to the local authority to prove that discrimination is not happening. This is an area where we anticipate litigation in the future.

Finally, in so far as prima facie indirect discrimination is identified and the local authority is required to justify its use of RBV, we expect that the justification defence may be difficult to satisfy because of evidence, outlined in our paper, suggesting that RBV is not necessarily an accurate way of identifying fraud.

GDPR

There are also important GDPR consequences here. Article 22 restricts organisations from subjecting individuals to decisions based solely on automated processing which produce legal or similarly significant effects, and some local authorities do appear to be making decisions of this kind through their RBV systems. Accordingly, in the future, we foresee not only equality claims against organisations which use AI systems such as automated decision-making, but also claims for breach of the GDPR.

Conclusion

Whilst we focused on two examples of government decision-making in our opinion for TLEF, there are very many ways in which important decisions are increasingly being taken in the public sector “by machine”. We see equality claims arising from AI and automated decision-making as the next battleground of the coming decades. Careful planning and auditing of AI systems may well avoid litigation. This is why it is vitally important that all organisations, including those in the private sector, act now to ensure that their decision-making systems are defensible.

To watch out for …

Since we wrote our opinion, we have become aware of concerns about “government by machine” being discussed in other parts of Europe. 

Litigation on this issue is currently underway in the Netherlands in the case of NJCM c.s./De Staat der Nederlanden (SyRI) before the District Court of The Hague (case number: C/09/550982/ HAZA 18/388) concerning an AI system which is being used extensively to monitor data on citizens. The case is known for short as “SyRI”. The United Nations Special Rapporteur on extreme poverty and human rights, Professor Philip Alston, has submitted an Amicus Brief which sets out his human rights concerns about the increasing development of digital welfare states. We shall be reporting on the outcome of this case in due course.

The Rapporteur has also warned, in a report submitted to the United Nations on 18 October 2019, that “as humankind moves, perhaps inexorably, towards the digital welfare future it needs to alter course significantly and rapidly to avoid stumbling zombie-like into a digital welfare dystopia.”

Robin Allen QC and Dee Masters are leading discrimination barristers at Cloisters chambers in London. This article first appeared on their AI Law Hub.
