Equalities watchdog expresses concern about its limited ability to respond to the risks of AI because of budget constraints
Strong regulation of artificial intelligence systems is essential to preventing any repetition of the Post Office’s Horizon scandal, the Equality and Human Rights Commission has said.
In its paper An update on our approach to regulating artificial intelligence, it said AI “comes with a wide range of risks, including risks of bias and discrimination as well as risks to human rights”.
But the regulator said it lacked the resources needed to regulate AI in the way it would wish to.
AI systems now in use are, it said, “many times more powerful and complex than the Horizon system.
“Strong, effective and sufficiently resourced regulation of AI is therefore essential to mitigate the risks and build trust in AI systems.”
It said AI offered many benefits but also had “the potential to result in breaches of the Equality Act 2010 and the Human Rights Act 1998 in many ways”.
The government’s policy paper A Pro-innovation Approach to AI Regulation said regulators should apply five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
The EHRC said these principles “lacked sufficient emphasis on equality and human rights” and that it would need additional resourcing to take them on alongside its existing work. “As yet, government has not provided any additional resources,” the regulator said.
It would, though, incorporate the principles into its planned compliance and enforcement work and into its advisory role to governments and parliaments.
But the regulator warned: “Going beyond this would require a significant reallocation of our resources. This is not possible within our current strategic commitments and existing budget. We therefore have no plans for the remainder of our current strategic plan to develop dedicated guidance around the White Paper principles.”
The EHRC said that in 2024–25 it would focus predominantly on reducing and preventing digital exclusion, particularly among older and disabled people accessing local services.
It would also work on the use of AI in recruitment and on police use of facial recognition technology.
Mark Smulian