Amandeep Bains sets out the key considerations if AI is to be used in homelessness decision-making.

Artificial intelligence is creeping into every corner of public decision-making, and homelessness law is no exception. The homelessness procedure, which should be a safeguard for vulnerable people, risks being hollowed out by algorithms that prioritise efficiency over fairness. Instead of protecting rights, AI threatens to turn the homelessness process into an empty formality.

Local authorities are under growing pressure to manage homelessness caseloads. Against this backdrop, they may look to artificial intelligence (“AI”) to assist with applications and reviews. While such technology might promise efficiency, its use in this setting raises serious concerns.

AI systems often function as “black boxes”. If a housing officer relies on automated analysis or decision-support, applicants may be unable to see how conclusions were reached. This undermines the requirement that reviews be transparent and reasoned.

If trained on historical housing data, AI could replicate patterns of discriminatory decision-making, for example by disadvantaging groups already disproportionately affected by homelessness. The homelessness process, which should be a safeguard against unfairness, risks becoming another layer of systemic bias.

The statutory duty on review, under section 202 of the Housing Act 1996, requires an officer to actively and independently reconsider the case. Delegating substantive aspects of that judgment to AI would risk hollowing out this safeguard. Local authorities remain legally responsible, but AI’s use could obscure where responsibility lies when errors occur.

If an applicant seeks to review or appeal the outcome, it may be difficult to identify whether an AI tool influenced the decision. Without disclosure obligations, courts and advisers could struggle to scrutinise the reasoning, weakening the ability to hold local authorities to account.

The review process is intended as a crucial protection for homeless applicants. Introducing AI into this process risks undermining that safeguard by reducing transparency, embedding bias, and diluting accountability. The danger is that reviews cease to function as a genuine check on flawed decisions and instead become another barrier to justice for applicants.

But the answer is not to ban technology outright; it is to use it responsibly. Safeguards are possible.

What needs to happen?

At a minimum, that means local authorities disclosing when an AI tool has informed a decision, reviewing officers retaining genuine, substantive judgment, and outcomes being monitored for bias. The homelessness review process exists to protect some of the most vulnerable in society. Used carelessly, AI risks undermining that protection. Used carefully, with transparency and accountability at its heart, it could support, rather than endanger, the fairness of the system.

Amandeep Bains is a housing solicitor at Duncan Lewis. He acts for both landlords and tenants, privately and publicly funded, in a range of cases, including possession proceedings, homelessness cases, suitability reviews, housing litigation, eviction proceedings, disrepair claims, harassment issues, injunction proceedings, and judicial review applications.