What are the key contractual issues that public sector organisations should prepare for when implementing an AI solution? Justin Harrington explains.

This is the third article in Justin’s AI and Public Sector series. The first is here and the second is here.

Artificial intelligence (AI) solutions are becoming a common feature of public sector operations across England and Wales—powering chatbots, automating workflows, and enabling smarter decision-making. But whether you’re commissioning a predictive analytics tool for housing services, implementing a generative AI assistant, or rolling out AI to support case management, these technologies bring a new set of challenges to the contracting table.

AI is not just another software purchase. It often involves dynamic systems, evolving datasets, third-party integrations, and complex compliance considerations. For public sector customers, getting the contract right is critical.

In this context, many public bodies have learnt from experience that using their standard terms and conditions to purchase cloud services does not work. This is even more true for AI systems: relying on a standard cloud contract to buy an AI system is likely to leave you exposed to some of the key risks that arise from using AI.

This article sets out the key contractual issues public bodies should consider when implementing an AI solution and offers practical tips for managing risk and ensuring lawful, effective deployment.

1. Clearly define the scope and purpose of the AI solution

Defining the scope and drawing up a clear specification is fundamental to any IT contract, but even more so for AI systems. Unlike traditional software that performs fixed tasks, AI tools are designed to operate in more flexible or autonomous ways. This makes it essential to set out in detail in the agreement what the system is intended to do, what data it will use, and what its outputs are expected to be. So far as possible, the agreement should contain measurable outputs that can then be used to assess contractual compliance.

Avoid vague descriptions like “smart automation” or “data insights platform” unless they are clearly explained. A well-defined specification helps manage expectations and provides a foundation for monitoring and accountability.
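By way of illustration, a "measurable output" can often be reduced to an acceptance test agreed between the parties. The minimal Python sketch below assumes a hypothetical classification system, a 95% accuracy threshold and a predict() function; all of these are placeholders for whatever service levels the contract actually specifies.

```python
# Illustrative acceptance test: checks an AI system's outputs against a
# contractually agreed accuracy threshold. The 95% figure, the test data
# and the predict() function are hypothetical placeholders.

AGREED_ACCURACY = 0.95  # the measurable service level written into the contract

def acceptance_test(test_cases, predict):
    """Return True if the system meets the agreed accuracy on the test set."""
    correct = sum(1 for inputs, expected in test_cases if predict(inputs) == expected)
    accuracy = correct / len(test_cases)
    print(f"Accuracy: {accuracy:.1%} (agreed minimum: {AGREED_ACCURACY:.0%})")
    return accuracy >= AGREED_ACCURACY

# Example run with dummy cases and a dummy predictor
cases = [({"score": 1}, "high"), ({"score": 0}, "low")]
print(acceptance_test(cases, lambda x: "high" if x["score"] else "low"))  # True
```

Writing the test into the agreement, rather than leaving "performance" at large, gives both parties an objective yardstick for acceptance and ongoing service levels.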

2. Ownership and use of data

AI systems are only as good as the data they are fed, and managing that data is one of the biggest legal and practical challenges in any AI contract.

Key points to address include:

- Who owns the data supplied to the system, and who owns the outputs it generates.
- Whether the supplier may use your data to train or improve its models, and on what terms.
- Rights in any derived or aggregated data the supplier creates.
- What happens to your data on exit, including return, deletion and any continuing supplier rights.

3. Data protection and security

If the AI solution will process personal data, the contract must include robust data protection provisions, in line with UK GDPR and the Data Protection Act 2018.

This means:

- Imposing processor obligations on the supplier that satisfy Article 28 UK GDPR where it processes personal data on your behalf.
- Being clear about the lawful basis for processing and completing a Data Protection Impact Assessment where one is required.
- Controlling international transfers of personal data and the appointment of sub-processors.

Security should be addressed for all data, not just personal data. Consider requiring penetration testing, encryption standards and audit logs, especially where the AI will handle sensitive, financial or operational information.
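As a concrete illustration of the audit log point, the sketch below shows a minimal tamper-evident log built as a hash chain, where each entry records the hash of the previous one so that any later alteration can be detected. The structure and field names are assumptions for illustration, not a prescribed design.

```python
import hashlib, json, time

# Minimal sketch of a tamper-evident audit log using a hash chain.
# Each entry embeds the hash of the previous entry, so any later
# alteration of the log is detectable. Field names are illustrative.

def append_entry(log, event):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"timestamp": time.time(), "event": event, "prev_hash": prev_hash}
    # Hash is computed over the entry before the "hash" field is added
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

log = []
append_entry(log, {"action": "model_query", "user": "caseworker_01"})
```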

4. Transparency and explainability

Public sector organisations are increasingly expected to ensure that decisions made (or influenced) by AI are transparent and explainable, particularly where individuals may be affected. Transparency is a formal requirement of the UK GDPR, and appropriate transparency and explainability is also one of the five AI principles in the UK Government's 2023 White Paper.

Your contract should include:

- Obligations on the supplier to explain, in plain terms, how the system reaches its outputs.
- Rights to audit the system and to obtain the information you need to respond to statutory requests, such as subject access or freedom of information requests.
- Documentation sufficient to explain an AI-assisted decision to the individual affected.

This is especially important where AI is used to support high-impact services, such as benefits, planning, or social care, where fair and accountable decision-making is essential.
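In practice, explainability depends on capturing the right information at the point of decision. The sketch below shows one way a decision record might be structured so that an AI-assisted decision can be explained and audited later; the field names, the model version label and the sample values are all hypothetical.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Illustrative record of an AI-assisted decision, capturing the inputs,
# output, model version and reasons needed to explain the decision later.
# All field names and values are hypothetical, not a prescribed standard.

@dataclass
class DecisionRecord:
    case_id: str
    model_version: str
    inputs: dict
    output: str
    reasons: list                     # e.g. the factors that drove the output
    human_reviewer: str | None = None # filled in if a person reviewed it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    case_id="HB-2024-0173",
    model_version="risk-model-1.4",
    inputs={"arrears_months": 3, "household_size": 4},
    output="refer_for_support",
    reasons=["arrears_months above threshold"],
)
print(asdict(record))
```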

5. Infringement of third-party IP rights

This has two aspects. First, a customer will want to ensure that the AI system, including the third-party and open source software that typically makes up such a system, does not infringe third-party IP rights. This is relatively normal for IT contracts and is usually backed up by a warranty and an indemnity. What is more contentious is where you rely on the supplier to provide you with datasets: does the dataset contain material that has been used in breach of any licence relating to it? Some suppliers may be reluctant to provide the normal IP assurances in respect of this data.

6. Performance and liability

From a customer’s perspective, every IT contract should set out clearly what performance the system must achieve. Equally, suppliers will always want to cap their liability, so the two positions need to be reconciled in the drafting.

Public sector bodies should also retain the right to suspend or terminate the contract if the system fails to meet agreed standards or introduces unacceptable risks.

7. Human oversight and control

No AI system should operate without meaningful human oversight, especially in the public sector. For solely automated decisions with legal or similarly significant effects, this is a requirement of Article 22 UK GDPR; more generally, it is often cited as good practice.

Ensure your contract:

- Allows a human to review, override or reverse the system’s outputs before they take effect.
- Identifies who is responsible for that review and how concerns are escalated.
- Preserves your ability to suspend or switch off the system without breaching the contract.

It should be borne in mind that suppliers’ standard contracts for AI systems may require customers to carry out human oversight at all times in order to mitigate their risk and liability for erroneous output.
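In operational terms, human oversight is often implemented as a "human in the loop" gate. The minimal sketch below assumes the system reports a confidence score and that low-confidence outputs are queued for a human reviewer; the 0.9 threshold and the function names are hypothetical.

```python
# Minimal human-in-the-loop sketch: outputs below an assumed confidence
# threshold are routed to a human reviewer rather than acted on
# automatically. The threshold and names are hypothetical placeholders.

CONFIDENCE_THRESHOLD = 0.9

def handle_output(prediction, confidence, review_queue):
    """Auto-apply high-confidence outputs; queue the rest for human review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"action": "auto_applied", "prediction": prediction}
    review_queue.append({"prediction": prediction, "confidence": confidence})
    return {"action": "sent_for_human_review", "prediction": prediction}

queue = []
print(handle_output("approve", 0.97, queue))
print(handle_output("refuse", 0.62, queue))   # routed to a human reviewer
```

A contract that requires oversight "at all times" effectively removes the first branch, so it is worth checking that the contractual oversight obligation matches how the system will realistically be operated.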

8. Responsible AI use

Finally, following the principles set out in the UK Government White Paper, many public bodies are adopting ethical principles for AI use, including fairness, inclusivity, and non-discrimination. These principles should be reflected in your contract.

You may want to include:

- Obligations on the supplier to test for, and mitigate, bias in the system’s outputs.
- Compliance with your organisation’s AI and ethics policies and with the White Paper principles.
- Accessibility and inclusivity requirements, so the system works for all service users.

These clauses demonstrate your commitment to trustworthy AI and help build public confidence in how new technologies are deployed.
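To make a fairness obligation testable rather than aspirational, it can help to specify a concrete check. The sketch below shows one simple example, comparing the rate of positive outcomes across groups (a "demographic parity" style check); the 5% tolerance is an illustrative figure, not a legal standard, and real bias testing is considerably more involved.

```python
from collections import defaultdict

# Minimal sketch of a simple fairness check: compares the rate of positive
# outcomes across groups. The 5% tolerance is illustrative only.

TOLERANCE = 0.05

def outcome_rates(decisions):
    """decisions: list of (group, outcome) pairs, outcome True = positive."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def within_tolerance(decisions):
    rates = outcome_rates(decisions)
    return max(rates.values()) - min(rates.values()) <= TOLERANCE

sample = [("A", True), ("A", True), ("B", True), ("B", False)]
print(outcome_rates(sample))      # {'A': 1.0, 'B': 0.5}
print(within_tolerance(sample))   # False: flag for investigation
```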

Conclusion

AI is not just another digital tool. It represents a fundamental shift in how decisions are made, services are delivered, and information is processed. For public sector organisations, this means thinking carefully about how contracts are structured and how risks are managed within them.

By addressing the key issues of scope, data rights, IP infringement, transparency, security, and ethics up front, you can ensure your AI implementation is not only effective but also legally sound and publicly defensible.

Justin Harrington is a partner at Geldards.