The Information Commissioner’s Office (ICO) has published the final version of its guidance on Artificial Intelligence and Data Protection. James Cassidy looks at the key points.
The ICO's Guidance aims to help organisations comply with data protection law when developing and deploying AI technology.
The Guidance will be important both to organisations looking to use AI in managing personal data and to those offering AI solutions.
Purpose of the Guidance
AI has been identified by the ICO as one of its “top three strategic priorities”. The ICO recognises both the benefits that AI can bring to society and the privacy risks associated with its use. In light of this, the ICO began work on its AI auditing framework in early 2019, seeking input on a clear framework for how it will audit AI technology’s “compliance with data protection obligations”. Since then, the ICO has published a number of articles on how it sees this framework developing.
The Guidance adds to the ICO’s work in this area as it provides organisations with more detailed guidance on its thinking. The ICO published its first draft of this Guidance back in February 2020, which was subject to public consultation. The Guidance is not a statutory code, but instead provides advice and good practice recommendations for those working in data protection compliance (e.g. DPOs, general counsel, managers and the ICO’s own auditors) as well as technology specialists (e.g. machine learning and software developers, data scientists and cybersecurity and IT risk managers).
The Guidance should also be read alongside existing ICO resources on AI, including its report on Big Data, AI and Machine Learning (updated in 2017) and its more recent guidance on Explaining AI, which focuses on how organisations can explain AI decisions in compliance with data protection laws (published May 2020).
The Guidance provides helpful assistance for organisations on how to mitigate the data protection risks associated with AI and how to apply the core principles set out in the General Data Protection Regulation (GDPR) to AI technology. The “headline takeaway” for organisations is to consider compliance with data protection law in the early stages of development, reinforcing a core principle of the GDPR, namely “data protection by design and by default”.
The Guidance provides advice and good practice recommendations on the following core data protection principles with respect to AI applications:
- Part 1: Accountability and governance. This section of the Guidance focuses on: (i) how to carry out data protection impact assessments (which, according to the ICO, in most cases will be mandatory for AI applications); (ii) how to identify controllers and processors; (iii) how to assess and address the risks to individuals; and (iv) how to document your approach to compliance.
- Part 2: Lawfulness, fairness and transparency. This section of the Guidance focuses on how these principles apply to AI and how to identify your legal basis for processing (which may vary for AI development and AI deployment). It also looks at the importance of “statistical accuracy” in AI systems and how to address bias and discrimination.
- Part 3: Data minimisation and security. This section of the Guidance focuses on how to identify and manage security risks (including privacy attacks) on AI systems, as well as how to ensure compliance with the principle of data minimisation.
- Part 4: Data subjects’ rights. This section of the Guidance focuses on how data subjects’ rights apply to AI systems (particularly as the data may change throughout the AI lifecycle or be contained within the AI model), including rights relating to solely automated decisions that produce a legal or similarly significant effect (Article 22, GDPR).
Before developing and deploying AI systems that will involve the processing of personal data, organisations should carefully consider how they will comply with data protection legislation in line with the considerations set out in the ICO’s Guidance. As the ICO makes clear in its Guidance, “[d]ata protection should not be an afterthought” and organisations should “not underestimate the initial and ongoing level of investment of resources and effort that is required” when it comes to AI governance and risk management.
Next Steps for the ICO
The ICO has noted that it will continue to develop the Guidance to ensure it remains relevant and, accordingly, welcomes feedback from future consultations. The ICO has indicated that another “forthcoming” output of its AI auditing framework is a “toolkit designed to provide further practical support to organisations auditing the compliance of their own AI systems”.
The ICO has also signposted other areas in which it hopes to provide further guidance, which will have broader application but also be relevant to AI systems. These include a general toolkit on accountability, an update to its Cloud Computing Guidance in 2021, expanded general guidance on security, and new guidance on anonymisation.