
Information watchdog seeks views on accuracy of generative AI models in third call for evidence

The Information Commissioner’s Office (ICO) has today (12 April) launched the latest instalment in its consultation series examining how data protection law applies to the development and use of generative AI.

The series, launched on 15 January, invites all stakeholders with an interest in generative AI to respond and help inform the ICO’s positions.

The latest call for evidence, on Chapter 3: ‘Accuracy of training data and model outputs’, focuses on how data protection’s accuracy principle applies to the outputs of generative AI models, and on the impact that the accuracy of training data has on those outputs.

The regulator said: “Where people wrongly rely on generative AI models to provide factually accurate information about people, this can lead to misinformation, reputational damage and other harms.”

The ICO has already consulted on the lawfulness of web scraping to train generative AI models and examined how the purpose limitation principle should apply to generative AI models.

Further consultations on information rights and controllership in generative AI will follow in the summer, the watchdog revealed.

John Edwards, UK Information Commissioner, said: “In a world where misinformation is growing, we cannot allow misuse of generative AI to erode trust in the truth. Organisations developing and deploying generative AI must comply with data protection law – including our expectations on accuracy of personal information.”

Generative AI refers to AI models that can create new content such as text, computer code, audio, music, images and videos.

The latest consultation is open until 5pm on 10 May 2024.

Lottie Winson
