AI transcription tools in social work introduce new risks to people and society, such as bias and ‘hallucinations’, report warns
AI transcription is rapidly being rolled out across social work, but current approaches to ethics and evaluation are “limited and light-touch”, a report by the Ada Lovelace Institute has claimed.
According to the Institute, the roll-out of AI transcription tools in social work is being “met with enthusiasm” by social workers, with many already reporting benefits from the time saved.
However, the report warned that AI tools are introducing new risks to people and society, such as bias and ‘hallucinations’, that “aren’t being fully assessed or mitigated”.
AI transcription tools are designed and marketed to make administrative tasks, such as note-taking and writing summaries of meetings, faster and more efficient.
For the study, researchers interviewed 39 social workers with experience of using AI transcription tools, from across 17 local authorities, as well as senior staff members involved in procuring and evaluating them.
The Institute then made the following findings and observations:
- Resource constraints in social care are inspiring widespread piloting, adoption and evaluation of AI transcription tools.
- Local authorities focus their evaluations on efficiency rather than the impact on people who draw on care.
- Almost all social workers report that using AI transcription tools brings meaningful benefits to their work, but they do not experience these benefits uniformly.
- Social workers assume full responsibility for AI transcription tools, but perceptions of reliability, accuracy and the need for oversight (‘human in the loop’) vary significantly.
- There is no consensus on when it is appropriate to use AI transcription tools in social care.
In order for AI transcription tools to be used “safely and responsibly”, the Institute called on the government to require that local authorities record their use of AI transcription tools through the ‘Algorithmic Transparency Reporting Standard’.
It also recommended social care regulators and local authorities work with other sectoral bodies to produce guidance for the use of AI transcription tools in statutory processes and formal proceedings, with “clear accountability structures”.
The Institute noted: “To enable end-to-end accountability, regulators and professional bodies should review and revise rules and guidance on professional ethics for social workers and support social workers to collaborate with legal and advisory bodies around procedures for AI use in formal proceedings. An advisory board comprised of people with lived experience of drawing on care should be established to inform these actions.”
Other recommendations included:
- The UK government should extend its pilots of AI transcription tools to include various locations and public sector contexts.
- The UK government should set up a What Works Centre for AI in Public Services to generate and synthesise learnings from pilots and evaluations.
- A coalition of researchers, policymakers, civil society and community groups should collaborate on research on the systemic impacts of AI transcription tools.
- Local authorities should specify their outcomes and expected impact when procuring AI transcription tools to ensure a shared understanding among staff and users.
Lara Groves, Senior Researcher at the Ada Lovelace Institute and co-author of the research, said: “AI may be able to help make aspects of public services more efficient, and so policymakers should absolutely be looking at how technology can help public sector workers. However, delivering time savings is not necessarily the same thing as delivering public benefit, especially if these come at the cost of inaccuracy or unaccountability. In the rush to adopt AI in the public sector, it is essential that policymakers don’t lose sight of the wider risks to people and society and the need for responsible governance.”
Oliver Bruff, Researcher at the Ada Lovelace Institute and co-author of the research, said: “The safe and effective use of AI technologies in public services requires more than small-scale or narrowly scoped pilots. Ensuring that AI in the public sector works for people and society requires taking a much deeper and more systematic approach to evaluating the broader impacts of AI, as well as working with frontline professionals and affected communities to develop stronger regulatory guidelines and safeguards.”
Lottie Winson