AI transcription tools are being rapidly adopted across social work, but current evaluation frameworks remain ‘limited’, raising governance concerns for local authorities, researchers warn.
An Ada Lovelace Institute study, based on interviews with 39 social workers across 17 councils, highlights that while many practitioners report time savings from AI tools, assessments of their impact prioritise efficiency over people-centred outcomes.
The institute’s researchers found wide variation in perceptions of accuracy and oversight responsibility, with no clear consensus on appropriate use in statutory settings.
The institute recommends the UK Government require transparency reporting, broaden pilots, and establish a ‘What Works Centre’ for AI in public services to strengthen evidence and accountability.
Senior researchers warn that without clearer guidance and evaluation, social workers may carry disproportionate risk for errors in AI-generated content, underscoring the need for robust governance in public sector AI deployment.
Oliver Bruff, researcher at the Ada Lovelace Institute and co-author of the research, commented: ‘The safe and effective use of AI technologies in public services requires more than small-scale or narrowly scoped pilots. Ensuring that AI in the public sector works for people and society requires taking a much deeper and more systematic approach to evaluating the broader impacts of AI, as well as working with frontline professionals and affected communities to develop stronger regulatory guidelines and safeguards.’