AI is here and it is shaping our lives and our society. Chris Holmes, Baron Holmes of Richmond, discusses AI regulation.
The Government’s AI Opportunities Action Plan, published in January, stressed a desire to ‘revolutionise public services and become an AI superpower’ and was followed in February by an invitation to local and regional authorities across the UK to put their communities forward to become the next AI Growth Zone.
Government aligns with the US on AI and delays regulation again
Alongside these enthusiastic initiatives, the Government has been quietly abandoning its promise to introduce ‘binding AI regulation’. This commitment was made in the manifesto and repeated in the King’s Speech, but there has been absolutely no sign of an AI Bill. Instead, the Government seems to be following the agenda of Trump (and big tech) by declining to sign the declaration from the AI Action Summit in Paris that committed to an ‘inclusive and ethical’ approach.
Urgent need for regulation
This change of tone from the Government is disappointing. I have been arguing for years that regulation is urgent, and I am incredibly frustrated to watch the delays to the Government’s promised AI Bill. The Government has published an AI playbook with 10 common principles ‘to guide the safe, responsible and effective use of AI in government organisations’. They are sound principles, and I welcome the playbook; however, I don’t think it goes far enough. Last week I published a report setting out – again – why the UK urgently needs an approach to AI that puts humans in charge and humanity at the heart.
AI regulation report
My report focuses on the experiences of eight people (the voter, the scammed, the benefit claimant, the job seeker, the teenager, the creative, the transplant patient and the teacher) to underline that AI is already having a huge impact on people’s lives and to explain why the current regulatory approach is not working.
The benefit claimant
One section of my report looks at the experience of a benefit claimant impacted by AI. The DWP has consistently failed to tell the public about the algorithms it uses to make decisions about people’s lives. There are reports of people whose benefits have been indefinitely suspended by teams known to operate automated systems.
People say they have not been provided with any explanation or told what they need to prove or disprove for the benefit to be reinstated, nor how they might seek redress for any incorrect suspension and for the hardship it has caused.
The Department’s existing automated systems have shown evidence of discriminatory effects against older people, people with disabilities and people of certain nationalities. Tests for unfair outcomes were limited to just three protected characteristics: age, gender, and pregnancy. The DWP has previously admitted that its ‘ability to test for unfair impacts across protected characteristics is currently limited.’
An investigation by Big Brother Watch found that an algorithm wrongly flagged 200,000 housing benefit claimants for possible fraud and error, which meant that thousands of UK households every month had their benefit claims unnecessarily investigated. The Government’s recently updated algorithmic transparency records did not include the DWP tools.
In January 2025, freedom of information requests submitted by the Guardian revealed that Ministers had shut down or dropped at least half a dozen AI prototypes intended for the welfare system. Officials reportedly said that ensuring AI systems are ‘scalable, reliable [and] thoroughly tested’ is a key challenge and that there have been many ‘frustrations and false starts’.
My proposed AI Regulation Bill could address these issues through Clause 2, which sets the principles from the (previous) Government’s AI White Paper on a statutory basis, including transparency, explainability, accountability, contestability and redress, and a duty not to discriminate.
UK leadership on ethical AI is possible
I have always argued that the UK has an opportunity to lead on ethical AI and that, if done right, a principles-based, outcomes-focused approach offering clarity to businesses and protection to citizens would allow the UK to position itself as a frontrunner for AI innovation. Constant delays and uncertainty from the Government are problematic for innovation as well as for public trust. Today (4 March) my private members’ bill, first introduced in November 2023, has been reintroduced in the House of Lords, and I urge the Government to consider its provisions seriously.
For local government leaders who want to deploy AI in a safe and ethical way. For the eight realities set out in my report. For the eight billion citizens of our ever more connected world. For economic, social, and psychological benefit. It’s time to legislate, together, on AI; it’s time to human-lead. Our data, our decisions, our AI futures.