Fake news is on everyone's mind ahead of the upcoming local elections in May. As part of the fightback against disinformation campaigns, the Electoral Commission has launched a pilot scheme to detect and counter political deepfakes. LocalGov's editor William Eichler spoke with the commission's chief executive Vijay Rangarajan to learn more about it.
Artificial intelligence (AI) has transformed many things in recent years, but perhaps one of its most troubling applications is the deepfake – a synthetic video, audio or image designed to make it appear that someone said or did something they never did. In the context of elections, the implications are stark. During the 2024 general election, around a quarter of voters reported seeing or hearing a deepfake, a figure that underlines just how quickly this threat has moved from theoretical concern to lived electoral reality.
Now, with local elections scheduled for May, the Electoral Commission has launched a deepfake detection pilot, bringing AI-supported tools and human analysts together in an effort to identify and respond to disinformation before it takes hold. For Vijay Rangarajan, the commission's chief executive, the timing reflects not complacency but a genuine escalation in the threat.
‘We have been monitoring online information threats for some time,’ he says. ‘Recently, AI tools have made creating convincing deepfakes dramatically faster, cheaper and more accessible.’ The international evidence bears this out. In Ireland in 2025, a deepfake falsely showed a presidential candidate withdrawing from the race just days before polling. Closer to home, deepfakes have targeted the Prime Minister, the Mayor of London and sitting MPs. ‘The threat has grown significantly,’ Rangarajan says, ‘and this pilot is our response, in line with our Corporate Plan commitment to build greater AI capability to monitor threats to the democratic system.’
How the system works
The detection process is a hybrid one. AI-supported tools assess content and produce confidence scores, but no decision is made without human oversight. ‘A human analyst reviews every potential deepfake before any decision is made,’ Rangarajan explains. ‘The technology supports our judgement, it doesn't replace it.’ This measured approach reflects both the limitations of current AI detection technology and the high stakes involved in making accusations about electoral content. A false positive – wrongly flagging legitimate content as a deepfake – could itself become a source of harmful misinformation.
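The commission has not published technical details of its tools, but the workflow described here, automated scoring followed by mandatory human review, is a familiar human-in-the-loop pattern. As a purely illustrative sketch (all names, thresholds and the stand-in detector are hypothetical, not the commission's system), it might look like this:

```python
from dataclasses import dataclass

# Hypothetical cut-off: scores above this are queued for an analyst.
REVIEW_THRESHOLD = 0.6

@dataclass
class Flagged:
    item_id: str
    score: float  # model confidence that the item is synthetic

def triage(items, detector):
    """Queue likely deepfakes for human review; never auto-decide.

    The detector only prioritises the queue. A human analyst reviews
    every flagged item before any action is taken, so a false positive
    stops at the review stage rather than becoming a public accusation.
    """
    queue = []
    for item_id, content in items:
        score = detector(content)
        if score >= REVIEW_THRESHOLD:
            queue.append(Flagged(item_id, score))
    # Highest-confidence items surface first for the analyst.
    return sorted(queue, key=lambda f: f.score, reverse=True)

# Demo with a trivial stand-in detector.
fake_detector = lambda text: 0.9 if "synthetic" in text else 0.1
result = triage([("a", "normal clip"), ("b", "synthetic audio")], fake_detector)
print([f.item_id for f in result])  # → ['b']
```

The design choice the sketch illustrates is the one Rangarajan describes: the model's score decides only the order of the review queue, never the outcome.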
The question of pace is a real one. Deepfakes can spread rapidly across social media before detection systems have had a chance to respond, and elections compress the timeframes in which damage can be done. Rangarajan acknowledges this directly: ‘Deepfake detection is a rapidly evolving field, and we are deliberately and carefully building our expertise to inform our future response to electoral misinformation.’ The implication is that this pilot is as much about learning as it is about immediate intervention – laying the groundwork for a more mature and capable response in future electoral cycles.
Powers and limits
The Electoral Commission is not a content regulator, and Rangarajan is careful to define what the pilot can and cannot do. When asked what happens if a social media platform refuses a takedown request, his answer is candid: ‘Our role is not to police platforms but to ensure that when deepfakes emerge, the right organisations are alerted quickly, the evidence is preserved, and the public has accurate information about the electoral process.’
The commission, in other words, operates as a coordinator and communicator rather than an enforcement body. ‘We are part of a wider system response,’ Rangarajan says, ‘and this pilot is about making that system work better.’ Whether that wider system – which relies on the cooperation of platforms, police and other regulators – is sufficient to meet the scale of the problem remains an open question.
Measuring success
When it comes to evaluating the pilot, Rangarajan points to action rather than metrics. ‘When we find false information about the electoral process, we will act quickly,’ he says. That action could include publicly correcting false claims, referring potentially unlawful material to the police, or working with platforms to seek the removal of harmful content. What is clear is that the commission views this pilot not as a solution in itself, but as a first step in a longer process of building the capacity needed to protect democratic integrity in an age of increasingly convincing synthetic media. Whether those steps are being taken quickly enough is a question that May's elections may begin to answer.
Check out: Staying Safe in Local Elections: Challenges and support for candidates
This article was written with the help of AI.
