A new report from the Centre for Emerging Technology and Security (CETaS) at the Alan Turing Institute details how AI is being used to influence elections and undermine democratic processes.
AI’s Growing Influence on Elections and Democracy
The report highlights the increasing use of AI-generated deepfakes and disinformation campaigns by a range of actors, including Russian-linked groups, to manipulate public opinion and interfere in elections.
Analysts warn that the greatest threat to democracy emerges *after* polls close, as AI-driven disinformation can erode trust in election results and incite unrest.
Analyzing AI’s Impact: A Data-Driven Approach
Researchers at CETaS analyzed press articles, government information, and election data to understand the impact of deepfakes and generative AI on elections worldwide.
The study identified patterns in the creation and dissemination of AI-generated content, examining user reactions and the viral spread of deepfakes targeting politicians.
The Limited, Yet Present, Impact of Deepfakes
While determining the direct impact of deepfakes on election outcomes is challenging, researchers found that they often experience a short-lived viral surge before fading from public attention.
The sheer volume of online content diminishes the lasting impact of individual deepfakes, but their potential for disruption remains a concern.
Russian Disinformation Networks Targeting Eastern Europe
The report uncovered numerous Russia-linked disinformation networks attempting to promote pro-Kremlin and anti-Ukraine narratives in several countries ahead of elections.
These networks utilized AI-generated articles disguised as legitimate news sources to influence public opinion on issues like the war in Ukraine, particularly in Poland, Hungary, the Czech Republic, and the Baltic states.
TikTok’s Role in Shaping Election Narratives
Social media platforms, particularly TikTok, play a significant role in shaping narratives surrounding elections and candidates.
In Romania, TikTok gave Calin Georgescu a platform to promote his campaign, and his content received preferential treatment from the platform's algorithms compared with that of other candidates.
AI-Generated Content Used by Political Parties
Political parties are increasingly utilizing AI to generate campaign advertisements and even deepfakes targeting their opponents, often without proper disclosure.
In Poland, researchers observed deepfakes featuring Donald Tusk as well as TikTok campaigns promoting withdrawal from the European Union.
Post-Election Disinformation and Erosion of Trust
The influence of AI extends beyond the campaign period, with disinformation spreading after elections to question their legitimacy.
AI-generated images and videos falsely depicting election fraud have circulated, potentially undermining public trust in the democratic process.
Financial Motivations Behind Deepfake Creation
The report notes that algorithms on social media platforms prioritize sensational and clickbait content, incentivizing the production of misleading material for financial gain.
Some actors exploit this by creating deepfakes that promote fraudulent financial schemes, capitalizing on weak platform moderation.
Regulation and the Future of AI in Politics
The report calls for new regulations requiring politicians to disclose the use of AI in their communications and for the removal of harmful deepfakes.
It emphasizes the importance of balancing regulation with the potential benefits of AI, such as translation tools used to reach wider audiences.
AI in Crisis Situations: Amplifying Disinformation
The report also examines the use of AI during crises, noting its potential to exacerbate disinformation and sow confusion.
Examples include the spread of false information following terrorist attacks and the manipulation of chatbot responses through data poisoning techniques.
Humanoid Robots and Data Security Concerns
The report highlights the rapid development of humanoid robots, particularly in China, and the associated risks related to data privacy and security.
Concerns include the potential for these robots to collect sensitive data about individuals and the vulnerability of their systems to hacking and misuse.