
AI’s Impact on Democracy: Report Details Disinformation and Manipulation

A new report from the Centre for Emerging Technology and Security (CETaS) at the Alan Turing Institute reveals how artificial intelligence is being used to manipulate elections and undermine democratic processes, even after polls close.

AI’s Influence on Democracy and Elections

The Centre for Emerging Technology and Security (CETaS) at the Alan Turing Institute has released a series of reports analyzing the impact of deepfakes and generative AI on elections worldwide, drawing on press articles, government publications, and information from electoral institutions.

Researchers gathered examples from various elections to identify patterns in content creation, AI usage, and user reactions to viral deepfakes of politicians during campaigns.

Challenges in Measuring AI’s Impact

Determining whether deepfakes directly alter election outcomes is difficult, as liking or sharing a deepfake doesn’t reveal how someone ultimately voted. Numerous factors influence voting decisions, including family, friends, and ideology.

While deepfakes often spread rapidly, interest typically fades quickly due to the sheer volume of online content. This limits their sustained impact compared to other forms of interference.

Russian Disinformation Campaigns

The report uncovered numerous disinformation networks linked to Russia attempting to promote pro-Kremlin, anti-Ukraine narratives in countries before elections. These groups used AI-generated articles disguised as official news to influence public opinion.

Russia aims to shape public discourse in Central and Eastern Europe, targeting parties supporting continued military and financial aid to Ukraine. In Poland, efforts focused on exacerbating existing societal divisions rather than creating new narratives.

The Role of Social Media, Particularly TikTok

Social media platforms play a significant role in shaping narratives around elections and candidates, with TikTok proving particularly influential, as seen in Romania where it boosted Calin Georgescu’s campaign.

Content from Georgescu was often not labeled as political, giving it greater visibility compared to other candidates whose content was flagged by algorithms.

Ultimately, social media shapes attitudes and election results, with AI adding another layer of influence on voter behavior.

AI-Generated Content by Political Parties

Parties are increasingly using AI to generate advertisements and even deepfakes targeting opponents, often without disclosing that the content is AI-generated. Instances of this have been observed in Poland, including a deepfake involving Donald Tusk and TikTok materials promoting EU exit.

Post-Election Influence and Deepfakes

The influence of AI extends beyond the campaign period, with reports of fabricated election fraud claims following recent presidential elections. Realistic AI-generated images and videos alleging voting irregularities have circulated widely, and such misinformation is difficult to dispel even after it has been debunked.

Financial Motivations Behind Deepfake Creation

Social media algorithms prioritize sensational, clickbait content, creating a financial incentive to produce it. Some creators are motivated solely by profit, generating deepfakes that promote scams or fraudulent financial schemes.

Regulation and the Future of AI in Politics

While AI has beneficial applications, such as translating candidate manifestos into multiple languages, there’s a lack of ethical standards for its use in politics. The report calls for mandatory labeling of all AI-generated political content and removal of deepfakes that are defamatory or incite violence.

The Digital Services Act in the EU is expected to hold platforms accountable for addressing these threats, potentially granting researchers and journalists access to social media data for identifying harmful content.

AI in Crisis Situations

The report also analyzed the use of AI during crises, such as terrorist attacks, noting the potential for misinformation and confusion. False information about suspects can lead to violence against specific groups.

“Data poisoning” – the manipulation of data used to train AI chatbots – is another concern, as groups can edit Wikipedia and other sources to influence chatbot responses. The increasing reliance on AI chatbots for information necessitates responsible usage and critical evaluation of sources.

The ongoing conflict between Israel and Iran has seen a surge in realistic AI-generated videos depicting attacks and fabricated damage, further polluting the information ecosystem.

Humanoid Robots and Data Privacy Concerns

China is actively investing in the development of humanoid robots through public-private partnerships. These robots pose risks, including potential physical harm if they are hacked or otherwise compromised.

Data privacy is a major concern, as these robots can record and collect data about their surroundings and users, potentially being used for profiling, targeted advertising, or even falling into the wrong hands.

Despite the risks, humanoid robots have potential benefits, such as assisting individuals with limited mobility or aiding in disaster relief efforts.
