Experts warn that AI could disrupt EU elections and supercharge disinformation.

BRUSSELS: Voters in the European Union will elect legislators for the bloc’s Parliament starting Thursday, in a major democratic exercise likely to be overshadowed by online misinformation.

Experts warn that artificial intelligence could accelerate the spread of fake news, with the potential to affect elections in the EU and in other countries this year. The stakes are particularly high in Europe, which has faced Russian propaganda campaigns as Moscow’s war with Ukraine continues.

Here’s a closer look:


What’s happening?

From Thursday to Sunday, some 360 million people across 27 countries – including Portugal, Finland and Ireland – will elect 720 members of the European Parliament. Experts have noted a rise in fake news and anti-EU misinformation and disinformation in the months leading up to the election.

A major concern is new AI tools that make it easier to create false or misleading content. Some of the malicious activity is domestic and some comes from abroad. Russia and China are most often blamed for such attacks, though direct involvement is hard to prove.

EU foreign policy chief Josep Borrell has warned that “Russian state sponsored campaigns” to flood the EU’s information space with misleading content are a threat to the way democratic debates are conducted, especially at election time.

He said Russia’s “information-manipulation” efforts take advantage of growing social media penetration and “cheap AI-assisted operation.” Bots, he said, are being used to spread smears against European politicians who criticize Russian President Vladimir Putin.


Has there been any misinformation yet?

Election-related misinformation has already cropped up repeatedly across Europe.

Two days before Spain’s national elections in July, a fake website mirroring one operated by Madrid authorities appeared online. It carried an article falsely warning that the disbanded Basque separatist militant group ETA might attack polling stations.

Two days before the Polish parliamentary elections in October, police swept a polling place after a false bomb threat. Social media accounts linked to what authorities call the “infosphere” of Russian interference claimed a device had exploded.

In the days leading up to Slovakia’s parliamentary elections in September, AI-generated audio recordings impersonating a candidate discussed plans to rig the vote. Fact-checkers scrambled to debunk the recordings as false while they spread across social media.

Last week, in an apparent hack that authorities blamed on Russia, Poland’s national news agency carried a fake article claiming that Prime Minister Donald Tusk would mobilize 200,000 men starting July 1. The Polish News Agency took the article down minutes later and said it was not the source.

That is “really concerning, and different from other attempts to create misinformation using alternative sources,” said Alexandre Alaphilippe, executive director of EU DisinfoLab, a non-profit group that studies disinformation. The incident raises the question of cybersecurity in news production, he said, which should be treated as critical infrastructure.


What is the purpose of disinformation?

Experts and authorities say that Russian disinformation is designed to disrupt democracy by discouraging EU voters from going to the polls.

Vera Jourova, vice-president of the European Commission, warned the European Parliament in April that “our democracy cannot be taken for granted” and that the Kremlin would keep using “disinformation, malign influence, corruption and any other dirty trick from the authoritarian playbook” to divide Europe.

Tusk, for his part, denounced Russia’s “destabilization strategies” on the eve of the European elections.

Disinformation campaigns “are often not designed to disrupt elections” as such, said Sophie Murphy Byrne, a senior government affairs manager at Logically, a company that provides AI-driven intelligence. Rather, “it tends to be an ongoing activity intended to appeal to conspiracy minds and erode social trust,” she said in a recent online briefing.

EU experts and analysts say the narratives are also crafted to stoke public discontent with Europe’s political leaders, to divide communities on issues such as family values, sexuality or gender, to sow doubt about climate change, and to chip away at Western support for Ukraine.


What is new?

Five years ago, during the last European Union elections, online disinformation was largely produced by “troll farm” workers who toiled in shifts, writing manipulative posts in sometimes clumsy English or repurposing old video footage. Those fakes were easier to spot.

Experts have raised the alarm over the rise of generative AI, which they warn will supercharge the spread of disinformation in elections worldwide. The same technology behind easy-to-use platforms such as OpenAI’s ChatGPT can be abused by malicious actors to create deepfake audio, video and images that look real. Anyone with a smartphone and a devious mind can create convincing but false content to fool voters.

Generative AI systems can be used to create realistic videos and images and then push them to social media users, said Salvatore Romano, head of research at AI Forensics, a non-profit group.

AI Forensics recently uncovered a network of pro-Russian web pages that it says took advantage of Meta’s failure to moderate political advertising in Europe.

Such fabricated content is “indistinguishable from the real thing,” Romano said, and takes disinformation experts a long time to debunk.


What is the EU doing?

To fight back, the EU has enacted a sweeping new law, the Digital Services Act. It requires platforms to reduce the risk of disinformation spreading and can be used to hold them accountable under the threat of hefty fines.

The bloc has used the law to demand information from Microsoft about its Bing Copilot AI, including over concerns about “automated manipulation of services that could mislead voters.”

Meta Platforms, the owner of Facebook and Instagram, has also been investigated under the DSA for allegedly failing to do enough to protect its users from disinformation campaigns.

The EU has also passed a law on artificial intelligence that includes a requirement to label deepfakes. However, it will not arrive in time for the vote, taking effect over the next two years.


What are the social media companies doing?

Most tech companies have touted the steps they are taking to protect “election integrity” in the European Union.

Meta, which also owns WhatsApp, says it will establish an election operations centre to identify online threats. The company employs thousands of content reviewers working across the EU’s 24 official languages, and it is tightening its policies on AI-generated material, including by labeling and “downranking” it.

Nick Clegg, Meta’s president of global affairs, has said there is no indication that generative AI is being used to disrupt elections on a systematic basis.

TikTok says it is creating fact-checking hubs inside its video-sharing app. Google, the owner of YouTube, says it is working with fact-checking groups and will use AI to “fight abuse on a large scale.”

Elon Musk has taken the opposite approach with his social media platform X, formerly known as Twitter. “Oh, you mean the ‘Election Integrity Team’ that undermined election integrity? Yeah, they’re all gone,” he said in a September post. – AP
