How bad actors could use generative AI to impact elections this year

Around one billion voters will head to the polls all over the world this year, while wily campaigns and underfunded election officials will face pressure to use AI for efficiencies.

Why it matters: Conditions are ripe for bad actors to use generative AI to amplify efforts to suppress votes, libel candidates and incite violence.

  • New companies providing powerful generative AI have untested and relatively small election integrity teams, while older companies have cut back those teams — at its peak in 2019, Meta’s integrity staff numbered over 500 globally.
  • AI may end up disenfranchising voters as election officials use new tools for a variety of tasks, from identifying and removing ineligible citizens from voting registries to AI-powered signature matching.
  • Chatbots and platform algorithms risk serving up inaccurate information to voters.

The big picture: This year, more people will vote than in any other year between 2004 and 2048.

  • It’s the first time in 60 years that the U.S. and U.K. are voting for new administrations in the same year and the first time since 2004 that the U.S. and EU are.
  • AI is just one category in a growing list of problems for election officials, from a poll worker shortage to violent threats and cybersecurity attacks.

Speech is difficult to regulate. A deep tension exists between the rights to freedom of expression and information and the need to combat misinformation to ensure a fair campaign.

  • That tension will play out against a backdrop of Americans having little trust in the companies deploying AI and a plurality believing AI could alter election results.
  • The few guardrails in place are voluntary, including those demanded by the White House.

What’s happening: Microsoft says it caught Beijing operating a network of online accounts using AI-generated material to sway U.S. voters, and both the CIA and DHS warn that China, Russia and Iran are using generative AI to target election infrastructure and processes.

  • YouTube is among the platforms that reversed bans on election result denialism in 2023, while Facebook currently restricts ads that deny “upcoming” or “ongoing” election results, but not past ones.
  • YouTube, TikTok, Facebook and Instagram now require labeling of election-related advertisements created with AI.

Argentina is a case study in how AI can be weaponized in a presidential race.

  • The winning candidate in the country’s November presidential race, right-wing libertarian Javier Milei, used AI to depict his rivals as Communists and emperors — and he cruised to victory by 3 million votes, a margin far larger than the 44,000 votes that decided the 2020 U.S. presidential race.

Several U.S. states have passed legislation banning or requiring disclosure of political deepfakes, including California, Michigan, Minnesota, Texas and Washington. Legislation is under consideration in New York, Illinois, New Jersey and Kentucky.

  • Arizona election officials conducted a two-day exercise in December, designed to help them spot and respond to deepfake videos.

Yes, but: AI is useful to campaigns and serves as a tool for first drafts of everything from speeches to marketing materials. It also provides customizable robo-conversations with voters and helps candidates better understand the people they aim to serve.

  • America’s decentralized election systems also help to limit any damage due to misuse of AI — most of the action takes place in local races, and the elections themselves are managed by 3,000 or so counties.

What we’re watching: How social media companies work to stop floods of AI-generated misinformation from reaching our screens — if they can’t, their platforms may become either useless or dangerous to democracy.

What’s next: The U.S. primary season kicks off in Iowa and New Hampshire in January, while Taiwan votes for a new president on Jan. 13.

  • Election officials need to publish guidelines on their planned and actual use of AI by late spring, per President Biden’s AI Executive Order, which designates election infrastructure as a type of critical infrastructure.

What they’re saying: Russian election interference in 2016 was “child’s play, compared to what either domestic or foreign AI tools could do to completely screw up our elections,” Sen. Mark Warner (D-Va.) tells Axios.

  • “Panic responsibly. It is important not to freak out about every single thing,” per Katie Harbath, former head of election safety at Meta.
  • Social media companies should allow for “free speech for humans, not computers,” Eric Schmidt told CNBC.