AI technology complicates election security

Recent events, including an artificial intelligence (AI)-generated deepfake robocall impersonating President Biden and urging New Hampshire voters to abstain from the primaries, serve as a reminder that bad actors increasingly see modern generative artificial intelligence (GenAI) platforms as a powerful weapon for targeting US elections.

Platforms such as ChatGPT, Google’s Gemini (formerly Bard), or any number of purpose-built Dark Web large language models (LLMs) could play a role in disrupting the democratic process, with attacks including mass influence campaigns, automated trolling, and the proliferation of deepfake content.

In fact, FBI Director Christopher Wray recently expressed concern about an ongoing information war using deepfakes that could sow disinformation during the upcoming presidential campaign, as state-backed actors attempt to influence geopolitical balances.

GenAI could also automate the rise of “coordinated inauthentic behavior” networks, which attempt to build an audience for their disinformation campaigns through fake news outlets, convincing social media profiles, and other avenues, with the goal of sowing discord and undermining public trust in the electoral process.

Electoral influence: Substantial risks and nightmare scenarios

In the view of Padraic O’Reilly, CyberSaint’s chief innovation officer, the risk is “substantial” because the technology is evolving so rapidly.

“This promises to be interesting and perhaps even a little alarming, as we see new variants of disinformation that exploit deepfake technology,” he says.

Specifically, O’Reilly says, the “nightmare scenario” is that microtargeting with AI-generated content will proliferate across social media platforms. This is a familiar tactic from the Cambridge Analytica scandal, in which the company collected psychological-profile data on 230 million US voters in order to deliver highly personalized messages via Facebook to individuals in an attempt to influence their beliefs – and their votes. But GenAI could automate this process at scale and create highly compelling content with few, if any, of the “bot” characteristics that might turn people away.

“Stolen targeting data [personality snapshots of who a user is and their interests] merged with AI-generated content is a real risk,” he explains. “The Russian disinformation campaigns of 2013–2017 suggest what else could and will happen, and we know of deepfakes generated by US citizens [like the one] with Biden and Elizabeth Warren.”

The mix of social media and readily available deepfake technology could be an apocalyptic weapon for polarizing US citizens in an already deeply divided country, he adds.

“Democracy is based on certain traditions and shared information, and the danger here is greater balkanization among citizens, leading to what Stanford researcher Renée DiResta called ‘bespoke realities,’” says O’Reilly – that is, people who believe in “alternative facts.”

The platforms that threat actors use to sow division will likely be of little help. He notes, for example, that the social media platform X, formerly known as Twitter, has chipped away at its quality assurance (QA) for content.

“Other platforms have provided standard assurances that they will tackle misinformation, but free-speech protections and a lack of regulation still leave the field open to bad actors,” he warns.

AI amplifies existing phishing TTPs

GenAI is already being used to create more credible and targeted phishing campaigns at scale, but in the context of election security this phenomenon is even more concerning, according to Scott Small, director of cyber threat intelligence at Tidal Cyber.

“We expect to see cyber adversaries adopt generative AI to make phishing and social engineering attacks – for many years the primary forms of election-related attacks in terms of consistent volume – more convincing, making it more likely that targets will interact with harmful content,” he explains.

Small says the adoption of AI also lowers the barrier to entry for launching such attacks, a factor likely to increase the volume of attempts this year to infiltrate campaigns or take over candidate accounts for impersonation purposes, among other potential aims.

“Criminal and nation-state adversaries regularly adapt phishing and social engineering lures to current events and popular topics, and these actors will almost certainly seek to take advantage of the boom in election-related digital content being distributed broadly this year to try to deliver malicious content to unsuspecting users,” he says.

Defending against AI-driven election threats

To defend against these threats, election officials and campaigns must be aware of the risks posed by GenAI and how to counter them.

“Election officials and candidates are constantly giving interviews and press conferences from which threat actors can draw insights for AI-powered deepfakes,” says James Turgal, vice president of cyber risk at Optiv. “Therefore, it is up to them to make sure they have a person or team in place responsible for maintaining control over their content.”

They must also ensure that volunteers and workers are trained on AI-based threats such as advanced social engineering, the threat actors behind them, and how to respond to suspicious activity.

To this end, staff should participate in social engineering and deepfake video training that covers all attack forms and vectors, including electronic attempts (email, text messages, and social media platforms) as well as in-person and telephone-based attempts.

“This is very important, especially with volunteers, because not everyone has good cyber hygiene,” Turgal says.

Additionally, campaign and election volunteers must be trained on how to safely provide information online and to external entities, including in social media posts, and to use caution when doing so.

“Cyber threat actors can collect this information to tailor socially engineered lures to specific targets,” he warns.

O’Reilly believes that long-term regulation, including watermarking requirements for audio and video deepfakes, will be instrumental, noting that the federal government is working with LLM owners to put protections in place.

In fact, the Federal Communications Commission (FCC) just declared that AI-generated voice calls are “artificial” under the Telephone Consumer Protection Act (TCPA), making the use of voice-cloning technology in robocalls illegal and giving state attorneys general nationwide new tools to combat such fraudulent activity.

“AI is moving so fast that there is an inherent danger that any proposed rule could become ineffective as the technology advances, potentially missing the mark,” O’Reilly says. “In a way, it’s the Wild West, and AI is coming to market with very few safeguards.”


