
Generative AI and elections are a key focus for hackers in 2024, report warns

CrowdStrike’s annual Global Threat Report said hackers were using generative AI tools to improve scams and create disinformation.

Hackers could use generative AI to develop computer scripts and code for use in cyber attacks (Dominic Lipinski/PA)

Hackers are turning to generative AI to help them scam people and will look to disrupt major elections taking place during 2024, according to a new cyber threat report.

CrowdStrike’s annual global threat report said the speed of cyber attacks is increasing, with hackers breaking into systems more quickly.

The cyber-security firm’s study said generative AI tools such as ChatGPT are likely to be exploited to help less capable hackers improve their scams and cyber attack capabilities. ChatGPT maker OpenAI confirmed this was happening last week, when it announced it had removed accounts linked to state-backed hacking groups that were exploiting its AI tools.

The report also warned that nation-state actors from China, Russia and Iran were highly likely to conduct misinformation campaigns in an attempt to disrupt elections throughout the year, with voting due to occur in the US and most likely the UK, among a number of other countries.

The report said generative AI had “massively democratised computing to improve adversary operations” and was helping “lower the entry barrier” for less skilled hackers to carry out attacks.

CrowdStrike warned that hackers could use generative AI to develop computer scripts and code for use in cyber attacks, as well as to create more convincing scam content to trick people into handing over sensitive personal information.

On elections, the report warned that while some attempts could be made to disrupt the software that powers elections, including machines used to log or count votes, the most common form of election targeting would be the distribution of disinformation before, during and after the voting process.

It said Russia and Iran would likely look to target elections in the US and EU, as they consider these regions to be “major geopolitical opponents”, while China would likely target countries such as Indonesia, South Korea and Taiwan as they were in its “perceived sphere of influence”.

It added that generative AI would also likely play a part in disinformation campaigns, noting that its ease of use and speed at creating content to aid “deceptive but convincing narratives” would make it a desirable tool for hackers.

CrowdStrike also noted that “changes to or staff reductions affecting the enforceability of content moderation policies at major social media companies” would provide opportunities for hackers to exploit.

Since taking over Twitter, now known as X, in 2022, Elon Musk has dismissed the majority of the company’s staff, including large numbers of content moderators. Many industry experts have warned that this has allowed large swathes of harmful content, including abuse, spam and misinformation, to spread more easily on the platform.

The annual report was put together by analysing the activity of more than 230 cyber threat groups.

Adam Meyers, head of counter adversary operations at CrowdStrike, said: “Over the course of 2023, CrowdStrike observed unprecedented stealthy operations from brazen eCrime groups, sophisticated nation-state actors and hacktivists targeting businesses in every sector spanning the globe.

“Rapidly evolving adversary tradecraft honed in on both cloud and identity with unheard-of speed, while threat groups continued to experiment with new technologies, like GenAI, to increase the success and tempo of their malicious operations.

“To defeat relentless adversaries, organisations must embrace a platform approach, fuelled by threat intelligence and hunting, to protect identity, prioritise cloud protection, and gain comprehensive visibility into areas of enterprise risk.”