
In recent times, malicious actors have abused AI deepfake technology to impersonate individuals, as in the cases of an AI deepfake landing a school principal in trouble and a hacker attempting to phish a LastPass employee with an AI deepfake.

But are AI deepfakes the only way malicious actors leverage Artificial Intelligence technologies?

Cybersecurity researcher Rachel James conducted extensive research to map the growing landscape of AI-facilitated cyber threats, with a particular focus on the use of Large Language Models (LLMs) and generative AI tools by threat groups. She compiled her findings in a GitHub repository that serves as a comprehensive catalog of confirmed instances where cybercriminals have employed AI to enhance their attack strategies.

Note that many popular LLMs, such as ChatGPT, have built-in safeguards and ethical constraints designed to prevent misuse for malicious or illegal activities. However, attackers jailbreak or otherwise bypass these restrictions, which allows them to carry out their malicious activities.

According to Rachel James’ findings, various threat actors, including TA547, Lazarus, and Fancy Bear, have already employed AI-powered TTPs (tactics, techniques, and procedures) in their attacks. These TTPs range from using LLMs for reconnaissance and vulnerability research to generating scripts and payloads, evading anomaly detection, and bypassing security features.

Below are some ways in which malicious actors have leveraged LLMs for their activities:

LLM-informed reconnaissance:

Employing LLMs to gather actionable intelligence on technologies and potential vulnerabilities. This strategy has been used by nation-state actors like Russia’s APT28/Fancy Bear group, linked to the GRU’s Unit 26165. According to Microsoft’s assessment, APT28’s operations leverage LLMs to conduct reconnaissance in support of Russia’s foreign policy and military objectives, both in Ukraine and internationally.

LLM-enhanced scripting techniques:

Utilizing LLMs to generate or refine scripts used in cyberattacks or for basic scripting tasks. According to researchers at Proofpoint, the threat actor group TA547 was identified targeting German organizations with an email campaign delivering Rhadamanthys, a popular information stealer. Notably, TA547 appeared to use a PowerShell script that the researchers strongly suspect was generated by an LLM.
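
Proofpoint's suspicion reportedly rested on stylistic tells such as a grammatically correct comment above nearly every line of the script. As a purely hypothetical illustration (not Proofpoint's actual detection logic), a defender could compute a simple comment-density score over a PowerShell script and flag unusually "chatty" scripts for manual review:

```python
def comment_density(script_text: str) -> float:
    """Return the fraction of non-empty lines that are PowerShell comments.

    Hypothetical heuristic: LLM-generated scripts often carry a tidy comment
    above nearly every statement, so an unusually high ratio can be a weak
    signal worth a closer manual look. It is not proof of LLM involvement.
    """
    lines = [ln.strip() for ln in script_text.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    comments = [ln for ln in lines if ln.startswith("#")]
    return len(comments) / len(lines)


if __name__ == "__main__":
    # Toy sample in the heavily commented style the researchers described
    sample = """
    # Download the next-stage content from the remote server
    $data = Invoke-WebRequest -Uri $url
    # Decode the Base64-encoded content before further processing
    $decoded = [Convert]::FromBase64String($data.Content)
    """
    print(f"Comment density: {comment_density(sample):.0%}")
```

A high ratio alone proves nothing, since plenty of human-written scripts are well commented, but it is the kind of cheap signal analysts can weigh alongside delivery context and other indicators.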

LLM-aided development:

Using LLMs in the development lifecycle of tools and programs, including those with malicious intent, such as malware. In one case, four cyber attackers in China were arrested for developing ransomware with the assistance of ChatGPT. After an unidentified company in Hangzhou had its systems locked by ransomware and received a demand for $20,000 in cryptocurrency, police traced the attack to the hackers. The attackers admitted to writing versions of the ransomware, optimizing the program with ChatGPT’s help, conducting vulnerability scans, gaining access through infiltration, implanting the ransomware, and carrying out the extortion scheme.

LLM-supported social engineering:

Leveraging LLMs for assistance with translations and communication to establish connections or manipulate targets. Cybercriminals have used LLMs to craft highly convincing phishing emails and social engineering attempts tailored to specific targets. According to SlashNext, since ChatGPT became widely available in Q4 2022, there has been a staggering 1,265% increase in malicious phishing emails, with a 967% rise in credential phishing attacks in particular. This surge suggests that malicious actors are actively exploiting LLMs to scale their social engineering campaigns.

LLM-assisted vulnerability research:

Using LLMs to understand and identify potential vulnerabilities in software and systems for exploitation. LLMs can analyze code and surface security flaws that could be exploited. According to U.S. Deputy National Security Advisor Anne Neuberger, nation-state actors like North Korea are leveraging AI, including LLMs, to enhance their cyber capabilities in areas like accelerating malware development and discovering exploitable systems. This highlights the real threat that LLM-assisted vulnerability research poses to enterprises and nation states.
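
As a minimal sketch of what LLM-assisted code analysis looks like in practice, the snippet below asks a general-purpose model to review a deliberately vulnerable function. It assumes the OpenAI Python SDK and an API key in the environment; the model name and prompt are illustrative only and are not tied to any threat actor's tooling. Defenders can use exactly the same approach to audit their own code before attackers do:

```python
# Minimal sketch of LLM-assisted code review (assumes the OpenAI Python SDK
# and OPENAI_API_KEY set in the environment; model name is an assumption).
from openai import OpenAI

client = OpenAI()

SNIPPET = '''
def get_user(conn, username):
    # Naive query building, vulnerable to SQL injection
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchone()
'''

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name for illustration
    messages=[
        {
            "role": "system",
            "content": "You are a security reviewer. List concrete "
                       "vulnerabilities in the code and suggest fixes. "
                       "Do not produce exploit code.",
        },
        {"role": "user", "content": SNIPPET},
    ],
)

# Print the model's review, e.g. a note about the SQL injection and a
# recommendation to use parameterized queries.
print(response.choices[0].message.content)
```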

While deepfakes have garnered significant attention for their potential for abuse, Rachel James’ work reveals the wider potential for other AI technologies to enable and transform cyber attacks. As these technologies continue to evolve, it is crucial for cybersecurity professionals and researchers to stay ahead of the curve, anticipating and mitigating emerging AI-enabled threats before it is too late.

About the author: