
Nation-state hackers (government-backed threat actors) are capitalizing on the rapid development and adoption of artificial intelligence (AI) to enhance their cyber operations. This trend poses a significant challenge to ongoing cybersecurity efforts, as these groups leverage large language models (LLMs) to increase the speed, scale, and sophistication of their attacks.

A joint report by Microsoft and OpenAI sheds light on emerging threats in the age of AI. Focusing on activity associated with known threat actors, the report highlights prompt injection, attempted misuse of LLMs, and fraud as key areas of concern. LLMs are advanced AI systems capable of processing and generating human-like text, which makes them potentially valuable tools for malicious actors.

The report details how specific hacking groups associated with several countries have been using LLMs to improve their reconnaissance, scripting, research, and other activities to gather crucial information before attacks. Here are some examples:

  • Forest Blizzard (aka Fancy Bear/APT28): This Russian military intelligence actor uses LLMs to understand satellite communication protocols, radar imaging technologies, and specific technical parameters; these queries suggest an attempt to acquire in-depth knowledge of satellite capabilities. The group also uses LLM-enhanced scripting techniques for tasks like file manipulation, data selection, and multiprocessing, potentially automating or optimizing technical operations (a minimal sketch of this kind of script appears after this list).
  • Charcoal Typhoon (aka Aquatic Panda): This Chinese state-affiliated group leverages LLMs for translations and communication, likely to establish connections with or manipulate targets. The group also uses LLMs to craft advanced commands and to gain deeper system access and control, behavior representative of post-compromise activity.
  • Salmon Typhoon (aka Maverick Panda): This sophisticated Chinese state-affiliated group engages LLMs with queries on a diverse range of subjects, including global intelligence agencies, cybersecurity matters, and topics of strategic interest. The group also used LLMs to resolve coding errors, to conceal its tactics within operating systems, and to translate computing terms and technical papers.
  • Crimson Sandstorm (aka Imperial Kitten): This Iranian group uses LLMs to generate code snippets that appear to support app and web development, interactions with remote servers, web scraping, and information exfiltration. The group has also used LLMs to develop code that evades malware detection on compromised systems, disables antivirus software, and deletes files after an application closes.
  • Emerald Sleet (aka Kimsuky): This North Korean actor uses LLMs to draft and generate content likely used in spear-phishing campaigns targeting experts and organizations focused on Asia-Pacific defense issues. The group also interacts with LLMs to identify think tanks, government organizations, and experts on North Korea's nuclear weapons program.
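
To make the "LLM-enhanced scripting" point above concrete, here is a minimal, hypothetical sketch of the kind of mundane automation task the report describes: selecting files by content and processing them in parallel. The directory names, keyword, and function names are assumptions for illustration only; this is not code attributed to or recovered from any threat actor.

```python
import shutil
from multiprocessing import Pool
from pathlib import Path

# Hypothetical illustration of generic scripting automation
# (file selection, file manipulation, multiprocessing).
# All paths and the keyword below are assumed values.
SOURCE_DIR = Path("data/raw")       # assumed input directory
DEST_DIR = Path("data/selected")    # assumed output directory
KEYWORD = "satellite"               # assumed selection criterion

def select_and_copy(path: Path) -> bool:
    """Copy a file to DEST_DIR if its contents mention the keyword."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return False
    if KEYWORD in text.lower():
        shutil.copy2(path, DEST_DIR / path.name)
        return True
    return False

if __name__ == "__main__":
    DEST_DIR.mkdir(parents=True, exist_ok=True)
    files = list(SOURCE_DIR.glob("*.txt"))
    # Fan the per-file work out across worker processes.
    with Pool() as pool:
        results = pool.map(select_and_copy, files)
    print(f"Selected {sum(results)} of {len(files)} files")
```

The point of the sketch is its ordinariness: tasks like this are trivial for an LLM to generate on request, which is why the report flags scripting assistance as a force multiplier rather than a novel capability.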

According to OpenAI, "These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks." While no major attacks utilizing LLMs have been identified yet, Microsoft and OpenAI are taking proactive steps: the two companies have terminated accounts and assets linked to five state-affiliated groups that were using their AI services for malicious activities.

As AI technologies continue to evolve and attract the attention of various threat actors, vigilance is key. Microsoft says it will continue to track malicious activity involving LLMs, deploy models aligned with the White House's Executive Order on AI, and work with OpenAI and other partners to share intelligence, improve customer protections, and aid the broader security community.
