The launch of ChatGPT in November 2022 took the world by storm, making Generative Artificial Intelligence (Gen AI) a household name worldwide. Over the years, OpenAI, the company behind ChatGPT, has seen tremendous growth as its AI models have been integrated into numerous products, tools, and services. This has brought OpenAI massive financial success and significantly advanced AI technology. However, that same widespread adoption is now being exploited by cyber criminals for a range of malicious activities.

OpenAI has revealed that since the start of the year, it has had to stop more than 20 malicious operations and deceptive networks worldwide that attempted to use its AI models. This includes activity taken down since its previous threat report in May 2024.

Cyber criminals are using AI to make their attacks smarter and more evasive. These reports confirm that generative AI is being used to conduct spear-phishing attacks, debug and develop malware, spread misinformation, evade detection, and carry out other malicious activity. These activities range from simple content-generation requests to complex, multi-stage efforts involving social media analysis and engagement.

The first signs of these AI-powered activities were reported by Proofpoint in April, which suspected TA547 (aka “Scully Spider”) of deploying an AI-written PowerShell loader for its final payload, the Rhadamanthys info-stealer. OpenAI has since published details of several cases of malicious operations and analyzed them to identify the ways in which threat actors use AI to increase their efficiency and impact. Its latest report reflects its understanding as of October 2024.

Some examples of these cases include:

  1. A cyber threat actor known as “STORM-0817” used OpenAI’s models to debug their malicious code.
  2. A covert influence operation called “A2Z” generated fake biographies for social media accounts.
  3. A spamming network dubbed “Bet Bot” created AI-generated profile pictures for fake accounts on X (formerly Twitter).
  4. Some operations used AI to generate both long-form articles and short comments for posting across the internet.

It is not just individual criminals or small groups: state-sponsored threat actors are also using AI tools to carry out sophisticated attacks. A China-based group called SweetSpecter emerged in 2023 and tried to use OpenAI’s models to support its offensive cyber operations while simultaneously conducting spear-phishing attacks against OpenAI employees and various governments worldwide.

Two Iran-based operations were also intercepted. In August, a covert Iranian influence operation, STORM-2035, was disrupted. This group used ChatGPT to generate social media comments and long-form articles on topics including the U.S. election, the Gaza conflict, Western policies towards Israel, Venezuelan politics, and Scottish independence, posting the content on X (formerly Twitter) and Instagram. Another Iranian-affiliated group, CyberAv3ngers (reportedly linked to Iran’s IRGC), used OpenAI’s models to research vulnerabilities, debug code, and seek scripting advice.

OpenAI also banned a cluster of ChatGPT accounts used by a Russian-origin threat actor to generate English- and French-language content targeting West Africa and the UK, as well as Russian-language marketing content. Separately, HP Wolf researchers reported that cyber criminals targeting French users were using AI tools to write scripts for a multi-step infection chain.

OpenAI says that despite these attempts, AI hasn’t led to any big breakthroughs in creating malware or building large fake audiences online. However, as AI continues to evolve, so too will the tactics of cybercriminals. As AI becomes increasingly integrated into our digital lives, the tech industry and policymakers face a crucial challenge: striking a balance between innovation and security.

Cyber security professionals also need to leverage AI to strengthen their defenses and combat AI-powered attacks. Policymakers and other tech companies must likewise double down on their efforts to address the risks posed by advanced AI systems.
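As an illustration of what defensive use of AI might look like in practice, below is a minimal sketch of using an LLM to help triage suspicious emails for phishing indicators. It assumes access to OpenAI’s chat completions API via the official `openai` Python package; the model choice, prompt wording, and the `classify_email` helper are illustrative assumptions, not a vetted detection pipeline, and any verdict would still need human review.

```python
# Minimal sketch: LLM-assisted phishing triage for a security analyst.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the model name and prompt are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_email(subject: str, body: str) -> str:
    """Ask the model to label an email as PHISHING, SUSPICIOUS, or BENIGN."""
    prompt = (
        "You are assisting a security analyst. Label the email below as "
        "PHISHING, SUSPICIOUS, or BENIGN, then give a one-sentence reason.\n\n"
        f"Subject: {subject}\n\nBody:\n{body}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    # Return the model's verdict text for the analyst to review.
    return response.choices[0].message.content


if __name__ == "__main__":
    verdict = classify_email(
        "Urgent: verify your account",
        "Your mailbox will be suspended. Click http://example.com/verify now.",
    )
    print(verdict)
```

A sketch like this would sit alongside, not replace, conventional controls such as sender authentication and URL reputation checks.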
