In the latest development in its lawsuit against cybercriminals, Microsoft has named members of Storm-2139.

In January, Steven Masada, Assistant General Counsel of Microsoft's Digital Crimes Unit, announced Microsoft's plan to take legal action to protect the public from abusive AI-generated content.

In the complaint, Microsoft aimed to disrupt the operations of cybercriminals who intentionally develop tools to bypass the safety controls of Microsoft's generative AI services in order to create offensive and harmful content.

Microsoft's investigation revealed that a foreign threat actor group had developed sophisticated software that exploited customer credentials scraped from public websites. These cybercriminals sought to identify and access accounts for specific generative AI services and to deliberately alter the capabilities of those services.

The criminals then resold access to these compromised services to other malicious actors, providing detailed instructions on how to use custom tools to generate harmful and illicit content, including non-consensual intimate images of celebrities and other sexually explicit material.

Upon discovering this activity, Microsoft blocked the cybercriminals’ access, implemented countermeasures, and enhanced its safeguards to prevent future incidents. Additionally, Microsoft obtained a court order allowing it to seize a website that facilitated the criminal operations.

This action was taken to enable Microsoft to gather evidence about the threat actors, including their monetization methods and technical infrastructure.

Now, Microsoft has publicly identified these threat actors as the Storm-2139 group. The individuals named are Arian Yadegarnia ("Fiz") from Iran, Alan Krysiak ("Drago") from the United Kingdom, Ricky Yuen ("cg-dot") from Hong Kong, China, and Phát Phùng Tấn ("Asakuri") from Vietnam.

Microsoft classifies the members of Storm-2139 into three main categories: creators, providers, and users. Creators developed the malicious tools. Providers modified and supplied these tools to end users, offering different service tiers and payment options. Users then employed these tools to generate harmful synthetic content, often centered on celebrities and explicit media.

Microsoft identified these actors through discussions about the lawsuit on the seized platforms. Some threat actors even attempted to dox Microsoft's legal counsel by posting their names, personal information, and photographs. As a result, Microsoft's counsel received various emails, including messages from suspected Storm-2139 members attempting to shift blame onto others involved in the operation.

Email Received by Microsoft’s Counsel

Credit: Microsoftย 

Microsoft has reaffirmed its commitment to the responsible use of AI and continues to develop new ways to protect users from abusive AI-generated content. In 2024, the company released a white paper with recommendations for U.S. policymakers on how to reform criminal law and equip law enforcement with the necessary tools and knowledge to address these types of crimes.

By exposing these threat actors, Microsoft aims to set a precedent in the fight against the misuse of AI technology.
