A network of websites promising to use artificial intelligence to create nude images from regular photos has been unmasked as a sophisticated malware operation run by Fin7, a notorious Russian cybercrime group. This discovery, made by cybersecurity firm Silent Push, not only highlights the evolving tactics of cybercriminals but also serves as a stark warning to those seeking to engage in unethical and potentially illegal activities online.

The Honeypot Scheme

Researchers have identified at least seven websites operating under variations of the name “AINude.AI,” which claim to offer AI-powered “nudifying” services. However, instead of delivering on this promise, these sites serve as a front for distributing RedLine, a powerful credential-stealing malware.

Screenshot of one of the Fin7 nudify websites. Source: 404 Media.

Zach Edwards, senior threat analyst at Silent Push, explains, “There’s a specific type of audience who wants to be on the bleeding edge of creepy (while ignoring new laws around deepfakes), and who are proactively searching out deepfake AI nude software.” This audience, Edwards suggests, may overlap with users of other AI software or cryptocurrency holders, making them attractive targets for cybercriminals. 

The Wild World of Fin7

The cybersecurity firm attributes these fake nudify sites to Fin7, a Russian hacking group known for its sophisticated and often outlandish operations. In one particularly audacious scheme, Fin7 created fake companies, including bogus penetration testing firms, and hired legitimate security professionals, effectively tricking them into carrying out criminal hacking work without their knowledge.

This latest operation demonstrates Fin7’s continued activity and adaptability, contradicting the U.S. Department of Justice’s 2023 claim that “Fin7 as an entity is no more.”

A Taste of Their Own Medicine

In a twist of poetic justice, individuals seeking to create nonconsensual intimate images of others now find themselves potential victims of cybercrime. Some observers go so far as to say that anyone trying to deepfake someone's nudes deserves a malware infection. Whatever one makes of that sentiment, the episode underscores the ethical stakes of seeking out such services and the unexpected consequences of engaging with unverified, potentially illegal online tools.

The Hidden Danger of ‘Nudify’ Sites

What many users fail to realize is that these ‘nudify’ sites can act as powerful honeypots, collecting personal data for future cyberattacks. This tactic, while not new, has been cleverly repurposed with the allure of AI technology. By promising to leverage cutting-edge AI for dubious purposes, these sites attract individuals who may be less likely to report their compromised systems to authorities, creating a perfect storm for cybercriminals.

Wide-Reaching Impact

The potential reach of this malware campaign is concerning. One of the Fin7-operated sites was found listed on a major porn site aggregator, potentially exposing a large number of unsuspecting users to the malware threat. This distribution method amplifies the risk, as users may stumble upon these malicious sites through seemingly legitimate channels.

The Malware Threat

Users who attempt to use these fake nudify services are prompted to download files that ultimately install RedLine infostealer malware. This malicious software is designed to harvest sensitive information from infected machines, including login credentials and cryptocurrency wallet details, potentially leading to significant financial and personal losses for victims.

Industry Response and Broader Implications

After being alerted to the nature of these sites, Hostinger, the domain registrar for most of the fake nudify domains, blocked them. Major web browsers such as Chrome and Safari, which previously allowed access to the sites, are now reviewing their security measures.

This incident serves as a sobering reminder of the risks of seeking out and using unverified online services, particularly those that promise to manipulate images in ethically questionable ways. It also highlights the need for greater awareness and education about the dangers that can lurk behind seemingly harmless or exciting new technologies.
