Researchers at Silent Push have discovered a new wave of attacks by the notorious FIN7 threat group. The group now leverages the promise of AI deepfake generators to spread malware through their honeytrap websites.

Also known as Carbon Spider, ELBRUS, or Sangria Tempest, FIN7 is a Russian Advanced Persistent Threat (APT) group that has primarily targeted the U.S. commercial sector since 2013.

In this recent discovery, researchers found that FIN7 had set up at least seven websites distributing malware to visitors attempting to use an “AI Deepnude generator.” Victims of these sites were users seeking to generate illicit nude images of fully clothed individuals.

The websites, which have now been taken down, include easynude[.]website, ai-nude[.]cloud, ai-nude[.]click, ai-nude[.]pro, nude-ai[.]pro, ai-nude[.]adult, and ainude[.]site.

All the sites had similar designs and promised to generate free AI deepnude images from any uploaded photo. They presented two types of traps:

1. Free Download:

In this version, users were asked to upload photos they wanted to turn into deepfakes. Once the alleged “deepnude” image was supposedly generated, users were prompted to click a “Free Download” link, which redirected them to a Dropbox page hosting the malware.

Image of the Malicious ‘Free Download’ Page

Credit: Silent Push

2. Free Trial:

When a visitor clicked the “Free Trial” button, they were prompted to upload an image. After uploading, a message appeared stating, “Trial is ready for download,” along with a pop-up asking, “The link is for personal use only, do you agree?” If the user agreed and clicked “Download,” they were presented with a zip file containing the malicious payload.

Image of the Malicious ‘Trial Download’ Prompt

Credit: Silent Push

Both methods were used to infect victims’ machines with malware, including the Redline Stealer and Lumma Stealer infostealers and the D3F@ck Loader. These strains are capable of stealing sensitive information such as cookies, passwords, and other credentials.

The researchers also observed that FIN7 likely used search engine optimization (SEO) techniques to improve the visibility of their honeytrap websites in search results.

Silent Push warned that organizations may be exposed to this attack if employees accessed the malware from a corporate network. The researchers added that they will continue monitoring FIN7’s activities and will share updates on their findings with the community.

This case draws attention to more than one facet of AI misuse. On one side, site visitors engaged in irresponsible use of AI by seeking to generate non-consensual nude images of people; on the other, FIN7 exploited the lure of AI tools to disseminate malware. Together, these actions highlight the need for ethical frameworks and regulations that ensure AI technologies are used responsibly, prioritizing security, privacy, and public safety.
