
The rise of generative AI has brought a major issue: revenge deepfake porn. Despite the technology's numerous benefits, cybercriminals are misusing it to carry out crimes including the creation and distribution of explicit, illegal AI-generated content such as deepfake porn and Child Sexual Abuse Material (CSAM).

Many tech companies face criticism, fines and numerous investigations for failing to moderate these crimes on their platforms, or for doing very little to ensure the safe use of their generative AI products. In response, they are stepping up efforts to protect users and assure them of their safety.

Microsoft, in one such move, announced that it is partnering with StopNCII to “pilot a victim-centered approach to detection in Bing, our search engine”. In practice, this gives victims of deepfake abuse a tool to scrub their images from Bing.

Victims, experts and other stakeholders expressed concern that reporting this abuse alone was ineffective and did not adequately address the risk that these deepfakes could be found through a simple search. In response, Microsoft partnered with StopNCII to give users an efficient way to prevent their private photos and videos from being shared online without their consent.

This works by creating a unique digital fingerprint, known as a ‘hash’, of the image or video on the user’s device, without uploading the file itself. The fingerprint is shared with industry partners such as Microsoft, who use it to find and remove matching images from their platforms, even if the images are AI-generated. The goal is to help victims of revenge porn and other non-consensual image sharing regain control over their private content without having to share the sensitive material with anyone else.
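The matching idea above can be sketched with a toy perceptual hash. This is a simplified illustration only, not StopNCII's actual algorithm (production systems use robust schemes such as Meta's open-source PDQ): each bit of the fingerprint records whether a pixel is brighter than the image's average, so the hash stays stable under minor edits like re-encoding, and only the hash string ever leaves the device.

```python
def average_hash(pixels):
    """Hash a small grayscale image (a 2D list of 0-255 brightness values).

    Each bit records whether a pixel is at or above the image's mean
    brightness, so minor edits such as compression noise barely change it.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p >= mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance flags a likely near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

# The victim's device computes the fingerprint locally; only this bit
# string (never the image itself) would be shared with partner platforms.
original = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]
# The same image after light compression noise:
reencoded = [[198, 201, 12, 9],
             [202, 199, 11, 10],
             [9, 12, 201, 198],
             [11, 10, 199, 202]]

h1, h2 = average_hash(original), average_hash(reencoded)
print(hamming_distance(h1, h2))  # → 0: the re-encoded copy still matches
```

A platform holding only `h1` can thus detect a re-uploaded or lightly altered copy without ever receiving the original file, which is what lets victims avoid sharing the sensitive material itself.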

“We have taken action on 268,899 images up to the end of August,” says Microsoft. It joins a list of major companies that have partnered with StopNCII to present a united front against the spread of deepfake porn and to protect victims’ privacy, including Meta (Facebook, Instagram and Threads), TikTok, Bumble, Snapchat, Reddit, OnlyFans, Pornhub and many more.

Aside from this tool, Microsoft strictly prohibits creating or sharing explicit content, whether real or AI-generated, across all its services, including its AI tools. It has set up a central reporting system for Non-Consensual Intimate Images (NCII). When Microsoft confirms that reported content violates its rules, it removes the content from Bing search results and takes it down from any Microsoft-hosted service where it appears. It also offers in-product reporting for some services and shares its efforts through a Digital Safety Content Report.

With developments and partnerships like these, a united front can be formed to combat the rise of deepfakes and the abuse of the ‘powers’ of generative AI. Users are encouraged to be aware of the dangers online and take precautionary measures to stay safe. Those who are victimized should follow the necessary steps to report and address the incident.
