With the advancement of Artificial Intelligence (AI) technology, many threat actors are persistently developing tools designed to bypass the safety measures put in place for generative AI services and products. They then use these tools to produce harmful content. Microsoft, despite its security resilience, also experiences such attacks and is taking strict measures to address this challenge.
One such measure is legal action by Microsoft’s Digital Crimes Unit (DCU) against a foreign-based group of threat actors who created tools to bypass safety controls in Microsoft’s AI services. The case, filed in the Eastern District of Virginia, aims to stop cybercriminals from generating harmful and offensive content with Microsoft’s AI technology.
The hackers set up a “hacking-as-a-service” operation, using domains such as ‘rentry.org/de3u’ and ‘aitism.net’ to break into Microsoft’s Azure infrastructure. They created a tool named “de3u” that worked as a front end for DALL-E 3, along with a custom routing system that let users create thousands of AI-generated images using stolen access credentials. The system channeled communications from user computers through a Cloudflare tunnel to the Azure OpenAI Service, with the de3u software using undocumented Microsoft network APIs to mimic legitimate requests.
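The scheme hinges on the fact that an Azure OpenAI API key is a bearer credential: whoever holds it can authenticate, regardless of how the key was obtained. The sketch below is a minimal illustration of an ordinary, documented image-generation call against an Azure OpenAI DALL-E 3 deployment; the resource name, deployment name, and API version are placeholder assumptions, and the proxy and tunnel layer described in the complaint is deliberately not reproduced here.

```python
import requests

# Placeholder values -- a real call needs a valid resource, deployment, and key (all assumed here).
RESOURCE = "example-resource"        # Azure OpenAI resource name (hypothetical)
DEPLOYMENT = "dall-e-3"              # DALL-E 3 deployment name (hypothetical)
API_VERSION = "2024-02-01"           # API version; check current Azure OpenAI docs
API_KEY = "<azure-openai-api-key>"   # the bearer credential at issue

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/images/generations?api-version={API_VERSION}"
)

# The key alone authenticates the request -- no interactive sign-in is involved,
# which is why a stolen key is enough to drive image generation against a victim's resource.
response = requests.post(
    url,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={"prompt": "a watercolor painting of a lighthouse", "n": 1, "size": "1024x1024"},
    timeout=60,
)
response.raise_for_status()
print(response.json()["data"][0]["url"])  # URL of the generated image
```

Because billing and content-policy enforcement are tied to the resource owner's account, anyone relaying such calls through stolen keys can generate images at the victim's expense, which is the core of the abuse Microsoft describes.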
According to court documents, at least three as-yet-unidentified individuals were behind the operation. They stole Azure API keys and authentication information from U.S.-based Microsoft customers to break into the systems and create harmful images, in violation of Microsoft’s usage policies. Seven other parties are believed to have used these illegal services. While Microsoft has not revealed exactly how the API keys were stolen, it notes that the criminals tried to hide their tracks by deleting their online presence after their domain was seized.
This incident is part of a broader trend of AI abuse. Microsoft points out that government-backed groups from countries including China, Iran, North Korea, and Russia have been using AI services for activities such as gathering information, translation, and spreading false information.
Microsoft is sending a clear message that it will not tolerate the weaponization of its AI technology. Other measures it is taking to tackle this challenge include strengthening its safety guardrails, fostering partnerships, and pushing for new laws that give authorities better tools to fight AI abuse.
The company recently published a report titled “Protecting the Public from Abusive AI-Generated Content,” which offers recommendations for both industry and government on better protecting people, especially women and children, from harmful AI-generated content and AI abuse.