
In a major move to curb the spread of inappropriate and prohibited content generated by AI, Google has issued strict new rules for developers distributing AI apps on the Google Play Store.

Google is requiring AI app makers to implement safeguards that block the generation of “restricted content” such as sexual and violent material. Developers must also put filtering systems in place to ensure their AI models and tools respect user safety and privacy.

Under the new guidelines, any app that produces AI-generated content, whether a chatbot, image generator, or audio tool, must include a built-in reporting mechanism that lets users flag inappropriate outputs. Google also states that developers should “rigorously test” their apps to prevent them from producing prohibited content.

The policy updates come as deepfakes and other non-consensual synthetic media have proliferated online, raising alarm about their potential for harm.

Google aims to provide clear guidelines and guardrails for developing safe and responsible AI experiences, while protecting users from explicit or unsolicited content.

The AI content rules do not apply to productivity apps that merely integrate AI capabilities, or to apps that “merely host” and surface AI-generated content created elsewhere by users.

This move by Google is a significant step forward in combating the dangers posed by unsafe and unregulated AI systems.
