Amid growing concerns about online safety, Character.AI has unveiled a significant upgrade designed specifically to protect teenage users on its AI character interaction platform. The announcement on December 12, 2024, outlines a multi-layered approach that includes a separate AI model for users under 18, enhanced content filtering, and upcoming parental controls.

The most notable change is the development of a distinct large language model (LLM) for teen users, which implements more conservative content restrictions. This model is specifically designed to reduce the likelihood of teens encountering sensitive or inappropriate content.

“Our goal is to provide a space that is both engaging and safe for our community,” the company stated. The new approach includes technical measures to block inappropriate model outputs and user inputs, with special attention to preventing exposure to potentially harmful content.

Character.AI is implementing several new features to improve user safety. Parental controls, set to roll out in Q1 2025, will allow parents to monitor their child’s platform usage, including time spent and most frequent character interactions. A time-spent notification will alert users after one hour of continuous platform use, with more restrictive settings for those under 18.

The company has also partnered with ConnectSafely, a prominent online safety organization, to develop and refine its safety protocols. This collaboration aims to ensure that the platform’s design prioritizes the protection of young users.

The safety overhaul comes in the wake of lawsuits alleging that the platform contributed to self-harm among young users. Character.AI’s response signals an effort to address these concerns proactively.

“Safety must be infused in all we do,” the company emphasized, signaling a long-term commitment to protecting its user base, particularly younger members. While the new features represent a significant step forward, Character.AI has indicated that this is an ongoing process, with plans to continue evolving its safety measures.
