An investigation by 404 Media revealed that several AI video generator apps owned by Chinese companies allow people to generate non-consensual nudity and pornography. The publication emphasized that these apps lack the basic guardrails needed to prevent this kind of abuse.
The creation of non-consensual images, often referred to as “deepfakes,” has become a growing issue with the rise of accessible AI tools. Without proper safeguards, these technologies are being exploited to violate privacy. This problem is only escalating as AI technology advances.
According to the publication, one of the most commonly used apps was Pixverse, owned by a Beijing-based company and founded by Wang Changhu, a former employee at ByteDance. Users reportedly uploaded images of celebrities from various sources and then entered prompts describing the celebrity undressing. In some cases, users first created explicit images of individuals and then used these as inputs for further AI-generated videos.
In their podcast episode titled “We’re Not Ready for Chinese AI Video Generators,” the 404 Media team discussed how these Chinese AI models often have fewer safeguards than their American counterparts. This comparison highlights the urgent need for global standards and regulations to address the misuse of AI technologies.
As AI continues to evolve, the cybersecurity community must focus on developing robust safeguards to prevent such violations and protect individuals from harm. Companies developing AI models and AI-powered apps should put strict guardrails in place to prevent their products from becoming tools for abuse.