In an effort to combat this growing digital threat, a bipartisan group of lawmakers in Congress is taking a stand against the rise of non-consensual deepfakes. Following a report from 404 Media, they sent letters to the CEOs of major tech companies, including Google, Apple, and Microsoft, demanding explanations and concrete solutions to tackle the spread of deepfake pornography.
Research from 2023 revealed that 98 percent of all deepfake videos online are pornographic, and that 99 percent of these videos target women. The impact of these deepfake images on victims is devastating, causing mental and emotional trauma, including depression and anxiety, and in some cases leading to suicide and financial loss.
Despite numerous attempts by tech companies to ban and remove such content, they are struggling to keep up with the evolving technology. Apple, for instance, took down three apps used to create deepfakes after an independent investigation by 404 Media. However, the fact that these apps made it through Apple’s screening process in the first place raises serious concerns about its existing review safeguards. Google has also tried to address the issue by updating its policies, instructing AI app developers to build in precautions against offensive content, and implementing in-app reporting mechanisms. Yet investigations have shown that Google’s search results continue to surface apps designed to create non-consensual deepfakes, undermining its efforts to combat the problem.
The letter demands specific answers from the tech companies. The lawmakers are asking:
- What specific plans do they have to stop deepfake pornography from spreading on their platforms? The lawmakers want to know not just the plans, but exactly when these plans will be implemented and how the companies will detect such harmful content.
- Who is involved in developing these protection strategies? They want transparency about the team or experts working on solutions.
- What happens after someone reports a deepfake? The letter seeks details about how quickly and effectively these reports are processed, and what oversight ensures these reports are taken seriously.
- How do they decide whether an app should be removed from their platform? They want a clear explanation of the review process, including how long reviews take, how quickly violating apps are removed, and whether apps can be fixed and reinstated.
- What help is available for victims whose images have been used without consent? The lawmakers want to understand the support and remedies offered to people who have been harmed.
- Finally, are there efforts to educate users about these protections and help them safeguard themselves online? The lawmakers are looking for proactive awareness and prevention strategies.
The proposed TAKE IT DOWN Act represents a critical step in combating this issue. It aims to provide stronger legal protections for victims and hold creators of non-consensual AI-generated intimate images accountable. The bipartisan approach demonstrates the serious nature of this technological threat.
As AI technology continues to advance, the need for robust safeguards becomes more critical than ever. Tech companies must do more to protect users, especially women, from the growing threat of AI-generated non-consensual intimate images.