In a significant development highlighting growing concerns around AI misuse, OpenAI discovered and banned a set of accounts suspected of using ChatGPT to power surveillance tools linked to China. The move was part of the company's ongoing efforts to curb malicious use of its AI models.
OpenAI researchers Ben Nimmo, Albert Zhang, Matthew Richard, and Nathaniel Hartley identified the operation and dubbed it “Peer Review” after observing suspicious behavior spanning multiple accounts. The accounts were mainly used for tasks such as analyzing documents, drafting sales pitches, debugging code, and developing tools for monitoring social media activity.
According to a detailed report published by OpenAI, the Peer Review group focused on building tools to monitor social media platforms including X, Facebook, YouTube, Instagram, Telegram, and Reddit. One such tool, called the “Qianyue Overseas Public Opinion AI Assistant,” was designed to gather intelligence on social media conversations about politically and socially sensitive topics related to China, such as discussions of human rights protests.
The insights from these conversations were reportedly shared with Chinese authorities, including embassies and intelligence agencies in countries such as the United States, Germany, and the United Kingdom. The group also leveraged ChatGPT for other tasks, such as researching political figures and think tanks in the United States, Australia, and Cambodia, and even used the tool to translate and analyze screenshots of English-language documents.
The incident comes amid mounting concern about the potential misuse of generative AI and underscores the challenges AI companies face in preventing abuse of their technology. OpenAI's action signals that it is continuing to strengthen its monitoring and enforcement mechanisms.