
OpenAI has disrupted a covert Iranian influence operation that used ChatGPT to generate content on a range of topics, including the U.S. presidential campaign.

An influence operation is a coordinated effort by an entity to shape the perceptions, opinions, and behaviors of a target audience in pursuit of specific objectives. As AI tools like ChatGPT become more capable and more widely used, it is no surprise that state-affiliated threat actors with political interests are putting them to work in their campaigns.

Elections are sensitive, and while AI can be applied positively, its misuse is on the rise. The current U.S. election season has seen AI being used to generate deepfake videos of candidates, in which digitally created versions of them are depicted making statements they never actually made. These videos are distributed on popular social media platforms like X (formerly known as Twitter). The motives behind these videos often range from comedic purposes to outright political propaganda.

The Iranian influence operation, dubbed Storm-2035, used ChatGPT to generate content, including commentary on candidates from both sides of the U.S. presidential election. The content generated included full-length articles as well as shorter social media comments.

The articles, which covered topics related to U.S. politics and global events, were published on websites posing as both progressive and conservative news outlets. Meanwhile, the social media posts, shared through accounts on platforms like X and Instagram, discussed issues such as the conflict in Gaza, U.S. presidential candidates, and other politically sensitive topics.

[Image: Two tweets generated by the operation. The first criticizes Kamala Harris's immigration policies, citing potential increases in immigration costs and pathways to citizenship, with a sarcastic remark that she might link the changes to climate change. The second warns that Donald Trump is attempting to establish himself as a king and vows to protect democracy. Both include hashtags opposing the respective candidates.]

Credit: OpenAI

The operation's impact was minimal: engagement was low, and the social media posts received few likes, comments, or shares. To gauge that impact, OpenAI applied the Brookings Breakout Scale, which rates the reach of influence operations (IOs) and disinformation campaigns across six categories.

Storm-2035 ranked at the low end of Category 2, meaning it was active on multiple platforms but showed no evidence that real people picked up or widely shared its content.
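The Breakout Scale's tiers are defined by how far content travels, not by raw engagement counts. As a loose sketch of that idea (the signals and category boundaries below are simplified assumptions for illustration, not Brookings' actual criteria, which rely on analyst judgment), one could map a handful of observable spread signals to a tier:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """Simplified spread signals for an influence operation.

    These fields and thresholds are illustrative assumptions,
    not Brookings' formal criteria.
    """
    platform_count: int        # distinct platforms where content appeared
    organic_pickup: bool       # real (non-operation) accounts amplified it
    mainstream_coverage: bool  # mainstream media or celebrities repeated it
    policy_response: bool      # it provoked a policy or real-world response

def breakout_category(obs: Observation) -> int:
    """Map observed spread to a rough Breakout Scale tier (1-6)."""
    if obs.policy_response:
        return 6  # highest tier: real-world response or risk of harm
    if obs.mainstream_coverage:
        return 5  # amplified by high-profile figures or mainstream media
    if obs.organic_pickup and obs.platform_count > 1:
        return 4  # breakout across multiple platforms
    if obs.organic_pickup:
        return 3  # breakout to real communities on a single platform
    if obs.platform_count > 1:
        return 2  # posted on several platforms, but no organic pickup
    return 1  # confined to one platform, no breakout

# A Storm-2035-like profile: present on X and Instagram,
# but with negligible real engagement.
storm = Observation(platform_count=2, organic_pickup=False,
                    mainstream_coverage=False, policy_response=False)
print(breakout_category(storm))  # -> 2
```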

OpenAI conducted a similar takedown earlier this year. In partnership with Microsoft Threat Intelligence, it disrupted the activities of five state-affiliated actors from China, Iran, North Korea, and Russia. Those threat actors used OpenAI services to support malicious activity, including open-source research, coding assistance, and generating phishing content.

Despite the low engagement, OpenAI swiftly contained both operations, demonstrating its commitment to transparency and to preventing the misuse of its services for foreign influence operations. Notably, its containment efforts included using its own AI models to better detect and understand such abuse. Following good practice, it also shared relevant threat intelligence with government and industry stakeholders.
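OpenAI has not published the internals of that detection pipeline, but its publicly documented Moderation endpoint illustrates the content-screening half of the problem. A minimal sketch, assuming the current openai Python SDK, and with the caveat that IO posts are often individually policy-compliant, so content screening alone would miss much of this activity:

```python
from openai import OpenAI

# A minimal sketch of automated content screening with OpenAI's public
# Moderation endpoint. This is NOT the unpublished internal tooling OpenAI
# used against Storm-2035; real IO detection also leans on behavioral
# signals (account coordination, posting cadence), since individual IO
# posts often violate no content policy on their own.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

suspect_posts = [
    "Example post text collected from a monitored account...",
]

for text in suspect_posts:
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]
    if result.flagged:
        # List the policy categories that triggered the flag.
        hits = [name for name, hit
                in result.categories.model_dump().items() if hit]
        print(f"Flagged: {hits}")
```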

The use of AI in influence operations raises concerns about the ethics and guidelines surrounding AI technology. AI-generated content could potentially impact voters’ ability to make well-informed decisions. As AI continues to be integrated into various sectors, it is necessary to develop regulatory frameworks and guidelines to ensure its responsible use.
