OpenAI announced on Friday that it had removed accounts linked to an Iranian group that used its ChatGPT chatbot to generate content aimed at influencing the U.S. presidential election and other geopolitical issues.
The group, known as Storm-2035, created commentary on U.S. election candidates, the Gaza conflict, and Israel’s participation in the Olympic Games, distributing the content through social media and websites.
An investigation by OpenAI, which is backed by Microsoft, found that ChatGPT was used to produce both long-form articles and short social media posts.
However, the operation failed to gain significant traction: most of the social media posts received little to no engagement, and there was no evidence that the generated articles were widely shared.

In response, OpenAI has banned the accounts from accessing its services and is continuing to monitor for further policy violations. The move follows a Microsoft threat-intelligence report earlier in August that identified Storm-2035 as an Iranian network using four fake news websites to target U.S. voter groups with polarizing messages on topics including the U.S. presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.
The AI company previously reported in May that it had disrupted five covert influence operations attempting to misuse its models for deceptive activities online.
