Sat. Mar 14th, 2026

The change, aimed at reducing unnecessary denials, was confirmed by Laurentia Romaniuk, a member of OpenAI’s AI model behavior team, and Nick Turley, ChatGPT’s head of product. Turley emphasized that users can now engage with ChatGPT more freely, provided they comply with legal and ethical boundaries.

Despite the rollback, ChatGPT will still refuse to answer certain objectionable queries and will not promote misinformation. Previously, some users on platforms like Reddit and X (formerly Twitter) reported seeing warnings on topics related to mental health, erotica, and fictional violence. The removal of these so-called “orange box” warnings is seen as an effort to address complaints that ChatGPT was overly censored or restricted.

OpenAI maintains that this change does not affect the chatbot’s actual responses. The company has also updated its Model Spec guidelines, reinforcing that its AI should engage with sensitive topics without bias. The revisions aim to prevent the AI from making blanket assertions that exclude specific viewpoints, allowing a broader spectrum of discussion.

The decision comes amid growing political scrutiny of AI moderation policies. Critics, including some allies of U.S. President Donald Trump, have accused AI systems of being skewed toward progressive ideologies. Tech figures like Elon Musk and investor David Sacks have particularly targeted OpenAI, claiming that ChatGPT was programmed to be “woke” and to suppress conservative perspectives.

By removing these warnings and updating its guidelines, OpenAI appears to be addressing concerns about ideological bias in AI responses. While some users welcome the move as a step toward more open conversation, others remain skeptical about how the chatbot will handle controversial discussions going forward.
