
Meta has officially ended its fact-checking program in the United States, a move that takes effect Monday, according to the company’s Chief Global Affairs Officer, Joel Kaplan. This marks a major shift in Meta’s content moderation strategy, as the company leans into a more hands-off approach.

The decision, originally announced in January alongside a broader relaxation of content moderation rules, aligns with CEO Mark Zuckerberg’s renewed emphasis on free speech. Zuckerberg, who attended former President Donald Trump’s inauguration after donating $1 million to the event, has said the changes reflect a cultural pivot towards prioritizing expression.

Instead of relying on professional fact-checkers, Meta will implement a community-driven system modeled after Community Notes on Elon Musk’s X (formerly Twitter). Kaplan confirmed that the first of these notes would begin appearing on Facebook, Instagram, and Threads, with no penalties attached to annotated posts.

However, critics argue that such systems are less effective without professional moderation as backup, warning that user-led initiatives can be slow to respond and prone to bias or inaccuracy. The decision has already raised concerns about the unchecked spread of misinformation on Meta’s platforms.

Some of the content now permitted under Meta’s new rules includes speech targeting marginalized communities. According to its hateful conduct policy, Meta will now allow statements implying mental illness or abnormality based on gender identity or sexual orientation if presented within a political or religious context.

This has drawn criticism from civil rights advocates, who say the relaxed rules open the door to more harmful and discriminatory speech. Meta contends that the changes are necessary to ensure consistency with the freedoms allowed in public forums such as Congress and television broadcasts.

As Meta reduces restrictions on politically sensitive topics like immigration and gender, misinformation is already gaining traction. ProPublica recently reported that a Facebook page manager who posted a viral false claim about ICE paying people to report undocumented immigrants expressed satisfaction over the program’s end.

Analysts warn that less moderation may translate to increased user engagement — Meta’s most valuable currency — by allowing more provocative content to surface in feeds. Yet the trade-off, they caution, could be a dangerous erosion of online truth and safety.
