Fri. Mar 27th, 2026

Video sharing platform YouTube has expanded its artificial intelligence-powered likeness detection technology to a pilot group of government officials, political candidates, and journalists in an effort to combat the spread of deepfake videos online. The company announced on Tuesday that the selected participants will gain access to a tool capable of identifying unauthorized AI-generated content that imitates their faces, and will be able to request its removal if it violates platform policies.

The detection system, first introduced last year for creators in the YouTube Partner Program, functions in a similar way to YouTube’s well-known Content ID technology. While Content ID scans uploaded videos for copyrighted material, the new tool focuses on identifying AI-generated faces that attempt to mimic real individuals.

Deepfake technology has raised growing concerns across the digital landscape because it can be used to manipulate videos of well-known personalities, such as politicians or public officials, making it appear as though they said or did something that never happened. YouTube officials said the expansion of the pilot program is intended to help safeguard public conversation, particularly within civic and political spaces where misinformation could have significant consequences.

According to Leslie Miller, Vice President for Government Affairs and Public Policy at YouTube, the initiative is designed to balance free expression with the risks posed by AI-generated impersonations. She explained that not all flagged videos would automatically be removed, noting that the platform would review each case under its privacy and expression policies to determine whether the content qualifies as parody or political commentary.

YouTube also disclosed that it supports broader regulatory efforts in the United States, including the proposed NO FAKES Act, which seeks to regulate the use of artificial intelligence to recreate a person’s voice or visual identity without consent. The company said the pilot program may eventually expand to allow individuals to block the upload of violating content before publication, and could later extend detection capabilities to include synthetic voices and other forms of intellectual property.
