YouTube has recently updated its policies to make it easier for users to report and take down AI-generated deepfakes. The new policy allows individuals to request the removal of videos that use AI to create a synthetic version of their likeness, including their face or voice, by filing a privacy complaint over content that realistically simulates them without their consent. Once a privacy request is submitted, the content creator has 48 hours to address the complaint before YouTube decides whether to remove the video.


This change aims to address the growing concern over the misuse of AI-generated content, which can create highly realistic but misleading representations of individuals. YouTube considers various factors when deciding on removal, such as whether the content is labeled as synthetic, whether it uniquely identifies a person, and whether it could be considered parody or satire.

In March, YouTube introduced a tool in Creator Studio that lets creators disclose when their content includes altered or synthetic media, including generative AI. Recently, YouTube also began testing a feature that allows users to add crowdsourced notes to videos, providing additional context such as whether a video is meant to be a parody or could be misleading.

YouTube’s stance on AI use is nuanced. While it has experimented with generative AI tools of its own, such as a comments summarizer and a conversational tool for asking questions about a video or getting recommendations, the company emphasizes that simply labeling content as AI-generated does not automatically exempt it from removal. All content, synthetic or not, must still comply with YouTube’s Community Guidelines.