Meta has announced plans to significantly expand the labeling of AI-generated images across its social media platforms, including Facebook, Instagram, and Threads. The initiative will cover not only synthetic imagery produced by Meta’s own generative AI tools but also images created with technologies from other companies such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. The effort is part of an industry-wide push to establish best practices and to embed “invisible markers” in images and their metadata so that AI-generated content can be identified.
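To make the idea of metadata markers concrete, here is a minimal sketch in Python using Pillow, which writes a provenance flag into a PNG text chunk and reads it back. This is illustrative only: the key names are hypothetical, and the industry schemes the article alludes to (such as IPTC metadata fields and C2PA manifests) are standardized and considerably richer than a single text chunk.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Placeholder image standing in for a generator's output.
img = Image.new("RGB", (64, 64), color="gray")

# Attach a provenance note as a PNG text chunk. The key and value
# here are purely illustrative, not any company's actual schema.
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model")  # hypothetical field
img.save("labeled.png", pnginfo=meta)

# A platform-side check could read the chunk back on upload.
loaded = Image.open("labeled.png")
print(loaded.text.get("ai_generated"))  # -> "true"
```

A real ingestion pipeline would also have to handle formats other than PNG and treat missing metadata as inconclusive, since text chunks like these are trivially stripped by re-encoding or screenshotting.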

Image caption: This image was created with the assistance of DALL·E.

The decision to implement these labels comes in response to the potential for misuse of AI-generated imagery to spread disinformation. Nick Clegg, Meta’s president of global affairs, highlighted the importance of this initiative in guarding against the adversarial use of AI to deceive the public. He emphasized the ongoing need for the industry and society at large to stay vigilant and proactive in detecting AI-generated content.

Meta’s approach to labeling AI-generated content relies on detecting the visible marks and invisible watermarks that generative AI tools embed in synthetic images. However, the company acknowledges that AI-generated video and audio are harder to catch, because marking and watermarking have not been adopted as widely for those media, leaving detection tools with little to work from.
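Invisible watermarks differ from metadata in that the signal lives in the pixel data itself. As a toy illustration of the concept, the sketch below hides and detects a fixed bit pattern in the least significant bits of a few pixels. Production watermarking of the kind the article describes spreads the signal across the whole image and is designed to survive cropping, compression, and re-encoding, none of which this toy version does.

```python
import numpy as np

MARK = 0b1010101  # toy 7-bit watermark pattern (illustrative only)

def embed_lsb_mark(pixels: np.ndarray, bits: int = MARK, n: int = 7) -> np.ndarray:
    """Hide a fixed bit pattern in the least significant bits of the
    first n red-channel pixels. Robust watermarks spread the signal
    spatially and survive re-encoding; this toy one does not."""
    out = pixels.copy()
    for i in range(n):
        bit = (bits >> i) & 1
        out[0, i, 0] = (out[0, i, 0] & 0xFE) | bit
    return out

def detect_lsb_mark(pixels: np.ndarray, bits: int = MARK, n: int = 7) -> bool:
    """Check whether the expected bit pattern is present."""
    found = 0
    for i in range(n):
        found |= (int(pixels[0, i, 0]) & 1) << i
    return found == bits

img = np.zeros((8, 8, 3), dtype=np.uint8)  # stand-in image
marked = embed_lsb_mark(img)
print(detect_lsb_mark(marked))  # True
print(detect_lsb_mark(img))     # False: pattern absent
```

The fragility of this scheme is exactly why the article notes that detection is hard when generators do not cooperate: a detector can only find a watermark that was deliberately and robustly embedded in the first place.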

In light of these challenges, Meta is also introducing a tool that allows users to voluntarily disclose when they share AI-generated video or audio content, ensuring it can be appropriately labeled. The company has indicated that failure to use this disclosure tool for “photorealistic” videos or “realistic-sounding” audio could result in penalties under Meta’s Community Standards.