Meta is revising its “Made with AI” labels following widespread complaints from photographers who reported that non-AI-generated content was being mistakenly flagged. The updated label, now “AI info,” aims to address confusion and provide better context.
The original “Made with AI” labels were introduced earlier this year in response to criticism from the Oversight Board regarding Meta's “manipulated media” policy. Meta and its industry peers adopted “industry standard” signals to identify when generative AI had been used to create an image. However, photographers soon noticed that the labels were being applied to images that hadn't been generated with AI. Tests by PetaPixel revealed that even tiny edits made with Adobe Photoshop's generative fill tool could trigger the label.
Meta acknowledged that images with minor AI modifications, such as those made with retouching tools, carried indicators that mistakenly triggered the “Made with AI” badge. The company says it is working with industry partners to refine the labeling process so it better matches the labels' intent. In the meantime, the label has been changed to “AI info,” which users can click for more information.
However, the “AI info” labels won't specify which AI tools were used on an image. Instead, the contextual menu that appears when users tap the badge will remain unchanged, offering a generic description of generative AI and noting that Meta may add the notice when its systems detect AI signals.