In response to the growing influence of artificial intelligence (AI) in media, Google has announced a new mandate requiring advertisers to disclose when their election ads feature digitally altered content. This policy change aims to address the rise of generative AI, which makes it easy to create text, images, and videos that can mislead or misinform the public.
Google's updated political content policy is designed to adapt to the evolving nature of AI. It requires advertisers to select a checkbox in the 'altered or synthetic content' section when setting up their ad campaigns. For feeds and Shorts on mobile devices, and for in-stream ads on computers and televisions, Google will automatically generate an in-ad disclosure. For other formats, advertisers must themselves ensure the disclosure is prominent and noticeable to users. The specific language of these disclosures will vary depending on the context of each ad.
The move comes amid growing concern about the misuse of AI to create deepfakes and other synthetic media that misrepresent real people or events. The potential for AI to influence political outcomes was starkly demonstrated during India's recent general election, when fake videos featuring AI-generated depictions of Bollywood actors criticizing Prime Minister Narendra Modi went viral.
In a similar vein, OpenAI reported in May that it had disrupted five covert influence operations attempting to use its AI models for deceptive activities aimed at manipulating public opinion. Meta, the parent company of Facebook and Instagram, likewise requires advertisers to disclose the use of AI or other digital tools in creating political, social, or election-related ads.
These efforts reflect a broader push by tech companies to mitigate the spread of misinformation and ensure transparency in political advertising.