
Search Giant Google Reduces Exposure to Explicit Deepfakes by 70%

Google has announced new steps to make AI-generated explicit content harder to find in Search. Recent updates to Google's ranking system have significantly reduced exposure to fake explicit images, cutting such results by more than 70% for searches involving specific individuals.


Previously, searches for terms like “deepfake nudes Jennifer Aniston” would return numerous results claiming to have explicit AI-generated images of the actress. These results have now been replaced by articles addressing the societal impacts of deepfakes and warnings about related scams.


In a blog post, Google product manager Emma Higham said the changes help shift results away from nonconsensual fake images and toward informative content about the issue. The move follows a WIRED investigation that revealed Google's prior resistance to stronger measures against nonconsensual intimate imagery of people spreading online.


While Google has streamlined the process for requesting the removal of unwanted explicit content, victims and advocates have called for more proactive measures. The rise of AI image generators with few usage restrictions has made it easy to create spoofed explicit images, exacerbating the problem.


A March analysis by WIRED found that Google received over 13,000 removal requests for explicit deepfake links, complying in 82% of cases. As part of its new crackdown, Google will apply three measures to reduce the discoverability of both synthetic and real unwanted explicit images. These include preventing duplicates of removed images from appearing in search results, filtering similar explicit images from related queries, and demoting websites with high volumes of successful takedown requests.


These measures aim to offer greater peace of mind to individuals concerned about unwanted explicit content resurfacing. Despite these efforts, Google acknowledges that the measures are not perfect, and advocates believe more can be done. While Google prominently warns US users about the illegality of seeking child pornography, similar warnings for adult sexual deepfakes are not planned.
