The new policy will take effect on May 30.
Google has updated its Inappropriate Content Policy to include language that expressly prohibits advertisers from promoting websites and services that generate deepfake pornography. While the company already has strong restrictions in place for ads that feature certain types of sexual content, this update leaves no doubt that promoting “synthetic content that has been altered or generated to be sexually explicit or contain nudity” violates its rules.
Advertisers who promote websites or applications that produce deepfake porn, offer tutorials on making it, or list, rate, or compare deepfake porn services will be suspended immediately and barred from advertising on Google again. The company is giving advertisers the chance to remove any ads that violate the new policy before it takes effect on May 30. According to 404 Media, the proliferation of deepfake technology has fueled a surge in ads for tools aimed at users who want to produce sexually explicit content. Some of those tools reportedly pose as kid-friendly services to get listed on the Google Play Store and Apple App Store, while on social media they openly advertise their ability to produce manipulated porn.
Google has, in fact, already begun banning deepfake services from Shopping ads. In line with the soon-to-be-broader policy, the company prohibits Shopping ads for services that “generate, distribute, or store synthetic sexually explicit content or synthetic content containing nudity,” including websites that promote deepfake porn generators and tutorials on creating fake porn.
Conclusion
To sum up, Google’s revised ad content policy is a significant step toward curbing the spread of sexually explicit content, including deepfake pornography, online. By blocking ads connected to websites and applications that produce deepfake porn, Google prevents advertisers from aiding the production and distribution of synthetic sexually explicit content. The scale of enforcement is evident in Google’s annual ad safety report, which details the removal of over 1.8 billion ads for policy violations, including those promoting deepfake pornography.
Despite these initiatives, problems persist, as some entities manage to slip past these safeguards. To effectively stop the continued spread of deepfake pornography, Google must remain vigilant and combine technological advances with strict policy enforcement.
Google’s action against deepfake pornography also goes beyond regulating ads: its commitment to prohibiting the promotion of sexually explicit content, including synthetic content, underscores its dedication to fostering a trustworthy online ecosystem. Still, more needs to be done, especially to close the gaps that bad actors exploit. Ongoing effort and industry collaboration remain essential to combat this widespread problem and ensure a safer online environment for all users.