MOUNTAIN VIEW, California – In a decisive move to address growing concerns over AI-generated deepfake nude images, Google has issued new guidance for developers building AI applications distributed through Google Play.
This announcement follows a recent crackdown on applications designed to create non-consensual and potentially harmful content.
The new guidelines come amid alarming reports about the ease with which certain AI-powered apps can manipulate photos to produce realistic yet fabricated nude images of individuals.
High-profile examples include apps like ‘DeepNude’ and its clones, which can strip clothes from images of women, generating highly realistic nude photos.
Additionally, reports have highlighted the availability of apps capable of generating deepfake videos, posing significant risks of privacy invasion, harassment, and blackmail.
Under the new guidance, AI applications must undergo rigorous testing to prevent the generation of restricted content.
Developers are required to document these tests before their app launches, as Google may request a review of these tests in the future.
Furthermore, any advertisements suggesting that an app can circumvent Google Play’s rules could result in a ban from the platform.
Google has also emphasized the importance of providing users with mechanisms to report inappropriate content.
To support developers, the company is offering additional resources and best practices, including its People + AI Guidebook, which aims to assist developers in responsibly building AI applications.