Dozens of fake, highly sexualized images of the actors Miranda Cosgrove, Jennette McCurdy, Ariana Grande, Scarlett Johansson and former tennis star Maria Sharapova have been shared widely by multiple Facebook accounts, garnering hundreds of thousands of likes and many reshares on the platform.
An analysis of over a dozen of these images by Reality Defender, a platform that works to detect AI-generated media, showed that many of the photos were deepfake images — with AI-generated, underwear-clad bodies replacing the bodies of celebrities in otherwise real photographs. A few of the images were likely created using image stitching tools that do not involve AI, according to Reality Defender’s analysis.
Under Meta’s Bullying and Harassment policy, the company prohibits “derogatory sexualized photoshop or drawings” on its platforms. The company also bans adult nudity, sexual activity and adult sexual exploitation, and its regulations are intended to block users from sharing or threatening to share non-consensual intimate imagery. Meta has also rolled out the use of “AI info” labels to clearly mark content that is AI manipulated.
One such deepfake image of Cosgrove that was still up over the weekend had been shared by an account with 2.8 million followers.
The Oversight Board cited recommendations it has made to Meta over the past year, including urging the company to make its rules clearer by updating its prohibition against “derogatory sexualized photoshop” to specifically include the word “non-consensual” and to encompass other photo manipulation techniques such as AI.
The board has also recommended that Meta fold its ban on “derogatory sexualized photoshop” into the company’s Adult Sexual Exploitation regulations, so moderation of such content would be more rigorously enforced.
“The Board is actively monitoring Meta’s response and will continue to push for stronger safeguards, faster enforcement, and greater accountability,” McConnell said.
Meta is not the only social media company to face the issue of widespread, sexualized deepfake content.
Last year, Elon Musk’s platform X temporarily blocked Taylor Swift-related searches after AI-generated pornographic images in the singer’s likeness circulated widely on the platform, garnering millions of views and impressions.
“Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content,” the platform’s safety team said in a post at the time.
A study published earlier this month by the U.K. government found that the number of deepfake images on social media platforms is growing rapidly, with the government projecting that 8 million deepfakes would be shared this year, up from 500,000 in 2023.