(VitalNews.org) – Meta’s Oversight Board has criticized the company’s policies on non-consensual deepfake images after reviewing two cases involving explicit AI-generated images of famous women.
The Oversight Board said Meta did not take down a deepfake intimate image of a famous Indian woman until the board itself got involved. Deepfake nude images of women, including celebrities such as Taylor Swift, have been published on social media, and the problem has grown as the technology used to create them becomes more accessible to the public.
The board, which reviews content decisions across Meta’s platforms, spent months examining two cases involving AI-generated images of famous women. One case concerned an “AI manipulated image” posted on Instagram showing a nude Indian woman from behind with her face visible. The image resembled a famous public figure, and Meta did not remove it until the case was brought to the Oversight Board.
In the second case, an AI-generated image depicted an American woman, nude and being groped. It was posted to a Facebook group and was removed immediately. The board said both images violated Meta’s rule against “derogatory sexualized photoshop,” which falls under its bullying and harassment policy.
The board added that the policy’s wording was not clear enough and recommended replacing the word “derogatory” with “non-consensual” to make it clearer. The board also asked Meta why the image of the Indian woman was not already in its database of banned images, and it was troubled by the response that Meta relies on media reports to identify such content.
The board stated, “This is worrying because many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance.”
Copyright 2024, VitalNews.org