The Oversight Board has urged Meta to update its policies after reviewing its handling of AI-generated explicit images. The semi-independent board recommended changing the term "derogatory" to "non-consensual" and relocating these policies from the "Bullying and Harassment" section to the "Sexual Exploitation" Community Standards.
These recommendations follow two high-profile incidents where explicit, AI-generated images of public figures were posted on Instagram and Facebook, putting Meta in a difficult position.
Currently, Meta's policies on explicit AI-generated images stem from a "derogatory sexualized Photoshop" rule in its Bullying and Harassment section. The Board recommended replacing the term "Photoshop" with a more general term for manipulated media.
Meta's rules also ban non-consensual imagery, but only if it is "non-commercial or produced in a private setting". The Board suggested that this qualifier should not be required for removing or banning AI-generated or manipulated images shared without consent.
What were the two cases?
One case involved an AI-generated nude image of an Indian public figure posted on Instagram. Despite several user reports, Meta did not remove the image and closed the ticket within 48 hours without further review. Users appealed, but the ticket was closed again. Meta only took action after the Oversight Board intervened, removing the content and banning the account.
In another instance, an AI-generated image resembling a U.S. public figure was posted on Facebook. Because the image was already in Meta's Media Matching Service (MMS) repository, a bank of previously flagged media that automatically catches re-uploads, it was quickly removed when another user uploaded it again.
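Meta has not published how MMS works internally, but media-matching systems of this kind generally work by fingerprinting known violating content with perceptual hashes and comparing each new upload against that bank. The sketch below illustrates the general technique using the open-source imagehash library; the function names and the distance threshold are illustrative assumptions, not Meta's actual implementation.

```python
# Illustrative sketch only: Meta has not disclosed MMS internals.
# Shows the general idea of hash-based media matching using the
# open-source `imagehash` library (pip install imagehash pillow).
from PIL import Image
import imagehash

# A "bank" of perceptual hashes for media already judged violating.
# A production system would use a large persistent index, not a list.
hash_bank: list[imagehash.ImageHash] = []

def add_to_bank(path: str) -> None:
    """Fingerprint a confirmed-violating image and store its hash."""
    hash_bank.append(imagehash.phash(Image.open(path)))

def matches_bank(path: str, max_distance: int = 5) -> bool:
    """Check whether a new upload is perceptually close to banked media.

    Perceptual hashes tolerate minor edits such as resizing or
    re-encoding, so near-duplicates are caught, not just exact copies.
    """
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - banked <= max_distance for banked in hash_bank)

# Once an image is in the bank, later re-uploads can be flagged
# automatically, which is why the Facebook image was removed quickly
# while the Instagram image, never banked, kept slipping through.
```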
Notably, Meta added the image of the Indian public figure to the MMS bank only after being prompted by the Oversight Board. The company reportedly informed the Board that the image was not in the repository earlier due to the absence of media reports on the issue.
“This is worrying because many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance,” the board said.
The board also expressed concern about Meta's practice of "auto-closing" appeals related to image-based sexual abuse after 48 hours, warning that it "could have a significant impact on human rights."
Oversight Board previously slammed Meta's content rules as 'incoherent'
Meta's issue with AI-generated explicit images is not new. According to "A Revealing Picture", a report by social network analysis company Graphika, researchers found that social media platforms such as X, Reddit, Telegram and Instagram served as "key marketing channels" for apps and websites that use AI to generate explicit images of women.
The report highlighted that these AI-enabled sites and apps "undress" or "nudify" existing clothed pictures and videos of real individuals.
Graphika identified several content-aggregation accounts on Instagram that included referral links to these services in their posts and bios. In light of the report, Meta reportedly blocked the search term "undress" to tackle the problem, according to Bloomberg.
This is not the Oversight Board's first intervention: it previously found Meta's rules too narrowly focused on AI-generated content and deemed them "incoherent". That ruling came after the Board discovered that a Facebook video falsely suggesting that US President Joe Biden is a paedophile did not violate the company's rules as written.
The video, posted in May 2023, was edited to falsely show Mr. Biden inappropriately touching his granddaughter. The original 2022 video showed him placing an "I voted" sticker on her after voting. The edited seven-second clip misled viewers by looping and altering the footage.
The board concurred with Meta that the video did not contravene its manipulated media policy, since the rules only prohibited content creating the illusion that a person said something they did not, not that they did something they did not do. However, it recommended that the policy also cover video or audio manipulated to depict false actions, even when the individual's actual words are not used.
Skeptical of decisions based solely on the editing method used, whether AI-driven or basic, the board emphasised that content altered without AI can be just as misleading.
Founded in 2020 to independently review major content moderation decisions, the Oversight Board exists to address challenges like these. Composed of academics and public policy experts, the group is funded by Meta but acts as a check on the company's content control.