An outside group is examining how Meta dealt with complaints about AI-generated fake porn on its platforms.
The “Oversight Board,” consisting of 40 members from around the globe across different disciplines and backgrounds, was created by Meta to assist in deciding what content to remove or keep on its platforms, and why, per the group’s website.
The Oversight Board will use its independent judgment to support people’s right to free speech and ensure those rights are being properly respected. The Board’s decisions regarding Facebook, Instagram, and Threads content will be final and must be implemented by Meta, unless doing so would violate the law.
The Oversight Board issued a statement on April 16 indicating that it will be investigating two recent instances of AI-generated pornographic images of well-known women that were posted on Meta’s platforms. The first case involves an AI-created nude image resembling a public figure from India, which was shared on Instagram. A user flagged the content to Meta for being pornographic, but the report was automatically closed after Meta failed to review it within 48 hours. The user appealed Meta’s decision to keep the content up; however, this appeal was also automatically closed. The user then appealed to the Oversight Board.
According to the Board, Meta acknowledged that its choice to keep the content up was a mistake and has since taken down the post.
A public Instagram account that exclusively posts AI-generated images of Indian women shared the fake image.
The second case involves an AI-generated pornographic image of a nude woman with a man touching her breast. The image was posted on Facebook and was designed to resemble an American public figure mentioned in the caption.
Another Facebook user had originally posted the photo in question, which was then removed for violating Meta’s policy on bullying and harassment, specifically for “derogatory sexualized photoshop or drawing.”
Once removed, the image was added to a database allowing Meta's automated system to identify and delete previously banned images.
In this case, Meta’s systems identified and removed the image. The user who posted the photo appealed the removal, but the report was automatically closed. It was then appealed to the Board.
The Board is inviting public feedback to assist in its decision-making process for these two cases.
In particular, it is seeking public comments regarding the harm caused by AI-generated porn to women, particularly those in the public eye. Additionally, the Board is requesting contextual information about the prevalence of AI-generated porn globally, strategies for Meta to address this issue on its platforms, and the challenges of relying on an automated system that closes appeals within 48 hours without review.
As part of its decision-making process, the Board will also offer suggestions to Meta on content policies and more. Unlike the Board’s binding decisions on individual cases, these suggestions are not binding, though Meta must respond to them within 60 days.
Meta responds to the Board’s suggestions by carrying them out completely or partially, checking feasibility, confirming that the suggestion is something Meta already does, or choosing not to take any further action.
The increasing availability of AI technology has led to growing concern about the widespread use of fake pornographic images.
A year ago, a movement was initiated by those impacted by deepfake incidents to secure more legal safeguards, as reported earlier by The Dallas Express.
Dorota Mani, one of the organizers of the campaign, is the mother of a 14-year-old girl who was among several female high school students in New Jersey whose images were used to produce deepfake nudes that were distributed on social media.
The school informed Mani about the deepfake nudes, but reportedly, none of the students who allegedly created or circulated the sexually explicit content have faced consequences.
“We’re advocating for our children,”
Mani stated, as reported by The Associated Press. “They are not Republicans, and they are not Democrats. They don’t care. They just want to be loved, and they want to be safe.”
In October 2023, President Joe Biden issued an executive order with the aim of ensuring the safety and security of AI. “Achieving this goal necessitates thorough, dependable, repeatable, and standardized assessments of AI systems, along with policies, institutions, and, as appropriate, other measures to assess, comprehend, and alleviate risks from these systems before they are employed,” the order states.