Meta AI Policy Leak Exposes Alarming Rules Allowing NSFW Chats with Kids
A recently uncovered internal document has sparked major controversy for Meta Platforms, the parent company of Facebook, Instagram, and WhatsApp. According to the leaked files, the company’s artificial intelligence (AI) chatbots were previously permitted to engage in highly questionable interactions, including romantic conversations with children, spreading false medical claims, and producing racially discriminatory content.
Leaked Report Reveals Controversial AI Standards
The document, titled GenAI: Content Risk Standards and running over 200 pages, was reviewed by Reuters. It outlined what Meta considered “acceptable behavior” for its generative AI assistant, Meta AI, and other chatbot systems. Shockingly, the guidelines were approved by Meta’s legal, policy, and engineering teams, including the company’s chief ethicist.
Inappropriate Interactions with Children
The standards revealed that chatbots could describe children in ways that praised their appearance and even engage in romantic roleplay. One disturbing example showed a bot telling a shirtless eight-year-old: “Every inch of you is a masterpiece.”
Although the rules banned describing minors under 13 as “sexually desirable,” they still permitted flirtatious remarks. Following Reuters’ inquiries, Meta admitted that such allowances were a mistake. Company spokesperson Andy Stone acknowledged that these interactions “never should have been allowed” and confirmed that they were removed from the policy.
Racist Arguments and False Claims Permitted
The leaked rules also disclosed that AI chatbots could generate racist arguments — for example, suggesting that one race was less intelligent than another — despite Meta’s public stance against hate speech.
Equally concerning, the guidelines allowed bots to share false information as long as a disclaimer was attached. One cited example involved claiming that a British royal had a sexually transmitted disease, provided the chatbot labeled the statement as untrue. Meta declined to comment on these examples.
NSFW Requests Involving Celebrities
The standards also addressed sexually explicit requests involving celebrities. Requests for explicit images of Taylor Swift, for instance, were to be denied; instead, the bot could deflect by generating a lighthearted image of her holding a large fish.
Violent Content Still Allowed
The leaked policies revealed that AI systems were permitted to create violent scenes, such as a boy punching a girl or an adult threatening someone with a chainsaw. However, content depicting gore, death, or severe harm was strictly prohibited.
Meta Faces Growing Backlash
While Meta has removed some of the most controversial elements following media exposure, the company has not released an updated version of the standards. This means that several loopholes and risky allowances may still remain in effect.
The revelations have raised serious concerns about child safety, misinformation, and racial bias in AI systems operated by one of the world’s largest tech companies. Critics argue that this policy leak highlights the urgent need for stricter oversight of generative AI and better safeguards to protect vulnerable users.