WASHINGTON, D.C. – A U.S. senator has opened an investigation into Meta after a leaked internal document suggested the company’s artificial intelligence (AI) systems were permitted to engage in “sensual” and “romantic” conversations with children.
The document, reportedly titled “GenAI: Content Risk Standards” and obtained by Reuters, outlined content risks associated with Meta’s generative AI assistant and chatbots across its platforms — including Facebook, WhatsApp, and Instagram.
Republican Senator Josh Hawley of Missouri condemned the revelations as “reprehensible and outrageous,” announcing on August 15 that he was demanding answers from Meta and CEO Mark Zuckerberg.
“Is there anything — ANYTHING — Big Tech won’t do for a quick buck?” Hawley wrote on X. “Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I’m launching a full investigation to get answers. Big Tech: Leave our kids alone.”
A Meta spokesperson told the BBC that the examples in the document were “erroneous and inconsistent with our policies, and have been removed.”
The company insisted it has strict rules prohibiting sexualized content involving minors, and said the document’s notes reflected internal teams grappling with hypothetical scenarios rather than official policy.
The leaked document reportedly contained controversial examples of chatbot interactions — including one where a bot described an eight-year-old’s body as “a work of art… a masterpiece — a treasure I cherish deeply.”
It also suggested Meta AI could give false medical information or share fabricated claims about celebrities, provided a disclaimer was included. Reuters reported that some of these allowances were signed off by Meta’s legal department.
In a letter to Zuckerberg, Hawley wrote: “Parents deserve the truth, and kids deserve protection.”
The probe adds to mounting scrutiny of tech giants over how their AI products handle child safety, misinformation, and sensitive content, at a time when regulators worldwide are pushing for stronger safeguards in the rapidly expanding AI sector.