Meta Platforms, the parent company of Facebook, Instagram, and WhatsApp, is facing scrutiny in the United States following revelations that its artificial intelligence systems may have been permitted to engage in inappropriate and sexually suggestive conversations with children.
The controversy stems from an internal policy document, reportedly titled “GenAI: Content Risk Standards,” which was obtained by Reuters.
The document allegedly outlined examples where Meta’s AI-powered chatbots could engage in “sensual” or “romantic” interactions with underage users, raising concerns about child safety on the company’s platforms.
Republican Senator Josh Hawley has initiated an investigation into the matter, demanding full disclosure from Meta and its chief executive, Mark Zuckerberg.
Hawley said the case underscores the risks posed by large technology firms prioritizing product rollout over user protection, particularly when children are involved.
The leaked document reportedly contained scenarios in which Meta’s chatbot could describe a child’s body in disturbing terms, encourage sexually charged role play, or provide misleading medical information.
It also indicated that the company’s AI systems might offer provocative responses when discussing sensitive topics such as sex, race, and celebrity culture.
Meta has denied that such interactions reflect its official policies, stating that sexualized content involving minors is strictly prohibited. Even so, concerns remain about the extent to which its internal risk assessments tolerated inappropriate outputs during AI development.
Further reports suggest that the company’s legal department had considered it acceptable for the AI to generate false statements about celebrities, provided such responses were accompanied by disclaimers acknowledging their inaccuracy.
The investigation is expected to examine the scope of Meta’s AI deployment across its social media platforms and whether sufficient safeguards are in place to protect children from harmful or exploitative interactions.