Meta, the parent company of Facebook, has announced additional security features for its artificial intelligence language models following the leak of an internal document that revealed problems in its policies for conversations with minors. The document, titled “GenAI: Content Risk Standards,” showed that “suggestive” conversations between AI bots and children were permitted, which has sparked a strong reaction from U.S. legislators.
AI: the further from teenagers, the better
Republican Senator Josh Hawley has called the situation “reprehensible and absurd” and has opened an official investigation into Meta’s AI policies. In response to the concerns, company spokesperson Stephanie Otway stated that the examples and notes in question were erroneous, inconsistent with Meta’s policies, and have been removed.
As part of its new safety measures, Meta will limit its AI bots’ interactions with teenagers on sensitive topics such as suicide, self-harm, and eating disorders, redirecting young people to expert resources instead. However, questions remain about why this precaution was not implemented earlier and whether the bots will still be able to discuss these topics with adults.
Although some sexualized celebrity bot accounts on Meta’s platform have raised concerns about the safety of young users, the company has stated that teenagers will no longer have access to these interactions. Suicide prevention experts, such as Andy Burrows of the Molly Rose Foundation, have criticized Meta for not conducting sufficient safety testing before launching its products, urging the company to act quickly and effectively to protect minors.
This situation comes at a time of growing public concern about the safety of teenagers in digital environments, especially following the recent suicide of a teenager in California, whose family has sued OpenAI, the creator of ChatGPT, accusing the chatbot of encouraging their son to take his own life.