Meta has announced upcoming parental control tools for its AI-powered experiences across Facebook, Instagram, and Messenger, aiming to give parents more oversight of how teens engage with artificial intelligence on its platforms.
The new features, according to Meta, will allow parents to monitor and manage their teens’ interactions with Meta AI, including limiting access to certain AI-generated chats and educational tools. The update comes amid growing global scrutiny of how social media companies protect minors from potentially harmful or misleading AI content.
In a statement released Friday, the tech giant said it intends to roll out the controls on Instagram early next year. They will initially be available in English in the United States, the United Kingdom, Canada, and Australia.
“We recognize parents already have a lot on their plates when it comes to navigating the internet safely with their teens, and we’re committed to providing them with helpful tools and resources that make things simpler for them, especially as they think about new technology like AI,” the company said in a post written by Instagram head Adam Mosseri and newly appointed Meta AI head Alexandr Wang.
Meta added in the statement that the tools are part of a broader initiative to make its AI systems “safe, transparent, and age-appropriate.” The company also revealed plans to expand transparency reports detailing how young users engage with its AI features.