Instagram is tightening safety settings for its teenage users with a new set of parental controls and content filters designed to give parents more oversight and limit exposure to inappropriate material. The changes, announced this week by parent company Meta, mark one of the platform’s most comprehensive overhauls of teen safety features to date.
The new policy automatically places all users under 18 on a PG-13-style content filter, restricting exposure to mature or sensitive posts. Under the new settings, teenagers will see less material involving drug use, violence, explicit language, and sexually suggestive themes. Meta says the feature will roll out immediately in the United States, United Kingdom, Canada, and Australia, with plans to expand globally before the end of the year.
Under the updated system, teens will no longer be able to switch to less restrictive content settings without parental approval. Parents and guardians will also receive notifications when changes are requested, ensuring they remain part of the decision-making process. The platform will additionally introduce a new “Limited Content Mode,” which blocks an even broader range of posts and may restrict certain features like commenting or following some accounts.
These steps are part of Meta’s broader effort to “create a safer experience for young people” amid growing criticism over how social media affects teen mental health.
Alongside the content filters, Instagram is also expanding parental controls to its AI-powered features, following public concern about “flirtatious” or inappropriate interactions between minors and Meta’s chatbots. Beginning early next year, parents will be able to see which AI characters their teens are interacting with, and will be able to restrict or disable those private conversations entirely.
Meta is also introducing transparency tools that summarize the types of topics a teen discusses with AI assistants, without exposing personal messages. These changes will first appear in English-speaking markets before expanding to other regions.
The move comes as Meta faces mounting regulatory and public scrutiny. Lawmakers in the U.S. and Europe have repeatedly pressed social-media companies to implement stronger protections for minors.
Meta has faced lawsuits and investigations over allegations that its algorithms intentionally amplified harmful content to teenagers. In response, the company has introduced several measures over the past two years — including time-limit reminders, “quiet mode” notifications, and automatic privacy settings for new teen accounts.
By introducing default filters and requiring parental consent to loosen them, Instagram is effectively shifting the burden of safety from teens to adults — a move regulators have long demanded.
The rollout of the new features will continue through the end of 2025, with AI controls following in early 2026. The company plans to evaluate feedback from parents and young users before extending the updates to other products, including Facebook and Messenger.