Instagram Introduces New Parental Controls and PG-13 Filters for Teen Users

Instagram is tightening safety settings for its teenage users with a new set of parental controls and content filters designed to give parents more oversight and limit exposure to inappropriate material. The changes, announced this week by parent company Meta, mark one of the platform’s most comprehensive overhauls of teen safety features to date.

The new policy automatically places all users under 18 on a PG-13-style content filter, restricting exposure to mature or sensitive posts. Under the new settings, teenagers will see less material related to drug use, violence, explicit language, or sexually suggestive content. Meta says the feature will roll out immediately in the United States, United Kingdom, Canada, and Australia, with plans to expand globally before the end of the year.

Under the updated system, teens will no longer be able to switch to less restrictive content settings without parental approval. Parents and guardians will also receive notifications when changes are requested, ensuring they remain part of the decision-making process. The platform will additionally introduce a new “Limited Content Mode,” which blocks an even broader range of posts and may restrict certain features like commenting or following some accounts.

Meta describes these steps as part of a broader effort to "create a safer experience for young people" amid growing criticism over how social media affects teen mental health.

Alongside the content filters, Instagram is also expanding parental controls to its AI-powered features, following public concern about “flirtatious” or inappropriate interactions between minors and Meta’s chatbots. Beginning early next year, parents will be able to see which AI characters their teens are interacting with, and may restrict or disable private conversations entirely.

Meta is also introducing transparency tools that summarize what types of topics a teen discusses with AI assistants, without exposing personal messages. These changes will first appear in English-speaking markets before expanding to other regions.

The move comes as Meta faces mounting regulatory and public scrutiny. Lawmakers in the U.S. and Europe have repeatedly pressed social-media companies to implement stronger protections for minors.

Meta has faced lawsuits and investigations over allegations that its algorithms intentionally amplified harmful content to teenagers. In response, the company has introduced several measures over the past two years — including time-limit reminders, “quiet mode” notifications, and automatic privacy settings for new teen accounts.

By introducing default filters and requiring parental consent to loosen them, Instagram is effectively shifting the burden of safety from teens to adults — a move regulators have long demanded.

The rollout of the new features will continue through the end of 2025, with AI controls following in early 2026. The company plans to evaluate feedback from parents and young users before extending the updates to other products, including Facebook and Messenger.
