Instagram Adopts PG-13 Standards to Shield Teen Users

Instagram has introduced new content restrictions for teen users, aligning its filtering system with the PG-13 rating used in the film industry. The social media platform announced on Tuesday that this update represents the most significant enhancement to its Teen Accounts since their introduction in September last year.

The adjustment means teenagers on Instagram will now only encounter content comparable to what is permitted in films rated PG-13 by the Motion Picture Association of America—a classification that alerts parents to material potentially unsuitable for children under 13, such as mild depictions of violence, nudity, or drug use.

Meta, Instagram’s parent company, said the move is part of its commitment to strengthening child protection measures and ensuring “the most protective settings” for young users. Capucine Tuffier, Meta’s head of public affairs for child protection, explained that the company had adopted standards inspired by the film industry to promote a safer online environment for teens.

Under the new rules, posts that glorify drinking, smoking, or extreme dieting will be filtered out, while those promoting potentially harmful behaviour—such as dangerous social media challenges—will be hidden from feeds and removed from recommendation algorithms. Instagram also reiterated that sexually explicit or disturbing content remains banned from teen accounts.

To prevent teenagers from bypassing restrictions by lying about their age, Meta will continue to rely on age detection technology. The new settings are initially being rolled out in Australia, the United Kingdom, Canada, and the United States, with plans to extend them to other countries in the coming months.

Parents will also gain greater control over what their children view, with a new “restricted content” option allowing them to block their teens from seeing, posting, or receiving comments on certain types of content. Beginning next year, this restriction will also apply to conversations teens can have with artificial intelligence tools on the platform.

The update comes as Meta and other social media companies face mounting pressure to prioritise user wellbeing over profit. It coincides with the introduction of a landmark law in California requiring chatbot developers to install critical safeguards following reports linking some AI chat interactions to teenage suicides.

Author

  • Abdullahi Jimoh

    Abdullahi Jimoh is a multimedia journalist and digital content creator with over a decade's experience in writing, communications, and marketing across Africa and the UK.
