To better safeguard its younger audience from potentially harmful material, Instagram is introducing new limitations for teen profiles. By default, users under 18 will now only be exposed to content that fits within PG-13 movie guidelines, steering clear of topics such as intense violence, explicit nudity, and detailed depictions of drug use.
Teens will not be able to change this setting without direct consent from a parent or guardian.
Instagram is also launching a stricter filter called Limited Content; teens with this setting enabled will be blocked from seeing or commenting on certain posts.
The platform also announced that, beginning next year, it will place further limits on the kinds of conversations teens under the Limited Content setting can have with AI chatbots. The PG-13 content restrictions already apply to AI chats.

These updates come as chatbot developers like OpenAI and Character.AI face lawsuits over alleged user harm. Last month, OpenAI introduced new rules for ChatGPT users under 18 and stated it is training the chatbot to avoid “flirtatious talk.” Earlier this year, Character.AI also implemented additional restrictions and parental controls.
Instagram, which has been developing safety features for teens across profiles, direct messages, search, and content, is broadening its controls and restrictions for minors. The platform will prevent teenagers from following accounts that share content unsuitable for their age. If a teen already follows such an account, they will not be able to view or interact with its posts, and that account will not be able to see or interact with the teen's content either. These accounts will also be excluded from recommendations, making them less visible.

Additionally, the company is blocking teens from opening links to age-inappropriate content sent to them through direct messages.
Meta already limits teen access to content about eating disorders and self-harm. Now, it is also blocking terms like “alcohol” and “gore,” and is taking steps to ensure teens cannot find such material even if they intentionally misspell these words.

The company is piloting a new feature that allows parents to report content they believe should not be recommended to teens through parental supervision tools. Reported posts will be reviewed by a dedicated team.
Instagram is beginning to implement these updates today in the U.S., U.K., Australia, and Canada, with a global rollout planned for next year.