Instagram to Notify Parents When Teens Repeatedly Search for Suicide or Self-Harm
Instagram is at the center of a new parental notification feature aimed at helping families spot when a teen may need support: in the coming weeks, Instagram will start notifying parents who use its supervision tools if their teen repeatedly attempts searches related to suicide or self-harm within a short period of time. The change adds another layer of protection to Teen Accounts and parental supervision features.
How Instagram will notify parents and what triggers alerts
The system is designed to detect attempted searches that match certain categories and then notify parents enrolled in supervision. Attempted searches that would prompt an alert include phrases promoting suicide or self-harm, phrases suggesting a teen wants to harm themselves, and terms like 'suicide' or 'self-harm'.
When the threshold is reached, parents will receive notifications through available contact channels — email, text, or WhatsApp — and an in-app notification. Tapping the in-app message opens a full-screen explanation that the teen has repeatedly tried to search for terms associated with suicide or self-harm within a short period of time, and offers expert resources to help parents approach potentially sensitive conversations.
Rollout, safeguards and what parents will see
These alerts will first roll out next week to parents who use Instagram’s parental supervision tools in the US, UK, Australia, and Canada, with availability in other regions planned later in the year. The stated goal is to empower parents to step in if a teen’s searches suggest they may need support, while avoiding unnecessary notifications that could dilute the usefulness of alerts.
The team analyzed Instagram search behavior and consulted experts from its Suicide and Self-Harm Advisory Group when designing the feature. They selected a threshold that requires a few searches within a short period of time and intentionally erred on the side of caution: the system may sometimes notify parents when there is no real cause for concern, but it is positioned as a cautious starting point that will be monitored and refined based on feedback.
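The article does not disclose the actual threshold or window Instagram uses, but the "a few searches within a short period of time" behavior can be sketched as a sliding-window counter. The window length, threshold, and class name below are purely illustrative assumptions, not Instagram's real values or implementation:

```python
from collections import deque

# Hypothetical parameters -- the article only says "a few searches
# within a short period of time"; these specific values are assumptions.
WINDOW_SECONDS = 15 * 60   # assumed: 15-minute window
THRESHOLD = 3              # assumed: 3 flagged search attempts

class SearchAlertMonitor:
    """Sketch of a sliding-window alert trigger for flagged searches."""

    def __init__(self, window=WINDOW_SECONDS, threshold=THRESHOLD):
        self.window = window
        self.threshold = threshold
        self.attempts = deque()  # timestamps of flagged search attempts

    def record_attempt(self, timestamp: float) -> bool:
        """Record a flagged search attempt; return True if an alert fires."""
        self.attempts.append(timestamp)
        # Age out attempts that fall outside the sliding window.
        while self.attempts and timestamp - self.attempts[0] > self.window:
            self.attempts.popleft()
        return len(self.attempts) >= self.threshold

monitor = SearchAlertMonitor()
monitor.record_attempt(0)      # False: only 1 attempt in window
monitor.record_attempt(60)     # False: 2 attempts
monitor.record_attempt(120)    # True: 3 attempts within the window
```

A sliding window like this matches the stated design goal: isolated searches spread over a long stretch never trigger a notification, while a burst of attempts in quick succession does.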
What this builds on and what parents should expect next
The new alerts build on Instagram’s existing approach to potentially harmful content for teens. The platform retains policies that block searches promoting or glorifying suicide or self-harm and instead directs users to resources and helplines that can offer support. The majority of teens do not try to search for suicide or self-harm content on the platform, and these alerts are intended to give parents timely information when repeated search attempts occur.
- Who receives alerts: parents enrolled in supervision for a teen account.
- What triggers an alert: multiple searches within a short period for terms tied to suicide or self-harm.
- How alerts arrive: email, text, WhatsApp, and an in-app notification that opens to resources.
- Initial availability: US, UK, Australia, Canada, with further rollout later in the year.
Parents and teens enrolled in supervision will be notified next week that these alerts are starting. The product team emphasized that they will continue to monitor the feature and listen to feedback to ensure the balance between timely intervention and minimizing unnecessary notifications remains appropriate. For families using parental supervision, this change will introduce a new signal that can prompt supportive conversations and connect teens with resources when searches suggest they may be at risk.