Editor’s Note: If you or someone you know is struggling with depression or suicidal ideation, you can call 988 to access the 988 Suicide & Crisis Lifeline or find help online at https://988lifeline.org/.
Instagram said Thursday it will start alerting parents if their kids repeatedly search for terms clearly associated with suicide or self-harm. The alerts will only go to parents who are enrolled in Instagram’s parental supervision program.
Instagram says it already blocks such content from showing up in teen accounts’ search results and directs people to helplines instead.
READ MORE: What legal experts say about a major ‘bellwether trial’ over child social media addiction
The announcement comes as Meta is in the midst of two trials over harms to children. A trial underway in Los Angeles questions whether Meta’s platforms deliberately addict and harm minors. Another, in New Mexico, seeks to determine whether Meta failed to protect kids from sexual exploitation on its platforms. Thousands of families — along with school districts and government entities — have sued Meta and other social media companies claiming they deliberately design their platforms to be addictive and fail to protect kids from content that can lead to depression, eating disorders and suicide.
Meta executives, including CEO Mark Zuckerberg, have disputed that the platforms cause addiction. During questioning by a plaintiffs’ lawyer in Los Angeles, Zuckerberg said he stands by his earlier statement that the existing body of scientific work has not proved that social media causes mental health harms.
WATCH: Zuckerberg takes stand in a landmark trial on youth social media addiction
The alerts will be sent via email, text or WhatsApp, depending on the contact information the parent has provided, as well as through a notification in the parent’s Instagram account.
“Our goal is to empower parents to step in if their teen’s searches suggest they may need support. We also want to avoid sending these notifications unnecessarily, which, if done too much, could make the notifications less useful overall,” Meta said in a blog post.
WATCH: After son’s suicide, mother says social media platforms are built to addict children
Meta said it is also working on similar notifications to parents about their kids’ interactions with artificial intelligence.
“These will notify parents if a teen attempts to engage in certain types of conversations related to suicide or self-harm with our AI,” Meta said. “This is important work and we’ll have more to share in the coming months.”