Taking More Steps To Keep The People Who Use Instagram Safe

By Adam Mosseri, Head of Instagram

October 27, 2019

Update 9/30/21: Information in this article may be outdated. For current information about our suicide and self-injury content detection technology, please visit our Safety Center. As described in the Safety Center, our algorithms are intended to help identify potential suicide and self-injury content and are not intended to diagnose or treat any mental health or other condition.

Nothing is more important to me than the safety of the people who use Instagram, particularly the most vulnerable. Suicide and self-harm are difficult topics that people understandably care deeply about. These issues are complicated – there are many opinions about how best to approach them – but they matter a lot, and to me, as a parent, they certainly hit home.

My first thoughts are with anyone dealing with these difficult issues, and their family and friends. I can’t begin to imagine what they are going through. I also recognise that simply keeping people in my thoughts is not enough. We at Instagram owe it to everyone who uses our platform – especially those who may be at risk of suicide and self-harm – to do everything we can to keep them safe.

Two things are true about online communities, and they are in conflict with one another. First, the tragic reality is that some young people are influenced in a negative way by what they see online, and as a result they might hurt themselves. This is a real risk.

But at the same time, there are many young people who are coming online to get support with the struggles they’re having – like those sharing healed scars or talking about their recovery from an eating disorder. Often these online support networks are the only way for them to find other people who have been through similar experiences.

Based on expert advice from academics and mental health organisations like the Samaritans in the UK and the National Suicide Prevention Lifeline in the US, we aim to strike the difficult balance between allowing people to share their mental health experiences and protecting others from being exposed to potentially harmful content.

We understand that content which could be helpful to some may be harmful to others. In my conversations with young people who have struggled with these issues, I’ve heard that the same image might be helpful to someone one day, but triggering the next. That’s why we don’t allow people to share content that encourages or promotes self-harm or suicide. We have never permitted that.

Earlier this year we strengthened our approach to content related to suicide and self-harm. In February, we prohibited graphic images of self-harm and built new technology to find and act on this type of content, and we have worked to ensure that this sort of content, and the accounts sharing it, are not recommended.

As a result, we have been able to act on twice as much content as before. In the three months following our policy change, we removed, reduced the visibility of, or added sensitivity screens to more than 834,000 pieces of content. We were able to find more than 77% of this content before it was reported to us. While this is progress, we know the work here is never done.

This past month, we further expanded our policies to prohibit more types of self-harm and suicide content. We will no longer allow fictional depictions of self-harm or suicide on Instagram, such as drawings, memes, or content from films or comics that use graphic imagery. We will also remove other imagery that may not show self-harm or suicide, but does include associated materials or methods.

Accounts sharing this type of content will also not be recommended in search or in our discovery surfaces, like Explore. And we’ll direct more people to support resources, including localised helplines like the Samaritans and PAPYRUS in the UK and the National Suicide Prevention Lifeline and The Trevor Project in the United States.

These are complex issues that no single company or set of policies and practices can solve alone. I’m often asked why we allow any suicide or self-harm content on Instagram at all. Experts tell us that giving people a chance to share their most difficult moments and their stories of recovery can be a vital means of support, and that preventing people from sharing this type of content could not only stigmatize these mental health issues but could also hinder loved ones from identifying and responding to a cry for help.

But getting our approach right requires more than a single change to our policies or a one-time update to our technology. Our work here is never done. Our policies and technology have to evolve as new trends emerge and behaviors change.

To help us stay aware of new trends and cultural nuances, we meet every month with academics and experts on suicide and self-harm. We are also working with the Swedish mental health organisation MIND to understand the role that technology and social media play in the lives of young people. In the UK, we are working with the Samaritans on an industry-wide effort to shape new guidelines to help people in distress.

Outside of Europe, we also have additional technology that helps us proactively find people who might be in need. We want to bring this to Europe, but there are important legal considerations under EU law, so we’re working with our European regulator.

Any time we hear about someone harming themselves who may have been influenced by what they saw on our platforms, we are reminded of the struggles many young people face both online and off. We will continue working to keep everyone safe on Instagram, while at the same time making it possible for people to access support that can make a difference when they need it the most.

Adam Mosseri, Head of Instagram