Facebook’s remarkable suicide prevention tool

30 Mar 2018 Nick Garbutt    Last updated: 17 Nov 2020

Facebook’s giant campus in California

In a week dominated by controversy about how our social media data is exploited, Scope looks at how Facebook also uses data to save lives.

There is no better example of both the dangers and the benefits of Artificial Intelligence than its use by Facebook.

Aside from the troubling issue of how Facebook data has allegedly been exploited to attempt to manipulate voting, the company has also developed a remarkable AI tool designed to prevent suicide.

Suicide rates are rising across the globe, yet comparatively little is known about what gives rise to suicide risk. Finding out more about effective prevention is therefore key.

Two years ago a piece of research was published which found evidence that posts on social media can be strong indicators that people are having suicidal thoughts. Subsequent work is based on uncovering trigger words – and emojis – that identify people at risk. It involves developing algorithms that detect trends in human communication. This is a well-established and comparatively advanced science – we are all familiar with it because it is what underpins predictive text on our smartphones.
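To make the idea concrete, here is a minimal sketch of what such a flagging step might look like. It is purely illustrative: the terms, weights and threshold are invented for this example and are not drawn from Facebook’s or anyone else’s real system.

```python
# Purely illustrative sketch of a keyword-and-emoji flagging step.
# The terms, weights and threshold are invented for this example and
# are NOT taken from any real suicide prevention system.

RISK_TERMS = {
    "ibuprofen": 3.0,  # everyday words can carry more signal than obvious ones
    "goodbye": 1.5,
    "😢": 2.0,          # the content need not be a word at all
    "suicide": 1.0,
}

def risk_score(post: str) -> float:
    """Sum the weights of any risk terms that appear in a post."""
    text = post.lower()
    return sum(weight for term, weight in RISK_TERMS.items() if term in text)

def flag_for_review(post: str, threshold: float = 2.0) -> bool:
    """Route a post to human reviewers once its score crosses a threshold."""
    return risk_score(post) >= threshold
```

Real systems are far more sophisticated than a fixed word list, but the basic shape – score the text, then escalate anything above a threshold to humans – is the same.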

Last year the American Crisis Text Line built an algorithm to analyse the words most frequently used when people were in desperate need of help, using a database of 22 million messages.

Strangely, it discovered that the word “ibuprofen” was 16 times more likely than the word “suicide” to predict that the person texting would need emergency services. Another key finding was that one high-risk piece of content wasn’t a word at all – it was a crying face emoji.

This has allowed the organisation to identify 9,000 word combinations that indicate high risk, so that the most urgent texts can be prioritised for help. Crucially, the research provides evidence that people do not always use the obvious words to express suicidal feelings.
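The arithmetic behind that kind of finding is straightforward: compare how often a word appears in messages that ended in an emergency response with how often it appears everywhere else. A minimal sketch, using invented counts rather than Crisis Text Line’s real data:

```python
# Illustrative sketch of the statistic behind a "16 times more likely"
# style finding. The message counts below are invented for the example;
# they are not Crisis Text Line's real figures.

def predictiveness(word_in_emergency: int, emergency_total: int,
                   word_in_other: int, other_total: int) -> float:
    """Relative rate at which a word appears in conversations that led to
    an emergency response versus all other conversations (a simple
    likelihood ratio)."""
    rate_emergency = word_in_emergency / emergency_total
    rate_other = word_in_other / other_total
    return rate_emergency / rate_other

# Invented counts: a word appearing in 80 of 10,000 emergency
# conversations but only 50 of 1,000,000 others is 160 times more
# common in the emergency group. Repeating this across the whole
# vocabulary yields a ranked list of high-risk terms.
print(predictiveness(80, 10_000, 50, 1_000_000))  # -> 160.0
```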

So when Facebook entered the suicide prevention arena in November, it was building on an emerging body of science.

It has developed a tool which it says can identify heightened suicide risk amongst social media users. The tool alerts a team of human reviewers, who can then reach out to the people concerned.

Facebook said at the launch that, even before announcing the existence of the suicide prevention algorithm, it had reached out to 100 at-risk people in this way.

We can safely assume that it is an extremely powerful tool. Facebook has dedicated huge resources to AI, with two research centres in the USA and another in France. It has also hoovered up leading academics in the field as it competes for dominance in a powerful emerging technology.

It also has an enormous amount of data to work with – it claims that around one fifth of the world’s population use the platform. Contrast those 2 billion users, with all their posts, with the 22 million texts analysed by Crisis Text Line.

However, to date Facebook has declined to say how the algorithm works or what its findings have been. Perhaps it will at some future date, but for now its extraordinary – and potentially game-changing – research has not been shared, even though it could help others to save lives.

There may well be very good reasons for this reluctance: for example, to prevent others from oversimplifying suicide risk by concentrating on key words to the exclusion of other factors. The research could even be used by organisations with malign intent to target vulnerable people with poor mental health.

Yet Facebook is a private company whose most prized asset is its enormous database, and one which has worked tirelessly to develop ways of turning that database into a commercial opportunity.

Last year the Guardian ran a story, based on leaked documents, which claimed that Facebook had shown advertisers in Australia that it had the capacity to identify when teenagers feel “insecure”, “worthless” and “need a confidence boost”.

The company denied that it offers advertisers tools to target audiences based on their emotional state, saying that the documents were based on “research done by Facebook and subsequently shared with an advertiser” and were “intended to help marketers understand how people express themselves”.

In any event, the Cambridge Analytica controversy, combined with the development of the suicide prevention tool, demonstrates just how much can be done with data, for good and ill.

This is a characteristic of most technological developments in a fast-changing world. They raise multiple ethical questions, ones which demand debate and consideration.

The team at US-based Bookmark have produced a superb graphic which brings this to life across a range of emerging technologies based on AI. It is an excellent primer on some of the benefits and dangers they pose.

The clock is ticking. Civic society needs to engage on these issues: it would be so much better to help shape change so that it brings benefits than to sit back and leave it all to the technology companies.
