Facebook has developed artificial intelligence to help identify suicidal users

Photo: WIN-Initiative / Getty

Given how much time we all spend online, social platforms have a unique opportunity to spot troubling behavior before it escalates into physical violence or self-harm. Though it has long had processes in place for users reported for writing suicidal posts, Facebook has now taken a significant step forward by implementing artificial intelligence capable of identifying those who may be at risk of hurting themselves. The new initiatives are being developed alongside several mental health organizations.

A new piece from the BBC outlines the effort, which relies on pattern-recognition algorithms to identify suicidal behavior through word choice, phrasing, and the types of comments a post is receiving. Once a post has been identified, a human team reviews the content, then reaches out. “We know that speed is critical when things are urgent,” a Facebook product manager told the BBC.


The outreach also extends to those who might be using the Facebook Live function to express suicidal thoughts. The goal, the article states, is for a Facebook response team to reach out to the individual while they're broadcasting, rather than afterward. That has caused some to question whether it's the right approach.

“Some might say we should cut off the stream of the video the moment there is a hint of somebody talking about suicide,” said Jennifer Guadagno, Facebook’s lead researcher on the project.

“But what the experts emphasized was that cutting off the stream too early would remove the opportunity for people to reach out and offer support.

“So, this opens up the ability for friends and family to reach out to a person in distress at the time they may really need it the most.”

These announcements come on the heels of Facebook founder Mark Zuckerberg's promise that the social media platform would also develop algorithms to help identify posts by potential terrorists. While there's rightful criticism of our outsized faith in algorithms to solve all of the world's problems, this is a case where it could possibly do some good.