Facebook unveils AI suicide prevention tools

Nov. 28 (UPI) — Facebook on Monday unveiled new artificial intelligence technology to detect suicidal posts before fellow users report them.

The social media company is using pattern recognition software to flag posts, live videos and comments in which users express thoughts of suicide. Facebook will then notify first responders or send mental health resources to the at-risk user.
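Facebook has not published the details of its classifier, but the general idea of scanning text for at-risk signals and routing a response can be illustrated with a minimal sketch. The patterns, the Post type and the flag_post/handle_post helpers below are hypothetical placeholders, not the company's actual system, which reportedly uses a trained model rather than fixed phrases.

```python
import re
from dataclasses import dataclass

# Hypothetical phrase patterns standing in for a trained risk model.
RISK_PATTERNS = [
    re.compile(r"\bwant(?:s)? to end it all\b", re.IGNORECASE),
    re.compile(r"\bno reason to go on\b", re.IGNORECASE),
]

HELP_RESOURCE = "Contact the National Suicide Prevention Lifeline"


@dataclass
class Post:
    author: str
    text: str


def flag_post(post: Post) -> bool:
    """Return True if the post text matches any at-risk pattern."""
    return any(p.search(post.text) for p in RISK_PATTERNS)


def handle_post(post: Post) -> str:
    """Route a flagged post to a support prompt; otherwise take no action."""
    if flag_post(post):
        return f"Show {post.author}: {HELP_RESOURCE}"
    return "No action"


if __name__ == "__main__":
    print(handle_post(Post("alice", "Honestly, there's no reason to go on.")))
    print(handle_post(Post("bob", "Great game last night!")))
```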

A post on Facebook’s blog by Guy Rosen, vice president of product management, said the company initiated 100 wellness checks with first responders over the past month, in addition to those prompted by user reports.

Along with the AI initiative, Facebook is prioritizing the most urgent user alerts about suicidal posts, dedicating more moderators to suicide prevention and partnering with organizations like the National Suicide Prevention Lifeline.

“The team includes a dedicated group of specialists who have specific training in suicide and self harm,” Rosen said of the moderators.

Rosen said Facebook will be rolling out the technology globally, except in the European Union, where privacy laws prevent such data scans.

Alex Stamos, Facebook’s chief security officer, addressed concerns that the AI technology could be applied in other, more alarming ways.

“The creepy/scary/malicious use of AI will be a risk forever, which is why it’s important to set good norms today around weighing data use versus utility and be thoughtful about bias creeping in,” he tweeted.

Before the launch of the AI initiative, Facebook relied on concerned users to flag worrying posts for moderators.

“We provide people with a number of support options, such as the option to reach out to a friend and even offer suggested text templates. We also suggest contacting a help line and offer other tips and resources for people to help themselves in that moment,” Rosen said.
