Artificial Intelligence for Suicide Prevention
In today’s digital landscape, users leave a telling trail of information on apps and in online communities about their moods, from happiness and anger to sadness and depression, including suicidal thoughts.
Chinese Tech Companies Lead Suicide Prevention Efforts
In China, Gang Wu, who works in security at Alibaba Group, leads the team behind Safeguarding Lifeline, a project that uses artificial intelligence (AI) to link businesses, public security agencies and third-party organizations in a suicide intervention program.
The project, launched in July 2019, has engaged in over 2,500 cases of suicide outreach. Safeguarding Lifeline employs eight full-time personnel, plus additional part-time employees, who work around the clock responding to potential suicide threats. Funded entirely by Alibaba, the project is built around an algorithm that identifies whether a user is at risk of suicide.
Chinese video-sharing platforms like Bilibili and internet giant Tencent have launched similar services. AI suicide prevention isn’t a new concept: in 2018, Huang Zhisheng, deputy director of the Big Data Research Institute at Wuhan University, started the Tree Hole Rescue Mission, which uses AI to crawl Weibo, China’s largest social media platform, detecting and reporting comments that suggest suicidal thoughts.
By the end of 2019, the project had intervened in more than 1,600 suicide cases. In one example, a potential suicide post was detected by an algorithm running on a computer in Amsterdam, 5,000 miles away from the person who uploaded the post in China. The program immediately sent notifications to volunteers in different parts of China, and when the volunteers were unable to reach the young man in question, they reported the case to local authorities.
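The detection step described above, scanning public posts for language that suggests suicidal intent, can be illustrated with a deliberately simplified sketch. The phrase list, weights, and threshold below are hypothetical; real systems like Tree Hole Rescue use trained classifiers over much richer signals, not a fixed keyword table.

```python
# Illustrative only: a toy risk scorer over post text.
# Real deployments use trained models, not keyword lists.
RISK_PHRASES = {
    "want to die": 3,
    "no reason to live": 3,
    "say goodbye": 2,
    "hopeless": 1,
}

def risk_score(post: str) -> int:
    """Sum the weights of risk phrases found in a post."""
    text = post.lower()
    return sum(w for phrase, w in RISK_PHRASES.items() if phrase in text)

def flag_posts(posts, threshold=3):
    """Return posts whose score meets the review threshold."""
    return [p for p in posts if risk_score(p) >= threshold]

posts = [
    "Great day hiking with friends!",
    "I feel hopeless and just want to die.",
]
print(flag_posts(posts))  # only the second post is flagged
```

Flagged posts would then be routed to volunteers or authorities, as in the case above; the scoring itself is only the first stage of the pipeline.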
The police were then able to intervene with appropriate mental health support, saving the life of a 21-year-old university student suffering from severe chronic depression.
Facebook’s Fight Against Suicide
Perhaps the first and most aggressive proponent of using AI in suicide prevention is Facebook, which in March 2017 began scanning nearly every post on its platform in an effort to assess suicide risk.
Following a string of suicides that were live-streamed on its platform, Facebook proactively deployed an algorithm to detect signs of potential self-harm. Flagged posts are reviewed by Facebook personnel, who can then choose to direct mental health resources to the user in question or alert local authorities for appropriate intervention.
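The flag-then-review pipeline described above can be sketched as a small triage routine. The score thresholds, field names, and decision logic here are purely illustrative assumptions, not Facebook's actual system, which relies on trained human reviewers rather than fixed cutoffs.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    NO_ACTION = auto()
    SEND_RESOURCES = auto()     # direct mental health resources to the user
    ALERT_AUTHORITIES = auto()  # escalate to local first responders

@dataclass
class FlaggedPost:
    user_id: str
    text: str
    model_score: float  # risk score from the detection model (hypothetical)

def review(post: FlaggedPost) -> Action:
    """Stand-in for the human reviewer's decision; thresholds are invented."""
    if post.model_score >= 0.9:
        return Action.ALERT_AUTHORITIES
    if post.model_score >= 0.5:
        return Action.SEND_RESOURCES
    return Action.NO_ACTION

queue = [
    FlaggedPost("u1", "feeling a bit down today", 0.3),
    FlaggedPost("u2", "I can't go on anymore", 0.95),
]
for post in queue:
    print(post.user_id, review(post).name)
```

The key design point mirrored here is that the model only flags; a human makes the final call on whether to send resources or escalate.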
Facebook began working with first responders in the U.S. soon after its AI suicide prevention launch, and now plans to expand suicide prevention oversight to other countries in its ever-expanding realm of influence.