By Aili McConnon 

In late January, a 60-year-old woman in northern Argentina posted on Facebook: "This can't go on. From here I say goodbye."

Within three hours, a medical team reached the woman and saved her life -- thanks in part to advances in artificial intelligence.

The post caught the attention of Facebook's AI system, which is programmed to spot language suggesting a risk of suicide. The system flagged the post as an emergency and passed it to human moderators for review; they in turn alerted authorities in Buenos Aires. Before long, first responders were on the scene. (Facebook wouldn't comment on the incident.)

"Artificial intelligence can be a very powerful tool," says Enrique del Carril, the investigations director in the district attorney's office in Buenos Aires. "We saved a woman far away in remote Argentina before something terrible happened. That is incredible."

Facebook's suicide-alert system is just one of many efforts to use artificial intelligence to help identify people at risk for suicide as early as possible. In these programs, researchers use computers to comb through massive amounts of data, such as electronic health records, social-media posts, and audio and video recordings of patients, to find common threads among people who attempted suicide. Then algorithms can start to predict which new patients are more likely to be at risk.
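In broad strokes, these programs work like any other machine-learning screening model: records of patients who did and did not attempt suicide are used to train a classifier, which then scores new patients. The sketch below, written in Python with scikit-learn, is a purely hypothetical illustration of that workflow; the data, features and model choice are invented for this article and don't come from any of the studies described here.

```python
# Hypothetical sketch of a record-based screening model. All data and feature
# names are synthetic; this is not code from any study mentioned in the article.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for de-identified health records: each row is a patient,
# each column a coded risk factor (e.g., prior antidepressant use, ER visits).
n_patients, n_features = 5000, 20
X = rng.normal(size=(n_patients, n_features))

# Synthetic labels: 1 = documented suicide attempt within the follow-up window.
true_weights = rng.normal(size=n_features)
y = (X @ true_weights + rng.normal(scale=2.0, size=n_patients) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Researchers typically report how well the model separates the two groups
# (e.g., AUC) rather than raw accuracy, since attempts are comparatively rare.
risk_scores = model.predict_proba(X_test)[:, 1]
print("AUC on held-out patients:", round(roc_auc_score(y_test, risk_scores), 3))
```

In practice, the hard work lies in assembling and cleaning the records and in validating the model on new populations, not in the few lines of model-fitting code.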

Machine assistance

Machines wouldn't replace humans making diagnoses about suicidal behavior. But these tools -- most of which are still experimental -- could eventually help clinicians screen patients more quickly and accurately, perhaps even while a doctor is still doing an interview.

At the same time, some critics have raised concerns about the privacy rights of patients as machines tap into their personal data, as well as possible mistakes in how the information is interpreted.

Using technology to detect suicidal behavior is part of a larger effort to use AI to discover and treat a range of mental-health issues including depression, schizophrenia and bipolar disorder.

But suicide-detection research -- in the public and private sectors -- is further along than other mental-health efforts. In part, that's because suicide is on the rise, particularly among teenagers. In 2006, one person in the U.S. committed suicide every 16 minutes, according to the Centers for Disease Control and Prevention. A decade later, it was every 12 minutes. Traditional ways of predicting suicide have also been found lacking: a recent meta-analysis by Florida State University researchers and others, published in the journal Psychological Bulletin, found that traditional approaches to predicting suicide, including doctors' assessments, were only slightly better than random guessing.

By contrast, early tests of AI have shown markedly better results. A follow-up study by several of the same researchers, published in the journal Clinical Psychological Science last year, used AI to analyze the medical records of nearly 16,000 general hospital patients in Tennessee. The algorithms identified common traits among suicidal patients -- such as a history of antidepressant use and firearm injuries -- and could predict with 80% to 90% accuracy whether someone would attempt suicide within the next two years.

The results show AI can "model complex interactions among many risk factors" to determine who is most likely to be at risk, says Jessica Ribeiro, a psychology professor at Florida State University who focuses on suicide prevention and one of the study's researchers.

Other early tests combine analysis of medical records with real-life data, such as what people say to their clinicians and how they say it. John Pestian, director of computational medicine at the Cincinnati Children's Hospital, took this approach in a study published in 2016 in the journal Suicide and Life-Threatening Behavior. Dr. Pestian looked at 379 people, each falling into one of three categories: at serious risk for suicide; mentally ill but not suicidal; and a control group. The subjects filled out surveys and were interviewed and filmed.

An algorithm analyzed relevant patterns and could determine with up to 93% accuracy whether a subject belonged to the suicidal group, the mentally ill but not suicidal group, or the control group. Among other signs, the findings showed that mentally ill patients and control patients tended to laugh more, sigh less, and express less anger and emotional pain and more hope than those who exhibited suicidal behavior. Such signals, Dr. Pestian argues, can be gleaned only from real-world interactions, not medical records.

Analyzing audio

Dr. Pestian has used his AI research to develop an app called SAM that has been tested in Cincinnati schools and clinics. The app records sessions between therapists and patients, then analyzes linguistic and vocal factors to provide a real-time assessment of a patient at risk for suicide.
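As a purely illustrative example, the fragment below computes the kind of crude vocal and linguistic measures such a system might start from -- loudness, pauses, counts of hope- or pain-related words. It is not SAM's actual code, and the word lists and thresholds are invented; a real system would feed far richer features into a trained model.

```python
# Invented illustration of simple prosodic and linguistic features from a
# recorded session. Word lists and thresholds are assumptions, not SAM's logic.
import numpy as np

def prosodic_features(waveform: np.ndarray, sample_rate: int) -> dict:
    """Crude vocal measures: loudness, a pitch proxy, and a pause ratio."""
    energy = float(np.mean(waveform ** 2))
    zero_crossings = int(np.sum(np.abs(np.diff(np.sign(waveform))) > 0))
    frame = sample_rate // 10                      # 100-millisecond frames
    frames = waveform[: len(waveform) // frame * frame].reshape(-1, frame)
    frame_energy = (frames ** 2).mean(axis=1)
    pause_ratio = float(np.mean(frame_energy < 0.1 * frame_energy.mean()))
    return {"energy": energy,
            "zero_crossing_rate": zero_crossings / len(waveform),
            "pause_ratio": pause_ratio}

def linguistic_features(transcript: str) -> dict:
    """Counts of word categories loosely inspired by the study's findings."""
    words = transcript.lower().split()
    hope_words = {"hope", "better", "future", "plan"}
    pain_words = {"pain", "hurt", "alone", "tired"}
    return {"hope_rate": sum(w in hope_words for w in words) / max(len(words), 1),
            "pain_rate": sum(w in pain_words for w in words) / max(len(words), 1)}

# Toy inputs: one second of synthetic audio and a short transcript.
sr = 16000
audio = 0.1 * np.sin(2 * np.pi * 150 * np.arange(sr) / sr)  # stand-in for speech
features = {**prosodic_features(audio, sr),
            **linguistic_features("I feel tired and alone but I hope things get better")}
print(features)   # a real system would feed these into a trained classifier
```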

Another system with a similar approach: Companion, developed by Cogito Corp. The system, which has been used with about 500 veterans, analyzes data from users' phones, such as the frequency with which they text or call and how much they have traveled in a given week; users also record short audio diaries that the system analyzes. Cogito says its app can detect depression and suicidal behavior with more than 80% accuracy.

Some private-sector efforts to identify suicidal behavior are already being used on a wide scale. In the past five years, AI-powered virtual assistants such as Apple's Siri have started directing users to the National Suicide Prevention Lifeline, and offering to connect them, when they detect suicidal comments or questions. That might include people using the word "suicide" or saying something like "I want to jump off a bridge."
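The first pass in these assistants is often simple pattern matching on what the user says. The snippet below is a deliberately naive sketch of that idea; it is not Apple's or any other vendor's actual implementation, which layers learned models and human review on top of such checks.

```python
# Naive keyword-based check, for illustration only. The phrase list and prompt
# wording are assumptions; real assistants use far more sophisticated systems.
import re
from typing import Optional

CRISIS_PATTERNS = [
    r"\bsuicide\b",
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bjump off a bridge\b",
]

LIFELINE_PROMPT = (
    "If you are thinking about suicide, help is available through the "
    "National Suicide Prevention Lifeline. Would you like me to connect you?"
)

def check_for_crisis(utterance: str) -> Optional[str]:
    """Return a supportive prompt if the utterance matches a crisis pattern."""
    text = utterance.lower()
    if any(re.search(pattern, text) for pattern in CRISIS_PATTERNS):
        return LIFELINE_PROMPT
    return None

print(check_for_crisis("I want to jump off a bridge"))   # prints the prompt
print(check_for_crisis("What's the weather tomorrow?"))  # prints None
```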

Facebook has been working on suicide prevention for more than 10 years, but faced criticism last year for not doing enough after several users took their own lives and live-streamed the process. In November 2017, Facebook said that it had started to use AI to analyze people's posts and live streams in an effort to detect suicidal thoughts, and that its AI system now prioritizes particularly dangerous and urgent reports so that they are more quickly addressed by moderators. The company says that over a month in the fall of 2017, its AI system alerted first responders to intervene in 100 cases of potential self-harm.

"We're always looking to improve our tools," says William Nevius, a Facebook spokesman. "We know this is a new technology, and we're always looking for additional ways to help people."

Potential roadblocks

But as companies get involved in suicide-prevention efforts, they face a host of ethical questions. For one, there's transparency: Technology firms already have to deal with concerns about the kinds of information they collect from users and what they do with it, and those debates will likely become even more heated as they handle sensitive mental-health information.

Legal and regulatory questions also arise, such as who assumes responsibility if an AI system makes a false prediction. A wrong guess, for instance, might leave an individual with a damaging data trail suggesting they were suicidal.

In fact, such questions of privacy may plague any research into suicide, some critics say. For medical AI systems to work well, they need access to a wealth of data from a variety of patients, but that can be tricky because of the perceived stigma of mental-health disorders, says Siddarth Shah, an industry analyst at research firm Frost & Sullivan. "How many people are going to be OK with having sensitive mental-health information shared with an algorithm?" he says.

Some efforts are under way to address that issue. For instance, Qntfy, an Arlington, Va., company, is recruiting people to donate data for study, and more than 2,200 people have done so to date. Identifying information is scrubbed out of the data before it's analyzed, the company says.

Finally, issues of nuance plague many AI efforts. Though AI may recognize a word, it may not comprehend the context. "Saying 'I hate this. I can't survive' is very different if you are saying it to a doctor versus venting on social media," says Adam Miner, a clinical psychologist and AI researcher at Stanford University.

Ms. McConnon is a writer in New York. Email reports@wsj.com.

 
