149 people a day lose their lives to suicide, making it the 10th leading cause of death for Americans. No single factor explains why people choose to end their lives. To encourage people to come forward for help, research must be shared that helps end the stigma and shame surrounding suicide. Although no single cause explains all suicide, depression is the mental illness that most commonly factors into a person's risk. Preventing suicide is possible when family members, friends, and professionals can recognize the warning signs. Even the most experienced psychologists acknowledge that not all suicide attempts and completions can be prevented, because warning signs vary from person to person. Some people are vocal about their desire to end their lives, some become angrier or more irritable, and others may never express their feelings at all. Recognizing the warning signs and symptoms of suicidal behavior is key to reducing deaths from suicide.

In the past, suicide screening relied solely on clinicians' judgment. With the increasing availability of artificial intelligence (A.I.) in the medical field, however, research shows that it is possible to predict suicide risk. Today, the computational power of A.I. can weigh the many factors that contribute to suicidal tendencies and predict an individual's risk.

How Social Media A.I. Influences Suicide Prevention

Facebook has a vested interest in keeping users from live-streaming suicides. In a promotional video, Mark Zuckerberg announced that the company's A.I. programs are able to detect patterns in its users' activity that suggest suicidal ideation or self-harm. Its algorithms recognize threats through users' posts, messages, and the responses from their friends. Armed with this data, the A.I. predicts whether the threat is legitimate. Once these messages are recognized and flagged, humans can intervene based on the A.I.'s recommendation and call local law enforcement for a welfare check.
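Facebook has not published its models, but the flow described above, where content and friends' responses are scored and high-risk cases are escalated to a human reviewer, can be sketched in a few lines. The phrases, weights, threshold, and function names below are illustrative assumptions, not Facebook's actual system.

# Illustrative sketch only: a toy post -> score -> human-review pipeline.
# All phrase lists, weights, and the threshold are assumptions for demonstration.

CONCERNING_PHRASES = {
    "want to end it": 0.6,
    "no reason to live": 0.7,
    "goodbye everyone": 0.4,
}

CONCERNED_REPLY_PHRASES = {
    "are you ok": 0.2,
    "please call someone": 0.3,
}

REVIEW_THRESHOLD = 0.7  # scores at or above this are escalated to a human


def score_post(post_text: str, friend_replies: list[str]) -> float:
    """Combine signals from the post itself and from friends' responses."""
    text = post_text.lower()
    score = sum(w for phrase, w in CONCERNING_PHRASES.items() if phrase in text)
    for reply in friend_replies:
        reply = reply.lower()
        score += sum(w for phrase, w in CONCERNED_REPLY_PHRASES.items() if phrase in reply)
    return min(score, 1.0)


def triage(post_text: str, friend_replies: list[str]) -> str:
    """Return a routing decision; the A.I. recommends, a human decides."""
    if score_post(post_text, friend_replies) >= REVIEW_THRESHOLD:
        return "escalate_to_human_reviewer"
    return "no_action"


if __name__ == "__main__":
    print(triage("Goodbye everyone, no reason to live anymore",
                 ["Are you OK? Please call someone."]))
    # -> escalate_to_human_reviewer

The point of the sketch is the division of labor: the model only routes content, while the decision to contact someone or call law enforcement stays with a human reviewer.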

How A.I. Is Addressing the Suicide Crisis

Artificial intelligence is already sorting what you see on social media through data mining. Recently, the question of how an A.I. can monitor and respond to mental health crises has been raised. Canada launched the world's first artificial intelligence pilot project for surveillance of suicide-related behaviors using social media. The project aims to define what exactly constitutes suicide-related behavior on social media and to operationalize those terms so that an A.I. can distinguish suicidal ideation from self-harm behaviors and understand what those communications entail. The Public Health Agency of Canada "will determine if future work would be useful for ongoing suicide surveillance," based on the results gathered from the study.
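The agency has not published its classification scheme, but the sorting step it describes, turning defined terms into categories a program can apply, might look like the following in its simplest form. The category names and term lists here are assumptions for demonstration only.

# Illustrative sketch: sorting social-media text into broad categories using
# predefined term lists. Categories and terms are assumptions, not the
# Public Health Agency of Canada's actual definitions.

CATEGORY_TERMS = {
    "suicidal_ideation": ["kill myself", "end my life", "don't want to be here"],
    "self_harm": ["cutting", "hurt myself", "burn myself"],
}


def categorize(text: str) -> list[str]:
    """Return every category whose terms appear in the text."""
    text = text.lower()
    matches = [
        category
        for category, terms in CATEGORY_TERMS.items()
        if any(term in text for term in terms)
    ]
    return matches or ["no_match"]


if __name__ == "__main__":
    print(categorize("I've been cutting again and I don't want to be here"))
    # -> ['suicidal_ideation', 'self_harm']

In practice, surveillance projects of this kind would refine such term lists over time, since the same phrase can signal ideation, self-harm, or neither depending on context.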

A.I. in the Clinical Setting

Improving our knowledge of suicide is the first step in prevention. A.I. has already enhanced the speed and accuracy of diagnosis and helped guide professionals to deliver interventions before a patient harms themselves. Its impact in the clinical setting has shown that it can predict suicide risk and recommend treatment in agreement with qualified clinicians. A.I. systems learn only from the data they accumulate. Challenges arise from the misclassification of suicide risk, since these models generate false positives; yet to identify even one true positive, such risks are accepted as the cost of an imperfect system.
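A small worked example makes the false-positive trade-off concrete. The counts below are hypothetical, not drawn from any real study; they simply show how a screening model can flag many people unnecessarily while still catching most of those genuinely at risk.

# Illustrative arithmetic only: hypothetical screening counts showing why a
# risk model can be useful even when false positives far outnumber true positives.

true_positives = 10      # flagged patients who were genuinely at risk
false_positives = 190    # flagged patients who were not at risk
false_negatives = 2      # at-risk patients the model missed

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)

print(f"Precision: {precision:.2f}")  # 0.05 -> 19 false alarms per true case
print(f"Recall:    {recall:.2f}")     # 0.83 -> most at-risk patients are flagged

Whether that trade-off is acceptable depends on what a flag triggers: a low-cost follow-up conversation tolerates far more false positives than an involuntary intervention would.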

Despite these imperfections, the utilization of A.I. will revolutionize the future of healthcare. Rather than the reactive care we see today, the focus will shift toward prevention. With continuous connectivity and the capacity to learn, A.I. could improve clinical efficiency by analyzing patient data over a lifetime. The results of this learning can then be used to improve patient participation and cooperation in care, reducing the number of deaths from suicide.

The Future of A.I.

Protecting people's privacy should be a major concern. Perceptions of data privacy and reluctance to share personal data pose ethical challenges and create difficult barriers for further suicide research. Uses of A.I. outside of a clinical setting can infringe upon individuals' personal freedom. What happens if a risk assessment disagrees with that of a human professional? Skepticism surrounding the use of A.I. must be answered with respect for the public's interest in protecting civil liberties. Sound research will prove the importance and validity of the information gathered and supplied by A.I.

Advocates for its use in clinical settings must demonstrate legitimate use through well-defined programs and parameters to assure consumers of the safety and validity of the information presented. Proper legislation must address the risks associated with the collection and storage of confidential information. Protocols need to be in place to properly handle confidential material that has been identified by A.I. technology. It is essential to show how A.I. programs can appropriately and positively respond to the fragile emotional states of at-risk individuals without facilitating suicidal planning.