
Mental health AI has ‘significant shortcomings,’ WHO study finds

A review of research has found methodological flaws and a lack of transparency in AI models used in mental health applications

7th February 2023

Artificial intelligence (AI) applications that provide mental health services have “significant shortcomings,” a study from the World Health Organization (WHO) has found.

Although the study found that AI has potential in mental health, it also concluded that some AI models have been promoted too rapidly and have “yet to be evaluated as viable in the real world.”

The paper, entitled “Methodological and quality flaws in the use of artificial intelligence in mental health research: a systematic review”, examined studies on the use of AI for mental health disorders published between 2016 and 2021. The authors carried out a systematic search of the PubMed, Scopus, IEEE Xplore, and Cochrane databases. They retrieved a total of 429 non-duplicated records from the databases and included 129 for full assessment, 18 of which were added manually.

AI can enable policy-makers to gain insight into more efficient strategies to promote mental health, the authors argue. However, the study found significant flaws in how the AI applications process statistics, with little evaluation of the risk of bias.

One-third of the studies did not report any pre-processing or data preparation, the review found. One-fifth of the models were developed by comparing several methods without assessing their suitability in advance, and only a small proportion reported external validation.

Lack of collaboration is a problem

The absence of transparent reporting on AI models undermined their replicability, the authors say. The lack of collaboration between researchers was also a problem.

Dr David Novillo-Ortiz, regional adviser on data and digital health at WHO/Europe and a co-author of the study, said the lack of transparency and the methodological flaws were concerning because “they delay AI’s safe, practical implementation.” He added: “Data engineering for AI models seems to be overlooked or misunderstood, and data is often not adequately managed. These significant shortcomings may indicate overly accelerated promotion of new AI models without pausing to assess their real-world viability.”

Dr Ledia Lazeri, regional adviser for mental health at WHO/Europe, said: “We found that AI application use in mental health research is unbalanced and is mostly used to study depressive disorders, schizophrenia and other psychotic disorders. This indicates a significant gap in our understanding of how they can be used to study other mental health conditions.”

The study noted that in 2021, over 150 million people in the WHO European Region were living with a mental health condition, and that the Covid-19 pandemic had exacerbated the situation, with people experiencing greater stress and worse economic conditions while being less able to access services.

FCC Insight

Artificial intelligence has shown real promise in the development of applications to support people with mental health problems. We must be cautious, however, about adopting the technology too enthusiastically when it is clear that many of the models in use have methodological flaws and lack transparency about the data used to develop them. Hasty implementation of exciting-looking apps that may lack a solid foundation and may have in-built bias will only serve to harm patients with mental health problems, not help them.