
Artificial Intelligence in Mental Health Services: Results From a Literature Review and an Environmental Scan




Artificial intelligence (AI) is increasingly being used in health care, where it has the potential to augment care, change how it is delivered, and improve access. For mental health care, AI applications are being developed for the prevention, detection, diagnosis, and treatment of mental health problems or illnesses.

With the high demand for mental health services in Canada, these technologies could have an important role in providing mental health care. Uncertainty remains, however, about their effectiveness and appropriate use.


To address this uncertainty and understand the current landscape of AI use for mental health, the Mental Health Commission of Canada requested a CADTH evidence review and environmental scan. This work included conducting a literature review, developing a map showing where relevant research and development initiatives are being conducted, and consulting with AI and mental health stakeholders in Canada. The environmental scan identified several groups and organizations (in Canada and internationally) that are using and developing AI applications for mental health care. The information from this work is intended to inform decisions in this area.

Who are the primary users of AI technologies in mental health?

Based on currently available technologies, AI applications for mental health are mainly intended to provide diagnostic support to clinicians. Some applications are also intended to support people with lived experience in their treatment, while others are aimed at caregivers and at those involved in treatment planning (e.g., health-care organizations).

How is AI being used in mental health care?

With most AI applications still in research and development, their clinical use for mental health care is limited. Available clinical uses include diagnosis (determining the presence of mental illness), prevention (e.g., detecting risk and connecting users to appropriate supports), and treatment (e.g., using conversational agents to deliver cognitive behavioural therapy).

How effective is AI for preventing, diagnosing, or treating mental health problems or illnesses?

Some AI applications have shown a high diagnostic accuracy when compared with diagnosis by a physician. There is also evidence that certain applications used in treatment may be effective in reducing depression and anxiety symptoms.

What are the key research and development domains in AI for mental health?

Most research and development initiatives are aimed at diagnosis (e.g., the use of patient data to detect or diagnose mental illness), while some research focuses on prevention and prognosis (predicting a client’s response to treatment). In addition, the analysis of social media posts (e.g., Twitter, Facebook) has been explored as a way to support early detection and diagnosis for people experiencing mental illness, including major depression, anxiety, and postpartum depression. Recent trends in AI research and development show an increased interest in the use of AI applications for wearable devices and smartphone-based sensors to collect data.

What are the current gaps in knowledge about AI for mental health?

The identified gaps include a lack of research on using AI to prevent mental health problems and a lack of guidelines for using AI in mental health care. Research is also lacking on assessing AI among specific groups, such as older adults; immigrants and refugees; ethnocultural or racialized populations; First Nations, Inuit, or Métis peoples; and members of lesbian, gay, bisexual, transgender, queer or questioning, and two-spirit (2SLGBTQ+) communities.


What should decision-makers consider when planning to use AI in mental health care?

Decision-makers should ensure that the AI intervention will translate well from a lab environment to clinical use. This may require careful planning to ensure that:

• ethical requirements are met

• the technology is suitable and effective for mental health care

• clinician and client perspectives are considered

• culturally sensitive algorithms are created.

With AI technologies still in the early stages of development and testing, more research is needed to establish greater certainty about their optimal use for mental health care services.

Read the summary reports

Artificial Intelligence in Mental Health Services: A Literature Review

Artificial Intelligence in Mental Health Services: An Environmental Scan

Note:

The information in this report is based on the technologies available when these reviews were conducted.

The Mental Health Commission of Canada commissioned a CADTH literature review and an environmental scan to address the role of AI in mental health services. This report summarizes findings from the literature review [Artificial Intelligence and Machine Learning in Mental Health Services: A Literature Review. Ottawa: Canadian Agency for Drugs and Technologies in Health (CADTH), Mental Health Commission of Canada (MHCC); 2021 June.] and the environmental scan [Artificial Intelligence and Machine Learning in Mental Health Services: An Environmental Scan. Ottawa: Canadian Agency for Drugs and Technologies in Health (CADTH), Mental Health Commission of Canada (MHCC); 2021 June.].
