Article

Pre-Trained Language Models for Mental Health: An Empirical Study on Arabic Q&A Classification

by Hassan Alhuzali 1,* and Ashwag Alasmari 2,3
1 Department of Computer Science and Artificial Intelligence, Umm Al-Qura University, Makkah 24382, Saudi Arabia
2 Department of Computer Science, King Khalid University, Abha 62521, Saudi Arabia
3 Center for Artificial Intelligence (CAI), King Khalid University, Abha 62521, Saudi Arabia
* Author to whom correspondence should be addressed.
Healthcare 2025, 13(9), 985; https://doi.org/10.3390/healthcare13090985
Submission received: 6 March 2025 / Revised: 16 April 2025 / Accepted: 22 April 2025 / Published: 24 April 2025
(This article belongs to the Section Health Informatics and Big Data)

Abstract

Background: Pre-Trained Language Models (PLMs) hold significant promise for revolutionizing mental health care by delivering accessible and culturally sensitive resources. Despite this potential, their efficacy in mental health applications, particularly in the Arabic language, remains largely unexplored. To the best of our knowledge, comprehensive studies specifically evaluating the performance of PLMs on diverse Arabic mental health tasks are still scarce. This study aims to bridge this gap by evaluating the performance of PLMs in classifying questions and answers within the mental health care domain. Methods: We used the MentalQA dataset, which comprises Arabic question-and-answer (Q&A) interactions related to mental health. Our experiments involved four distinct learning strategies: traditional feature extraction, using PLMs as feature extractors, fine-tuning PLMs, and employing prompt-based techniques with models such as GPT-3.5 and GPT-4 in zero-shot and few-shot learning scenarios. Arabic-specific PLMs, including AraBERT, CAMeLBERT, and MARBERT, were evaluated. Results: Traditional feature-extraction methods paired with Support Vector Machines (SVMs) showed competitive performance, but PLMs outperformed them owing to their superior ability to capture semantic nuances. In particular, MARBERT achieved the highest performance, with Jaccard scores of 0.80 for question classification and 0.86 for answer classification. Further analysis revealed that fine-tuning enhances PLM performance and that the size of the training dataset plays a critical role in model effectiveness. Prompt-based techniques, particularly few-shot learning with GPT-3.5, demonstrated significant improvements, increasing question-classification accuracy by 12% and answer-classification accuracy by 45%.
Conclusions: The study demonstrates the potential of PLMs and prompt-based approaches to provide mental health support to Arabic-speaking populations, offering valuable tools for individuals seeking assistance in this field. This research advances the understanding of PLMs in mental health care and emphasizes their potential to improve accessibility and effectiveness in Arabic-speaking contexts.
Keywords: mental health; natural language processing; question/answer classification; text classification; pre-trained language models
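The Jaccard scores reported in the abstract measure the per-sample overlap between predicted and gold label sets, which suits the multi-label nature of Q&A classification. A minimal stdlib sketch of this metric (the label names in the usage example are illustrative, not drawn from the MentalQA label scheme):

```python
def jaccard_score_samples(true_labels, pred_labels):
    """Mean per-sample Jaccard similarity between label sets.

    Each element of true_labels / pred_labels is an iterable of
    label identifiers for one question or answer.
    """
    scores = []
    for true, pred in zip(true_labels, pred_labels):
        t, p = set(true), set(pred)
        if not t and not p:
            scores.append(1.0)  # both empty: treat as perfect agreement
        else:
            scores.append(len(t & p) / len(t | p))
    return sum(scores) / len(scores)


# Hypothetical two-sample evaluation: the first prediction recovers one
# of two gold labels (Jaccard 0.5), the second matches exactly (1.0).
score = jaccard_score_samples([{"A", "B"}, {"C"}], [{"A"}, {"C"}])
print(score)  # 0.75
```

The same quantity is available as `sklearn.metrics.jaccard_score` with `average="samples"` on binarized label matrices; the hand-rolled version above just makes the per-sample set arithmetic explicit.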


