Article

Harnessing AI in Anxiety Management: A Chatbot-Based Intervention for Personalized Mental Health Support

1 Faculty of Medicine, University of Oradea, 410087 Oradea, Romania
2 Doctoral School of Sociology, Faculty of Social and Human Sciences, University of Oradea, 410087 Oradea, Romania
3 Independent Researcher, 031046 Bucharest, Romania
4 ENT Department, Faculty of Medicine, University of Oradea, 410087 Oradea, Romania
* Author to whom correspondence should be addressed.
Information 2024, 15(12), 768; https://doi.org/10.3390/info15120768
Submission received: 19 October 2024 / Revised: 20 November 2024 / Accepted: 29 November 2024 / Published: 2 December 2024
(This article belongs to the Special Issue Artificial Intelligence and Data Science for Health)

Abstract

Anxiety disorders represent one of the most widespread mental health challenges globally, yet access to traditional therapeutic interventions remains constrained, particularly in resource-limited settings. This study evaluated the effectiveness of an AI-powered chatbot, developed using ChatGPT, in managing anxiety symptoms through evidence-based cognitive-behavioral therapy (CBT) techniques. Fifty participants with mild to moderate anxiety symptoms engaged with the chatbot over two observational phases, each lasting seven days. The chatbot delivered personalized interventions, including mindfulness exercises, cognitive restructuring, and breathing techniques, and was accessible 24/7 to provide real-time support during emotional distress. The findings revealed a significant reduction in anxiety symptoms in both phases, with an average improvement of 21.15% in Phase 1 and 20.42% in Phase 2. Enhanced engagement in Phase 2 suggested the potential for sustained usability and familiarity with the chatbot’s functions. While participants reported high satisfaction with the accessibility and personalization of the chatbot, its inability to replicate human empathy underscored the importance of integrating AI tools with human oversight for optimal outcomes. This study highlights the potential of AI-driven interventions as valuable complements to traditional therapy, providing scalable and accessible mental health support, particularly in regions with limited access to professional services.

1. Introduction

Anxiety disorders rank among the most prevalent mental health conditions globally, affecting approximately 284 million individuals annually, as reported by the World Health Organization (WHO). These disorders significantly impair quality of life, manifesting through persistent worry, physical tension, and difficulties in daily functioning. Despite their widespread prevalence, access to effective treatments remains limited, particularly in low-resource regions and developing countries. This disparity highlights the urgent need for scalable, accessible solutions that can complement traditional mental health interventions and bridge the treatment gap [1,2,3].
Mental health conditions, including anxiety, also pose significant economic burdens. The costs associated with mental health care extend beyond the affected individuals, placing strain on healthcare systems and societies worldwide. For instance, research indicates that older adults often face higher healthcare expenses due to comorbidities and intensive care needs, further exacerbating financial pressures [4,5,6]. Addressing these economic challenges necessitates innovative approaches to reduce costs while improving accessibility and effectiveness.
In this context, artificial intelligence (AI) has emerged as a promising avenue for augmenting psychological therapies. AI-based systems, particularly those leveraging natural language processing (NLP) models such as ChatGPT, have shown their potential as tools for delivering psychological support. These systems can provide personalized, rapid interventions and operate continuously without being restricted by the availability of human therapists. The COVID-19 pandemic underscored the relevance of such solutions, as many individuals turned to digital platforms to meet their mental health needs when in-person care was unavailable [7,8,9].
Among the therapeutic approaches for anxiety disorders, cognitive-behavioral therapy (CBT) is widely recognized for its effectiveness. However, access to trained CBT specialists is limited, making it challenging for many individuals to receive appropriate care [10,11]. AI-powered chatbots have begun to simulate CBT principles, offering tools and techniques that empower users to manage anxiety. For example, chatbots like Wysa and Youper integrate mindfulness exercises, cognitive restructuring, and relaxation techniques to help users navigate stress and anxiety in daily life [12,13].
Despite these advancements, AI applications in therapy face significant limitations. Chief among these is the absence of human empathy, a cornerstone of the therapeutic relationship [14]. Trust and emotional support, which are integral to successful treatment outcomes, cannot be fully replicated by an algorithm. While AI systems can simulate human-like dialogue and provide tailored responses, the depth of the emotional connection achieved in a therapist–patient relationship remains beyond their reach [15,16].
This study seeks to address these challenges by developing and evaluating an AI-based chatbot algorithm grounded in ChatGPT, designed to deliver personalized interventions for anxiety management. By integrating CBT principles with advanced prompt engineering techniques, the proposed system aims to provide tailored mental health support [17]. This research investigates the effectiveness of the chatbot in reducing anxiety symptoms, examines user engagement, and evaluates its potential to bridge gaps in traditional mental health care.
The primary objective of this study is to assess the role of AI in augmenting anxiety treatment, particularly in contexts where traditional therapies are inaccessible [18]. By highlighting its strengths and limitations, this research aims to contribute to the growing field of AI-driven mental health interventions, paving the way for future improvements that address ethical considerations and meet users’ emotional needs comprehensively [19,20].

2. Critical Literature Review

2.1. Introduction to the Literature Review

Anxiety disorders rank among the most prevalent mental health conditions globally, with an estimated 284 million people affected annually. As healthcare systems struggle to meet the demand for traditional therapeutic interventions, particularly in resource-constrained regions, the development of scalable and accessible solutions has become imperative. Artificial intelligence (AI) represents a promising avenue for augmenting mental health interventions, especially through its ability to deliver continuous, personalized support. This section critically reviews the existing literature on AI-driven mental health solutions, identifies gaps, and lays the foundation for the hypotheses of the current study.

2.2. AI in Mental Health Interventions

The integration of AI into mental health care has gained significant traction, particularly through the use of chatbots powered by natural language processing (NLP). These tools have shown promise in replicating elements of cognitive-behavioral therapy (CBT), such as cognitive restructuring and mindfulness techniques. Studies by Fulmer et al. (2018), Kretzschmar et al. (2019) and Graham et al. (2019) demonstrate the potential of chatbots like Wysa, Youper, Joy, and Woebot to reduce symptoms of anxiety and depression in diverse populations [18,21,22]. These studies emphasize the utility of chatbots in providing immediate support during crises, thereby filling critical gaps in the accessibility and availability of human therapists.
Recent studies highlight the growing role of AI in broader mental health applications. For example, Fitzpatrick et al. (2017) explored how chatbot interventions might enhance user engagement through advanced personalization, while research by Li et al. (2023) demonstrated the feasibility of integrating physiological data, such as heart rate variability, into chatbot frameworks to refine therapeutic recommendations [23,24]. These advancements suggest that chatbots are not merely supplementary tools but integral components of comprehensive mental health strategies.
However, while promising, the current literature also reveals several persistent limitations that hinder the full potential of chatbot-based interventions [25,26].

2.3. Gaps in Existing Research

While the efficacy of AI in mental health care is well documented, several gaps persist:
  • Short Study Durations: Most of the existing research evaluates chatbot efficacy over periods of two to four weeks, limiting insights into sustained improvements. The current study, while initially following a seven-day intervention period, addresses this limitation by incorporating a second observation phase after two months, aiming to evaluate the durability of therapeutic effects over time.
  • Limited Sample Diversity: Participant demographics in prior studies often skew toward younger, technologically adept populations, leaving gaps in understanding how chatbots perform across age, gender, and socio-economic groups. The current study seeks to broaden the scope by including participants from diverse backgrounds.
  • Focus on Psychological Symptoms Alone: While many studies concentrate solely on psychological outcomes, few integrate physiological markers, such as nutrition or stress-related biomarkers, into their frameworks. This gap neglects the holistic nature of mental health, which encompasses both psychological and physical well-being. Our study uniquely incorporates insights into nutritional deficiencies and physiological indicators, expanding the scope of chatbot utility.
  • Insufficient Examination of Personalization: Although personalization is a frequently cited advantage of AI interventions, its direct impact on outcomes remains underexplored. This study evaluates the role of tailored recommendations in sustaining user engagement and improving therapeutic efficacy.
  • Lack of Conceptual Models: Few studies provide a theoretical framework that explicitly links chatbot interventions to specific psychological or behavioral outcomes. The current study addresses this by grounding its approach in established CBT principles and providing a conceptual model to illustrate the relationship between user engagement, intervention personalization, and mental health outcomes.

2.4. Proposed Hypotheses

Building on the identified gaps, this study advances the following hypotheses:
  • Hypothesis 1: 
    AI-based chatbots significantly reduce anxiety symptoms within a short-term intervention (seven days), with measurable improvement in anxiety scores across diverse demographic groups.
    Justification: Existing research validates short-term efficacy; this study aims to extend these findings to a more heterogeneous population.
  • Hypothesis 2: 
    Long-term re-engagement with the chatbot (two months after initial use) demonstrates sustained or improved anxiety management.
    Justification: Incorporating a second observation phase addresses the common limitation of short study durations.
  • Hypothesis 3: 
    Personalized feedback and contextualized recommendations enhance user engagement and efficacy.
    Justification: Personalization has been recognized as a critical factor but is rarely quantified; this study investigates its impact on user outcomes.
  • Hypothesis 4: 
    Integrating physiological insights (e.g., nutritional recommendations) into chatbot interventions leads to superior mental health outcomes compared to standard psychological support alone.
    Justification: Addressing both physical and psychological health acknowledges the interdependence of these dimensions in managing anxiety.
By addressing these hypotheses, this study seeks to fill critical gaps in the literature, offering a nuanced understanding of how AI-driven chatbots can be optimized for mental health care. The findings aim to inform future research and development in this rapidly evolving field, providing a foundation for more effective and comprehensive AI-based interventions.

2.5. Study Objectives

This study aimed to assess the effectiveness of an AI-powered chatbot, developed using ChatGPT, in managing anxiety symptoms through evidence-based cognitive-behavioral therapy (CBT) techniques. Specifically, the study sought to perform the following:
  • Evaluate the immediate impact of chatbot interactions on anxiety symptoms during a seven-day period (Phase 1).
  • Investigate the chatbot’s capacity for long-term retention and reusability of CBT techniques over an additional seven-day period conducted two months later (Phase 2).
  • Analyze engagement metrics, such as interaction frequency and duration, and their relationship with anxiety symptom reduction.
  • Compare the chatbot’s effectiveness and user experience to similar AI-based mental health interventions.

3. Materials and Methods

To evaluate the effectiveness of a chatbot based on ChatGPT in treating anxiety disorders, we developed a personalized algorithm that utilizes prompt engineering techniques to generate responses tailored to each individual user. Prompt engineering refers to the process of configuring questions and response scenarios so that the language model provides relevant and accurate results based on the data provided by the user [27]. This aspect is crucial to ensure that the algorithm does not provide generic responses but is capable of adapting to the unique needs and emotional states of each individual [11,28].
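For illustration, the sketch below (in Python) shows how a prompt-engineering layer might wrap user input and recent symptom context before it is sent to a ChatGPT-style chat-completion model. The system-prompt wording, the build_messages helper, and the message structure are assumptions introduced for exposition; they are not the prompts used in the study.

# Minimal sketch of a prompt-engineering layer for a CBT-oriented chatbot.
# The system-prompt text and helper name are illustrative assumptions,
# not the prompts used in the study.

SYSTEM_PROMPT = (
    "You are a supportive assistant trained to apply cognitive-behavioral "
    "therapy (CBT) techniques for mild to moderate anxiety. Ask about the "
    "user's current emotional and physical state, then suggest one concrete "
    "technique (mindfulness, guided breathing, or cognitive restructuring). "
    "Keep responses brief, empathetic, and non-clinical; never give medical "
    "advice or diagnoses."
)

def build_messages(user_input: str, recent_symptoms: list[str]) -> list[dict]:
    """Assemble the message list passed to a chat-completion style API."""
    context = f"Recently reported symptoms: {', '.join(recent_symptoms) or 'none'}."
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "system", "content": context},
        {"role": "user", "content": user_input},
    ]

if __name__ == "__main__":
    messages = build_messages(
        "I can't stop worrying about tomorrow and my heart is racing.",
        ["racing heart", "persistent worry"],
    )
    for m in messages:
        print(m["role"], "->", m["content"][:60], "...")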

3.1. Conceptual Model

This study is guided by a conceptual model (Figure 1) that illustrates the relationships between key variables involved in the evaluation of the chatbot’s effectiveness in managing anxiety symptoms. The model emphasizes the interplay between user engagement, the personalized interventions delivered by the chatbot, and the observed changes in anxiety levels over two phases of observation.
The model hypothesizes the following:
  • Engagement Metrics: Frequent and sustained interaction with the chatbot positively correlates with reductions in anxiety symptoms. Engagement metrics, such as interaction frequency and duration, serve as critical predictors of user outcomes.
  • Personalized Interventions: The tailored use of cognitive-behavioral therapy (CBT) techniques, mindfulness exercises, and cognitive restructuring delivered through the chatbot mediates the relationship between engagement and anxiety reduction.
  • Outcome Measures: Improvements in anxiety levels, measured across two phases, are expected to demonstrate the chatbot’s long-term applicability and potential for sustained mental health support.
By structuring the study around these variables and their relationships, the conceptual model provides a foundation for analyzing the chatbot’s effectiveness. The relationships depicted in Figure 1 help clarify the hypothesized pathways of influence and serve as a framework for the statistical analyses detailed in subsequent sections.

3.2. Study Design

This study aimed to evaluate the efficacy of a personalized AI-based chatbot, utilizing ChatGPT, in assisting individuals with anxiety disorders. The research employed a two-phase observational design, each phase spanning a period of seven days. A total of 50 participants, all experiencing mild to moderate anxiety symptoms, were selected through a rigorous screening process based on their scores on the Beck Anxiety Inventory (BAI) and the Generalized Anxiety Disorder Scale (GAD-7).
  • Phase 1: Participants were instructed to interact with the chatbot as needed over a seven-day period, with the system being accessible 24/7. During this phase, the chatbot delivered real-time support and interventions tailored to individual user inputs. It employed evidence-based cognitive-behavioral therapy (CBT) techniques, including mindfulness exercises, guided breathing strategies, and cognitive restructuring.
  • Phase 2: Approximately two months after completing Phase 1, the same cohort of participants was re-invited to engage with the chatbot for another seven-day period. This phase aimed to assess the chatbot’s capacity for long-term retention, its reusability, and sustained effectiveness in reducing anxiety symptoms. Comparative analyses were conducted between the two phases to evaluate the progression over time, retention of techniques, and engagement differences.
The dual-phase study design provided a comprehensive framework to analyze both the immediate and enduring effects of chatbot interventions on anxiety symptoms. This structure also enabled a detailed examination of changes in user interaction patterns and the potential scalability of the chatbot as a long-term mental health support tool.
The initial duration of 7 days was chosen to assess the feasibility and immediate effectiveness of the chatbot intervention. This time frame allowed participants to engage with the chatbot in a structured manner and provided sufficient data to evaluate short-term improvements in anxiety symptoms. Recognizing the limitations of this duration, a second phase was conducted two months later, replicating the seven-day usage period to analyze long-term effects and sustained usability.

3.3. Algorithm Configuration

The development of the chatbot algorithm was centered on ChatGPT, an advanced natural language processing (NLP) model capable of generating dynamic and contextually relevant responses that emulate human interaction [29,30,31]. The configuration process focused on ensuring that the chatbot could provide personalized, evidence-based interventions tailored to individual user needs. Key elements of the algorithm’s configuration included the following:
  • Prompt Engineering: A personalized and adaptive flow of questions and responses was designed to align with the user’s emotional state and specific anxiety symptoms [27]. This process ensured that the chatbot delivered relevant and precise responses rather than generic advice. Through iterative refinement, the prompts were optimized to enable the chatbot to effectively guide users through techniques such as breathing exercises, cognitive restructuring, and mindfulness practices [32].
  • Integration of Clinical Scales: The chatbot incorporated validated clinical tools, including the Beck Anxiety Inventory (BAI) [33,34] and the Generalized Anxiety Disorder Scale (GAD-7) [35]. These scales were used to assess participants’ baseline anxiety levels, monitor changes throughout the study, and guide the chatbot’s interactions [36]. By leveraging these tools, the chatbot could measure the severity of anxiety symptoms and adjust its responses to better address the user’s needs [37].
  • Behavioral Customization: The algorithm was designed to dynamically adjust its recommendations based on participant inputs. For instance, users who reported high anxiety related to a specific trigger were guided through tailored cognitive-behavioral therapy (CBT) techniques, such as cognitive restructuring [38,39]. Similarly, participants experiencing physical symptoms of anxiety, such as muscle tension or rapid breathing, received targeted interventions like progressive muscle relaxation or diaphragmatic breathing exercises [17]. This level of customization enhanced the chatbot’s ability to provide meaningful and effective support.
The combination of these elements ensured that the chatbot offered an individualized, high-quality mental health support experience. By integrating prompt engineering, validated clinical scales, and behaviorally tailored interventions, the chatbot was positioned as a scalable and adaptable tool for managing anxiety symptoms.
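As an illustration of how a validated scale such as the GAD-7 can be embedded in the interaction flow, the sketch below scores the seven items (each rated 0–3) and maps the total to the standard severity bands. The function name and the way scores would feed back into the dialogue are assumptions; the paper does not specify the implementation.

# Sketch: scoring the GAD-7 inside the chatbot's screening step.
# Standard GAD-7 scoring (7 items, each rated 0-3, total 0-21) with the
# usual severity bands; the function name and integration point are
# illustrative assumptions.

def score_gad7(item_ratings: list[int]) -> tuple[int, str]:
    """Return the GAD-7 total score and its severity band."""
    if len(item_ratings) != 7 or any(r not in (0, 1, 2, 3) for r in item_ratings):
        raise ValueError("GAD-7 requires exactly 7 items rated 0-3.")
    total = sum(item_ratings)
    if total <= 4:
        band = "minimal"
    elif total <= 9:
        band = "mild"
    elif total <= 14:
        band = "moderate"
    else:
        band = "severe"
    return total, band

total, band = score_gad7([2, 1, 2, 1, 1, 2, 1])  # example responses
print(f"GAD-7 total = {total} ({band} anxiety)")  # -> 10 (moderate anxiety)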

3.4. Chatbot Architecture and Interaction Flow

To address the needs of participants with anxiety, the chatbot was designed to deliver evidence-based interventions grounded in principles of cognitive-behavioral therapy (CBT). Leveraging the capabilities of ChatGPT, the system employed advanced natural language processing (NLP) to provide tailored support in real-time. The architecture of the chatbot and its decision-making process are illustrated in Figure 2.

3.4.1. Initial Engagement and Input Collection

When a participant initiated an interaction with the chatbot, they were prompted to describe their current emotional state or anxiety-related symptoms. Questions included prompts such as the following: “How are you feeling right now?” or “Are you experiencing physical symptoms like a racing heart or difficulty breathing?” These inputs allowed the chatbot to establish a baseline understanding of the user’s mental state.

3.4.2. Input Analysis and Symptom Categorization

The chatbot analyzed participant responses using NLP algorithms to identify emotional cues and symptom patterns. Key elements such as emotional tone, frequency of anxiety indicators, and physical symptoms were extracted and categorized. This analysis informed the chatbot’s subsequent recommendations and ensured relevance to the participant’s needs.
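The study relied on ChatGPT's NLP for this analysis. As a rough, simplified illustration of the categorization step only, the keyword-based sketch below maps free-text input to symptom categories; the category names and keyword lists are assumptions.

# Simplified illustration of symptom categorization. The study used
# ChatGPT's NLP for this step; the keyword lists below are assumptions
# meant only to show the idea of mapping free text to symptom categories.

SYMPTOM_KEYWORDS = {
    "emotional_distress": ["panic", "overwhelmed", "afraid", "worry", "worried"],
    "negative_thoughts": ["can't stop thinking", "always fail", "worst case"],
    "physical_symptoms": ["racing heart", "tight chest", "muscle tension",
                          "can't breathe", "shaking"],
}

def categorize(response: str) -> dict[str, list[str]]:
    """Return the symptom categories whose keywords appear in the response."""
    text = response.lower()
    found: dict[str, list[str]] = {}
    for category, keywords in SYMPTOM_KEYWORDS.items():
        hits = [kw for kw in keywords if kw in text]
        if hits:
            found[category] = hits
    return found

print(categorize("I feel overwhelmed and there is a lot of muscle tension in my neck."))
# -> {'emotional_distress': ['overwhelmed'], 'physical_symptoms': ['muscle tension']}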

3.4.3. Decision-Making and Intervention Selection

Based on the analysis, the chatbot employed a rule-based system integrated with CBT principles to determine the most suitable intervention (a simplified sketch of such a rule layer follows the list below). For instance:
  • Participants expressing intense emotional distress were guided through mindfulness exercises or diaphragmatic breathing techniques to promote immediate relaxation.
  • Users reporting persistent negative thoughts received cognitive restructuring prompts to challenge maladaptive thinking patterns.
  • Physical symptoms, such as muscle tension, triggered recommendations for progressive muscle relaxation exercises.
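A minimal version of this rule layer might look like the following sketch; the category labels, priority order, and intervention texts are illustrative assumptions rather than the study's actual configuration.

# Minimal sketch of the rule-based intervention selection described above.
# Category labels, priority order, and intervention texts are assumptions.

INTERVENTIONS = {
    "emotional_distress": (
        "Guided diaphragmatic breathing: inhale for four counts, hold for "
        "four, exhale for four; repeat five times."
    ),
    "negative_thoughts": (
        "Cognitive restructuring: write down the thought, list evidence for "
        "and against it, then phrase a more balanced alternative."
    ),
    "physical_symptoms": (
        "Progressive muscle relaxation: tense and release each muscle group "
        "from feet to shoulders, holding each for five seconds."
    ),
}

# Priority order used when several categories are detected at once (assumption).
PRIORITY = ["emotional_distress", "physical_symptoms", "negative_thoughts"]

def select_intervention(detected: set[str]) -> str:
    """Pick the highest-priority intervention for the detected categories."""
    for category in PRIORITY:
        if category in detected:
            return INTERVENTIONS[category]
    return "Check-in: ask the user to describe how they are feeling right now."

print(select_intervention({"negative_thoughts", "physical_symptoms"}))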

3.4.4. Intervention Delivery

The selected intervention was communicated to the participant through clear, structured instructions. For example, a mindfulness exercise might involve the following: “Close your eyes and focus on your breath. Inhale deeply for a count of four, hold your breath for four seconds, and exhale for four seconds. Repeat this cycle five times.”
To provide further clarity on the chatbot’s interaction design and therapeutic framework, examples of chatbot prompts have been included in Appendix A. These examples illustrate how the chatbot delivers personalized interventions, such as mindfulness exercises, cognitive restructuring techniques, and breathing instructions, tailored to the user’s emotional state and input. The prompts exemplify the system’s ability to simulate a supportive, human-like conversational style while adhering to evidence-based cognitive-behavioral therapy principles.

3.4.5. Feedback and Adaptive Learning

Participants were encouraged to provide feedback on the effectiveness of each intervention. Responses such as “I feel calmer” or “This didn’t work for me” were logged to refine future interactions. This feedback loop enabled the chatbot to adapt its recommendations over time, improving the personalization of interventions.
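The paper states only that feedback was logged and used to refine future interactions; one simple way such a loop could work is sketched below, where techniques a user reports as unhelpful are down-weighted for that user. The weighting scheme is an assumption.

# Sketch of the feedback loop: techniques a user rates as unhelpful are
# down-weighted for that user. The weighting scheme is an assumption; the
# study only states that feedback was logged and used to refine responses.

from collections import defaultdict

class FeedbackTracker:
    def __init__(self):
        # weight per technique, per user; starts neutral at 1.0
        self.weights: dict[str, dict[str, float]] = defaultdict(
            lambda: defaultdict(lambda: 1.0)
        )

    def record(self, user_id: str, technique: str, helped: bool) -> None:
        """Increase the weight of helpful techniques, decrease unhelpful ones."""
        delta = 0.2 if helped else -0.3
        new = self.weights[user_id][technique] + delta
        self.weights[user_id][technique] = min(2.0, max(0.1, new))

    def preferred(self, user_id: str, candidates: list[str]) -> str:
        """Among candidate techniques, return the one this user rated best."""
        return max(candidates, key=lambda t: self.weights[user_id][t])

tracker = FeedbackTracker()
tracker.record("p01", "guided_breathing", helped=False)  # "This didn't work for me"
tracker.record("p01", "mindfulness", helped=True)        # "I feel calmer"
print(tracker.preferred("p01", ["guided_breathing", "mindfulness"]))  # mindfulness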

3.4.6. Continuous Monitoring and Data Logging

The chatbot maintained a record of participant interactions, including frequency, duration, and the type of interventions delivered. These data facilitated both real-time support and the subsequent analysis of engagement patterns, contributing to the study’s evaluation metrics.
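The exact logging schema is not specified; a minimal per-interaction record compatible with the metrics described here might look like the following sketch, in which all field names and values are hypothetical.

# Minimal sketch of a per-interaction log record. The exact fields stored
# by the study's chatbot are not specified; this schema is an assumption.

from dataclasses import dataclass, asdict
from datetime import datetime
from typing import Optional

@dataclass
class InteractionRecord:
    participant_id: str           # anonymized identifier
    started_at: datetime
    duration_minutes: float
    intervention_type: str        # e.g. "mindfulness", "breathing", "restructuring"
    feedback: Optional[str] = None  # optional free-text feedback

record = InteractionRecord(
    participant_id="p07",                     # hypothetical participant
    started_at=datetime(2024, 6, 3, 22, 15),  # invented timestamp
    duration_minutes=18.5,
    intervention_type="breathing",
    feedback="I feel calmer",
)
print(asdict(record))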
The interaction flow highlighted the chatbot’s ability to mimic human-like therapeutic guidance while maintaining scalability and accessibility. Figure 3 provides a visual representation of this process, outlining the decision-making pathways and intervention delivery framework.

3.5. Participant Selection

Participants were recruited on a voluntary basis and screened according to predefined inclusion and exclusion criteria to ensure the study’s relevance and reliability. The selection criteria were as follows (Table 1):
  • Inclusion Criteria:
    Participants experiencing mild to moderate anxiety symptoms, as measured by validated tools such as the Beck Anxiety Inventory (BAI) or Generalized Anxiety Disorder Scale (GAD-7).
    A willingness to participate in both phases of the study.
  • Exclusion Criteria:
    Individuals undergoing ongoing psychotherapy or taking medication for anxiety at the time of the study.
    Severe anxiety or other mental health conditions requiring immediate professional intervention.
Upon selection, demographic information, including age, gender, and educational background, was collected to ensure a diverse participant pool. This demographic data was analyzed to confirm that the sample represented a range of individuals and was not limited to a specific subgroup.
The demographic composition of the participants is summarized in Table 2, which highlights key characteristics such as mean age, gender distribution, and baseline anxiety levels. This diversity ensured that the findings of the study could be generalized across a broader population, while also allowing for subgroup analyses to explore variations in responses to the chatbot intervention.

3.6. Data Collection

A robust data collection framework was implemented to capture both quantitative and qualitative insights regarding the chatbot’s effectiveness in managing anxiety symptoms. Data were collected systematically throughout both phases of the study, allowing for comprehensive analysis.

3.6.1. Quantitative Data Collection

  • Daily Assessments: Participants completed self-administered questionnaires at the beginning of each day and immediately after each chatbot interaction [40]. These questionnaires were designed to measure the following:
    Anxiety Levels: Assessed using validated scales, such as BAI and GAD-7, to track changes over time.
    Sleep Quality: Questions addressing sleep disturbances, onset latency, and overall restfulness.
    Effectiveness of CBT Techniques: Evaluated the perceived usefulness of mindfulness exercises, breathing techniques, and cognitive restructuring provided by the chatbot.
    Emotional State and Stress Management: Captured participants’ perceived ability to manage stress and emotional challenges on a daily basis.
  • Interaction Metrics: The chatbot recorded each participant’s daily interaction frequency and duration (a computation sketch follows this list). These metrics allowed for an analysis of user engagement and its correlation with anxiety reduction. Specific data points included the following:
    Total interaction time per day.
    Frequency of interactions within a 24 h period.
    Trends in engagement over the seven days of each phase.
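The daily engagement metrics above can be derived from per-session logs; the aggregation sketch below (with invented durations) shows one way to compute total time, interaction counts, and the seven-day trend for a participant.

# Sketch: aggregating per-session durations into the engagement metrics
# listed above (total time per day, interactions per day, 7-day trend).
# The sample data are invented for illustration.

from collections import defaultdict

# (participant_id, day_index 1-7, session duration in minutes)
sessions = [
    ("p07", 1, 12.0), ("p07", 1, 6.5), ("p07", 2, 20.0),
    ("p07", 3, 18.0), ("p07", 4, 22.5),
]

def daily_metrics(sessions, participant_id):
    totals = defaultdict(float)   # day -> total minutes
    counts = defaultdict(int)     # day -> number of interactions
    for pid, day, minutes in sessions:
        if pid == participant_id:
            totals[day] += minutes
            counts[day] += 1
    return {day: {"total_minutes": totals[day], "interactions": counts[day]}
            for day in sorted(totals)}

metrics = daily_metrics(sessions, "p07")
print(metrics)
# Trend over the phase: daily totals in day order
print([m["total_minutes"] for m in metrics.values()])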

3.6.2. Qualitative Data Collection

  • Participant Feedback: Qualitative feedback was gathered at the end of each phase to better understand participants’ experiences with the chatbot [41,42]. Feedback focused on the following:
    The perceived utility of the chatbot in managing anxiety symptoms.
    The relevance and personalization of responses provided by the chatbot.
    Suggestions for improvement in functionality and interaction quality.

3.6.3. Data Recording and Management

All data were anonymized and securely stored to ensure participant confidentiality. The collected data were organized into datasets for statistical analysis, enabling comparisons between phases and identifying trends in anxiety reduction, user engagement, and effectiveness of CBT techniques.
By combining objective measures with subjective feedback, the study provided a holistic understanding of the chatbot’s impact on the participants’ mental health [43].

3.7. Statistical Analysis

The statistical analysis aimed to evaluate the effectiveness of the chatbot in reducing anxiety symptoms and its sustained impact over two observation phases. Quantitative data analysis focused on changes in anxiety scores, while qualitative feedback provided additional insights into participant experiences.

3.7.1. Primary Outcomes

  • Anxiety Score Changes: The primary outcomes were the reductions in anxiety scores from the beginning to the end of each phase, as well as the improvement percentages across both phases.

3.7.2. Secondary Outcomes

  • Engagement Metrics: Interaction frequency and duration were analyzed to identify patterns of engagement and their relationship with anxiety reduction.
  • Qualitative Feedback: Participant feedback was reviewed to evaluate the perceived utility and relevance of the chatbot.

3.7.3. Statistical Methods

  • Phase 1 Analysis:
    Paired t-tests were conducted to compare anxiety scores between Day 1 and Day 7, assessing short-term improvements during Phase 1.
    Results were reported with means, standard deviations, and confidence intervals.
  • Phase 2 Analysis:
    Paired t-tests were applied to anxiety scores in Phase 2, comparing Day 1 and Day 7 to assess long-term improvements.
    Comparisons between Phases 1 and 2 were performed to evaluate the chatbot’s sustained impact over time, specifically examining differences in mean anxiety scores and improvement percentages.
  • Validation Tests:
    Tests for Normality: Shapiro–Wilk tests were applied to ensure that anxiety score differences were normally distributed, a key assumption for parametric analyses.
    Non-Parametric Validation: For robustness, the Wilcoxon Signed-Rank Test was conducted to validate the paired t-test results, especially for data that deviated from normality.
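The analyses were performed in SPSS; for readers wishing to reproduce the same procedures, equivalent SciPy calls for the paired t-test, the Shapiro–Wilk normality check, the Wilcoxon signed-rank validation, and a 95% confidence interval look roughly as follows, with synthetic scores standing in for the study data.

# Sketch of the statistical procedures described above using SciPy.
# The study used SPSS; the arrays below are synthetic placeholders,
# not the actual participant scores.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
day1 = rng.normal(loc=28, scale=4, size=50)            # baseline anxiety scores
day7 = day1 - rng.normal(loc=15, scale=1.5, size=50)   # end-of-phase scores
diff = day1 - day7

# Normality of the paired differences (assumption check for the t-test)
shapiro_stat, shapiro_p = stats.shapiro(diff)

# Paired t-test: Day 1 vs. Day 7
t_stat, t_p = stats.ttest_rel(day1, day7)

# Non-parametric validation
w_stat, w_p = stats.wilcoxon(day1, day7)

# 95% confidence interval for the mean paired difference
mean_diff = diff.mean()
ci_low, ci_high = stats.t.interval(0.95, len(diff) - 1,
                                   loc=mean_diff, scale=stats.sem(diff))

print(f"Shapiro-Wilk: W = {shapiro_stat:.3f}, p = {shapiro_p:.3f}")
print(f"Paired t-test: t(49) = {t_stat:.2f}, p = {t_p:.4g}")
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.4g}")
print(f"Mean difference = {mean_diff:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")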

3.7.4. Analytical Tools

  • All analyses were conducted using SPSS, ensuring rigorous statistical standards.
  • Graphical representations, including box plots and line charts, were generated to visualize changes in anxiety scores and engagement metrics over time.
This multi-layered approach to statistical analysis ensured both the validity and reliability of the study findings, allowing for a comprehensive evaluation of the chatbot’s effectiveness in managing anxiety symptoms.
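A matplotlib sketch of the two kinds of figures mentioned above (box plots of anxiety scores and a line chart of engagement), using synthetic placeholder data rather than the study's results, is shown below.

# Sketch of the graphical summaries mentioned above (box plots of anxiety
# scores, line chart of daily engagement), using synthetic placeholder data.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
day1_scores = rng.normal(28, 4, 50)
day7_scores = day1_scores - rng.normal(15, 1.5, 50)
daily_minutes = [18, 19, 21, 20, 22, 23, 24]  # invented mean interaction time per day

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))

ax1.boxplot([day1_scores, day7_scores])
ax1.set_xticks([1, 2])
ax1.set_xticklabels(["Day 1", "Day 7"])
ax1.set_ylabel("Anxiety score")
ax1.set_title("Score distribution, start vs. end of phase")

ax2.plot(range(1, 8), daily_minutes, marker="o")
ax2.set_xlabel("Day")
ax2.set_ylabel("Mean interaction time (min)")
ax2.set_title("Engagement over the 7-day phase")

plt.tight_layout()
plt.show()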

3.8. Ethical Considerations

This study adhered to established ethical guidelines for research involving human participants. Key aspects included the following:
  • Informed Consent: Verbal informed consent was obtained from all participants before their inclusion in the study. Participants were provided with detailed information about the study’s objectives, procedures, and their right to withdraw at any time without repercussions. A sample of the consent form is included in the Appendix B.
  • Minimal Risk Nature: The study was observational and involved no medical interventions or sensitive data collection. The chatbot’s role was limited to providing general psychological support and did not replace professional medical or therapeutic interventions.
  • Ethical Approval: In line with international guidelines for observational studies, formal ethical approval was deemed unnecessary. The study did not pose risks to participants’ physical or psychological well-being and followed ethical research practices for minimal-risk studies.
  • Data Privacy: The participants’ data were anonymized and stored securely. No identifiable personal information was collected, ensuring participant confidentiality throughout the research process.
This approach ensured that the study maintained ethical integrity while respecting the rights and well-being of all participants.

4. Results

4.1. Statistical Analysis

The primary objective of this study was to evaluate the effectiveness of the chatbot in reducing anxiety symptoms across two observational phases. Statistical methods, including paired t-tests and Wilcoxon Signed-Rank Tests, were employed to assess changes in anxiety scores and their relationship with user engagement metrics (Table 3).

4.1.1. Phase 1 Findings

  • Reduction in Anxiety Symptoms: A paired t-test comparing anxiety scores from Day 1 to Day 7 of Phase 1 showed a statistically significant reduction (t(49) = 10.24, p < 0.001). The average improvement percentage for Phase 1 was 21.15% (SD = 2.32). This highlights the chatbot’s short-term effectiveness in alleviating anxiety symptoms, as seen in Figure 2, which details individual improvement trends across participants.
  • Engagement Metrics: The analysis of interaction frequency revealed variability in participants’ engagement with the chatbot. While higher interaction frequencies were observed among participants reporting greater symptom relief, statistical analyses did not confirm a direct correlation. This suggests that while engagement may play a role in outcomes, additional factors likely contribute to the observed improvements.
  • Responses to CBT Techniques: Participants responded variably to the cognitive-behavioral therapy (CBT) techniques offered by the chatbot. Mindfulness exercises and guided breathing were particularly effective in improving participants’ emotional states.

4.1.2. Phase 2 Findings

Two months after Phase 1, participants were invited to engage with the chatbot for another seven-day period to assess long-term retention and reusability.
  • Sustained Improvement: A paired t-test comparing Phase 1 and Phase 2 mean anxiety scores revealed sustained improvements (t(49) = 8.47, p < 0.001). The average improvement percentage in Phase 2 was slightly lower, at 20.42% (SD = 2.96), indicating that while the benefits of the intervention were retained, there was a marginal decrease compared to Phase 1.
  • Reusability and Consistency: Participants demonstrated longer average daily interaction times in Phase 2 (M = 24.15 min, SD = 4.5) compared to Phase 1 (M = 19.55 min, SD = 3.2). This suggests greater familiarity and comfort with the chatbot during the second phase.

4.1.3. Variability and Individual Trends

Figure 4 illustrates the improvement percentages for each participant across the two phases, highlighting individual variability. While most participants demonstrated improvement, a few outliers exhibited either minimal or no change. These results underscore the importance of tailoring interventions to individual needs.

4.1.4. Validation and Robustness

To ensure the reliability of the results, normality checks were performed using the Shapiro–Wilk test, confirming that the data were suitable for parametric tests. Wilcoxon Signed-Rank Tests further validated the paired t-test findings, demonstrating robust statistical outcomes across both phases (Table 4).
This table highlights the key statistical tests conducted to validate the results across both phases of the study. It demonstrates the consistency and robustness of the findings while accounting for minor deviations from normality.

4.2. Anxiety Score Progression

The progression of anxiety scores over the two phases highlights the effectiveness of the AI-based chatbot in reducing symptoms of anxiety. Statistically significant reductions were observed in both phases, supported by paired t-tests and additional robustness checks. These findings underscore the potential of personalized AI interventions in managing anxiety disorders (Table 5).
A paired t-test comparing anxiety scores from Day 1 to Day 7 in Phase 1 revealed a statistically significant reduction in anxiety levels. The mean difference was 14.84 ± 1.36 (t(49) = 77.12, p < 0.001), with a 95% confidence interval ranging from 14.45 to 15.23. This finding confirms that the chatbot intervention contributed to a substantial improvement in anxiety symptoms over the initial seven-day period.
Similarly, in Phase 2, a paired t-test demonstrated a significant reduction in anxiety scores, with a mean difference of 11.80 ± 1.97 (t(49) = 42.37, p < 0.001). The 95% confidence interval ranged from 11.24 to 12.36, further validating the chatbot’s effectiveness during the second phase of observation.
To evaluate the sustained impact of the chatbot intervention, a paired t-test was conducted to compare the mean anxiety scores across the two phases. The analysis showed a significant mean difference of 11.89 ± 2.36 (t(49) = 35.61, p < 0.001), indicating that improvements observed during Phase 1 were largely retained in Phase 2. These results suggest that participants were able to apply techniques learned during the first phase effectively over the long term.
Figure 5 visualizes the baseline (Day 1) and final (Day 7) anxiety scores for both phases. The graph illustrates a consistent decline in anxiety symptoms across the seven days in each phase, with Phase 2 demonstrating a slightly steeper reduction. These results indicate that the chatbot was effective in supporting both immediate and long-term anxiety management.

4.3. Engagement and Personalization

The chatbot demonstrated robust engagement levels, especially during emotionally challenging moments. Participants utilized the chatbot consistently throughout both phases, with engagement metrics highlighting a shift towards increased interaction duration in Phase 2. This suggests that familiarity with the system and confidence in its utility contributed to higher levels of usage over time.
Table 6 presents the classification of participants into engagement categories based on their daily average interaction time with the chatbot. The comparison between Phases 1 and 2 indicates an increase in engagement during the second phase, with more participants spending 20–25 min daily on average. This shift may reflect enhanced familiarity with the chatbot’s functionality or increased reliance on the system for anxiety management.
This shift in engagement metrics highlights the chatbot’s adaptability and its ability to maintain relevance across both immediate and long-term interventions. The increased duration of interactions in Phase 2 also reflects participants’ trust in the system as a reliable tool for managing anxiety symptoms.
The continuous availability of the chatbot and its personalized approach likely contributed to these engagement patterns, demonstrating its potential as an accessible and effective mental health support system.
The results highlight that most participants engaged with the chatbot for 15–20 min daily during Phase 1, whereas Phase 2 demonstrated a slight increase in interaction time, with 42% of participants moving to the 20–25 min category. This increased engagement may correlate with higher familiarity or improved utility of the chatbot as perceived by users.
Additionally, Figure 6 and Figure 7 illustrate the daily interaction times for all 50 participants across the seven days of Phases 1 and 2, respectively. These visualizations provide a detailed view of usage patterns, showcasing individual variability as well as general trends. In Phase 1, interaction times were relatively stable, with most participants engaging between 15 and 25 min daily. The variability in interaction time indicates differences in individual needs and preferences.
In Phase 2, there was a noticeable increase in interaction times, with several participants engaging for longer durations compared to Phase 1. This suggests that participants became more comfortable with the chatbot and relied on it more frequently for support.
These findings align with the summary provided in Table 6, further supporting the idea that the chatbot’s accessibility and personalized interventions encouraged sustained engagement across both immediate and long-term use.

4.4. Participant Feedback

The feedback provided by participants underscored the chatbot’s effectiveness in supporting anxiety management and highlighted several aspects of the user experience. Overall, participants consistently rated the chatbot as a valuable tool, particularly for its accessibility and personalized responses. The chatbot’s availability around the clock was frequently mentioned as a critical feature, as it allowed users to access support whenever needed, including during moments of acute emotional distress. Personalized responses that were tailored to the specific needs of participants further enhanced the perceived utility of the intervention.
One of the strengths of this chatbot was its ability to maintain a high level of engagement among participants throughout the study. Users frequently interacted with the chatbot, and their feedback indicated that they felt supported and guided by the responses provided, even in the absence of direct interaction with a human therapist. This increased frequency of interactions was facilitated by the chatbot’s continuous accessibility, as it was available 24/7. Participants could access the tool at any time of the day, particularly during moments of emotional crisis when human support might not be available.
Furthermore, we assessed the participants’ responses to various therapeutic techniques offered by the chatbot, such as cognitive restructuring and relaxation exercises [44]. The aim was to determine which of these techniques had the greatest impact on the participants’ emotional state. The results indicated that the use of mindfulness techniques and guided breathing significantly contributed to improving overall well-being.
Participants expressed high satisfaction with the mindfulness exercises and guided breathing techniques offered by the chatbot. These features were often described as immediately effective in alleviating anxiety symptoms and were particularly appreciated by users who reported high levels of stress during the study period. The ability of the chatbot to integrate evidence-based therapeutic techniques into its interactions was a key factor in its positive reception.
Despite the overwhelmingly favorable feedback, some participants suggested areas for improvement. A common recommendation was the refinement of the chatbot’s conversational tone to better emulate human empathy. While the chatbot provided valuable support, its inability to replicate the emotional connection characteristic of human therapists was noted as a limitation. This feedback aligns with broader critiques of AI-based interventions in mental health, where the lack of emotional resonance remains a challenge.
Quantitative measures of participant satisfaction further validated the qualitative feedback. Using a 5-point Likert scale (1 = very dissatisfied, 5 = very satisfied), participants reported an average satisfaction rating of 4.44 (SD = 0.577), indicating a high level of approval for the chatbot’s functionality and effectiveness (Table 7).
These results demonstrate the chatbot’s significant potential as a scalable and accessible tool for managing anxiety. While participants expressed satisfaction with the intervention, their constructive feedback highlights opportunities for further refinement, particularly in enhancing the chatbot’s ability to simulate empathetic interactions.

5. Discussion

The findings of this study provide valuable insights into the potential of AI-driven chatbots as supplementary tools for managing anxiety. The statistically significant reductions in anxiety scores across both phases underscore the effectiveness of the chatbot in delivering immediate and sustained improvements in emotional well-being. Additionally, the increased engagement and retention observed in Phase 2 suggest that familiarity with the system enhances its utility over time.
One of the key strengths of this study lies in the chatbot’s ability to employ evidence-based cognitive-behavioral therapy (CBT) techniques, such as mindfulness, guided breathing, and cognitive restructuring, in a personalized and scalable manner. Participants consistently reported satisfaction with these interventions, particularly their accessibility during moments of emotional distress. These results align with the existing literature that highlights the potential of AI in providing scalable mental health solutions, particularly in contexts where access to human therapists is limited.
The variability in individual responses highlights the importance of personalization in AI-based mental health interventions. While some participants reported significant reductions in physical symptoms, such as palpitations and muscle tension, others emphasized cognitive improvements, such as greater clarity of thought and better sleep quality. This diversity of outcomes reflects the complex nature of anxiety and the necessity for adaptable therapeutic strategies.
AI has the capability to collect and analyze behavioral data, such as daily physical activity, sleep patterns, and social interactions. These insights can provide a comprehensive overview of the user’s mental state, aiding in the adjustment of therapeutic suggestions to meet specific needs. For instance, an individual with an active daily routine may benefit from recommendations that emphasize maintaining this lifestyle for anxiety management, while someone exhibiting signs of sedentarism may need encouragement to enhance their physical activity and reconnect with their social environment.
By incorporating these techniques, the chatbot successfully equipped users with tools that they could directly apply in their daily lives, helping them better manage their anxiety symptoms. For example, during moments of emotional crisis, users received suggestions for breathing techniques that helped them quickly alleviate physical stress symptoms, such as palpitations or feelings of panic. Similarly, cognitive restructuring was utilized to assist users in reanalyzing negative thoughts and replacing them with more realistic and constructive alternatives.
To illustrate this concept, consider two distinct cases:
  • Active Individual: A user with a daily active schedule may receive suggestions for mindfulness techniques integrated into their exercise routine. For example, the chatbot might recommend breathing exercises or meditation to be performed before or after workouts, maximizing both physical and mental benefits while keeping sessions brief to fit the user’s schedule.
  • Sedentary Individual: In contrast, a user showing signs of sedentarism may benefit from an intervention plan that includes small steps to improve their physical activity. The chatbot could suggest simple stretching exercises to be done at home or in the office and periodically remind them to stand and move, thereby helping reduce anxiety symptoms through improved circulation and endorphin release.
The personalization of interventions does not stop there. The chatbot can continue to monitor user responses to various suggested techniques, constantly adjusting them based on the feedback received. By employing personalized strategies, the chatbot becomes capable of offering not only general advice but also specific interventions tailored to each user. This level of customization is essential for maintaining the relevance and effectiveness of the chatbot in psychological counseling sessions.
Furthermore, the chatbot’s ability to monitor user responses over multiple sessions allows for continuous adjustment and improvement of recommendations based on user feedback. This can be accomplished through periodic questionnaires or quick assessments after each session to evaluate the effectiveness of the techniques employed. For instance, if a user reports that a particular breathing technique was ineffective, the chatbot can offer alternatives such as meditation exercises or muscle relaxation techniques.
Participants reported that the chatbot provided practical techniques they could immediately apply to manage their anxiety. This sense of control over their mental state was essential in enhancing feelings of safety and autonomy. The chatbot’s responses were viewed as a continuous guide, assisting users in developing personalized strategies to cope with stress and anxiety, particularly during challenging times.
While the chatbot could not replicate the deep emotional connection that a human therapist provides, participants nonetheless experienced a high level of support. Many noted that frequent interactions with the chatbot gave them the feeling of receiving assistance and useful advice in a manner that enabled them to manage their symptoms proactively. Even in the absence of a therapist’s physical presence, the chatbot offered consistent support, contributing to the development of effective self-help skills.
An important aspect of the study was the chatbot’s effectiveness during emotional crises. Participants reported that the chatbot provided appropriate and relevant solutions when they faced acute anxiety episodes. This ability to deliver rapid and specific interventions in emergency situations significantly contributed to reducing stress levels and improved participants’ sense of control over their emotional reactions.
Although the lack of direct human interaction may be considered a disadvantage, the majority of participants indicated that the chatbot was a valuable tool in managing their anxiety, especially in critical situations. The personalized responses, tailored to each user’s evolving condition, contributed to a sense of support and guidance, offering an effective and accessible solution for daily mental health management.

5.1. Comparison with Similar Studies on Chatbots for Mental Health

The findings of this study align with and extend existing research on the role of chatbots in mental health interventions [45,46]. Previous studies, such as those by Fitzpatrick et al. (2017), demonstrated the potential of chatbots to deliver effective cognitive-behavioral therapy (CBT) for individuals experiencing symptoms of depression and anxiety [23]. Similarly, research by Inkster et al. (2023) highlighted the ability of AI-driven systems to improve user engagement and provide accessible therapeutic support [47]. In the present study, the observed reduction in anxiety symptoms—averaging 21.15% in Phase 1 and 20.42% in Phase 2—is consistent with the findings of other chatbot-based interventions, which report symptom reductions ranging from 15% to 30%.
The findings of our study align with the growing body of research highlighting the potential of AI-powered chatbots in supporting mental health interventions. For instance, Perplexity, another AI-driven chatbot, has been evaluated in studies focusing on its ability to manage anxiety and depression symptoms [48,49,50]. These studies emphasize Perplexity’s strengths in delivering personalized therapeutic suggestions and maintaining high user engagement through natural language processing. However, our chatbot, developed using ChatGPT, demonstrated several unique contributions. Specifically, it integrates evidence-based cognitive-behavioral therapy (CBT) techniques, such as mindfulness exercises, breathing strategies, and cognitive restructuring, in a manner that tailors responses to individual emotional states.
Unlike Perplexity, which primarily relies on general-purpose natural language models, our chatbot’s design incorporates prompt engineering informed by clinically validated anxiety assessment tools, such as the Beck Anxiety Inventory (BAI) and Generalized Anxiety Disorder Scale (GAD-7). This distinction enhances its ability to provide targeted interventions based on specific anxiety symptoms reported by users. While both chatbots share limitations, such as the lack of human empathy, our study highlights the importance of integrating structured therapeutic frameworks into chatbot design to maximize their impact on mental health outcomes. These comparisons underscore the broader potential of AI in mental health while also emphasizing the need for future research to refine and optimize such tools.
What differentiates this study is the inclusion of a dual-phase observational design, enabling the evaluation of both short-term and sustained effects of chatbot usage. While existing studies often focus on single-session or short-duration interventions, this research demonstrated that the familiarity gained in Phase 1 contributed to the enhanced engagement and symptom reduction in Phase 2. Additionally, the higher average interaction time in Phase 2 suggests that repeated exposure to the chatbot may foster a deeper integration of therapeutic techniques.
Despite these similarities, the results also highlight certain challenges noted in the literature. For instance, studies such as Gaffney et al. (2019) emphasize the limitations of chatbots in replicating human empathy—a finding echoed by the qualitative feedback in this study [51]. Addressing this limitation remains a critical area for future development, particularly in enhancing the chatbot’s conversational tone and emotional responsiveness.
By situating the findings within the broader context of chatbot-based mental health interventions, this study underscores the potential of AI as a scalable, accessible tool for managing anxiety, while also identifying avenues for further refinement and application.

5.2. Limitations, Individual Variability, and Future Research

While the findings demonstrate that the chatbot significantly reduced anxiety symptoms for the majority of participants, individual responses varied considerably. Some participants reported more pronounced improvements in physical symptoms, such as reduced palpitations and muscle tension, while others highlighted cognitive enhancements, including greater clarity of thought and improved sleep quality. These variations underscore the importance of personalization in mental health interventions, particularly when addressing the diverse needs of users.
One methodological limitation of this study is the relatively small sample size (N = 50) and the short duration of each observational phase (seven days). While these parameters were sufficient to capture immediate and short-term effects, they may not fully reflect the long-term impact or scalability of the chatbot intervention. Future research should aim to include larger and more diverse populations, as well as extend the observational periods, to provide a more comprehensive understanding of the chatbot’s efficacy.
Additionally, the absence of a control group represents another limitation. Without a comparative framework, it is difficult to isolate the effects of the chatbot from external factors that may have influenced participants’ anxiety levels. Integrating a control group in future studies would allow for more robust assessments and a clearer understanding of the chatbot’s unique contributions to anxiety management.
Another significant limitation lies in the chatbot’s inability to replicate human empathy, a critical element in traditional therapeutic relationships. The emotional connection between a therapist and a patient, characterized by trust and empathy, remains a cornerstone of effective mental health interventions. While the chatbot effectively employed evidence-based cognitive-behavioral therapy (CBT) techniques, its lack of emotional resonance may limit its effectiveness in certain cases. This limitation highlights the potential benefits of hybrid models that combine AI systems with human oversight, ensuring that users receive both the accessibility of AI and the emotional support of a human therapist.
Studies in the literature support these findings, highlighting that while AI can provide significant benefits in mental health through accessibility and personalization, it cannot fully substitute for human interaction. The therapeutic relationship, characterized by empathy, understanding, and emotional support, is essential for the long-term success of treatment. This underscores the importance of intuition and the therapist’s ability to respond to the emotional subtleties of patients.
Within the therapeutic relationship, empathy serves not only as a supportive element but also as a catalyst for change. Human therapists are capable of perceiving the nuances and subtleties of nonverbal communication, providing emotional feedback, and adapting interventions based on the patient’s emotional state. This adaptability is crucial, particularly when addressing complex issues related to anxiety, depression, and trauma. In contrast, chatbots may offer predefined responses but lack the capacity to react with the same degree of sensitivity and human understanding.
Despite these limitations, the chatbot’s accessibility and scalability present promising opportunities for expanding mental health support, particularly in underserved populations. The findings from this study indicate that personalized AI-driven interventions can play a valuable role in complementing traditional therapies. However, further refinement of the chatbot’s algorithms is necessary to address individual variability more effectively, particularly in tailoring responses to specific user needs.
This personalized approach, supported by AI technology, has the potential to transform how mental health interventions are delivered. Customization not only enhances the effectiveness of interventions but also increases accessibility, adapting to the specific needs of each individual. In the future, the use of AI technologies in this context could open new avenues for mental health treatments, facilitating the necessary access and support for those facing anxiety and other emotional disorders.
Taken together, while the study provides compelling evidence of the chatbot’s potential in managing anxiety, the limitations outlined above should guide the design of future research to enhance the robustness, generalizability, and ethical implementation of AI-based mental health interventions.
In addition to addressing these limitations, future research should explore the long-term effects of repeated chatbot usage and its potential integration into broader mental health care frameworks. Extending the observational period and examining the impact of the chatbot in diverse cultural and demographic contexts could further elucidate its utility. Furthermore, advancements in natural language processing could enhance the chatbot’s conversational tone, making it more engaging and empathetic.

6. Conclusions

This study highlights the significant potential of artificial intelligence (AI)-driven technologies in supporting mental health, particularly in the management of anxiety disorders. Findings indicate that a chatbot powered by ChatGPT effectively reduced anxiety symptoms, with improvements averaging 21.15% in Phase 1 and 20.42% in Phase 2. These results underscore the value of AI as a complementary tool in mental health care, particularly in contexts where access to human therapists is limited. The dual-phase design further demonstrated the chatbot’s capacity to provide both immediate relief and sustained benefits, reinforcing its utility as a long-term support mechanism.
While AI cannot replicate the empathy and nuanced clinical judgment of a human therapist, chatbots like the one studied here can play a crucial role in delivering accessible and personalized psychological support. By integrating evidence-based cognitive-behavioral therapy (CBT) techniques such as mindfulness exercises, guided breathing strategies, and cognitive restructuring, the chatbot empowered participants to develop self-help skills and better manage their anxiety. Its ability to provide real-time support represents a key advantage in addressing the urgent needs of individuals during moments of emotional distress.
The accessibility of AI-driven solutions is particularly relevant for individuals facing barriers such as financial limitations or stigma associated with seeking mental health care. By offering 24/7 support, chatbots can bridge gaps in traditional care systems, providing immediate assistance when human support may not be available. This scalability and constant availability make AI-based interventions a practical addition to the mental health care landscape.
However, the study also identified important limitations. The chatbot’s lack of human empathy remains a significant challenge, as many participants expressed a desire for deeper emotional connection during interactions. While the chatbot effectively facilitated stress management techniques and offered helpful resources, the absence of genuine empathy could limit its effectiveness for certain users. Addressing this limitation will require advancements in AI to enhance the conversational tone and simulate empathetic responses.
Future research should focus on integrating AI with human oversight to address limitations such as the absence of empathy and ethical concerns related to data privacy. A hybrid model, combining the strengths of AI with the expertise of human therapists, may provide optimal outcomes. For example, chatbots could function as supplementary tools within traditional therapeutic frameworks, allowing therapists to leverage chatbot-generated insights to tailor interventions to individual needs.
In addition to its role in mental health support, the AI-based chatbot has the potential to identify early signs of various health issues, including nutritional deficiencies and other physiological conditions that may contribute to anxiety and related psychological disorders. By analyzing user input and behavioral data, the chatbot could formulate informed dietary recommendations, such as the inclusion of essential vitamins and minerals that could alleviate symptoms.
For instance, research underscores the significance of vitamin D in the aging process [52,53,54] and its broader implications for overall health [55,56]. Adequate levels of vitamin D are crucial not only for physical well-being but also for cognitive function, with deficiencies potentially exacerbating mental health issues. Furthermore, studies highlight the role of food biochemistry in the prevention of nervous system diseases, emphasizing the pivotal contribution that proper nutrition makes to mental health management [57,58,59]. The chatbot can leverage this knowledge to conduct a more in-depth physiological analysis of users, offering insights into necessary dietary adjustments. For example, by monitoring symptoms related to mood and energy levels, the chatbot might suggest dietary modifications aimed at addressing potential deficiencies in vitamins such as B12 or magnesium, both known to influence mental health outcomes.
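To make this idea concrete, the snippet below is a toy sketch of such a symptom-to-nutrient heuristic. It is purely illustrative: it is not the chatbot's actual implementation, the keyword map (`SYMPTOM_HINTS`) and function name are invented for this example, and any real deployment would require clinical validation rather than simple keyword matching.

```python
# Toy illustration only: a keyword heuristic mapping self-reported symptoms to
# nutrients discussed in the cited literature. Not the chatbot's implementation
# and not medical or dietary advice; the keyword map is invented for this sketch.
SYMPTOM_HINTS = {
    "fatigue": "low energy is sometimes linked to vitamin B12 or iron status",
    "muscle cramps": "cramps are sometimes linked to magnesium intake",
    "low mood": "persistent low mood is sometimes linked to vitamin D levels",
}

def dietary_hints(user_text):
    """Return gentle, non-diagnostic prompts to discuss nutrition with a clinician."""
    text = user_text.lower()
    return [f"{hint}; consider discussing this with a clinician"
            for keyword, hint in SYMPTOM_HINTS.items() if keyword in text]

print(dietary_hints("I feel constant fatigue and low mood lately."))
```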
Beyond its nutritional guidance capabilities, the chatbot can detect early signals of other health issues that may exacerbate anxiety [60,61]. Medical conditions such as cervicofacial vascular anomalies [62]—where abnormal blood vessels can cause physical discomfort or concern—have been shown to heighten emotional distress if left unmanaged. Research indicates that early detection and intervention are crucial for mitigating the effects of such anomalies [63]. By monitoring physiological data and identifying early warning signs of potential anomalies, the chatbot could provide timely alerts, prompting users to seek further medical evaluation before the condition worsens.
The chatbot’s capability to tailor recommendations based on user data enhances its functionality as a comprehensive health support tool. It can combine insights from users’ behavioral patterns, physical symptoms, and medical knowledge to offer personalized recommendations, whether they involve adjusting dietary intake or flagging early signs of medical conditions like vascular anomalies. This holistic approach not only empowers users to take charge of their health but also underscores the potential of AI to bridge the gap between mental health, physical health, and nutrition. In this way, the chatbot becomes a valuable resource both for managing emotional well-being and for detecting underlying health conditions that may contribute to psychological distress.
In conclusion, AI presents a promising opportunity for enhancing mental health care and expanding access to effective treatments for anxiety. While challenges and limitations persist, the integration of AI technologies with human support has the potential to yield significant and long-lasting outcomes. It is crucial that the development of these technologies be approached with careful consideration of ethical standards, data privacy, and the specific needs of users. By doing so, we can ensure that individuals facing mental health challenges have access to the assistance they require, thereby improving overall well-being and ensuring equitable access to care.

Author Contributions

A.M.: Data curation, formal analysis, writing—review and editing; R.C.: conceptualization, methodology, investigation, writing—original draft preparation; R.B.: resources, writing—review and editing; F.M.: supervision, project administration, validation. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study, since the research was observational and non-invasive, involving interactions with a chatbot, and was classified as minimal risk. According to research guidelines, such low-risk studies do not require formal ethical approval, particularly when no sensitive or personal data are collected.

Informed Consent Statement

Informed consent (verbal consent) was obtained from all the participants involved in the study. The verbal consent obtained aligns with ethical standards for this type of research.

Data Availability Statement

The raw data supporting the findings of this study are available upon request from the corresponding author, subject to ethical restrictions. The original contributions presented in this study are included in the article, and any further inquiries can be directed to the corresponding author.

The authors confirm that they did not use Artificial Intelligence (AI) tools for the initial drafting of this article. However, ChatGPT-4 was employed for specific tasks such as formatting, copyediting, and improving readability in certain sections, including the abstract and conclusion. ChatGPT-4 was also utilized to ensure coherence between sections and assist in identifying potential limitations of the work. The authors retained full control throughout the process, carefully reviewing all AI-generated suggestions to maintain the accuracy and integrity of the manuscript. The data input for the study and all revisions were provided and managed by the authors. Any contributions from ChatGPT-4 were thoroughly evaluated to ensure they did not introduce unintended content or inaccuracies. This disclosure reflects the authors’ commitment to transparency in the use of AI tools, which is becoming increasingly relevant in academic research. The authors advocate for responsible use of AI technologies and believe that acknowledging their role in the writing process contributes to the ethical evolution of scholarly work.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Appendix A.1. Example of Chatbot Prompts

To illustrate the adaptability and personalized nature of the AI-based chatbot, a variety of prompt examples are provided in this appendix. These prompts demonstrate how the chatbot responds to different user inputs related to anxiety symptoms, offering tailored interventions based on cognitive-behavioral therapy (CBT) techniques, mindfulness practices, and emotional support. The examples below cover a range of scenarios, from managing negative thoughts to coping with physical symptoms of anxiety, showcasing the chatbot’s capacity to deliver targeted, real-time guidance to users. A brief, hypothetical implementation sketch follows the examples.
1. Dealing with Negative Thoughts (Cognitive Restructuring)
- User Input: “I keep thinking I’m going to fail at everything I try”.
- Chatbot Response: “It sounds like you’re struggling with some negative thoughts about yourself. Let’s challenge that. Can you think of a time when you succeeded at something, even if it was something small? How did that make you feel? Sometimes focusing on past successes can help us reframe our current thoughts”.

2. Physical Symptoms of Anxiety (Breathing Techniques)
- User Input: “My chest feels tight and I’m struggling to breathe”.
- Chatbot Response: “That sounds really uncomfortable. Let’s try to help your body relax. Try this breathing technique: Breathe in deeply through your nose for 4 s, hold for 7 s, and exhale slowly through your mouth for 8 s. Do this three times, and let me know how you’re feeling afterward”.

3. Sleep Disruption Due to Anxiety (Sleep Hygiene)
- User Input: “I can’t sleep, my mind won’t stop racing”.
- Chatbot Response: “I’m sorry to hear you’re having trouble sleeping. Let’s try a few things to calm your mind. Can you try putting your phone away for a few minutes, dimming the lights, and doing a quick body scan exercise? Start from your feet and work your way up, noticing each part of your body and releasing any tension you might be holding”.

4. Social Anxiety (Exposure Therapy Techniques)
- User Input: “I feel overwhelmed at social events and want to leave immediately”.
- Chatbot Response: “Social situations can be tough. One thing that might help is breaking it down into smaller, manageable steps. For example, try staying for just five more minutes, and focus on one small conversation. What’s the worst that could happen? Often, we imagine outcomes worse than reality”.

5. Feeling Overwhelmed (Grounding Techniques)
- User Input: “Everything feels too much. I don’t know where to start”.
- Chatbot Response: “When things feel overwhelming, it can help to focus on the present moment. Try this grounding exercise: Look around the room and name 5 things you can see, 4 things you can feel, 3 things you can hear, 2 things you can smell, and 1 thing you can taste. This will help you anchor yourself in the present”.

6. Panic Attack (Crisis Intervention)
- User Input: “I feel like I’m losing control. My heart is racing and I can’t focus”.
- Chatbot Response: “It sounds like you’re having a panic attack. Let’s focus on your breathing: Inhale slowly through your nose for 3 s, hold for 3 s, and exhale gently through your mouth for 5 s. Focus only on your breath. I’m here with you, let’s do this together until you feel more in control”.

7. Self-Esteem and Self-Worth (Positive Affirmations)
- User Input: “I don’t feel like I’m good enough”.
- Chatbot Response: “I understand those feelings can be overwhelming, but they don’t define you. Let’s try repeating a few affirmations: ‘I am capable,’ ‘I am worthy of love and respect,’ ‘I am doing my best.’ You are more than your negative thoughts”.

8. Worrying About the Future (Mindfulness and Acceptance)
- User Input: “I keep worrying about things that haven’t happened yet”.
- Chatbot Response: “It’s normal to feel anxious about the future, but let’s focus on what’s within your control right now. Can you take a moment to sit in a quiet space and pay attention to your breathing? Focus on what’s happening around you in the present moment. Let’s bring your attention back to the here and now”.

9. Procrastination (Behavioral Activation)
- User Input: “I can’t seem to start anything. I feel paralyzed”.
- Chatbot Response: “When tasks seem overwhelming, breaking them into smaller steps can help. Let’s start with something simple: What’s the first, tiniest step you can take? Even if it’s just writing down one word or setting a timer for 5 min to work on it”.

10. Grief and Loss (Emotional Support)
- User Input: “I can’t stop thinking about the person I lost”.
- Chatbot Response: “Grief can be incredibly heavy. It’s important to allow yourself to feel those emotions, but also to take care of yourself. Would it help to talk about your favorite memory with them? Sometimes reflecting on positive moments can bring comfort, even in difficult times”.

Appendix B

Appendix B.1. Informed Consent Script (Verbal Consent)

Purpose of the Study: You are invited to participate in a study testing an AI-based chatbot designed for anxiety management. This study aims to improve mental health support by offering personalized feedback through a chatbot interface.
Participation Details: Your participation is entirely voluntary. You will interact with the chatbot over the course of 7 days, during which your responses will be monitored. No personal identifying information will be collected.
Risks and Benefits: There are no known risks associated with your participation in this study. The chatbot’s interactions are based on general mental health strategies, and you may withdraw from the study at any time.
Confidentiality: The data collected will be used solely for research purposes and will remain anonymous.
Consent Statement: By agreeing to participate, you confirm that you understand the study’s purpose, procedures, and risks and agree to take part voluntarily.
Participant’s Agreement: Do you agree to participate in this study?

References

  1. Cate, A.; Troy, B. The Role of AI in Mental Health: Benefits, Risks, & Ethical Considerations. Available online: https://www.choosingtherapy.com/ai-and-mental-health/ (accessed on 13 October 2024).
  2. Casu, M.; Triscari, S.; Battiato, S.; Guarnera, L.; Caponnetto, P. AI Chatbots for Mental Health: A Scoping Review of Effectiveness, Feasibility, and Applications. Appl. Sci. 2024, 14, 5889. [Google Scholar] [CrossRef]
  3. Bancsik, K.; Ilea, C.D.N.; Daina, M.D.; Bancsik, R.; Șuteu, C.L.; Bîrsan, S.D.; Manole, F.; Daina, L.G. Comparative Analysis of Patient Satisfaction Surveys—A Crucial Role in Raising the Standard of Healthcare Services. Healthcare 2023, 11, 2878. [Google Scholar] [CrossRef] [PubMed]
  4. Karlsson, M.; Iversen, T.; Øien, H. Aging and Healthcare Costs. In Oxford Research Encyclopedia of Economics and Finance; Oxford University Press: Oxford, UK, 2018; p. 54. [Google Scholar] [CrossRef]
  5. De Meijer, C.; Wouterse, B.; Polder, J.; Koopmanschap, M. The effect of population aging on health expenditure growth: A critical review. Eur. J. Ageing 2013, 10, 353–361. [Google Scholar] [CrossRef] [PubMed]
  6. Manole, F.; Marian, P.; Mekeres, G.M.; Voiţă-Mekereş, F. Systematic review of the effect of aging on health costs. Arch. Pharm. Pract. 2023, 14, 58–61. [Google Scholar] [CrossRef]
  7. Sutton, J. Revolutionizing AI Therapy: The Impact on Mental Health Care. Available online: https://positivepsychology.com/ai-therapy/ (accessed on 13 October 2024).
  8. Kasula, B.Y. Ethical Considerations in the Adoption of Artificial Intelligence for Mental Health Diagnosis. Int. J. Creat. Res. Comput. Technol. Des. 2023, 5, 1–7. [Google Scholar]
  9. Bancsik, K.; Ilea, C.D.N.; Daina, M.D.; Bancsik, R.; Șuteu, C.L.; Bîrsan, S.D.; Manole, F.; Daina, L.G. Patient-Perceived Quality Assessment in Orthopedics and Traumatology Departments during COVID-19 Pandemic. Healthcare 2024, 12, 879. [Google Scholar] [CrossRef]
  10. Knapp, P.; Beck, A.T. Cognitive therapy: Foundations, conceptual models, applications and research. Braz. J. Psychiatry 2008, 30, 54–64. [Google Scholar] [CrossRef]
  11. Nicoară, N.D.; Marian, P.; Petriș, A.O.; Delcea, C.; Manole, F. A review of the role of cognitive-behavioral therapy on anxiety disorders of children and adolescents. Pharmacophore 2023, 14, 35–39. [Google Scholar] [CrossRef]
  12. Gao, J.; Garsole, P.; Agarwal, R.; Liu, S. AI Test Modeling and Analysis for Intelligent Chatbot Mobile App-A Case Study on Wysa. In Proceedings of the 2024 IEEE International Conference on Artificial Intelligence Testing (AITest), Shanghai, China, 15–18 July 2024; pp. 132–141. [Google Scholar]
  13. Mehta, A.; Niles, A.N.; Vargas, J.H.; Marafon, T.; Couto, D.D.; Gross, J.J. Acceptability and effectiveness of artificial intelligence therapy for anxiety and depression (Youper): Longitudinal observational study. J. Med. Internet Res. 2021, 23, e26771. [Google Scholar] [CrossRef]
  14. Alanzi, T.; Almahdi, R.; Alghanim, D.; Almusmili, L.; Saleh, A.; Alanazi, S.; Alshobaki, K.; Attar, R.; Al Qunais, A.; Alzahrani, H.; et al. Factors Affecting the Adoption of Artificial Intelligence-Enabled Virtual Assistants for Leukemia Self-Management. Cureus 2023, 15, e49724. [Google Scholar] [CrossRef]
  15. Tsaryk, V.Y.; Bychkova, D.; Klishch, S. Competitive analysis of existing software applications for improving mental health. Syst. Technol. 2024, 4, 166–179. [Google Scholar]
  16. Gillis, K.; Sills, D. The Impact of AI in the Mental Health Field. Available online: https://www.psychologytoday.com/us/blog/invisible-bruises/202407/the-impact-of-ai-in-the-mental-health-field (accessed on 13 October 2024).
  17. Patil, R.; Heston, T.F.; Bhuse, V. Prompt Engineering in Healthcare. Electronics 2024, 13, 2961. [Google Scholar] [CrossRef]
  18. Fulmer, R.; Joerin, A.; Gentile, B.; Lakerink, L.; Rauws, M. Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: Randomized controlled trial. JMIR Ment. Health 2018, 5, e9782. [Google Scholar] [CrossRef] [PubMed]
  19. Alowais, S.A.; Alghamdi, S.S.; Alsuhebany, N.; Alqahtani, T.; Alshaya, A.I.; Almohareb, S.N.; Aldairem, A.; Alrashed, M.; Bin Saleh, K.; Badreldin, H.A. Revolutionizing healthcare: The role of artificial intelligence in clinical practice. BMC Med. Educ. 2023, 23, 689. [Google Scholar] [CrossRef] [PubMed]
  20. Kelly, C.J.; Karthikesalingam, A.; Suleyman, M.; Corrado, G.; King, D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019, 17, 195. [Google Scholar] [CrossRef] [PubMed]
  21. Graham, S.; Depp, C.; Lee, E.E.; Nebeker, C.; Tu, X.; Kim, H.-C.; Jeste, D.V. Artificial intelligence for mental health and mental illnesses: An overview. Curr. Psychiatry Rep. 2019, 21, 26. [Google Scholar] [CrossRef]
  22. Kretzschmar, K.; Tyroll, H.; Pavarini, G.; Manzini, A.; Singh, I.; NeurOx Young People’s Advisory Group. Can your phone be your therapist? Young people’s ethical perspectives on the use of fully automated conversational agents (chatbots) in mental health support. Biomed. Inform. Insights 2019, 11, 1178222619829083. [Google Scholar] [CrossRef] [PubMed]
  23. Fitzpatrick, K.K.; Darcy, A.; Vierhile, M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Ment. Health 2017, 4, e7785. [Google Scholar] [CrossRef]
  24. Li, H.; Zhang, R.; Lee, Y.-C.; Kraut, R.E.; Mohr, D.C. Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. Digit. Med. 2023, 6, 236. [Google Scholar] [CrossRef]
  25. Ray, A.; Bhardwaj, A.; Malik, Y.K.; Singh, S.; Gupta, R. Artificial intelligence and Psychiatry: An overview. Asian J. Psychiatry 2022, 70, 103021. [Google Scholar] [CrossRef]
  26. Cheng, S.W.; Chang, C.W.; Chang, W.J.; Wang, H.W.; Liang, C.S.; Kishimoto, T.; Chang, J.P.C.; Kuo, J.S.; Su, K.P. The now and future of ChatGPT and GPT in psychiatry. Psychiatry Clin. Neurosci. 2023, 77, 592–596. [Google Scholar] [CrossRef] [PubMed]
  27. Priyadarshana, Y.; Senanayake, A.; Liang, Z.; Piumarta, I. Prompt engineering for digital mental health: A short review. Front. Digit. Health 2024, 6, 1410947. [Google Scholar] [CrossRef] [PubMed]
  28. Wang, L.; Chen, X.; Deng, X.; Wen, H.; You, M.; Liu, W.; Li, Q.; Li, J. Prompt engineering in consistency and reliability with the evidence-based guideline for LLMs. NPJ Digit. Med. 2024, 7, 41. [Google Scholar] [CrossRef] [PubMed]
  29. Le Glaz, A.; Haralambous, Y.; Kim-Dufor, D.-H.; Lenca, P.; Billot, R.; Ryan, T.C.; Marsh, J.; Devylder, J.; Walter, M.; Berrouiguet, S. Machine learning and natural language processing in mental health: Systematic review. J. Med. Internet Res. 2021, 23, e15708. [Google Scholar] [CrossRef] [PubMed]
  30. Tang, J.; Shang, Y. Advancing Mental Health Pre-Screening: A New Custom GPT for Psychological Distress Assessment. arXiv 2024, arXiv:2408.01614. [Google Scholar]
  31. Alanzi, T.M.; Alharthi, A.; Alrumman, S.; Abanmi, S.; Jumah, A.; Alansari, H.; Alharthi, T.; Alibrahim, A.; Algethami, A.; Aburass, M. ChatGPT as a psychotherapist for anxiety disorders: An empirical study with anxiety patients. Nutr. Health Policy 2024, 7, 02601060241281906. [Google Scholar] [CrossRef]
  32. Nemesure, M.D.; Heinz, M.V.; Huang, R.; Jacobson, N.C. Predictive modeling of depression and anxiety using electronic health records and a novel machine learning approach with artificial intelligence. Sci Rep 2021, 11, 1980. [Google Scholar] [CrossRef]
  33. Kaviani, H.; Mousavi, A.S. Psychometric properties of the Persian version of Beck Anxiety Inventory (BAI). Tehran Univ. Med. J. 2008, 66, 136–140. [Google Scholar]
  34. Steer, R.A.; Ranieri, W.F.; Beck, A.T.; Clark, D.A. Further evidence for the validity of the beck anxiety inventory with psychiatric outpatients. J. Anxiety Disord. 1993, 7, 195–205. [Google Scholar] [CrossRef]
  35. Löwe, B.; Decker, O.; Müller, S.; Brähler, E.; Schellberg, D.; Herzog, W.; Herzberg, P.Y. Validation and standardization of the Generalized Anxiety Disorder Screener (GAD-7) in the general population. Med. Care 2008, 46, 266–274. [Google Scholar] [CrossRef]
  36. Yun-Jeong, K. Early Diagnosis of anxiety Disorder Using Artificial Intelligence. Int. J. Adv. Cult. Technol. 2024, 12, 242–248. [Google Scholar]
  37. Jacobson, N.C.; Feng, B. Digital phenotyping of generalized anxiety disorder: Using artificial intelligence to accurately predict symptom severity using wearable sensors in daily life. Transl. Psychiatry 2022, 12, 336. [Google Scholar] [CrossRef] [PubMed]
  38. Tzavela, E.C.; Mitskidou, P.; Mertika, A.; Stalikas, A.; Kasvikis, Y. Treatment engagement in the early phase of cognitive-behavior therapy for panic disorder: A grounded theory analysis of patient experience. Psychother. Res. 2018, 28, 842–860. [Google Scholar] [CrossRef] [PubMed]
  39. Spitzer, R.L.; Kroenke, K.; Williams, J.B.; Löwe, B. A brief measure for assessing generalized anxiety disorder: The GAD-7. Arch. Intern. Med. 2006, 166, 1092–1097. [Google Scholar] [CrossRef] [PubMed]
  40. Hamilton, N.; Freche, R.; Zhang, Y.; Zeller, G.; Carroll, I. Test anxiety and poor sleep: A vicious cycle. Int. J. Behav. Med. 2021, 28, 250–258. [Google Scholar] [CrossRef]
  41. Danieli, M.; Ciulli, T.; Mousavi, S.M.; Silvestri, G.; Barbato, S.; Di Natale, L.; Riccardi, G. Assessing the impact of conversational artificial intelligence in the treatment of stress and anxiety in aging adults: Randomized controlled trial. JMIR Ment. Health 2022, 9, e38067. [Google Scholar] [CrossRef]
  42. Liu, B.; Sundar, S.S. Should machines express sympathy and empathy? Experiments with a health advice chatbot. Cyberpsychol. Behav. Soc. Netw. 2018, 21, 625–636. [Google Scholar] [CrossRef] [PubMed]
  43. Liao, T.; Yan, B. Are You Feeling Happy? The Effect of Emotions on People’s Interaction Experience Toward Empathetic Chatbots. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, St. Louis, MO, USA, 14–17 August 2022; p. 11. [Google Scholar]
  44. Creswell, J.D. Mindfulness interventions. Annu. Rev. Psychol. 2017, 68, 491–516. [Google Scholar] [CrossRef]
  45. Rahsepar Meadi, M.; Bernstein, J.S.; Batelaan, N.; van Balkom, A.J.; Metselaar, S. Does a lack of emotions make chatbots unfit to be psychotherapists? Bioethics 2024, 38, 503–510. [Google Scholar] [CrossRef] [PubMed]
  46. Seitz, L. Artificial empathy in healthcare chatbots: Does it feel authentic? Comput. Hum. Behav. Artif. Hum. 2024, 2, 100067. [Google Scholar] [CrossRef]
  47. Inkster, B.; Kadaba, M.; Subramanian, V. Understanding the impact of an AI-enabled conversational agent mobile app on users’ mental health and wellbeing with a self-reported maternal event: A mixed method real-world data mHealth study. Front. Glob. Women’s Health 2023, 4, 1084302. [Google Scholar] [CrossRef] [PubMed]
  48. Divino, F. From Meditation to Techno-Mindfulness: On the Medicalization of Contemplative Practices and Future Prospects. Histories 2024, 4, 125–143. [Google Scholar] [CrossRef]
  49. Nazi, Z.A.; Peng, W. Large language models in healthcare and medical domain: A review. Informatics 2024, 11, 57. [Google Scholar] [CrossRef]
  50. Wen, B.; Norel, R.; Liu, J.; Stappenbeck, T.; Zulkernine, F.; Chen, H. Leveraging Large Language Models for Patient Engagement: The Power of Conversational AI in Digital Health. arXiv 2024, arXiv:2406.13659. [Google Scholar]
  51. Gaffney, H.; Mansell, W.; Tai, S. Conversational agents in the treatment of mental health problems: Mixed-method systematic review. JMIR Ment. Health 2019, 6, e14166. [Google Scholar] [CrossRef] [PubMed]
  52. Fantini, C.; Corinaldesi, C.; Lenzi, A.; Migliaccio, S.; Crescioli, C. Vitamin D as a Shield against Aging. Int. J. Mol. Sci. 2023, 24, 4546. [Google Scholar] [CrossRef]
  53. Trifan, D.F.; Tirla, A.G.; Mos, C.; Danciu, A.; Bodog, F.; Manole, F.; Ghitea, T.C. Involvement of Vitamin D3 in the Aging Process According to Sex. Cosmetics 2023, 10, 114. [Google Scholar] [CrossRef]
  54. Tuohimaa, P. Vitamin D and aging. J. Steroid Biochem. Mol. Biol. 2009, 114, 78–84. [Google Scholar] [CrossRef]
  55. Bocheva, G.; Slominski, R.M.; Slominski, A.T. The impact of vitamin D on skin aging. Int. J. Mol. Sci. 2021, 22, 9097. [Google Scholar] [CrossRef]
  56. Trifan, D.F.; Tirla, A.G.; Moldovan, A.F.; Moș, C.; Bodog, F.; Maghiar, T.T.; Manole, F.; Ghitea, T.C. Can vitamin D levels alter the effectiveness of short-term facelift interventions? Healthcare 2023, 11, 1490. [Google Scholar] [CrossRef]
  57. Voiţă-Mekereş, F.; Manole, F.; Voiţă, I.B.; Marian, P. The Role of Food Biochemistry in the Control and Prevention of Nervous System Diseases. J. Biochem. Technol. 2023, 14, 112–116. [Google Scholar] [CrossRef]
  58. Bourre, J.-M. Effects of nutrients (in food) on the structure and function of the nervous system: Update on dietary requirements for brain. Part 1: Micronutrients. J. Nutr. Health Aging 2006, 10, 377. [Google Scholar] [PubMed]
  59. Pogačnik, L.; Ota, A.; Poklar Ulrih, N. An overview of crucial dietary substances and their modes of action for prevention of neurodegenerative diseases. Cells 2020, 9, 576. [Google Scholar] [CrossRef] [PubMed]
  60. Thomas, A.J.; Kalaria, R.N.; O’Brien, T.J. Depression and vascular disease: What is the relationship? J. Affect. Disord. 2004, 79, 81–95. [Google Scholar] [CrossRef] [PubMed]
  61. Roy-Byrne, P.P.; Davidson, K.W.; Kessler, R.C.; Asmundson, G.J.; Goodwin, R.D.; Kubzansky, L.; Lydiard, R.B.; Massie, M.J.; Katon, W.; Laden, S.K. Anxiety disorders and comorbid medical illness. Gen. Hosp. Psychiatry 2008, 30, 208–225. [Google Scholar] [CrossRef] [PubMed]
  62. Kim, T.H.; Choi, J.W.; Jeong, W.S. Current concepts of vascular anomalies. Arch. Craniofacial Surg. 2023, 24, 145. [Google Scholar] [CrossRef] [PubMed]
  63. Vrinceanu, D.; Dumitru, M.; Marinescu, A.; Dorobat, B.; Palade, O.D.; Manole, F.; Muresian, H.; Popa-Cherecheanu, M.; Ciornei, C.M. New Insights into Cervicofacial Vascular Anomalies. J. Clin. Med. 2024, 13, 3515. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Conceptual model of chatbot effectiveness.
Figure 2. Chatbot architecture.
Figure 3. The process of continuous monitoring and data logging for an AI-powered chatbot used in anxiety management.
Figure 4. Individual improvement trends across two phases.
Figure 5. Progression of anxiety scores across phases.
Figure 6. Daily interaction time—Phase 1.
Figure 7. Daily interaction time—Phase 2.
Table 1. Inclusion and exclusion criteria.
Criteria | Description
Inclusion | Mild to moderate anxiety symptoms (BAI or GAD-7); willingness to participate in the study
Exclusion | Ongoing psychotherapy or medication for anxiety; severe anxiety or other mental health conditions
Table 2. Participant demographics and baseline characteristics.
Characteristic | Value
Sample Size | 50
Mean Age (SD) | 36.9 (±6.65) years
Gender Distribution | 52% Male, 48% Female
Baseline Anxiety Level (BAI) | Mean: 70.16 (±3.02); Range: 65–75
Employment Status | 70% Employed, 30% Unemployed
Education Level | 40% Higher Education, 60% Secondary Education
Table 3. Statistical summary of anxiety improvement percentages and engagement metrics.
Phase | Improvement Percentage (%) | Mean Anxiety Score Reduction (SD) | Average Daily Interaction Time (minutes) (SD)
Phase 1 | 21.15 (±2.32) | Significant (t(49) = 10.24, p < 0.001) | 19.55 (±3.2)
Phase 2 | 20.42 (±2.96) | Sustained (t(49) = 8.47, p < 0.001) | 24.15 (±4.5)
Table 4. Validation and robustness—statistical tests summary.
Statistical Test | Phase | Test Statistic (Z/t) | p-Value | Result
Shapiro–Wilk Test (Normality) | Phase 1 | W = 0.939 | 0.012 | Data slightly deviates from normality
Shapiro–Wilk Test (Normality) | Phase 2 | W = 0.945 | 0.014 | Data slightly deviates from normality
Paired t-Test | Phase 1 (Day 1 vs. Day 7) | t(49) = 10.24 | <0.001 | Significant reduction in anxiety
Paired t-Test | Phase 2 (Mean Phase 1 vs. Mean Phase 2) | t(49) = 8.47 | <0.001 | Sustained improvement
Wilcoxon Signed-Rank Test | Phase 1 | Z = −6.160 | <0.001 | Robust validation of t-test
Wilcoxon Signed-Rank Test | Phase 2 | Z = −5.930 | <0.001 | Robust validation of t-test
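For readers who wish to reproduce analyses of this kind, the tests summarized in Table 4 correspond to standard routines in `scipy.stats`. The sketch below is illustrative only: it runs on synthetic scores drawn to resemble the reported summary statistics, so its printed results will not match the table exactly.

```python
# Illustrative reproduction of the Table 4 tests using scipy.stats.
# The scores below are synthetic, drawn to resemble the reported summaries.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50  # participants

day1_phase1 = rng.normal(70.4, 3.0, n)   # hypothetical Day 1 scores, Phase 1
day7_phase1 = rng.normal(55.5, 3.6, n)   # hypothetical Day 7 scores, Phase 1
mean_phase1 = rng.normal(62.8, 3.3, n)   # hypothetical 7-day means, Phase 1
mean_phase2 = rng.normal(50.9, 2.8, n)   # hypothetical 7-day means, Phase 2

# Normality of the paired differences (Shapiro-Wilk).
w_stat, p_norm = stats.shapiro(day1_phase1 - day7_phase1)

# Paired t-test: Day 1 vs. Day 7 within Phase 1.
t1, p1 = stats.ttest_rel(day1_phase1, day7_phase1)

# Paired t-test: Phase 1 mean vs. Phase 2 mean (sustained improvement).
t2, p2 = stats.ttest_rel(mean_phase1, mean_phase2)

# Non-parametric robustness check (Wilcoxon signed-rank).
wlx, p_wlx = stats.wilcoxon(day1_phase1, day7_phase1)

print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {p_norm:.3f}")
print(f"Paired t-test (Day 1 vs. Day 7): t({n - 1}) = {t1:.2f}, p = {p1:.3g}")
print(f"Paired t-test (Phase 1 vs. Phase 2): t({n - 1}) = {t2:.2f}, p = {p2:.3g}")
print(f"Wilcoxon signed-rank: statistic = {wlx:.1f}, p = {p_wlx:.3g}")
```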
Table 5. Summary of anxiety scores across phases.
Phase | Baseline Anxiety Score (Day 1) | Final Anxiety Score (Day 7) | Mean Anxiety Score (7 Days) | Improvement Percentage (%)
Phase 1 | 70.36 ± 2.99 | 55.52 ± 3.58 | 62.81 ± 3.28 | 21.15
Phase 2 | 57.00 ± 3.28 | 45.20 ± 2.76 | 50.92 ± 2.80 | 20.42
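The improvement percentages in Table 5 can be obtained by averaging each participant's relative reduction from Day 1 to Day 7; the exact aggregation is an assumption here, since a calculation from the group means alone may differ slightly. The snippet below is a minimal sketch of that per-participant calculation using made-up values in place of the study data.

```python
# Minimal sketch of a per-participant improvement calculation (assumed method).
import numpy as np

def mean_improvement(baseline: np.ndarray, final: np.ndarray) -> float:
    """Average relative reduction from baseline to final score, in percent."""
    per_participant = (baseline - final) / baseline * 100.0
    return float(per_participant.mean())

# Two made-up participants for illustration:
baseline = np.array([72.0, 68.0])
final = np.array([56.0, 55.0])
print(f"Mean improvement: {mean_improvement(baseline, final):.2f}%")  # ~20.67%
```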
Table 6. Classification of participants into engagement categories.
Daily Interaction Time Interval (min) | Frequency Phase 1 | % Phase 1 | Frequency Phase 2 | % Phase 2
0–15 | 1 | 2% | 3 | 6%
15–20 | 30 | 60% | 26 | 52%
20–25 | 19 | 38% | 21 | 42%
25–30 | 0 | 0% | 0 | 0%
Over 30 | 0 | 0% | 0 | 0%
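The engagement categories in Table 6 amount to binning each participant's average daily interaction time into five intervals. The sketch below shows one way such binning could be performed; the per-participant minutes are synthetic, so the resulting counts will differ from the table.

```python
# Sketch of the engagement binning behind Table 6 (synthetic interaction times).
import numpy as np

rng = np.random.default_rng(1)
minutes = rng.normal(19.5, 3.2, 50).clip(5, 35)  # hypothetical daily averages

edges = [15, 20, 25, 30]  # upper bounds of the first four categories (minutes)
labels = ["0-15", "15-20", "20-25", "25-30", "Over 30"]

# With right=True, a value x lands in bin i such that edges[i-1] < x <= edges[i];
# anything above the last edge falls into the final ("Over 30") category.
idx = np.digitize(minutes, edges, right=True)

for i, label in enumerate(labels):
    count = int((idx == i).sum())
    print(f"{label:>8} min: {count:2d} participants ({count / len(minutes):.0%})")
```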
Table 7. Participant satisfaction ratings.
Measure | N | Minimum | Maximum | Mean | Std. Deviation
Satisfaction Rating (1–5 Scale) | 50 | 3 | 5 | 4.44 | 0.577