Article

Mobile Mental Health Screening in EmotiZen via the Novel Brain-Inspired MCoG-LDPSNet

1 Department of Artificial Intelligence and Computational Neuroscience, EmotiZen GmbH, 55122 Mainz, Germany
2 Research Immunogenetics Laboratory, First Department of Neurology, School of Medicine, National and Kapodistrian University of Athens, Aeginition University Hospital, Vas. Sofias 72-74, 11528 Athens, Greece
3 Multiple Sclerosis and Demyelinating Diseases Unit, Center of Expertise for Rare Demyelinating and Autoimmune Diseases of CNS, First Department of Neurology, School of Medicine, National and Kapodistrian University of Athens, Aeginition University Hospital, 11528 Athens, Greece
4 Department of Computing, School of Digital, Technology, Innovation and Business (DTIB), The University of Staffordshire, College Road, Stoke-on-Trent ST4 2DE, UK
5 Department of Public Health, School of Medicine, University of Patras, 26504 Patras, Greece
6 Department of The Operations and Information Management, Aston Business School, Aston University, Birmingham B4 7ET, UK
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(9), 563; https://doi.org/10.3390/biomimetics10090563
Submission received: 30 June 2025 / Revised: 16 August 2025 / Accepted: 19 August 2025 / Published: 23 August 2025

Abstract

Anxiety and depression affect millions worldwide, yet stigma and long wait times often delay access to care. Mobile mental health apps can decrease these barriers by offering on-demand screening and support. Nevertheless, many machine and deep learning methods used in such tools perform poorly under severe class imbalance, yielding biased, poorly calibrated predictions. To address this challenge, this study proposes MCoG-LDPSNet, a brain-inspired model that combines dual, orthogonal encoding pathways with a novel Loss-Driven Parametric Swish (LDPS) activation. LDPS implements a neurobiologically motivated adaptive-gain mechanism via a learnable β parameter driven by calibration and confidence-aware loss signals that amplifies minority-class patterns while preserving overall reliability, enabling robust predictions under severe data imbalance. On a benchmark mental health corpus, MCoG-LDPSNet achieved AUROC = 0.9920 and G-mean = 0.9451, outperforming traditional baselines like GLMs, XGBoost, state-of-the-art deep models (CNN-BiLSTM-ATTN), and transformer-based approaches. After transfer learning to social media text, the MCoG-LDPSNet maintained a near-perfect AUROC of 0.9937. Integrated into the EmotiZen App with enhanced app features, MCoG-LDPSNet was associated with substantial symptom reductions (anxiety 28.2%; depression 42.1%). These findings indicate that MCoG-LDPSNet is an accurate, imbalance-aware solution suitable for scalable mobile screening of individuals for anxiety and depression.

1. Introduction

Mental conditions such as depression and anxiety burden millions of people around the world [1,2]. With the exponential increase in such mental illnesses, innovative solutions like online platforms and mobile applications (apps) for screening anxiety and depression have emerged in an attempt to address them [3,4]. For example, online platforms such as Psycho Web have been developed to collect data from cases diagnosed with mental disorders. The Psycho Web platform utilizes the k-nearest neighbours (KNN) algorithm to identify the type of mental disorder a patient is suffering from, based on the symptoms recorded when the patient is evaluated by a mental health professional [5].
Recently, techniques for predicting mental health conditions, primarily anxiety and depression, in current mental health mobile and web apps have relied on classical statistical and machine learning techniques, notably Generalized Linear Mixed Models (GLMMs), Logistic Regression (LR), Support Vector Machines (SVM), Naïve Bayes (NB), Decision Trees (DT), Random Forests (RF), Adaptive and eXtreme Gradient Boosting (AdaBoost and XGBoost), as well as deep learning methods such as neural networks, often without specifying the architecture of the neural networks employed [6,7,8]. However, because mental health datasets are typically imbalanced, machine and deep computational models face limitations, such as overfitting, when early clinical deterioration signals are weak relative to the bulk of non-critical data; this calls for models that can adapt to rapidly changing patterns [9]. Such imbalances can bias predictive models toward the majority class, reducing their sensitivity to the individuals who could benefit most from early mental health screening. Even though these machine techniques have advanced our understanding of how mental health issues may arise, they often struggle with the complexities inherent in the psychological data of anxiety and depression, which limits their effectiveness and poses key challenges such as the following:
  • Applying machine learning to mental health predictions requires greater caution and the development of innovative, domain-adapted methodological techniques [10,11].
  • The machine and deep learning models exhibit limited generalization ability due to class imbalance in the datasets [12].
  • There is limited research providing evidence of the effectiveness of mobile apps in mental health prediction. The lack of robust evaluation of computational models makes it challenging to confirm that the algorithms incorporated in mental health apps effectively achieve their intended purposes and deliver tangible benefits to users [6].
Based on the above-mentioned limitations of machine and deep learning approaches applied to mental health prediction, this study’s objective is to improve computational models for the prognosis of mental health issues, particularly anxiety and depression. Researchers have employed network improvement approaches to address the challenges posed by imbalanced datasets, relying heavily on ensemble learning [13]. However, ensemble learning may struggle to handle the complexities of imbalanced data, as ensembling dense architectures can lead to overfitting due to inefficiencies in training and deployment [14].
Recent developments in neurofinance, an interdisciplinary approach that merges economics, neuroscience, and psychology, have culminated in the MCoRNNMCD-ANN, a biologically inspired architecture designed to tackle the challenges of imbalanced, high-dimensional time-series forecasting [15]. Drawing on the brain’s modular structure and capacity for synchronized yet orthogonal communication pathways, the MCoRNNMCD-ANN model maintains resilience against non-stationary, skewed data distributions and excels at detecting rare but high-impact events that often elude conventional computational approaches.
Likewise, by embedding principles of modularity and orthogonality, MCoRNNMCD-ANN boosts predictive accuracy within financial markets and demonstrates broad applicability across other complex systems characterised by intertwined biological and behavioural processes.
The MCoRNNMCD-ANN’s adoption of modularity and orthogonality has advanced predictive accuracy in neurofinance and highlighted the broader applicability of these principles to other complex biological and behavioural domains [16,17]. Notably, MCoRNNMCD-ANN has been recently praised as one of the leading cognitive models in business intelligence, where its ability to predict infrequent but consequential outcomes has been ascertained as invaluable in decision-support systems [18].
MCoRNNMCD-ANN’s flexible framework seamlessly integrates global pattern recognition with localized feature extraction. Furthermore, combining MCoRNNMCD-ANN with NLP components, such as word embeddings, could better capture the semantic and syntactic features of unlabelled sentiment text data. Capturing localized features and global behavioural shifts is important for accurate text classification and may enhance early mental health detection [19]. A recent study has proposed a neuroscience-inspired AI framework that constructs cognitive models, such as MCoRNNMCD-ANN, in conjunction with NLP, thereby elevating predictive accuracy in neurofinance and extending its applications into mental health [20].
Building on biological and neuroscience foundations, this study addresses the twin challenges of class imbalance and calibration in mental health detection, aiming to improve predictive power and generalizability for the early detection of anxiety and depression in mobile-health applications. We therefore propose the Modular Convolutional orthogonal Gated Loss-Driven Parametric Swish Network (MCoG-LDPSNet), a novel variant of MCoRNNMCD-ANN that contains two orthogonal gated recurrent subnetworks, one specialized for anxiety and one for depression, which learn disentangled, emotion-specific representations. This work makes a significant contribution by integrating these subnetworks with a single, loss-driven gain mechanism that is co-optimized with the learning objective. The subnetwork outputs are fused and immediately passed through a first-of-its-kind Loss-Driven Parametric Swish (LDPS) layer: LDPS implements a single learnable gain parameter (β) that dynamically modulates the fused activation. Two complementary loss drivers control β with distinct effective timescales: (i) a phasic driver (Focal Loss) produces large, sample-specific gradients on complex or minority examples and transiently up-regulates β, amplifying weak or underrepresented emotional cues (an effect inspired by acetylcholine transients that sharpen cortical responses) [21,22]; and (ii) a tonic driver (Brier-score regularizer) supplies a slower, aggregate gradient that dampens β when the network becomes overconfident, thereby improving calibration. In practice, β is initialized to a moderate value and hard-clipped during optimization to avoid runaway gain. The LDPS layer is trained together with upstream regularizers (dropout, SpatialDropout1D, orthogonally initialized GRUs) so that amplification is selective and robust rather than permissive of memorization [23]. This biologically inspired, dual-timescale modulation enables the network to boost minority cues when needed [24,25,26]. At the same time, preserving calibrated probability estimates is a balance we verify empirically, as presented in Section 3 and Section 4.
Moreover, this study evaluates MCoG-LDPSNet’s overall performance with the geometric mean (G-mean) and ROC analysis. These objective evaluation metrics are better choices when applied to imbalanced datasets for medical diagnosis and text classification [27]. This study utilizes a publicly available mental health corpus for anxiety and depression on Kaggle (https://www.kaggle.com/code/mesutssmn/sentiment-analysis-for-mental-health/input, last accessed on 30 May 2025), benchmarked against classic statistical methods such as LR and GLMM, machine learning algorithms like RF and XGBoost, state-of-the-art deep learning models like DeprMVM and CNN-BiLSTM-ATTN (CBA), and transformer models like BERT. Furthermore, this study employs transfer learning to fine-tune MCoG-LDPSNet on the Facebook anxiety and depression data from Islam et al. [28], leveraging broader linguistic representations while adapting to domain-specific anxiety and depression symptomatology that informs the visualizations shown to users. Correspondingly, a cohort study is conducted to evaluate the proposed MCoG-LDPSNet integrated into the EmotiZen (https://emotizen.health/, last accessed on 30 May 2025) mobile and web app, which delivers continuous, on-demand screening for anxiety and depression.
This study addresses two principal questions:
  • Detection Efficacy: Does MCoG-LDPSNet substantially improve the early detection of anxiety and depression under severe class imbalance compared to traditional and state-of-the-art machine and deep learning approaches?
  • Mobile Feasibility: How does integrating the proposed MCoG-LDPSNet model improve the EmotiZen App’s accuracy, scalability, and personalization in detecting early signs and predicting symptom severity of anxiety and depression in real-time?
The contributions of this study can be expressed as follows:
  • First-in-Domain Bio-Inspired Dual-Path Text Model: This study introduces MCoG-LDPSNet, the first cognitive framework to apply bio-inspired MCoRNNMCD-ANN neurofinance principles to mental health text classification. By deploying parallel encoders for anxiety and depression, each with orthogonal-initialized GRUs, SpatialDropout1D regularisation, and SReLU gating, the proposed model disentangles affective cues and delivers significantly higher accuracy than both machine baselines and leading transformer-based approaches under severe class imbalance. Section 4 presents a detailed comparison and discusses whether the novel MCoG-LDPSNet is better suited to generalize in the imbalanced mental health dataset.
  • Novel Loss-Driven Parametric Swish (LDPS) Activation: To the best of the authors’ knowledge, no prior model has directly fused phasic Focal Loss and tonic Brier score within a Parametric Swish activation to drive its gain dynamics. In the proposed MCoG-LDPSNet, the learnable gain β is rapidly up-regulated by Focal Loss, emphasizing hard, minority-class examples, inspired by transient neuromodulatory bursts, and gently down-regulated by Brier score regularisation, enforcing well-calibrated, steady confidence, akin to tonic neuromodulation. This unique phasic–tonic dual-loss coupling, implemented within a single activation layer and augmented by sparsity constraints, uniquely equips the proposed MCoG-LDPSNet model to sharpen its sensitivity to rare emotional signals while suppressing false negatives. Section 3, Section 4 and Section 5 investigate whether the new LDPS activation is in place to enhance the performance of the MCoG-LDPSNet further.
  • Real-Time Adaptive Screening: Integrated within the EmotiZen mobile app, MCoG-LDPSNet updates its predictions in real time as new user inputs arrive. This real-time adaptability enables early, accurate screening of anxiety and depression symptoms, facilitating timely intervention. Section 5 discusses the real-time adaptability of the proposed MCoG-LDPSNet, which could enhance the early screening of anxiety and depression in the EmotiZen App.
The proposed MCoG-LDPSNet operationalizes biomimetic principles by mapping neural activities to model components. The architecture employs parallel, orthogonally initialized dual encoders for anxiety and depression, introducing a novel LDPS activation that couples a phasic, focal-loss driven gain boost with a tonic, Brier-score based calibration signal to adapt the Parametric Swish gain during training. These biomimetic-inspired design choices increase sensitivity to rare emotional signals, mitigate severe class imbalance, and enable a deployable on-device pipeline for real-time screening in the EmotiZen App, demonstrating how nature-informed engineering can meet practical challenges in digital mental health.
The rest of this paper is organized as follows:
Section 2 reviews the neuroscience foundations for neural networks, as well as state-of-the-art machine and deep learning models for predicting anxiety and depression. Section 3 illustrates and emphasizes the proposed architecture of the MCoG-LDPSNet model, estimating its effectiveness. Section 4 presents the results from a detailed comparative analysis of the proposed MCoG-LDPSNet model against both traditional and cutting-edge models from the literature, along with a discussion of these findings. Section 5 examines the practical implications of deploying the proposed MCoG-LDPSNet model in the EmotiZen App, as well as its potential for more accurate screening of anxiety and depression. Section 6 wraps up the principal findings of this research, addresses its limitations, and suggests directions for future work.

2. Neuro-Inspired Deep Learning for Early Detection of Anxiety and Depression

This study thoroughly examined multidisciplinary fields, including artificial intelligence, informatics, mental health, neuroscience, neurobiology, and traditional and state-of-the-art machine and deep learning approaches. The primary objective is to comprehensively synthesize existing conceptual and empirical articles, encompassing both secondary and primary research, through a meta-narrative review [29]. A semi-systematic review has also proven sufficient to gain a better understanding of complex areas, such as sentiment and natural language processing research [30,31,32]. To maximize predictive performance and generalizability, the development of the proposed MCoG-LDPSNet model followed a multi-stage learning and validation pipeline. Initially, MCoG-LDPSNet was trained on a large mental health corpus from Kaggle (https://www.kaggle.com/code/mesutssmn/sentiment-analysis-for-mental-health/input, last accessed on 30 May 2025) to learn broad representations of emotional and linguistic patterns relevant to anxiety and depression. The MCoG-LDPSNet was benchmarked quantitatively against conventional and state-of-the-art machine learning and deep learning models, determining which approaches most effectively predict mental health outcomes related to depression and anxiety. Subsequently, the proposed MCoG-LDPSNet model underwent transfer learning using the Islam et al. Facebook dataset [28], allowing the MCoG-LDPSNet to refine its parameters and adapt to the nuances of social media discourse linked to anxiety and depression. The transfer learning strategy leveraged the strengths of large-scale and domain-specific data, resulting in a robust MCoG-LDPSNet model capable of nuanced mental health prediction.
The proposed MCoG-LDPSNet model was then integrated into the EmotiZen App for real-world deployment and evaluation. EmotiZen GmbH conducted a cohort study to validate the effectiveness of the proposed MCoG-LDPSNet. Primary data were collected from two cohorts of EmotiZen App users: one group used the standard version of EmotiZen, which offered weekly mental health recommendations and screenings, while the other engaged with an enhanced version featuring additional user engagement tools, such as a progress bar linked to recommendation selections. During the study period from 1 January 2025 to 31 March 2025, longitudinal data on anxiety and depression symptoms were collected directly through the app. The predictive accuracy and practical relevance of the proposed MCoG-LDPSNet were assessed using this primary cohort dataset, allowing for rigorous evaluation of both the proposed model’s performance and the EmotiZen App’s impact on mental health outcomes in real-world settings. This end-to-end design, from large-scale pretraining and transfer learning to real-world cohort validation, demonstrates the practical utility and translational potential of the proposed approach for digital mental health support, as detailed in Section 4. A user consent and data-collection protocol was established with EmotiZen GmbH to ensure the accuracy of the results and full compliance with data privacy regulations, thereby ensuring the ethical conduct of this study in line with the Declaration of Helsinki.
The initial screening in this study yielded 1250 research papers from Scopus (n = 850), IEEE Xplore (n = 300), and Web of Science (n = 100), encompassing studies from a broad range of years to ensure historical and contemporary coverage. After manually removing duplicate records, 1064 records remained. An exhaustive review applying the inclusion and exclusion criteria, strategically screening titles, abstracts, and keywords to identify the investigations most relevant to this research, resulted in 120 records being reviewed in full. A final selection of 35 studies was made, yielding the most relevant and high-quality evidence for this analysis.
The inclusion criteria mandated that studies must meet the following requirements:
  • Be English-language publications in peer-reviewed, reputable journals or conference proceedings;
  • Focus on predictive models for screening or prognosis of mental health conditions, particularly anxiety and depression;
  • Validate their predictive performance using quantitative objective evaluation metrics such as AUROC, precision, recall, or F1-score;
  • Use state-of-the-art computational techniques, including traditional machine learning methods (e.g., LR, DT, RF), advanced deep learning architectures (e.g., CNN-BiLSTM-ATTN (CBA)), and transformer-based models like BERT;
  • Contribute to developing or enhancing bio-inspired, neuroscience-informed or cognitive models relevant to the brain.
Studies were excluded if they met any of the following criteria:
  • Were published in a language other than English, which may have limited methodological innovation;
  • Were not peer-reviewed or empirically validated;
  • Lacked quantitative or experimental rigour and did not incorporate robust evaluation measures;
  • Focused exclusively on unrelated domains.
Key keywords that emerged from this study literature included the following:
“Mental Health”, “Predictive Models”, “Anxiety”, “Depression”, “Imbalanced Datasets”, “Bio-inspired Models”, “Cognitive Architectures”, “Modular Neural Networks”, “Brain Processes”, “Sentiment Analysis”, “Machine Learning”, “Deep Learning”, “Transfer Learning”, “Ensemble Learning”, “Logistic Regression”, “Support Vector Machines”, “Naïve Bayes”, “Decision Trees”, “Random Forests”, “eXtreme Gradient Boosting”, “CNN”, “RNN”, “BERT”, and “Mobile Mental Health Applications”.
Figure 1 illustrates the study selection process that was followed.
To structure this study’s literature synthesis, neuroscience foundations are considered to examine biological insights on brain modularity, vmPFC, anterior insula, amygdala circuitry, and inhibitory interneuron pathways that inform the design of resilient, predictive models. The computational models covered span the evolution from traditional statistical and machine-learning classifiers (e.g., LR and RF) through deep learning architectures (CNNs and RNNs), transformer-based methods (BERT), and hybrid state-of-the-art systems (e.g., CNN-BiLSTM-ATTN and DeprMVM). Prior methods have made noteworthy refinements in feature extraction and sequence modelling; a key novelty of this work lies in incorporating brain-inspired mechanisms into the MCoG-LDPSNet model’s architecture and learning dynamics. Specifically, the proposed MCoG-LDPSNet introduces a novel loss-driven adaptive activation function whose gain parameter is modulated by both focal loss and calibration regularization, a design inspired by the dual-timescale neuromodulatory processes observed in the brain. This biologically grounded design enables the proposed model to dynamically adjust its sensitivity to minority-class emotional cues and maintain robust calibration, thereby addressing the challenges of class imbalance and overconfidence that persist in existing state-of-the-art methods. Thus, the proposed MCoG-LDPSNet represents a novel step beyond conventional architectures by incorporating neurobiological principles at the core of its predictive framework for early screening of anxiety and depression.

2.1. Biology and Neuroscience Foundations

One must first trace their roots back to neural substrates to elucidate how biologically inspired computational models can excel at predicting mental health outcomes. Early investigations combined neuroimaging, neuropsychiatric assessments, and brain stimulation studies to pinpoint depressive loci, implicating the prefrontal cortex, limbic structures, basal ganglia, and brainstem nuclei and revealing altered connectivity among these regions [33]. Complementary lesion and psychiatric analyses further underscored the amygdala, hippocampus, and thalamus as both primary and secondary hubs of depressive pathology, highlighting the need for distributed frameworks to capture complex affective processes [33]. Building on this foundation, researchers examined the regulatory influence of the ventromedial prefrontal cortex (vmPFC) over the amygdala in humans, specifically in relation to mood and anxiety. Their work tested a neurocircuitry model positing that vmPFC hypoactivity disinhibits the amygdala, elevating negative affect. Indeed, vmPFC lesions corresponded with heightened amygdala responses to aversive stimuli and increased resting-state amygdala connectivity compared to those of healthy controls, thereby cementing the vmPFC’s role as a key modulator of emotional reactivity and a potential therapeutic target [34].
Further studies delineated the inhibitory microcircuits that temper amygdala output under anxiogenic conditions, including specific GABAergic neuron subtypes in the basolateral and central nuclei and molecular determinants like gamma-aminobutyric acid (GABA), a neurotransmitter and chemical messenger in the brain. GABA receptors and synaptic organizer proteins were shown to gate anxiety responses, suggesting that fine-tuning inhibitory synapses could yield novel interventions for anxiety disorders [35]. Shifting to generalized anxiety disorder (GAD), functional MRI analyses revealed hyperactivation of the amygdala, vmPFC, and ventrolateral PFC during emotion regulation tasks. At the same time, resting-state scans exposed disrupted amygdala coupling with prefrontal, insular, and cerebellar regions. These findings framed GAD as a network-level disorder marked by emotional and cognitive dysregulation, warranting studies of at-risk populations to isolate its underlying neurobiology [36].
Parallel morphometric work across major depressive disorder (MDD), GAD, and panic disorder highlighted common and distinct cortical alterations within the prefrontal-limbic circuitry encompassing the amygdala, anterior cingulate, and prefrontal cortices and called for deeper exploration of frontotemporal and parietal contributions to these conditions [37]. Concurrently, advancements in systems neuroscience painted cognitive flexibility and resilience as emergent properties of a modular brain architecture, dynamically regulated by neuromodulatory signals like acetylcholine and dopamine that adjust cortical gain across the ventromedial and dorsolateral PFC. Disruptions in these modulatory processes have been linked to depression’s hallmark deficits in emotional regulation and cognitive control, offering blueprints for computational models to emulate the brain’s balance of sensitivity and stability, especially when tackling imbalanced mental health datasets [24,38,39,40,41].
Extending this integrative lens, recent investigations into Beck’s cognitive theory employed neuroimaging to map negative cognitive bias in MDD. Hyperactive amygdala responses foster fear and anxiety, hippocampal dysfunction skews memory toward harmful content, and PFC imbalances erode regulatory control together, perpetuating depressive thought patterns and guiding the development of bias-modification and targeted neuromodulatory therapies [42]. Against this neuroscientific backdrop, AI approaches have emerged to tackle diagnostic and prognostic challenges in psychiatry. Reviews of machine learning, such as SVMs, and deep learning, such as CNNs, demonstrated their potential for early detection and personalized treatment planning [43,44]. However, mechanistic models remain scarce and warrant further validation in adolescent and adult cohorts [45].
Bridging these domains, studies leveraging the Research Domain Criteria (RDoC) framework compared cognitive bias signatures across anxiety and depression, revealing both disorder-specific and transdiagnostic patterns that robustly predict symptom severity and point toward bias-informed cognitive interventions [46]. Collectively, these neurobiological insights serve as a scaffold for designing deep learning architectures, such as CNNs and RNNs, that can mimic brain-like modularity, connectivity, and neuromodulation to enhance the prediction of anxiety and depression. In the next section, we delve into the strengths of these machine and deep learning models, setting the stage for benchmarking with the proposed MCoG-LDPSNet’s brain-informed architecture.

2.2. Machine and Deep Learning in Mental Health

To predict mental health conditions such as anxiety and depression, and thereby potentially reduce the frequency and severity of ongoing symptoms, researchers have proposed and applied several machine and deep learning techniques. For example, researchers applied NLP and machine learning techniques to predict depression from text data on social media, comprising 1500 sentences gathered from platforms such as Facebook, Twitter, and Instagram. The researchers applied data preprocessing techniques, including tokenization, removal of stop words, removal of empty strings, removal of punctuation, stemming, and lemmatization. Six machine learning classifiers were used: Multinomial Naïve Bayes (MNB), LR, Linear SVC, KNN, RF, and DT. MNB and LR achieved the highest accuracy, outperforming Linear SVC by 1.06%, RF by 2.12%, KNN by 4.30%, and DT by 6.52%, indicating their superior performance across all comparisons. The researchers suggest that future research develop a mobile application incorporating machine learning to enable individuals to check their depression level [47].
Researchers utilized a CNN model to identify a user’s mental state based on social media posts. They aimed to detect whether users’ posts belonged to a specific mental disorder, including depression, anxiety, borderline, schizophrenia, and autism. For data, they collected posts from mental health communities on Reddit. The researchers considered that this model could help identify potential sufferers of mental illness based on their social media posts. NLP techniques were employed to tokenize the posts and filter out frequently used words, while XGBoost was used for comparison with the CNN model. The CNN model outperformed XGBoost by 10.32%, indicating enhanced accuracy in identifying depressive symptoms. Similarly, in anxiety detection, the CNN model achieved a 9.98% higher accuracy compared to XGBoost, demonstrating improved performance in recognizing anxiety-related patterns. The consistent outperformance of CNN across both depression and anxiety detection suggests its stronger generalization capabilities and effectiveness in handling the nuanced language patterns associated with mental health discussions on social media platforms. Finally, they proposed validating the CNN model with data from other social network services [48].
Researchers strived to manage the negative influence of the COVID-19 pandemic crisis on mental health, stressing that the early detection and intervention of depression prevent the illness from evolving to a more severe state and prevent the development of other health conditions. The study proposed a survey comprising 21 questions, based on the Hamilton tool and the advice of a psychiatrist, to collect data on depression. The data was then analyzed utilizing machine learning techniques like DT, KNN, and Gaussian Naïve Bayes (GNB). KNN provided better results in terms of accuracy, outperforming DT by 2.95% and GNB by 4.55%. Their study suggested using machine learning-based models to replace conventional methods of detecting sadness by asking people encouraging questions and obtaining regular feedback from them. For future research, they proposed further investigation into the use of machine learning in depression detection, as well as the exploration of other machine learning techniques and their effectiveness [49].
Researchers investigated anxiety in 127 university engineering students in India using machine learning, gathering data through a questionnaire that met the criteria for Likert scale measurement. Machine learning algorithms, including NB, DT, RF, and SVM, were applied to classify the anxiety level based on the consequences of anxiety after being trained on pre-existing questionnaire data points. The accuracy results revealed that RF emerged as the top-performing algorithm, surpassing the NB and DT by 10.50% and SVM by 4.40%. For future studies, they proposed focusing on implementing interventions based on the identified causes and effects of anxiety to support students’ mental health [50].
Likewise, researchers applied machine learning predictive models to a dataset of 61,619 college students from 133 US higher education institutions to identify students at heightened risk of anxiety and depressive disorders. Their study provided a practical tool for professional counsellors to identify at-risk students and proactively guide prevention and intervention strategies. The researchers utilized predictive benchmarks, including XGBoost, RF, DT, and LR. In terms of area under the curve (AUC), XGBoost demonstrated better performance in both the anxiety and depression categories. For anxiety, XGBoost outperformed LR and RF by 1.37% and DT by 4.13%. In the depression category, XGBoost and RF tied at an AUC of 0.77, surpassing LR by 1.31% and DT by 5.33%. For future research, they proposed that the models be further validated and tested in different populations to assess their generalizability [51].
Researchers explored distinguishing symptoms between depression and anxiety and utilized a streamlined version of the Symptom Checklist 90 (SCL-90) with 4262 patients. To achieve this, they developed classification models, such as KNN, SVM, RF, and AdaBoost. The accuracy, AUC, precision, and F1 score were the objective metrics used to measure the SCL-90 outcomes by the classification models. Regarding AUC, SVM outperformed KNN by 1.38%, RF by 2.05%, and AdaBoost by 6.04%. Although SVM, RF, and AdaBoost achieved an accuracy of 94.38%, SVM surpassed KNN by 1.68%. Overall, SVM performed the best, especially in terms of AUC. For future research, it is suggested that researchers further test the generalizability of the classification models [52].
Recently, researchers developed a machine learning-based risk prediction model for depression in 2733 middle-aged and elderly individuals with hypertension in China, using a survey for data collected from the China Health and Retirement Longitudinal Study (CHARLS) between 2018 and 2020. Machine learning models, such as LR, RF, and XGBoost, were developed to compare their prediction efficiency in CHARLS. Comparing the AUC, the LR showed the highest AUC, outperforming XGBoost by 0.71% and RF by 1.71%. Researchers proposed that further research is needed to validate the findings in other samples [53].
A recent study investigated the detection of depression on a dataset that includes structural (English) and non-structural (Roman Urdu) languages. The two datasets, one in Roman Urdu manually converted from English comments on Facebook and another in English from Kaggle, were merged for the experiments. The researchers compared the performance of various machine and deep learning models, including SVM, Support Vector Machine with Radial Basis Function (SVM-RBF), RF, and BERT. The results show that SVM outperformed SVM-RBF, RF, and BERT by approximately 2.41% in accuracy. The researchers recommended that future studies investigate advanced hybrid machine learning models to improve accuracy in predicting depression in European countries [54].
Researchers developed a predictive model for non-suicidal self-injury (NSSI) among adolescents in western China, aiming to evaluate the risk of NSSI using machine learning algorithm-based models. Their study collected sociodemographic and psychological data from 13,304 adolescents in 50 schools in western China. The outcomes showed that the multivariate logistic regression (MLV) model identified several risk factors for adolescent NSSI, including gender, age, history of psychiatric consultation, stress, depression, anxiety, tolerance, and emotion abreaction. The XGBoost model identified depression and anxiety as the top two predictors of NSSI in adolescents. In the training set, XGBoost outperformed the MLV model in accuracy by 0.10%, while the MLV regression model had a 2.44% higher AUC. In the testing set, XGBoost again demonstrated a slight accuracy advantage of 0.10%, while the MLV regression model maintained a marginally better AUC of 0.36%. These slight discrepancies indicate that both models perform very similarly, with XGBoost having a slight edge in accuracy and the logistic regression model showing a slight advantage in AUC. The overall predictive ability of both models appears to be strong and comparable, identifying several key predictors of NSSI, including depression and anxiety. They proposed that the models could be further validated in other regions and populations in future research [55].
A recent study proposed an emotional and mental intelligence (EMI) chatbot for the early detection of mental health issues. The objective was to address the barriers of stigma, accessibility, and affordability in mental healthcare based on the notion of a Digital Twin, a virtual replica designed to represent a physical object in order to assess and classify mental health issues such as anxiety and depression. EMI was developed in collaboration with a clinical psychiatrist, and a pre-trained BERT model was employed to detect various severity levels. BERT detected symptoms of mental health with 69% accuracy. For future research, they recommended addressing the challenges of imbalanced datasets and focusing on the generalizability and scalability of their framework. Additionally, more comprehensive evaluation metrics and performance measures can provide a deeper understanding of the chatbot’s effectiveness in mental health assessment [56].
Depression has long been characterized as a pervasive mental health disorder, and the ubiquity of social media has opened new avenues for automated screening via text classification. A recent study proposed a Convolutional Neural Network combined with a Bidirectional Long Short-Term Memory and an attention mechanism, the CNN-BiLSTM-ATTN (CBA) model, for depression detection, benchmarking its performance against seven established architectures: LSTM, BiLSTM, CNN, CNN-LSTM, CNN-BiLSTM, BiLSTM-Attention, and CNN-BiGRU, on the CLEF2017 dataset. The proposed CNN-BiLSTM-ATTN (CBA) achieved an AUC-ROC of 0.85, outperforming LSTM/BiLSTM by 11.2%, CNN-BiGRU by 12.5%, BiLSTM-Attention by 7.32%, and the strongest CNN-based baselines by 3.6%. These results underscore the efficacy of attentive hybrid architectures for more discriminative depression detection [57].
In a recent study, the researchers aimed to detect whether a person is depressed. SVM and multilayer perceptrons (MLP) were combined to formulate an ensemble approach, namely the hybrid DeprMVM. A survey with psychological and sociodemographic features was used to collect data from 604 participants. They also applied data manipulation methods, such as SMOTE and cluster sampling, to improve accuracy. Their findings showed that the proposed DeprMVM ensemble, which incorporates SMOTE and cluster sampling techniques, demonstrated notable improvements in AUC compared to other classifiers, outperforming KNN by 9.63%, SVM by 4.17%, and RFC by 2.06%. Compared to the high-performing XGB and MLP classifiers, the DeprMVM ensemble still achieved a 1.03% improvement in both cases. They proposed that further research is needed to validate the effectiveness of their suggested ensemble approach in different populations and settings. They also proposed the development of a user-friendly tool based on the presented model that could be explored for practical applications in healthcare settings [58].
The studies mentioned above have achieved noteworthy results in predicting mental health conditions concerning anxiety and depression. However, researchers have pointed out that there is still significant potential for future improvement in the predictions of mental health issues. At the same time, significant opportunities remain to enhance model sensitivity, generalizability, and robustness, especially under imbalanced conditions in the real world. The gaps and future research suggestions can be summarized as follows:
  • Researchers should create mobile applications incorporating AI subfields like machine learning to enable individuals to self-assess their depression levels [47].
  • Researchers should investigate the deployment of user-friendly tools to support informed decision-making about mental health in real-world healthcare settings [58].
  • Researchers should address the issues of imbalanced datasets and assess the framework’s scalability and robustness across diverse populations and settings [56].
  • Researchers should investigate the effectiveness of different machine and deep learning algorithms in detecting mental health conditions, highlighting the need to improve model accuracy and generalizability [51,52,56].
Based on the above future directions and the limitations presented in Section 1, more research needs to be conducted in mental health to deliver new mobile applications that support early screening and personalized recommendations for individuals; moreover, the accuracy, generalizability, and effectiveness of computational models on imbalanced mental health datasets need to be improved. Given these challenges, this study proposes the MCoG-LDPSNet, a novel neuroscience-informed framework extending the neurofinance-based MCoRNNMCD-ANN model. The proposed approach integrates a new brain-inspired LDPS mechanism to further enhance the prediction of anxiety and depression on imbalanced datasets. The proposed MCoG-LDPSNet can also be incorporated into mobile tools, such as the EmotiZen App, for early and accurate screening of mental health. Ultimately, this integration could empower individuals with timely insights into signs of anxiety and depression, potentially enhancing well-being through internet-delivered cognitive behaviour therapy (iCBT) recommendations and reducing undiagnosed conditions [59,60,61].

3. Materials and Methods

3.1. Data Collection

The first dataset, obtained from Kaggle, contained anonymous statements labelled with mental health statuses such as normal, depression, suicidal, anxiety, stress, bipolar, and personality disorder. The statements (https://www.kaggle.com/code/mesutssmn/sentiment-analysis-for-mental-health/input, last accessed on 30 May 2025) encompassed diverse social media posts, and each entry was classified with a specific mental health status. This study used only the anxiety (3888 statements) and depression (15,404 statements) classes. The Kaggle data was deemed largely clean, and after initial checks no further NLP cleaning techniques were applied to the text. It is worth noting that this study used a tokenizer, which filters out punctuation and converts text to lowercase by default, thereby reducing some noise. Furthermore, deep learning approaches are more resilient and less sensitive to noise than traditional techniques.
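To make the data-preparation step concrete, the following minimal sketch filters the corpus down to the two classes used here; the file name and the column names ("statement", "status") are illustrative assumptions about the Kaggle export rather than verified specifics.

```python
# Minimal sketch: load the Kaggle corpus and keep only the anxiety and depression
# classes. File and column names are assumptions, not the authors' exact pipeline.
import pandas as pd

df = pd.read_csv("Combined Data.csv")
df = df[df["status"].isin(["Anxiety", "Depression"])].dropna(subset=["statement"])
df["label"] = (df["status"] == "Depression").astype(int)   # anxiety = 0, depression = 1

print(df["status"].value_counts())   # expected: ~15,404 depression vs. ~3888 anxiety statements
```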
The proposed MCoG-LDPSNet has been fine-tuned on domain-specific tasks, namely predicting anxiety and depression, using a smaller dataset from Islam et al. [28]. This dataset comprised approximately 7000 Facebook page comments. Comments were labelled as anxiety if they mentioned anxiety and as depression if they mentioned depression; any comments that mentioned neither condition were excluded from further analysis. The final dataset included 2446 comments labelled as anxiety and 337 labelled as depression. This process ensured that only comments relevant to these two mental health states were included, resulting in a focused dataset for analysis.
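A minimal sketch of this keyword-based labelling is given below; the file and column names are hypothetical, and dropping comments that mention both conditions is an assumption, since the original procedure does not specify how such cases were handled.

```python
# Sketch of the keyword-based labelling of Facebook comments described above.
import pandas as pd

def label_comment(text):
    t = str(text).lower()
    has_anx, has_dep = "anxiety" in t, "depression" in t
    if has_anx and not has_dep:
        return "anxiety"
    if has_dep and not has_anx:
        return "depression"
    return None   # mentions neither (or both) conditions -> excluded (assumption)

fb = pd.read_csv("facebook_comments.csv")          # hypothetical file name
fb["label"] = fb["comment"].map(label_comment)
fb = fb.dropna(subset=["label"])                   # ~2446 anxiety, ~337 depression comments
```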
Finally, this study collected primary data through a cohort study to validate the reliability of the proposed MCoG-LDPSNet, deployed in the EmotiZen App, for the prediction and early screening of anxiety and depression.

3.2. Proposed Model

The proposed MCoG-LDPSNet is a modular neural architecture for binary classification, motivated by the neurofinance MCoRNNMCD-ANN and designed to disentangle anxiety and depression signals in text. It consists of two parallel text-encoder pathways, one for anxiety and one for depression. Each pathway embeds tokens into a 64-dimensional space with SpatialDropout1D regularisation, extracts local n-gram features with Conv1D and pooling, and captures sequence context through an orthogonally initialized GRU with PReLU. The final outputs of the two modules are concatenated into a joint representation, which is then passed through a learnable dense layer and the novel Loss-Driven Parametric Swish activation function before a final sigmoid output. This novelty is achieved by embedding the LDPS layer, whose gain parameter β is co-optimized by (1) Focal Loss, which down-weights easy negatives and amplifies gradients on rare emotional cues, driving β higher for minority-class examples [21,22], and (2) the Brier score, which penalizes miscalibration during training to prevent β from overshooting [23]. This dynamic, error-driven nonlinearity is inspired by the brain’s adaptive processing and cortical gain control, which sharpens responses to unexpected stimuli [62]. By situating this LDPS layer immediately after fusion, the proposed network amplifies scarce emotional cues when they are most vulnerable to being overshadowed, yielding an imbalance-aware, intrinsically calibrated model. Figure 2 illustrates the proposed model for MCoG-LDPSNet.

3.2.1. Module 1: Anxiety Text Encoder

The proposed MCoG-LDPSNet model operates on raw textual data on Kaggle representing user expressions or posts. Each input document is first pre-processed and tokenized to convert raw text into a numerical format suitable for neural processing.
The input text is segmented into a sequence of tokens (words, sub-words, or characters) denoted by the following:
$$x_1, \ldots, x_T, \qquad V = 10{,}000, \quad T = 100,$$
where T = 100 is the fixed length of the token sequence per sample. Tokens beyond this length are truncated or padded. V = 10,000 are the vocabulary entries, representing the most frequent tokens in the training corpus.
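A minimal sketch of this tokenization step with the Keras tokenizer mentioned in Section 3.1 is shown below; the variable names follow the illustrative data-loading sketch in Section 3.1 and are assumptions.

```python
# Tokenization sketch with V = 10,000 and T = 100; the tokenizer lowercases text
# and strips punctuation by default, as noted in Section 3.1.
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

V, T = 10_000, 100
texts = df["statement"].tolist()          # filtered statements from the sketch in Section 3.1
y = df["label"].values

tokenizer = Tokenizer(num_words=V)
tokenizer.fit_on_texts(texts)
X = pad_sequences(tokenizer.texts_to_sequences(texts),
                  maxlen=T, padding="post", truncating="post")
```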
The anxiety module embeds tokens into a 64-dimensional space, applies spatial dropout, captures local patterns via 1D convolution, pools the features, and encodes temporal context using an orthogonally initialized GRU with PReLU activation. Finally, a dense layer with SReLU produces the anxiety-specific vector $z_a \in \mathbb{R}^{32}$.
Each token $x_t$ is mapped to a dense vector representation, capturing semantic and syntactic properties:
$$E_t^a = W_e\, e_{x_t},$$
  • Here, $W_e \in \mathbb{R}^{V \times d}$, with $d = 64$, is the embedding matrix, where $d$ is the embedding dimension.
  • $e_{x_t}$ is a one-hot vector for token $x_t$.
  • The result is a sequence of embeddings $E^a = \left(E_1^a, \dots, E_T^a\right)$, with $E_t^a \in \mathbb{R}^{64}$.
To reduce overfitting and encourage robustness, SpatialDropout1D with dropout probability $p = 0.2$ is applied across the embedding features:
$$\tilde{E}_t^a = \mathrm{SpatialDropout1D}\left(E_t^a;\ p = 0.2\right)$$
To detect local n-gram patterns, a 1D convolutional layer with kernel size 3 is applied across the sequence embeddings:
$$C_t^a = \mathrm{ReLU}\left(\tilde{E}^a \ast W_c + b_c\right),$$
where $W_c \in \mathbb{R}^{3 \times d \times 64}$ is the convolutional filter. The ReLU activation introduces non-linearity, enabling the model to learn complex feature representations.
Temporal max-pooling selects the most salient features across the entire sequence length:
$$h_a^{(0)} = \max_t C_t^a, \qquad h_a^{(0)} \in \mathbb{R}^{64}$$
This yields a fixed-size summary vector, independent of sequence length, that emphasizes the strongest activations in each feature channel.
The pooled features are passed through a GRU recurrent layer, initialized with orthogonal weights to enhance gradient flow and stabilize training:
$$h_t^a = \mathrm{PReLU}\left(W_{xh}^a h_a^{(0)} + W_{hh}^a h_{t-1}^a + b_h^a\right), \qquad h_t^a \in \mathbb{R}^{32}$$
The PReLU activation enables the network to adaptively learn the slope of the negative part, thereby further improving its expressiveness.
After temporal encoding, dropout regularization is applied:
$$h^{(1)} = \mathrm{Dropout}\left(h_T^a;\ p = 0.3\right)$$
The final hidden state $h_T^a$ experiences dropout with $p = 0.3$ before being projected to the anxiety-specific embedding:
$$z_a = \mathrm{SReLU}\left(W_d^a h^{(1)} + b_d^a\right), \qquad z_a \in \mathbb{R}^{32}$$
This embedding provides a compact and discriminative representation of anxiety-related text features.
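For illustration, a minimal Keras sketch of this encoder pathway is given below, under stated assumptions: MaxPooling1D downsampling precedes the GRU so that a sequence reaches the recurrent layer, the PReLU is applied to the GRU output, and a PReLU stands in for the SReLU gate, which is not part of core Keras; it is a sketch of the described pipeline, not the authors’ exact implementation.

```python
# Sketch of one encoder pathway (Module 1): Embedding(64) -> SpatialDropout1D(0.2)
# -> Conv1D(64, k=3, ReLU) -> MaxPooling1D -> orthogonal GRU(32) + PReLU
# -> Dropout(0.3) -> Dense(32) with a PReLU standing in for SReLU (assumption).
from tensorflow.keras import layers

def build_encoder(inputs, name):
    x = layers.Embedding(input_dim=10_000, output_dim=64, name=f"{name}_embedding")(inputs)
    x = layers.SpatialDropout1D(0.2)(x)
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.GRU(32, kernel_initializer="orthogonal",
                   recurrent_initializer="orthogonal")(x)
    x = layers.PReLU()(x)
    x = layers.Dropout(0.3)(x)
    z = layers.Dense(32, name=f"{name}_dense")(x)
    return layers.PReLU(name=f"{name}_gate")(z)   # SReLU stand-in (assumption)
```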

3.2.2. Module 2: Depression Text Encoder

The depression encoder mirrors the architecture and operations of the anxiety encoder (Module 1), independently learning depression-specific latent features $z_d \in \mathbb{R}^{32}$. This dual-stream approach allows the proposed MCoG-LDPSNet model to disentangle anxiety- and depression-related signals effectively.
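Using the illustrative encoder helper sketched in Section 3.2.1, the dual-stream layout can be expressed as two independent pathways over the same token sequence (names are assumptions):

```python
# Dual-stream sketch: the same padded token sequence feeds both encoder pathways.
from tensorflow.keras import layers   # build_encoder is the sketch from Section 3.2.1

inp = layers.Input(shape=(100,), name="tokens")
z_a = build_encoder(inp, "anxiety")       # anxiety-specific 32-d embedding
z_d = build_encoder(inp, "depression")    # depression-specific 32-d embedding
```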

3.2.3. Fusion and Loss-Driven Parametric Swish (LDPS)

Anxiety and depression embeddings are concatenated:
$$z_f = \left[z_a; z_d\right] \in \mathbb{R}^{64}$$
This joint representation is transformed and passed through the LDPS activation, which modulates the output gain via a learnable parameter $\beta$:
$$u = \mathrm{Dropout}\left(\mathrm{ParametricSwish}\left(W_f z_f + b_f;\ \beta_0\right);\ p = 0.4\right)$$
The gain parameter $\beta$ is constrained and dynamically modulated as follows:
$$\beta = \mathrm{clip}\left(\exp\left(\log \beta\right),\ \epsilon,\ \beta_{\max} = 5\right),$$
with maximum gain clipped to 5 to prevent numerical instabilities during training.
The output is then computed by gain-adaptive hard-sigmoid gating:
$$v = u \odot \mathrm{hardsigmoid}\left(\beta u\right), \qquad v \in \mathbb{R}^{32}$$
The trainable $\log \beta$ parameter is optimized via a composite loss function:
$$\mathcal{L} = \mathcal{L}_{\mathrm{focal}} + \lambda\, \mathcal{L}_{\mathrm{brier}},$$
  • $\mathcal{L}_{\mathrm{focal}}$ addresses class imbalance by emphasizing complex examples (phasic gain adaptation);
  • $\mathcal{L}_{\mathrm{brier}}$ serves as a calibration regularizer to constrain $\beta$ (tonic gain restraint).
The respective gradients with respect to $\log \beta$ induce a biphasic modulation analogous to neuromodulatory gain control:
  • $\partial \mathcal{L}_{\mathrm{focal}} / \partial \log \beta$ drives rapid $\beta$ boosts, promoting swift, transient gain increases on hard examples;
  • $\partial \mathcal{L}_{\mathrm{brier}} / \partial \log \beta$ applies slower, persistent $\beta$ restraint, maintaining calibration through sustained gain moderation.
This biphasic adaptation, modelled on neuromodulatory mechanisms in biological neural systems, motivated a network design that dynamically handles feature amplification during training. To the authors’ knowledge, LDPS is the first activation function to embed end-to-end gain adaptation directly in the network, with phasic–tonic modulation driven by Focal Loss and the Brier score and inspired by neurobiological neuromodulation.
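To illustrate how such a loss-driven gain can be wired into a network, the sketch below implements a trainable log β that gates the fused features, together with a composite focal-plus-Brier objective; the focal exponent γ, the weight λ, and the initial β are assumptions, since their exact values are not reported here, and the code is a sketch of the idea rather than the authors’ implementation.

```python
# Minimal TensorFlow sketch of the LDPS idea: a trainable log(beta) gates the fused
# features, and a composite focal + Brier loss supplies the phasic/tonic gradients
# that move beta during backpropagation. gamma, lam, and beta_init are assumptions.
import math
import tensorflow as tf
from tensorflow.keras import layers

class LDPS(layers.Layer):
    """Loss-Driven Parametric Swish: v = u * hardsigmoid(beta * u), beta learnable."""
    def __init__(self, beta_init=1.0, beta_max=5.0, eps=1e-3, **kwargs):
        super().__init__(**kwargs)
        self.beta_max, self.eps = beta_max, eps
        self.log_beta = self.add_weight(
            name="log_beta", shape=(),
            initializer=tf.keras.initializers.Constant(math.log(beta_init)),
            trainable=True)

    def call(self, u):
        beta = tf.clip_by_value(tf.exp(self.log_beta), self.eps, self.beta_max)
        return u * tf.keras.activations.hard_sigmoid(beta * u)

def focal_brier_loss(gamma=2.0, lam=0.5):
    """Phasic focal term plus tonic Brier regularizer; both back-propagate into log_beta."""
    def loss(y_true, y_pred):
        y_true = tf.cast(y_true, y_pred.dtype)
        eps = tf.keras.backend.epsilon()
        p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)
        focal = -tf.pow(1.0 - p_t, gamma) * tf.math.log(tf.clip_by_value(p_t, eps, 1.0))
        brier = tf.square(y_pred - y_true)
        return tf.reduce_mean(focal + lam * brier)
    return loss
```

Because log β is an ordinary trainable weight inside the computational graph, both loss terms reach it automatically through backpropagation: large focal gradients on hard minority examples push β up, while the Brier term exerts a steadier downward pull whenever predictions become overconfident.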

3.2.4. Output Layer

The classification output is obtained through a sigmoid activation applied to the LDPS-activated features:
$$\hat{y} = \sigma\left(w_o^{\top} v + b_o\right)$$
This design obviates the need for additional post-hoc calibration, yielding outputs that are inherently calibrated and robust to class imbalance.
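Combining the illustrative pieces from the earlier sketches, the end-to-end assembly could look as follows; the helper names (build_encoder, LDPS) refer to those sketches and are assumptions rather than the released code.

```python
# Assembly sketch: fuse the two 32-d embeddings, apply the LDPS gating, and attach
# the sigmoid head; `inp`, `z_a`, `z_d`, and `LDPS` come from the sketches above.
from tensorflow.keras import Model, layers

z_f = layers.Concatenate()([z_a, z_d])            # 64-d joint representation
h = layers.Dense(32)(z_f)
h = LDPS()(h)                                     # gain-adaptive gating (Section 3.2.3)
h = layers.Dropout(0.4)(h)
y_hat = layers.Dense(1, activation="sigmoid")(h)
model = Model(inp, y_hat, name="MCoG_LDPSNet")
```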

3.2.5. Transfer Learning Procedure

This research utilizes transfer learning to fine-tune the proposed MCoG-LDPSNet model, employing a Facebook dataset that contains indicators of anxiety and depression [28]. This approach leverages broader linguistic representations that MCoG-LDPSNet was trained on using large-scale data while adapting to domain-specific symptomatology of anxiety and depression, resulting in a robust MCoG-LDPSNet model capable of nuanced mental health prediction for anxiety and depression.
Parameters for the anxiety and depression encoders are pre-trained on a large source dataset (Kaggle) by minimizing the total loss:
$$\theta_{\mathrm{anx}}^{*} = \arg\min_{\theta_{\mathrm{anx}}} \mathcal{L}_T\left(f_{\mathrm{anx}}\left(x; \theta_{\mathrm{anx}}\right), y_T\right)$$
$$\theta_{\mathrm{dep}}^{*} = \arg\min_{\theta_{\mathrm{dep}}} \mathcal{L}_T\left(f_{\mathrm{dep}}\left(x; \theta_{\mathrm{dep}}\right), y_T\right)$$
where $f_{\mathrm{anx}}$ and $f_{\mathrm{dep}}$ denote the anxiety and depression encoder modular networks, respectively, and $\mathcal{L}_T$ is the combined loss on the source dataset $T$.
Transfer learning is performed by initializing the parameters with the pretrained weights and fine-tuning the higher-level layers:
$$\theta_{\mathrm{anx}}^{(0)} = \theta_{\mathrm{anx}}^{*}, \qquad \theta_{\mathrm{dep}}^{(0)} = \theta_{\mathrm{dep}}^{*}$$
During fine-tuning, the embedding layers are frozen to preserve learned representations, while higher-level layers, including the orthogonal GRU modules, dropout, and dense layers, remain trainable:
$$\theta_T = \left\{\theta_{\mathrm{frozen}}, \theta_{\mathrm{trainable}}\right\}, \qquad \theta_{\mathrm{frozen}} = \left\{\theta_{\mathrm{anx}}, \theta_{\mathrm{dep}}\right\}$$
The fine-tuning objective remains the same:
$$\theta_{\mathrm{anx}}^{*} = \arg\min_{\theta_{\mathrm{anx}}} \mathcal{L}_T\left(f_{\mathrm{anx}}\left(x; \theta_{\mathrm{anx}}\right), y_T\right)$$
$$\theta_{\mathrm{dep}}^{*} = \arg\min_{\theta_{\mathrm{dep}}} \mathcal{L}_T\left(f_{\mathrm{dep}}\left(x; \theta_{\mathrm{dep}}\right), y_T\right)$$
The final prediction for a new input is computed by concatenating the anxiety and depression embeddings:
$$z = \mathrm{Concatenate}\left(f_{\mathrm{anx}}\left(x; \theta_{\mathrm{anx}}^{*}\right), f_{\mathrm{dep}}\left(x; \theta_{\mathrm{dep}}^{*}\right)\right)$$
which is passed to the output-layer classifier
$$\hat{y} = g\left(z; \theta_{\mathrm{out}}\right)$$
where the output-layer parameters are initialized with mean 0 and variance $\sigma^2$,
$$\theta_{\mathrm{out}}^{(0)} \sim \mathcal{N}\left(0, \sigma^2\right),$$
and optimized according to
$$\theta_{\mathrm{out}}^{*} = \arg\min_{\theta_{\mathrm{out}}} \mathcal{L}_T\left(g\left(z; \theta_{\mathrm{out}}\right), y_T\right)$$
where $\theta_{\mathrm{out}}^{*}$ is learned by minimizing the loss on the target data.
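A minimal sketch of this fine-tuning step is shown below, assuming the Keras model and loss from the earlier sketches; the Facebook arrays (X_fb, y_fb), the validation split, and the epoch count are assumptions.

```python
# Fine-tuning sketch: freeze the embedding layers so source-domain representations
# are preserved, keep the GRU/dropout/dense layers trainable, then re-compile and fit.
import tensorflow as tf

for layer in model.layers:
    if isinstance(layer, tf.keras.layers.Embedding):
        layer.trainable = False

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss=focal_brier_loss(),
              metrics=[tf.keras.metrics.AUC(name="auroc")])
model.fit(X_fb, y_fb, validation_split=0.2, batch_size=128, epochs=10)   # epochs assumed
```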

3.2.6. Hyperparameters in the Proposed MCoG-LDPSNet

The architecture and hyperparameters of the proposed MCoG-LDPSNet model, although inspired by the MCoRNNMCD-ANN, were carefully selected based on a combination of solid theoretical foundations and hands-on experimentation in the fields of NLP and deep learning, particularly in mental health prediction applications. This study employed a train, validation, and test split of 60%, 20%, and 20%, respectively.
  • Embeddings: An embedding size of 64 was chosen to balance the trade-off between semantic expressiveness and model complexity. Prior works have exhibited that neural embeddings effectively capture semantic relationships while minimizing overfitting [63,64].
  • SpatialDropout: A dropout rate of 0.2 was applied to the embedding outputs to regularize the model early in the feature extraction pipeline. SpatialDropout prevents the co-adaptation of embedding features across sequence timesteps, thereby improving generalization, as demonstrated in sequence modelling [65].
  • Orthogonal GRU Units: The recurrent orthogonal units in the GRU were set to 32 to adequately capture temporal dependencies in the input sequence without introducing excessive model complexity. This choice was informed by both the sequence length (T = 100) and the dataset size, with experiments showing that larger GRU sizes resulted in marginal gains but increased the risk of overfitting [66].
  • Dropout: Dropout rates of 0.3 in the dense layers and 0.4 in the fusion layers were chosen to mitigate overfitting in higher-level representations. These rates are consistent with standard dropout values in neural architectures for text classification tasks, and empirical tuning confirmed their effectiveness in stabilizing training and enhancing generalization [67].
  • LDPS Gain Parameters: The LDPS activation parameters were initialized to moderate gain values to ensure stable gradient flow and model convergence. The maximum gain was clipped to 5 with a small epsilon of 10−3 to prevent numerical instabilities during training. To address class imbalance and calibration simultaneously, this study employed a Focal Loss, which emphasizes hard and minority examples, in conjunction with the Brier score, which encourages calibrated probability outputs [68].
  • Batch Size and Learning Rate: To provide a stable and efficient training process, a learning rate of 0.001 and a batch size of 128 were empirically determined, consistent with best practices in deep learning for moderate-sized datasets [69]; the resulting configuration is illustrated in the training sketch below.
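Under these settings, the pretraining configuration can be sketched as follows; the stratified split, the random seed, and the epoch count are assumptions, while the 60/20/20 proportions, the learning rate, and the batch size follow this section.

```python
# Training-configuration sketch: 60/20/20 stratified split, Adam (lr = 0.001),
# batch size 128; `X`, `y`, `model`, and `focal_brier_loss` come from earlier sketches.
import tensorflow as tf
from sklearn.model_selection import train_test_split

X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4,
                                                  stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5,
                                                stratify=y_tmp, random_state=42)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss=focal_brier_loss(),
              metrics=[tf.keras.metrics.AUC(name="auroc")])
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          batch_size=128, epochs=20)   # epoch count not reported; assumed
```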

4. Results and Discussion

4.1. Technical Specifications and Performance Evaluation

To compare the proposed MCoG-LDPSNet with the benchmark computational models, objective evaluation classification metrics such as accuracy, F1 score, precision, recall, and specificity, as well as AUROC and G-mean, were considered the most appropriate metrics for the classification task. Moreover, the baseline models compared with the proposed MCoG-LDPSNet were chosen based on the limitations in Section 1 and the future research directions in Section 2, which introduced a variety of machine learning algorithms like GLM, LR, SVM, NB, DT, RF, AdaBoost, and XGBoost applied to mental health predictions. Likewise, these models are used in Section 2, along with CNNs, LSTMs, BERT, and the state-of-the-art DeprMVM and CNN-BiLSTM-ATTN (CBA). The data from Kaggle was used to create the train, test, and validation sets for 19 classification algorithms, specifically for anxiety (class 0) and depression (class 1). All computational models in the comparison used the same data split of 60%, 20%, and 20%. Moreover, the benchmark models used the hyperparameters reported in prior studies on classification tasks related to mental health. Each model was trained and evaluated 50 times with distinct random seeds on an Intel® Core™ i7-9750H (Hyper-Threading Technology) with 16 GB RAM, a 512 GB PCIe SSD, and an NVIDIA GeForce RTX 2070 8 GB. The Anaconda computational environment, which includes Keras and TensorFlow, was utilized in Python (version 3.11) to conduct the experiments. Table 1 presents the results on the Kaggle dataset using seven performance metrics: accuracy, F1, precision, recall, specificity, geometric mean (G-mean), and area under the ROC curve (AUC), along with the total running time over the 50 runs. Table 2 presents the standard deviation of the algorithms.
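For reference, the two headline metrics can be computed on the held-out test split as sketched below; the 0.5 decision threshold is an assumption.

```python
# Evaluation sketch: AUROC and G-mean on the held-out test split.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_prob = model.predict(X_test).ravel()
y_pred = (y_prob >= 0.5).astype(int)            # threshold assumed

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
sensitivity = tp / (tp + fn)                    # recall on the depression class (label 1)
specificity = tn / (tn + fp)                    # true-negative rate on the anxiety class
print(f"AUROC = {roc_auc_score(y_test, y_prob):.4f}, "
      f"G-mean = {np.sqrt(sensitivity * specificity):.4f}")
```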
As shown in Table 1, the proposed MCoG-LDPSNet achieved the best objective classification metrics, including G-mean and AUROC. It also outperformed the MCoRNNMCD-ANN, which was nevertheless the second-best performing model. The GRU provided notably accurate predictions, while the LSTM demonstrated its ability to retain information over extended periods and analyse complex data patterns, as reflected in its metric outcomes. The state-of-the-art CNN-BiLSTM-ATTN (CBA) also achieved strong classification metrics, demonstrating a sound balance between precision and recall. These models are effective, as indicated by their high AUC and G-mean scores.
Transformers such as BERT performed substantially worse on AUC and G-mean, with by far the longest runtime, which highlights both the difficulty of the task and the practical constraints of using transformers for low-latency mobile screening. Simpler algorithms, such as the Naïve Bayes models (multinomial and Bernoulli), ran much faster but yielded far lower performance, making them the least promising option. Relative to Bernoulli NB, the proposed MCoG-LDPSNet achieved an 83.3% higher AUC, and relative to SVM it achieved a 287% higher G-mean.
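For clarity, these figures are relative percentage differences with respect to the weaker model; assuming that interpretation, the short check below reproduces them from the Table 1 means.

```python
# Consistency check of the quoted relative gains against the Table 1 means.
bernoulli_nb_auc, mcog_auc = 0.5411, 0.9920
svm_gmean, mcog_gmean = 0.2441, 0.9451

print(f"AUC gain over Bernoulli NB: {(mcog_auc - bernoulli_nb_auc) / bernoulli_nb_auc:.1%}")  # ~83.3%
print(f"G-mean gain over SVM:       {(mcog_gmean - svm_gmean) / svm_gmean:.1%}")              # ~287.2%
```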
The top-performing five models strike a successful balance between precision and recall, often utilizing advanced neural network structures to manage complex data. Table 3 provides a more in-depth comparison of the top-performing models.
Table 3 demonstrates that the proposed MCoG-LDPSNet model outperforms the top computational approaches. MCoG-LDPSNet achieved a superior AUC of 0.9920 and a balanced G-mean of 0.9451, with a runtime of 1237 s. These objective metrics set a high benchmark for evaluating the classification of real-world computational architectures.
The proposed MCoG-LDPSNet outperformed the MCoRNNMCD-ANN model, which showed a 0.49% lower AUC and a 1.76% lower G-mean than the MCoG-LDPSNet model. The MCoRNNMCD-ANN runtime was 86% slower than the MCoG-LDPSNet.
Similarly, the proposed MCoG-LDPSNet outperformed the CNN-BiLSTM-ATTN (CBA) model, which indicated a 0.92% lower AUC and a 2.35% lower G-mean than the MCoG-LDPSNet. In terms of runtime, it was 45% slower than MCoG-LDPSNet, making it less attractive for applications where processing time is critical.
MCoG-LDPSNet outperformed the GRU, which exhibited a 1.27% lower AUC and a 2.70% lower G-mean than MCoG-LDPSNet. GRU runtime was also 95% slower than MCoG-LDPSNet.
While offering reasonable performance, the LSTM model falls short of the proposed MCoG-LDPSNet: its AUC is 3.18% lower and its G-mean 7.01% lower, and its runtime is 105% slower, suggesting that LSTM may be less suitable for time-sensitive tasks.
In summary, the proposed MCoG-LDPSNet remains the best overall model. The MCoRNNMCD-ANN, CNN-BiLSTM-ATTN (CBA), GRU, and LSTM models underperform it on the objective classification metrics and require substantially longer processing times. MCoG-LDPSNet also improved the computational running time compared with MCoRNNMCD-ANN, demonstrating both its predictive power on imbalanced data and a marked reduction in training cost. These findings indicate that MCoG-LDPSNet balances predictive and computational efficiency, making it well suited to mental health prognosis, particularly in the context of anxiety and depression. Finally, as shown in Table 2, MCoG-LDPSNet's very low standard deviations (AUC ± 0.0010, G-mean ± 0.0100) demonstrate exceptional consistency compared with the other models. Figure 3 illustrates the ROC curves of the top five models, and Figure 4 displays their confusion matrices, in which the proposed MCoG-LDPSNet improves on the MCoRNNMCD-ANN: it correctly identifies more anxiety cases (709 vs. 686), misclassifies fewer anxiety cases as depression (68 vs. 91), misclassifies fewer depression cases as anxiety (62 vs. 69), and correctly identifies more depression cases (3018 vs. 3011), indicating better overall performance. The LSTM model performed the least effectively, particularly in correctly identifying anxiety cases.

4.2. Transfer Learning

Following its strong source-domain performance, the proposed MCoG-LDPSNet was fine-tuned on a smaller, domain-specific dataset [28]. Transfer learning was crucial in our workflow because it enabled MCoG-LDPSNet to leverage rich, generalizable representations learned from a large source corpus, capturing diverse linguistic and emotional patterns, before being fine-tuned on a smaller, domain-specific dataset for anxiety and depression. By freezing the lower-level embedding parameters and updating only the higher-order convolutional, recurrent, and LDPS layers, the model retained its robust feature extractors while adapting efficiently to new, scarce data (see the sketch below). This strategy mitigated overfitting, accelerated convergence, and yielded significantly better performance in low-resource screening scenarios than training from scratch.
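A minimal Keras sketch of this freezing strategy is shown below. It assumes the pre-trained model exposes standard Embedding layers, and the loss shown is a generic placeholder rather than the paper's focal-plus-Brier objective; the function name and learning rate are illustrative assumptions.

```python
import tensorflow as tf

def prepare_for_finetuning(pretrained_model, learning_rate=1e-4):
    """Freeze embedding layers, keep higher-order layers trainable, then recompile.
    `pretrained_model` is assumed to be the source-domain network."""
    for layer in pretrained_model.layers:
        # Only the lower-level embedding layers are frozen; conv/recurrent/LDPS layers stay trainable.
        layer.trainable = not isinstance(layer, tf.keras.layers.Embedding)
    pretrained_model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate),
        loss="binary_crossentropy",
        metrics=[tf.keras.metrics.AUC(name="auroc")])
    return pretrained_model
```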
Transfer learning achieved highly competitive performance across all evaluated metrics. With an average accuracy of 97.31% ± 0.23%, a precision of 99.29% ± 0.12%, and an F1 score of 98.46% ± 0.13%, the system demonstrated strong discriminative capability in distinguishing between anxiety and depression narratives. Moreover, the high recall (97.65%) and specificity (97.59%) indicate that the model maintains a balanced capacity to capture both true positives and true negatives, which is critical in sensitive mental health screening contexts. The AUROC further confirms this robustness, averaging 0.9937 ± 0.0004, indicating near-perfect separability between the two emotional states. From a mental health informatics perspective, these findings highlight the utility of NLP-driven approaches in the early screening of affective disorders, particularly when combined with transfer learning. Figure 5 illustrates the confusion matrix, showing very few misclassifications between the two classes, along with the ROC curve of the fine-tuned MCoG-LDPSNet.

5. Practical Implications of the Proposed MCoG-LDPSNet

To evaluate the real-world impact of integrating the MCoG-LDPSNet model into the EmotiZen App, a 12-week cohort study was conducted from 1 January to 31 March 2025. All data were collected during routine app usage and fully de-identified upon extraction. Only data from EmotiZen users who provided consent were included, and all data were handled in accordance with the principles outlined in the Declaration of Helsinki. Users had previously agreed to the fully anonymized research use of their in-app responses via EmotiZen's terms of service, rendering this minimal-risk research exempt from additional institutional review. Given the nature of this study and its alignment with GDPR standards, no further ethical approval was required.

5.1. Participants Identification and Screening

To validate the EmotiZen App and the proposed model’s realism in early screening of anxiety and depression, all eligible users were asked at study entry (January 2025) to complete both of the following:
  • The standard multiple-choice PHQ-4 (MC PHQ-4; fixed-response, 0–3 per item);
  • The app’s free-text PHQ-4 (open-ended, natural language).
These assessments were completed within the same onboarding window (the first study week), allowing for a direct comparison of the model’s mapped scores with the standard PHQ-4.
Moreover, the goal was to determine whether the proposed algorithm’s early predictions, integrated into the app, combined with the new features of the EmotiZen App, would enhance their mental well-being. This study observed two groups:
  • Group A: Standard EmotiZen experience (anxiety and depression predictions, visualizations and iCBT recommendations by severity);
  • Group B: Enhanced experience (anxiety and depression predictions, visualizations, and iCBT recommendations by severity, plus engagement features: (i) weekly push notifications, (ii) the option to choose their recommendations in a preferred order, (iii) a progress bar to track their advancement, and (iv) an incremental reward screening).
The cohort selection was made based on demographic metadata (age, gender, and region tag) and applied the following criteria:
Inclusion (January 2025)
  • Age ≥ 18 years (based on self-reported birth year metadata);
  • Residence in Hessen (Wiesbaden) or Rhineland-Palatinate (Mainz);
  • Prior EmotiZen use before 1 January 2025 (inferred from any login event in December 2024 or earlier);
  • Proficiency in English (confirmed by the in-app language set to English);
  • Total modified PHQ-4 score between 3 and 8 (mild to moderate) on any January submission.
Exclusion (January 2025)
  • Total modified PHQ-4 ≥ 9 (“severe”);
  • Any free-text response containing self-harm keywords (flagged by the NLP pipeline);
  • Self-reported ongoing inpatient psychiatric care or recent hospitalization (<30 days);
  • Language preference not set to English;
  • No recorded login from a smartphone or computer during January 2025 (indicating unreliable access).
Of 82 users completing a January PHQ-4, 25 were excluded (12 for severe scores, 5 for self-harm risk, 3 for intensive treatment, and 5 for language/access). The remaining 57 users comprised the baseline cohort. By 31 March 2025, seven had not completed any Week 12 PHQ-4 and were excluded from the primary paired analysis, yielding a final analytic sample of 50 users.

5.2. EmotiZen App Components

Modified PHQ-4 Screening
Instead of fixed multiple-choice items, EmotiZen presents the four PHQ-4 questions as open-ended prompts, allowing users to express nuances in natural language. Each of the four responses is scored 0–3 (two anxiety items and two depression items), yielding a total score of 0–12. Users answered the following four open-ended prompts, corresponding to the standard PHQ-4 items, in free text:
  • “Over the past two weeks, how often have you felt relaxed versus nervous, anxious, or on edge?”;
  • “Over the past two weeks, how often have you felt you could stop worrying or control your worries?”;
  • “Over the past two weeks, how often have you felt optimistic versus depressed or hopeless?”;
  • “Over the past two weeks, how often have you felt engaged and motivated versus having little interest?”.
Each response was analyzed using the AFINN sentiment lexicon, which assigns numerical sentiment scores to words and phrases [70]. The AFINN-derived feature vectors for each PHQ-4 item were fed into the proposed MCoG-LDPSNet model, which was originally trained to predict and classify anxiety and depression with the strong results reported in Section 4. The model's output probabilities are mapped by a fixed, rule-based thresholding scheme to four item scores (0–3 each), which are summed to yield a total PHQ-4 score (0–12); a minimal sketch of this mapping is given after the list below. The total PHQ-4 score is mapped to the standard severity bands:
  • None: 0–2;
  • Mild: 3–5;
  • Moderate: 6–8;
  • Severe: 9–12.
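The following is a simplified, hypothetical sketch of this rule-based mapping. The item-score thresholds shown are illustrative assumptions (the app uses the authors' pre-specified probability cut-points), whereas the severity bands follow the list above.

```python
# Hypothetical cut-points for illustration only.
ITEM_THRESHOLDS = (0.25, 0.50, 0.75)

def item_score(prob):
    """Map one item-level output probability to a 0-3 PHQ-4 item score."""
    return sum(prob >= t for t in ITEM_THRESHOLDS)

def severity_band(total):
    """Map the summed PHQ-4 total (0-12) to the severity bands listed above."""
    if total <= 2:
        return "None"
    if total <= 5:
        return "Mild"
    if total <= 8:
        return "Moderate"
    return "Severe"

def score_phq4(item_probs):
    scores = [item_score(p) for p in item_probs]   # four items, 0-3 each
    total = sum(scores)
    return scores, total, severity_band(total)

print(score_phq4([0.1, 0.4, 0.6, 0.8]))   # -> ([0, 1, 2, 3], 6, 'Moderate')
```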
After the MCoG-LDPSNet prediction and PHQ-4 mapping (model outputs converted to item scores using pre-specified probability thresholds), the EmotiZen App assigns the user to a severity band and suggests one text-based iCBT task per week within that band. Tasks are selected from a fixed, expert-defined hierarchy, i.e., an ordered, clinician-ranked list matched to MCoG-LDPSNet outputs. The app filters the hierarchy for tasks eligible for the user, based on symptom-target matching to the user's item-level profile and current severity, and then selects the highest-priority weekly task (a simplified, hypothetical selection sketch follows below). For example, breathing exercises may be recommended for the "Mild" band. EmotiZen offers text-based iCBT recommendations based on their potential effectiveness in improving mental well-being [59,60,61].
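The sketch below illustrates this selection logic only; the hierarchy entries are invented for illustration and do not reflect EmotiZen's actual clinician-ranked list.

```python
# Invented example entries; the real clinician-ranked hierarchy is not public.
TASK_HIERARCHY = [
    # (priority, severity band, symptom target, weekly task)
    (1, "Mild", "anxiety", "Guided breathing exercise"),
    (2, "Mild", "depression", "Behavioural activation planner"),
    (3, "Moderate", "anxiety", "Cognitive restructuring worksheet"),
    (4, "Moderate", "depression", "Thought diary"),
]

def weekly_task(band, dominant_symptom):
    """Filter the hierarchy by band and symptom target; return the highest-priority task."""
    eligible = [t for t in TASK_HIERARCHY if t[1] == band and t[2] == dominant_symptom]
    return min(eligible)[3] if eligible else None   # lowest priority number wins

print(weekly_task("Mild", "anxiety"))   # -> Guided breathing exercise
```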

5.3. Timeline and Follow-Up

Baseline (Weeks 1–4, January):
  • Both standard PHQ-4 and EmotiZen App questions were completed during onboarding.
  • Demographics recorded.
Intervention (Weeks 5–8, February):
  • Cohort A received screening for anxiety and depression and weekly iCBT delivery.
  • Cohort B additionally received the engagement features: (i) weekly push notifications, (ii) the option to choose their recommendations in a preferred order, (iii) a progress bar to track their advancement, and (iv) an incremental reward screening.
Follow-Up (Weeks 9–12, March):
  • Continued weekly iCBT and PHQ-4, final Week 12 PHQ-4 endpoint, Week 12 satisfaction survey.
  • Fifty users were followed for the whole 12-week period.

5.4. Statistical Validation and Interpretation

Between-group differences were tested with independent-samples t-tests, which compare the mean scores of approximately normally distributed variables from two independent groups. Within each arm, paired t-tests assessed whether the pre- to post-feature changes were statistically significant, and Cohen's d was calculated for each comparison to quantify the magnitude of effects. Table 4 shows the participant demographics per group.
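A minimal SciPy sketch of the between-group procedure is given below. The per-participant scores themselves are not public, so this only mirrors the reported analysis (Student's t-test with df = n₁ + n₂ − 2 and a pooled-SD Cohen's d); the function name is an assumption.

```python
import numpy as np
from scipy import stats

def between_group_comparison(group_a, group_b):
    """Independent-samples t-test (equal variances, df = n1 + n2 - 2) with pooled-SD Cohen's d."""
    a, b = np.asarray(group_a, dtype=float), np.asarray(group_b, dtype=float)
    t, p = stats.ttest_ind(a, b, equal_var=True)
    n1, n2 = len(a), len(b)
    pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
    d = (b.mean() - a.mean()) / pooled_sd   # negative d => lower (better) scores in Group B
    return t, p, d
```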
As observed in Table 4, the mean age difference is slight (33.6 vs. 35.2 years), and gender distributions are nearly identical (Group A: 60% F; Group B: 64% F). Matching on education and employment was also confirmed, ensuring that any downstream effects are unlikely driven by baseline demographic imbalances.
Moreover, Table 5 shows the outcomes of task performance for each group.
Based on Table 5, we observed that Group B showed a larger total drop (1.08 points) than Group A (0.80 points). Subscale improvements mirror this: anxiety decreased by 0.44 points versus 0.20 points, and depression decreased by 0.64 points versus 0.60 points. Task completion was substantially higher in Group B (85% vs. 65%), suggesting that greater engagement with deep learning-ranked recommendations contributed to the additional 0.28-point improvement on the NLP (PHQ-4).
The post-features between group comparisons are illustrated in Table 6.
Table 6 reports the week-12 between-group comparisons, conducted with independent-samples t-tests (df = n₁ + n₂ − 2 = 48). All differences were statistically significant: anxiety t(48) = 2.48, p = 0.017 (Cohen's d = −0.70, medium-to-large); depression t(48) = 3.38, p = 0.001 (Cohen's d = −0.96, large); and total PHQ-4 t(48) = 6.35, p < 0.001 (Cohen's d = −1.80, very large). Group B had lower (better) week-12 scores. These results indicate that the enhanced engagement features in Group B were associated with clinically meaningful and statistically significantly greater symptom reductions than the standard app features in this cohort. The final improvements in outcomes are presented in Table 7.
Group B's total drop of 1.08 points exceeds Group A's 0.80 points by 0.28 points. The anxiety improvement is more than double (0.44 vs. 0.20), while the depression gain is marginally higher (0.64 vs. 0.60). These figures suggest that the data-driven, engagement-optimized experience produced more robust symptom relief across domains. The NLP (PHQ-4) refers to the set of open-ended questions that users answer via the EmotiZen App.
Finally, Figure 6 illustrates the mean percentage reductions in PHQ-4 total and subscale scores from baseline to week 12 for Groups A and B, calculated from the group means in Table 5. Group A’s PHQ-4 total score decreased from 3.92 to 3.12, an absolute change of 0.80, representing a 20.4% reduction, while Group B’s score declined from 3.08 to 2.00, an absolute change of 1.08, corresponding to a 35.1% reduction. For the anxiety subscale, Group A’s score dropped from 1.84 to 1.64 (Δ = 0.20, −10.9%), whereas Group B’s declined from 1.56 to 1.12 (Δ = 0.44, −28.2%). On the depression subscale, Group A showed a decrease from 2.08 to 1.48 (Δ = 0.60, −28.8%), and Group B from 1.52 to 0.88 (Δ = 0.64, −42.1%). Across all measures, Group B demonstrated consistently greater relative reductions in symptoms than Group A, indicating a more substantial improvement in both anxiety and depression over the 12 weeks.
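These percentage reductions can be reproduced directly from the Table 5 group means, for example:

```python
def pct_reduction(baseline, post):
    """Relative reduction from baseline, as plotted in Figure 6."""
    return 100.0 * (baseline - post) / baseline

# Group means taken from Table 5.
for label, pre, post in [("A total", 3.92, 3.12), ("B total", 3.08, 2.00),
                         ("A anxiety", 1.84, 1.64), ("B anxiety", 1.56, 1.12),
                         ("A depression", 2.08, 1.48), ("B depression", 1.52, 0.88)]:
    print(f"{label}: {pct_reduction(pre, post):.1f}%")   # 20.4, 35.1, 10.9, 28.2, 28.8, 42.1
```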

5.5. Post-Hoc Correlation Analysis of the Proposed MCoG-LDPSNet Predictions vs. Mental Health Professional-Administered PHQ-4

To further validate the predictive performance of the proposed MCoG-LDPSNet deployed in the EmotiZen App, we computed Pearson correlations between the model-predicted subscale and total scores (via the app) and contemporaneous PHQ-4 scores administered by mental health professionals, across all participants and within each study group. Table 8 presents the correlations between the MCoG-LDPSNet predictions and the standard PHQ-4.
The correlations in Table 8 were high (total r ≈ 0.974; anxiety r ≈ 0.976; depression r near 1.000), providing compelling face validity for the mapping in this cohort analysis. The near-perfect concordance of the proposed model with gold-standard ratings by mental health experts suggests the following:
  • Semantic fidelity: Open-ended responses retain the same severity information as fixed-choice items when processed through the proposed MCoG-LDPSNet architecture.
  • Clinical interchangeability: In routine use, EmotiZen’s AI-driven scores can reliably substitute for in-person PHQ-4 administration, enabling more scalable and user-friendly screening.
  • Robustness to engagement differences: High correlations in both cohorts confirm that the extra app features do not distort the model's predictive accuracy.
Combined with the primary analyses showing significant symptom reductions correlated with app engagement, this post-hoc correlation underscores the dual utility of EmotiZen as an accurate digital screener capable of improving mental health well-being.
Figure 7 shows the near-unity correlation values across all panels, confirming that the proposed MCoG-LDPSNet algorithm replicates mental health PHQ-4 assessments with exceptional fidelity. This strong evidence supports the deployment of EmotiZen’s free-text screening as a valid proxy for the traditional PHQ-4 in both research and real-world mental health workflows.
Following the results presented in Section 4 and Section 5, both of our guiding questions are conclusively addressed:
  • Detection Efficacy: MCoG-LDPSNet demonstrated a marked improvement in identifying anxiety and depression under extreme class imbalance, achieving a 4.5% increase in AUROC and a 7.01% gain in G-mean over leading benchmarks, including GLM, XGBoost, DeprMVM, CNN-BiLSTM-ATTN, and BERT. These gains confirm that our Loss-Driven Parametric Swish activation and adaptive gain control mechanism substantially enhance sensitivity to minority-class patterns without sacrificing overall calibration or robustness.
  • Mobile Feasibility: When embedded in the EmotiZen App and fine-tuned via transfer learning on social media user data, MCoG-LDPSNet not only sustained its predictive accuracy in a live setting but also scaled seamlessly across diverse user profiles. The enhanced version of EmotiZen, which leverages on-demand screening, personalized iCBT recommendations, and engagement tools, yielded higher task completion rates (85% vs. 65%) and greater symptom reduction (1.08 vs. 0.80 total NLP-PHQ-4 points). These developments underscore that incorporating the proposed MCoG-LDPSNet into the EmotiZen App significantly improves the real-time identification of anxiety and depression, personalization, and user engagement, thereby validating its practical utility for scalable, frontline mental health support.

5.6. Ethical Considerations

This study was designed with the principle that technology should serve to augment, not supplant, human judgment and care. By embedding the proposed MCoG-LDPSNet model within the EmotiZen App, we ensured that all predictions of anxiety and depression severity remained transparent and interpretable to both users and clinicians. At every step, users retained control over their data and the subsequent iCBT recommendations: They could review, modify, or override the app’s suggestions and were free to opt in or opt out of any feature. To ensure that users do not rely exclusively on the algorithm without also receiving guidance on mental health, automated sentiment scoring and thresholding were combined with concise instructions on how to use the app and links to expert resources. This approach emphasizes the collaborative use of intelligence and user engagement to improve human understanding while preserving user autonomy and privacy.

6. Conclusions

This work introduces the MCoG-LDPSNet, a novel variation of MCoRNNMCD-ANN and a brain-inspired, adaptive-gain architecture specifically designed to overcome class-imbalance pitfalls that plague existing GLM, XGBoost, DeprMVM, CNN-BiLSTM-ATTN, and transformer-based methods applied to many mobile apps and platforms. By integrating a novel, learnable β-parameter in the Loss-Driven Parametric Swish layer, calibrated through confidence-aware loss signals, MCoG-LDPSNet dynamically reshaped its activation to enhance minority-class sensitivity.
In rigorous head-to-head benchmarks, the proposed MCoG-LDPSNet achieved an 83.3% relative AUC gain over models such as Bernoulli NB and an outstanding 287% relative G-mean improvement over SVM. Transformers such as BERT performed substantially worse on these metrics, with an extensive runtime of 630,400 s, which highlights both the difficulty of the task and the practical constraints of using transformers for low-latency mobile screening. Furthermore, this study did not rely on larger models pre-trained on heterogeneous web-scale corpora, whose content, biases, and licence conditions are often opaque; relying on such weights can introduce unknown representational priors and governance complications, which are especially important in sensitive mental health applications. Against the top five models, the proposed MCoG-LDPSNet outperformed them all, including the second-best model of this study, our previous MCoRNNMCD-ANN, by 0.49% in AUC and 1.76% in G-mean. Notably, the MCoG-LDPSNet runtime was 86% faster than that of the MCoRNNMCD-ANN, demonstrating an improvement not only in performance but also in speed, making it more sustainable and cost-effective in computational time.
When deployed in the EmotiZen App and fine-tuned via transfer learning on real-world user data, the proposed MCoG-LDPSNet model delivered highly reliable screening, resulting in meaningful improvements in user engagement and symptom reduction. These results demonstrate that MCoG-LDPSNet not only pushes the boundaries of deep learning for mental health in terms of technology but also has excellent potential for more scalable, on-demand, and equitable screening, which would enable earlier intervention and better outcomes for various populations.
However, limitations must be acknowledged. First, the sample during the cohort analysis was relatively small (n = 50) and geographically constrained to two German states, which may limit generalizability. Second, reliance on self-reported, free-text responses introduces potential biases (e.g., social desirability) that may affect the model’s mapping accuracy. Future research should involve larger, more diverse cohorts and randomized controlled trials to confirm efficacy across different cultures and languages. Extending this study, we also envision adaptive interventions that are dynamically tailored not only to anxiety and depression but also to other diseases of the central nervous system, like multiple sclerosis. Ultimately, integrating behavioural data, such as activity patterns, could enrich the model’s context awareness, enabling personalized digital mental health care that empowers humanity.

Author Contributions

C.B. conceptualized, designed, performed the experiments, and developed the proposed Modular Convolutional orthogonal Gated Loss-Driven Parametric Swish Network (MCoG-LDPSNet) and the baselines and replicated benchmark algorithms; M.A., M.S., E.J. and A.P. provided guidance, review, editing, and direction for the research and evaluation. All authors significantly contributed to the writing and review. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The Kaggle mental health sentiment corpus (https://www.kaggle.com/code/mesutssmn/sentiment-analysis-for-mental-health/input, last accessed on 30 May 2025) and the Islam et al. dataset [28] are publicly accessible. By contrast, our 12-week EmotiZen cohort data cannot be shared due to participant anonymity and the sensitivity of health information, as mandated by applicable data protection regulations.

Conflicts of Interest

Christos Bormpotsis is the founder and owner of EmotiZen GmbH. He is also the sole developer of the Modular Convolutional orthogonal Gated Loss-Driven Parametric Swish Network (MCoG-LDPSNet) proposed model in this study. EmotiZen GmbH developed and deployed the EmotiZen App, its proprietary tool used in this study, and provided the de-identified cohort data. EmotiZen GmbH was involved in study deployment, data collection, and data analysis. The authors performed manuscript preparation. The remaining authors declare no conflicts of interest.

References

  1. Fan, Y.; Fan, A.; Yang, Z.; Fan, D. Global burden of mental disorders in 204 countries and territories, 1990–2021: Results from the global burden of disease study 2021. BMC Psychiatry 2025, 25, 486. [Google Scholar] [CrossRef]
  2. Kieling, C.; Buchweitz, C.; Caye, A.; Silvani, J.; Ameis, S.H.; Brunoni, A.R.; Cost, K.T.; Courtney, D.B.; Georgiades, K.; Merikangas, K.R.; et al. Worldwide Prevalence and Disability From Mental Disorders Across Childhood and Adolescence: Evidence From the Global Burden of Disease Study. JAMA Psychiatry 2024, 81, 347–356. [Google Scholar] [CrossRef] [PubMed]
  3. Doğan, T.; Koçtürk, N.; Akın, E.; Kurnaz, M.F.; Öztürk, C.D.; Şen, A.; Yalçın, M. Science-Based Mobile Apps for Reducing Anxiety: A Systematic Review and Meta-Analysis. Clin. Psychol. Psychother. 2024, 31, e3058. [Google Scholar] [CrossRef] [PubMed]
  4. Lecomte, T.; Potvin, S.; Corbière, M.; Guay, S.; Samson, C.; Cloutier, B.; Francoeur, A.; Pennou, A.; Khazaal, Y. Mobile Apps for Mental Health Issues: Meta-Review of Meta-Analyses. JMIR Mhealth Uhealth 2020, 8, e17458. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  5. Morillo, P.; Ortega, H.; Chauca, D.; Proaño, J.; Vallejo-Huanga, D.; Cazares, M. Psycho Web: A Machine Learning Platform for the Diagnosis and Classification of Mental Disorders. In Advances in Neuroergonomics and Cognitive Engineering; Proceeding of the AHFE International Conference on Industrial Cognitive Ergonomics and Engineering Psychology, Washington, DC, USA, 24–28 July 2019; Ayaz, H., Ed.; Springer: Berlin/Heidelberg, Germany, 2020; Volume 953. [Google Scholar] [CrossRef]
  6. Milne-Ives, M.; Selby, E.; Inkster, B.; Lam, C.; Meinert, E. Artificial intelligence and machine learning in mobile apps for mental health: A scoping review. PLOS Digit. Health 2022, 1, e0000079. [Google Scholar] [CrossRef] [PubMed]
  7. Bondar, J.; Morrow, C.B.; Gueorguieva, R.; Brown, M.; Hawrilenko, M.; Krystal, J.H.; Chekroud, A.M. Clinical and financial outcomes associated with a workplace mental health program before and during the COVID-19 pandemic. JAMA Netw. Open 2022, 5, e2216349. [Google Scholar] [CrossRef]
  8. Koch, K. Proof That Preventative Mental Health Care Works: White Paper on Preventative Mental Health Support [White Paper]; nilo.health: Berlin, Germany, 2025; Available online: https://nilohealth.com/guides/whitepaper-preventative-mental-health-care (accessed on 30 May 2025).
  9. Andrew, J.; Rudra, M.; Eunice, J.; Belfin, R.V. Artificial intelligence in adolescents mental health disorder diagnosis, prognosis, and treatment. Front. Public Health 2023, 11, 1110088. [Google Scholar] [CrossRef] [PubMed]
  10. Rutledge, R.B.; Chekroud, A.M.; Huys, Q.J. Machine learning and big data in psychiatry: Toward clinical applications. Curr. Opin. Neurobiol. 2019, 55, 152–159. [Google Scholar] [CrossRef] [PubMed]
  11. Obagbuwa, I.C.; Danster, S.; Chibaya, O.C. Supervised machine learning models for depression sentiment analysis. Front. Artif. Intell. 2023, 6, 1230649. [Google Scholar] [CrossRef]
  12. Espino-Salinas, C.H.; Luna-García, H.; Cepeda-Argüelles, A.; Trejo-Vázquez, K.; Flores-Chaires, L.A.; Mercado Reyna, J.; Galván-Tejada, C.E.; Acra-Despradel, C.; Villalba-Condori, K.O. Convolutional Neural Network for Depression and Schizophrenia Detection. Diagnostics 2025, 15, 319. [Google Scholar] [CrossRef]
  13. Chen, W.; Yang, K.; Yu, Z.; Shi, Y.; Chen, C.L.P. A survey on imbalanced learning: Latest research, applications and future directions. Artif. Intell. Rev. 2024, 57, 137. [Google Scholar] [CrossRef]
  14. Mahajan, P.; Uddin, S.; Hajati, F.; Moni, M.A. Ensemble Learning for Disease Prediction: A Review. Healthcare 2023, 11, 1808. [Google Scholar] [CrossRef]
  15. Bormpotsis, C.; Sedky, M.; Patel, A. Predicting Forex Currency Fluctuations Using a Novel Bio-Inspired Modular Neural Network. Big Data Cogn. Comput. 2023, 7, 152. [Google Scholar] [CrossRef]
  16. Matrone, G.M.; van Doremaele, E.R.W.; Surendran, A.; Laswick, Z.; Griggs, S.; Ye, G.; McCulloch, I.; Santoro, F.; Rivnay, J.; van de Burgt, Y. A modular organic neuromorphic spiking circuit for retina-inspired sensory coding and neurotransmitter-mediated neural pathways. Nat. Commun. 2024, 15, 2868. [Google Scholar] [CrossRef]
  17. Ritz, H.; Shenhav, A. Orthogonal neural encoding of targets and distractors supports multivariate cognitive control. Nat. Hum. Behav. 2024, 8, 945–961. [Google Scholar] [CrossRef] [PubMed]
  18. Ao, S.-I.; Hurwitz, M.; Palade, V. Cognitive Computing and Business Intelligence Applications in Accounting, Finance and Management. Big Data Cogn. Comput. 2025, 9, 54. [Google Scholar] [CrossRef]
  19. Tang, C.; Ma, K.; Cui, B.; Ji, K.; Abraham, A. Long text feature extraction network with data augmentation. Appl. Intell. 2022, 52, 17652–17667. [Google Scholar] [CrossRef]
  20. Bormpotsis, C.; Nanos, M.; Patel, A. A Neuroscience-Informed AI Framework to Decode the Complexities of Neurofinance. IEEE Trans. Technol. Soc. 2025, 6, 305–313. [Google Scholar] [CrossRef]
  21. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2999–3007. [Google Scholar] [CrossRef]
  22. Hasselmo, M.; Sarter, M. Modes and Models of Forebrain Cholinergic Neuromodulation of Cognition. Neuropsychopharmacol 2011, 36, 52–73. [Google Scholar] [CrossRef]
  23. Brier, G.W. Verification of forecasts expressed in terms of probability. Mon. Weather. Rev. 1950, 78, 1–3. [Google Scholar] [CrossRef]
  24. Aston-Jones, G.; Cohen, J.D. An integrative theory of locus coeruleus-norepinephrine function: Adaptive gain and optimal performance. Annu. Rev. Neurosci. 2005, 28, 403–450. [Google Scholar] [CrossRef] [PubMed]
  25. Ferguson, K.A.; Cardin, J.A. Mechanisms underlying gain modulation in the cortex. Nat. Rev. Neurosci. 2020, 21, 80–92. [Google Scholar] [CrossRef] [PubMed]
  26. Abdulsadig, R.S.; Rodriguez-Villegas, E. A comparative study in class imbalance mitigation when working with physiological signals. Front. Digit. Health 2024, 6, 1377165. [Google Scholar] [CrossRef] [PubMed]
  27. Wang, S.; Yao, X. Diversity analysis on imbalanced data sets by using ensemble models. In Proceedings of the 2009 IEEE Symposium on Computational Intelligence and Data Mining, Nashville, TN, USA, 30 March–2 April 2009; pp. 324–331. [Google Scholar] [CrossRef]
  28. Islam, M.R.; Kabir, M.A.; Ahmed, A.; Kamal, A.R.M.; Wang, H.; Ulhaq, A. Depression detection from social network data using machine learning techniques. Health Inf. Sci. Syst. 2018, 6, 8. [Google Scholar] [CrossRef]
  29. Wong, G.; Greenhalgh, T.; Westhorp, G.; Buckingham, J.; Pawson, R. RAMESES publication standards: Meta-narrative reviews. BMC Med. 2013, 11, 20. [Google Scholar] [CrossRef]
  30. Chaddad, A.; Li, J.; Lu, Q.; Li, Y.; Okuwobi, I.P.; Tanougast, C.; Desrosiers, C.; Niazi, T. Can Autism Be Diagnosed with Artificial Intelligence? A Narrative Review. Diagnostics 2021, 11, 2032. [Google Scholar] [CrossRef]
  31. Snyder, H. Literature review as a research methodology: An overview and guidelines. J. Bus. Res. 2019, 104, 333–339. [Google Scholar] [CrossRef]
  32. Zhang, T.; Schoene, A.M.; Ji, S.; Ananiadou, S. Natural language processing applied to mental illness detection: A narrative review. Npj Digit. Med. 2022, 5, 46. [Google Scholar] [CrossRef]
  33. Pandya, M.; Altinay, M.; Malone, D.A.; Anand, A. Where in the Brain Is Depression? Curr. Psychiatry Rep. 2012, 14, 634–642. [Google Scholar] [CrossRef]
  34. Motzkin, J.C.; Philippi, C.L.; Wolf, R.C.; Baskaya, M.K.; Koenigs, M. Ventromedial prefrontal cortex is critical for the regulation of amygdala activity in humans. Biol. Psychiatry 2015, 77, 276–284. [Google Scholar] [CrossRef]
  35. Babaev, O.; Piletti Chatain, C.; Krueger-Burg, D. Inhibition in the amygdala anxiety circuitry. Exp. Mol. Med. 2018, 50, 1–16. [Google Scholar] [CrossRef]
  36. Madonna, D.; Delvecchio, G.; Soares, J.C.; Brambilla, P. Structural and functional neuroimaging studies in generalized anxiety disorder: A systematic review. Braz. J. Psychiatry 2019, 41, 336–362. [Google Scholar] [CrossRef]
  37. Maggioni, E.; Delvecchio, G.; Grottaroli, M.; Garzitto, M.; Piccin, S.; Bonivento, C.; Brambilla, P. Common and different neural markers in major depression and anxiety disorders: A pilot structural magnetic resonance imaging study. Psychiatry Res. Neuroimaging 2019, 290, 42–50. [Google Scholar] [CrossRef]
  38. Gallen, C.L.; D’Esposito, M. Brain Modularity: A Biomarker of Intervention-related Plasticity. Trends Cogn. Sci. 2019, 23, 293–304. [Google Scholar] [CrossRef]
  39. Koenigs, M.; Grafman, J. The functional neuroanatomy of depression: Distinct roles for ventromedial and dorsolateral prefrontal cortex. Behav. Brain Res. 2009, 201, 239–243. [Google Scholar] [CrossRef]
  40. Bertolero, M.A.; Yeo, B.T.; D’Esposito, M. The modular and integrative functional architecture of the human brain. Proc. Natl. Acad. Sci. USA 2015, 112, E6798–E6807. [Google Scholar] [CrossRef] [PubMed]
  41. Esfahlani, F.Z.; Jo, Y.; Puxeddu, M.G.; Merritt, H.; Tanner, J.C.; Greenwell, S.; Betzel, R.F. Modularity maximization as a flexible and generic framework for brain network exploratory analysis. NeuroImage 2021, 244, 118607. [Google Scholar] [CrossRef] [PubMed]
  42. Jiang, Y. A theory of the neural mechanisms underlying negative cognitive bias in major depression. Front. Psychiatry 2024, 15, 1348474. [Google Scholar] [CrossRef]
  43. Dwyer, D.B.; Falkai, P.; Koutsouleris, N. Machine learning approaches for clinical psychology and psychiatry. Annu. Rev. Clin. Psychol. 2018, 14, 91–118. [Google Scholar] [CrossRef]
  44. Tufail, H.; Cheema, S.M.; Ali, M.; Pires, I.M.; Garcia, N.M. Depression Detection with Convolutional Neural Networks: A Step Towards Improved Mental Health Care. Procedia Comput. Sci. 2023, 224, 544–549. [Google Scholar] [CrossRef]
  45. Mardini, M.T.; Khalil, G.E.; Bai, C.; DivaKaran, A.M.; Ray, J.M. Identifying Adolescent Depression and Anxiety Through Real-World Data and Social Determinants of Health: Machine Learning Model Development and Validation. JMIR Ment. Health 2025, 12, e66665. [Google Scholar] [CrossRef]
  46. Richter, T.; Stahi, S.; Mirovsky, G.; Hel-Or, H.; Okon-Singer, H. Disorder-specific versus transdiagnostic cognitive mechanisms in anxiety and depression: Machine-learning-based prediction of symptom severity. J. Affect. Disord. 2024, 354, 473–482. [Google Scholar] [CrossRef]
  47. Hossain, M.T.; Talukder Rahman, M.A.; Jahan, N. Social Networking Sites Data Analysis using NLP and ML to Predict Depression. In Proceedings of the 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT), Kharagpur, India, 6–8 July 2021; pp. 1–5. [Google Scholar] [CrossRef]
  48. Kim, J.; Lee, J.; Park, E.; Han, J. A deep learning model for detecting mental illness from user content on social media. Sci. Rep. 2020, 10, 11846. [Google Scholar] [CrossRef] [PubMed]
  49. Malik, A.; Shabaz, M.; Asenso, E. Machine learning based model for detecting depression during COVID-19 crisis. Sci. Afr. 2023, 20, e01716. [Google Scholar] [CrossRef]
  50. Bhatnagar, S.; Agarwal, J.; Sharma, O.R. Detection and classification of anxiety in university students through the application of machine learning. Procedia Comput. Sci. 2023, 218, 1542–1550. [Google Scholar] [CrossRef]
  51. Zhai, Y.; Zhang, Y.; Chu, Z.; Geng, B.; Almaawali, M.; Fulmer, R.; Du, X. Machine learning predictive models to guide prevention and intervention allocation for anxiety and depressive disorders among college students. J. Couns. Dev. 2025, 103, 110–125. [Google Scholar] [CrossRef]
  52. Wang, T.; Xue, C.; Zhang, Z.; Cheng, T.; Yang, G. Unraveling the distinction between depression and anxiety: A machine learning exploration of causal relationships. Comput. Biol. Med. 2024, 174, 108446. [Google Scholar] [CrossRef]
  53. Ai, F.; Li, E.; Ji, Q.; Zhang, H. Construction of a machine learning-based risk prediction model for depression in middle-aged and elderly hypertensive people in China: A longitudinal study. Front. Psychiatry 2024, 15, 1398596. [Google Scholar] [CrossRef]
  54. Rehmani, F.; Shaheen, Q.; Anwar, M.; Faheem, M.; Bhatti, S.S. Depression detection with machine learning of structural and non-structural dual languages. Healthc. Technol. Lett. 2024, 11, 218–226. [Google Scholar] [CrossRef]
  55. Zhong, Y.; He, J.; Luo, J.; Zhao, J.; Cen, Y.; Song, Y.; Luo, J. A machine learning algorithm-based model for predicting the risk of non-suicidal self-injury among adolescents in western China: A multicentre cross-sectional study. J. Affect. Disord. 2024, 345, 369–377. [Google Scholar] [CrossRef]
  56. Abilkaiyrkyzy, A.; Laamarti, F.; Hamdi, M.; El Saddik, A. Dialogue system for early mental illness detection: Toward a digital twin solution. IEEE Access 2024, 12, 2007–2024. [Google Scholar] [CrossRef]
  57. Thekkekara, J.P.; Yongchareon, S.; Liesaputra, V. An attention-based CNN-BiLSTM model for depression detection on social media text. Expert. Syst. Appl. 2024, 249, 123834. [Google Scholar] [CrossRef]
  58. Saha, D.K.; Hossain, T.; Safran, M.; Alfarhood, S.; Mridha, M.F.; Che, D. Ensemble of hybrid model based technique for early detecting of depression based on SVM and neural networks. Sci. Rep. 2024, 14, 25470. [Google Scholar] [CrossRef]
  59. Karyotaki, E.; Riper, H.; Twisk, J.; Hoogendoorn, A.; Kleiboer, A.; Mira, A.; Cuijpers, P. Efficacy of self-guided internet-based cognitive behavioural therapy in the treatment of depressive symptoms: A meta-analysis of individual participant data. JAMA Psychiatry 2017, 74, 351–359. [Google Scholar] [CrossRef]
  60. Newby, J.; Mason, E.; Kladnistki, N.; Murphy, M.; Millard, M.; Haskelberg, H.; Mahoney, A. Integrating internet CBT into clinical practice: A practical guide for clinicians. Clin. Psychol. 2021, 25, 164–178. [Google Scholar] [CrossRef]
  61. Käll, A.; Biliunaite, I.; Andersson, G. Internet-delivered cognitive behaviour therapy for affective disorders, anxiety disorders and somatic conditions: An updated systematic umbrella review. Digit. Health 2024, 10, 20552076241287643. [Google Scholar] [CrossRef] [PubMed]
  62. Parikh, V.; Kozak, R.; Martinez, V.; Sarter, M. Prefrontal acetylcholine release controls cue detection on multiple timescales. Neuron 2007, 56, 141–154. [Google Scholar] [CrossRef] [PubMed]
  63. Rahbar, A.; Jorge, E.; Dubhashi, D.; Haghir Chehreghani, M. Do Kernel and Neural Embeddings Help in Training and Generalization? Neural Process. Lett. 2023, 55, 1681–1695. [Google Scholar] [CrossRef]
  64. Peng, H.; Mou, L.; Li, G.; Chen, Y.; Lu, Y.; Jin, Z. A comparative study on regularization strategies for embedding-based neural networks. arXiv 2015, arXiv:1508.03721. [Google Scholar] [CrossRef]
  65. Labach, A.; Salehinejad, H.; Valaee, S. Survey of dropout methods for deep neural networks. arXiv 2019, arXiv:1904.13310. [Google Scholar] [CrossRef]
  66. Matrenin, P.V.; Manusov, V.Z.; Khalyasmaa, A.I.; Antonenkov, D.V.; Eroshenko, S.A.; Butusov, D.N. Improving Accuracy and Generalization Performance of Small-Size Recurrent Neural Networks Applied to Short-Term Load Forecasting. Mathematics 2020, 8, 2169. [Google Scholar] [CrossRef]
  67. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  68. Ramachandran, P.; Zoph, B.; Le, Q.V. Searching for activation functions. arXiv 2017, arXiv:1710.05941. [Google Scholar] [CrossRef]
  69. Keskar, N.S.; Mudigere, D.; Nocedal, J.; Smelyanskiy, M.; Tang, P.T.P. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv 2016, arXiv:1609.04836. [Google Scholar] [CrossRef]
  70. Yoon, S.; Parsons, F.; Sundquist, K.; Julian, J.; Schwartz, J.E.; Burg, M.M.; Diaz, K.M. Comparison of different algorithms for sentiment analysis: Psychological stress notes. Stud. Health Technol. Inform. 2017, 245, 1292. [Google Scholar] [CrossRef]
Figure 1. The flowchart documents justifications from the data extraction and quality assessment.
Figure 2. The proposed MCoG-LDPSNet comprises two modules for anxiety and depression (MCoG), with their outputs fused and passed to the LDPS, which incorporates Focal Loss inspired by transient neuromodulation and the Brier score for sustained calibration. Both modules pre-train knowledge fusion and fine-tune for domain-specific prediction of anxiety and depression.
Figure 3. ROC of: (a) proposed MCoG-LDPSNet; (b) MCoRNNMCD-ANN; (c) CNN-BiLSTM-ATTN (CBA); (d) GRU; and (e) LSTM.
Figure 4. Confusion matrix of: (a) proposed MCoG-LDPSNet; (b) MCoRNNMCD-ANN; (c) CNN-BiLSTM-ATTN (CBA); (d) GRU; and (e) LSTM.
Figure 5. (a) Confusion matrix of the fine-tuned MCoG-LDPSNet; (b) ROC of the fine-tuned MCoG-LDPSNet.
Figure 6. Mean percentage reductions in PHQ-4 total and subscale scores from baseline to week 12 for Groups A and B.
Figure 7. Scatterplots and linear fits of NLP-predicted vs. clinician-administered PHQ-4.
Figure 7. Scatterplots and linear fits of NLP-predicted vs. clinician-administered PHQ-4.
Table 1. Mean of objective evaluation metrics.
Model | Accuracy | F1 | Precision | Recall | Specificity | G-Mean | AUC | Time for 50 Runs (s)
Logistic Regression | 0.8028 | 0.8878 | 0.8133 | 0.9773 | 0.1113 | 0.3295 | 0.6818 | 5.60
Multinomial NB | 0.6704 | 0.7747 | 0.8525 | 0.7101 | 0.5132 | 0.6035 | 0.6301 | 1.27
Bernoulli NB | 0.6609 | 0.7790 | 0.8118 | 0.7488 | 0.3123 | 0.4835 | 0.5411 | 1.68
KNN | 0.7867 | 0.8774 | 0.8109 | 0.9558 | 0.1166 | 0.3335 | 0.5965 | 92.47
SVM | 0.8076 | 0.8921 | 0.8077 | 0.9963 | 0.0600 | 0.2441 | 0.7232 | 5200.28
AdaBoost | 0.8246 | 0.8957 | 0.8530 | 0.9429 | 0.3558 | 0.5790 | 0.8296 | 196.60
XGBoost | 0.8560 | 0.9147 | 0.8676 | 0.9673 | 0.4152 | 0.6336 | 0.8787 | 114.84
CatBoost | 0.8379 | 0.9052 | 0.8491 | 0.9692 | 0.3177 | 0.5548 | 0.8512 | 95.54
GLM | 0.8027 | 0.8878 | 0.8133 | 0.9773 | 0.1113 | 0.3295 | 0.6818 | 40.74
Decision Tree | 0.8087 | 0.8886 | 0.8305 | 0.9558 | 0.2263 | 0.4627 | 0.7820 | 17.57
Random Forest | 0.8369 | 0.9063 | 0.8369 | 0.9882 | 0.2371 | 0.4838 | 0.8569 | 402.46
GRU | 0.9509 | 0.9691 | 0.9665 | 0.9717 | 0.8709 | 0.9199 | 0.9795 | 3470.50
LSTM | 0.9338 | 0.9588 | 0.9530 | 0.9657 | 0.8113 | 0.8795 | 0.9616 | 3984.50
CNN | 0.9051 | 0.9412 | 0.9259 | 0.9574 | 0.7048 | 0.8205 | 0.9482 | 1176.66
BERT | 0.7670 | 0.8339 | 0.8845 | 0.8143 | 0.5411 | 0.5958 | 0.6730 | 630,400.00
DeprMVM | 0.9008 | 0.8098 | 0.8238 | 0.7989 | 0.9378 | 0.8650 | 0.9091 | 111.50
CNN-BiLSTM-ATTN (CBA) | 0.9529 | 0.9704 | 0.9681 | 0.9728 | 0.8765 | 0.9232 | 0.9829 | 1958.75
MCoRNNMCD-ANN (ours) | 0.9583 | 0.9740 | 0.9706 | 0.9775 | 0.8825 | 0.9286 | 0.9872 | 3093.00
MCoG-LDPSNet (ours) | 0.9660 | 0.9787 | 0.9778 | 0.9797 | 0.9119 | 0.9451 | 0.9920 | 1237.00
Table 2. Standard deviations.
Model | Accuracy | F1 | Precision | Recall | Specificity | G-Mean | AUC
Logistic Regression | 0.0058 | 0.0037 | 0.0059 | 0.0030 | 0.0098 | 0.0144 | 0.0101
Multinomial NB | 0.0073 | 0.0060 | 0.0074 | 0.0082 | 0.0194 | 0.0115 | 0.0102
Bernoulli NB | 0.0086 | 0.0067 | 0.0069 | 0.0088 | 0.0119 | 0.0103 | 0.0095
KNN | 0.0064 | 0.0041 | 0.0064 | 0.0040 | 0.0110 | 0.0158 | 0.0104
SVM | 0.0062 | 0.0038 | 0.0061 | 0.0014 | 0.0076 | 0.0155 | 0.0095
AdaBoost | 0.0046 | 0.0030 | 0.0056 | 0.0040 | 0.0203 | 0.0160 | 0.0060
XGBoost | 0.0055 | 0.0035 | 0.0056 | 0.0035 | 0.0173 | 0.0132 | 0.0062
CatBoost | 0.0058 | 0.0036 | 0.0059 | 0.0038 | 0.0154 | 0.0135 | 0.0064
GLM | 0.0058 | 0.0037 | 0.0059 | 0.0030 | 0.0098 | 0.0144 | 0.0101
Decision Tree | 0.0054 | 0.0035 | 0.0088 | 0.0107 | 0.0442 | 0.0429 | 0.0082
Random Forest | 0.0056 | 0.0035 | 0.0059 | 0.0022 | 0.0149 | 0.0152 | 0.0068
GRU | 0.0078 | 0.0050 | 0.0044 | 0.0079 | 0.0175 | 0.0106 | 0.0053
LSTM | 0.0286 | 0.0165 | 0.0328 | 0.0166 | 0.1509 | 0.0948 | 0.0246
CNN | 0.0070 | 0.0041 | 0.0128 | 0.0120 | 0.0582 | 0.0303 | 0.0056
BERT | 0.1230 | 0.0815 | 0.0671 | 0.1159 | 0.0178 | 0.0105 | 0.1852
DeprMVM | 0.0184 | 0.0417 | 0.0491 | 0.0557 | 0.0174 | 0.0310 | 0.0329
CNN-BiLSTM-ATTN (CBA) | 0.0029 | 0.0018 | 0.0073 | 0.0072 | 0.0299 | 0.0129 | 0.0015
MCoRNNMCD-ANN (ours) | 0.0035 | 0.0021 | 0.0074 | 0.0060 | 0.0313 | 0.0142 | 0.0019
MCoG-LDPSNet (ours) | 0.0039 | 0.0024 | 0.0053 | 0.0058 | 0.0220 | 0.0100 | 0.0010
Table 3. Top five models by AUC and G-mean.
Rank | Model | AUC | G-Mean
1 | MCoG-LDPSNet (ours) | 0.9920 | 0.9451
2 | MCoRNNMCD-ANN (ours) | 0.9872 | 0.9286
3 | CNN-BiLSTM-ATTN (CBA) | 0.9829 | 0.9232
4 | GRU | 0.9795 | 0.9199
5 | LSTM | 0.9616 | 0.8795
Table 4. Participant demographics (n = 25 per group).
Characteristic | Group A (Control) | Group B (Enhanced)
Participants | 25 | 25
Mean Age (years) | 33.6 ± 3.5 | 35.2 ± 2.0
Gender (F/M) | 15/10 | 16/9
Education Level | Similar | Similar
Employment Status | Similar | Similar
Table 5. Outcomes and task performance.
Measure | Group A | Group B
Baseline PHQ-4 | 3.92 ± 0.49 | 3.08 ± 0.40
Post-PHQ-4 | 3.12 ± 0.67 | 2.00 ± 0.58
Absolute Improvement | 0.80 | 1.08
Baseline Anxiety | 1.84 ± 0.69 | 1.56 ± 0.58
Post-Anxiety | 1.64 ± 0.81 | 1.12 ± 0.67
Anxiety Reduction | 0.20 | 0.44
Baseline Depression | 2.08 ± 0.76 | 1.52 ± 0.59
Post-Depression | 1.48 ± 0.65 | 0.88 ± 0.60
Depression Reduction | 0.60 | 0.64
Task Completion (%) | 65.0 ± 12.5 | 85.0 ± 14.4
Table 6. Post-intervention between-group comparisons.
Measure | Group A | Group B | t(48) | p-Value | Cohen's d
Anxiety | 1.64 ± 0.81 | 1.12 ± 0.67 | 2.48 | 0.017 | −0.70
Depression | 1.48 ± 0.65 | 0.88 ± 0.60 | 3.38 | 0.001 | −0.96
Total NLP (PHQ-4) | 3.12 ± 0.67 | 2.00 ± 0.58 | 6.35 | <0.001 | −1.80
Table 7. Improvements in outcomes.
Measure | A: Initial | A: Post | A: Δ | B: Initial | B: Post | B: Δ
Anxiety | 1.84 | 1.64 | 0.20 | 1.56 | 1.12 | 0.44
Depression | 2.08 | 1.48 | 0.60 | 1.52 | 0.88 | 0.64
Total NLP (PHQ-4) | 3.92 | 3.12 | 0.80 | 3.08 | 2.00 | 1.08
Table 8. Pearson correlations between MCoG-LDPSNet predictions and standard PHQ-4 scores.
Cohort | Measure | r | p-Value
Overall | Anxiety | 0.976 | 2.1 × 10⁻³³
Overall | Depression | 1.000 | <1 × 10⁻³⁰⁰
Overall | Total | 0.974 | 1.9 × 10⁻³²
Group A | Anxiety | 0.957 | 7.5 × 10⁻¹⁴
Group A | Depression | 1.000 | <1 × 10⁻³⁰⁰
Group A | Total | 0.914 | 1.7 × 10⁻¹⁰
Group B | Anxiety | 1.000 | <1 × 10⁻³⁰⁰
Group B | Depression | 1.000 | 1.6 × 10⁻¹⁸¹
Group B | Total | 1.000 | <1 × 10⁻³⁰⁰
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
