Article

Discriminant Input Processing Scheme for Self-Assisted Intelligent Healthcare Systems

1 Applied College of Mahail Aseer, King Khalid University, Abha 62529, Saudi Arabia
2 Department of Computer Science, College of Computer Engineering and Sciences in Al-Kharj, Prince Sattam bin Abdulaziz University, P.O. Box 151, Al-Kharj 16278, Saudi Arabia
3 School of Computing, Gachon University, Seongnam 13120, Republic of Korea
4 Department of Computer Science and Information Systems, College of Applied Sciences, AlMaarefa University, Ad Diriyah, Riyadh 13713, Saudi Arabia
5 Department of Computer Engineering, Aligarh Muslim University, Aligarh 202002, India
* Authors to whom correspondence should be addressed.
Bioengineering 2024, 11(7), 715; https://doi.org/10.3390/bioengineering11070715
Submission received: 9 June 2024 / Revised: 6 July 2024 / Accepted: 9 July 2024 / Published: 14 July 2024

Abstract

Modern technology and emotion analysis play a crucial role in enabling intelligent healthcare systems to provide observation-based diagnostics and self-assistance services. However, these systems depend on precise data predictions and computational models to perform their jobs effectively. Traditional approaches have emphasized healthcare monitoring, but they have drawbacks, including the scalability and reliability of the pattern feature generation method when tested with different data sources. This paper presents the Discriminant Input Processing Scheme (DIPS), an instrument for resolving these challenges. Data-segmentation-based processing techniques allow DIPS to merge multiple emotion analysis streams. The DIPS recommendation engine uses segmented data characteristics to sift the emotion-stream inputs for patterns. Because DIPS uses transfer learning to identify similar data across different streams, its recommendations are more accurate and flexible. Transfer learning ensures that previous recommendations and data properties remain available to future data streams, making the most of them. Data utilization ratio, approximation, accuracy, and false rate are the metrics used to assess the effectiveness of the proposed approach. Self-assisted intelligent healthcare systems that combine emotion-based analysis with state-of-the-art technology are crucial for managing healthcare. This study improves the accuracy and efficiency of healthcare management by using computational models such as DIPS to guarantee accurate data forecasts and recommendations.

Graphical Abstract

1. Introduction

Modern society relies heavily on intelligent systems. Whenever an individual uses an intelligent system, the network or process provides better services and runs more efficiently. Emotionally grounded intelligent systems are a relatively new field of study. Feelings are inherent to all living things, including humans [1]. Emotions are crucial to intelligent systems' interaction and analysis. People's facial expressions can be recorded using surveillance cameras, and classifying human facial expressions under specific conditions allows for the development of emotion-based intelligent systems [2]. By integrating emotion-based analysis with modern technology, self-assisted intelligent healthcare systems improve services by offering observation-based self-assistance and diagnoses; these systems are then deployed in healthcare institutions. They support diagnosis, treatment planning, and patient care by analyzing patient data, including emotional data, making individualized suggestions, and improving the accuracy of healthcare administration. One way to classify emotions is with a deep-learning-based fuzzy means algorithm.
In the field of mental health, electroencephalograms (EEGs) can aid diagnosis and provide fast feedback on treatment strategies for problems such as depression and health deterioration caused by certain behaviors. EEGs detect the electrical activity of brain cells, from which features are then extracted. Microphones in healthcare settings allow patients' emotional states to be detected through analysis of their speech signals and body language. Artificial intelligence markup language can be used to examine emotional states by observing neural activity. Healthcare management systems use EEG signals to assess patients' emotional states. According to studies, an individual's emotional state can be deduced from EEG data collected alongside facial expressions and actions. EEG delivers higher performance than other analytic methods by providing information about a patient's mental health and thoughts, enabling improvements in patient treatment.
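As a concrete illustration of this feature extraction step, the short Python sketch below computes conventional band-power features from a single EEG segment using Welch's method. It is an illustrative sketch only: the band limits, the 128 Hz sampling rate, and the function names are assumptions, not details taken from this paper.

import numpy as np
from scipy.signal import welch

# Conventional EEG bands (Hz); the exact limits are an assumption.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg_segment, fs=128.0):
    # Welch periodogram of the segment; nperseg is capped at the segment length.
    freqs, psd = welch(eeg_segment, fs=fs, nperseg=min(256, len(eeg_segment)))
    features = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        features[name] = float(psd[mask].mean()) if mask.any() else 0.0
    return features

rng = np.random.default_rng(0)
segment = rng.standard_normal(2 * 128)  # two seconds of synthetic "EEG" at 128 Hz
print(band_powers(segment))

Features of this kind are what a downstream emotion classifier would consume.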
Clustering techniques are used to distinguish expression differences in the provided data. Emotional intelligence applications are primarily utilized in e-healthcare systems to enhance service quality [3]. The Sensitive Artificial Listener framework is used to analyze people's emotions; its primary goal is to improve communication within an organization by monitoring employee expressions [4]. Smartphones employ a convolutional model as part of their emotion recognition mechanism. Such a method helps capture user sentiment, which improves the desired service quality and boosts overall performance [5].
Emotion-based intelligent systems are used extensively in healthcare applications and systems. Healthcare applications that use emotions improve the analysis process and deliver better, timelier services to patients. Electronic health records use electroencephalogram (EEG) signals to determine a patient's emotional state [6]. EEG is a tool for diagnosing mental health issues and providing timely feedback for individualized treatment plans. For mental health issues, including depression and health that worsens as a result of certain activities [7], EEGs detect the electrical activity of brain cells, from which characteristics are subsequently extracted. In healthcare applications, microphones can detect patients' emotions based on their vocal signals and movements [8].
AffectAura can determine patients' moods using their visual, auditory, mental, and other gathered data. Prediction is the most common use in healthcare applications. Emotions are classified using pre-existing case data and category labels [9]. The healthcare system improves its services to affected patients using state-of-the-art algorithms [10]. Advanced models are utilized to predict an individual's emotions from their behavior, enhancing emotion identification accuracy [11]. Every system and management process relies heavily on emotion analysis. The k-nearest neighbor technique, support vector machines, and quadratic discriminant analysis are among the analysis processes. Artificial intelligence markup language is utilized to analyze emotions based on brain activity [12]. The EEG is one tool in healthcare management systems for examining patients' emotions. According to research [13], EEG data from patients' expressions and activities can be used to analyze their emotional state. By providing insight into a patient's mental health and thoughts, EEG delivers superior performance compared with other analytic processes, improving patient treatment. The expressed emotions of the patient and their loved ones are analyzed using the Expressed Emotions approach [14], which deduces people's facial expressions and emotional states from data gathered by surveillance cameras. Robots are increasingly finding applications in healthcare, particularly in analyzing patients' emotions and activities [15]. Healthcare monitoring is the main focus of the conventional approach. It has a few drawbacks, however, such as the reliability and scalability of the fractal pattern feature generation method across various EEG data sources and neurological conditions; more evidence is needed of the efficacy of the emotion analysis paradigm in real-world clinical contexts and with different demographics. Complex nonlinear systems frequently pose particular challenges when modelled on a global scale. First, the complexity of data distribution patterns makes them hard to estimate, which in turn makes it much more difficult to ascertain the model's structure. Process-monitoring models trained under a unimodal assumption can generate many false alarms.
A patient's emotional state can be deduced from facial expressions, mental state, and auditory or physical symptoms [16]. The main objectives are as follows:
  • To improve self-assistance services and diagnoses through emotion-based analysis and advanced technology in healthcare data.
  • To assess emotional data in intelligent healthcare systems, this study introduces DIPS, which uses cutting-edge processing methods.
  • To enhance the accuracy of DIPS’s recommendations by identifying similar data patterns across multiple streams.
The remainder of this paper is organized as follows. Section 2 provides a comprehensive analysis of existing research methods and literature. Section 3 describes the processing steps, research methodology, and research plan. Section 4 discusses the analysis of the results. Section 5 presents the primary conclusions and future work.

2. Related Works

Table 1 summarizes the research deficiencies the survey found in healthcare monitoring, emotion identification, smart ageing, wound healing, and smart city traffic management. These gaps include integrating emotion recognition systems into healthcare monitoring, optimizing data management for mobile healthcare apps, improving emotion analysis frameworks, exploring biomedical applications of hydrogels, improving smart ageing solutions, and integrating real-time emotion recognition data in traffic management. More research is needed to improve health treatment, data preservation, and older adults' quality of life.
Li et al. [19] present a multistep deep (MSD) approach that can accurately identify multimodal emotions from records that may contain inaccurate information. The MSD system uses specialized deep neural networks to extract features from visual and physiological inputs, taking spatiotemporal information into account. In simulation trials on an open-source multimodal database, the MSD system outperforms the conventional one in unweighted average recall. The positive experimental results confirm the suggested system's potential impact on real-world IoT applications.
Tuncer et al. [20] present the fractal Firat pattern (FFP), which takes its name from the logo of Firat University, and introduce a multilayer feature generator using FFP and TQWT signal decomposition techniques. The article proposes a fractal pattern feature-based emotion recognition approach (FPFEA) with iterative selectors in the feature selection process. The model was evaluated on 14-channel emotional EEG signals using a support vector machine (SVM), k-nearest neighbor (k-NN), and linear discriminant analysis (LDA). The framework's SVM classifier reached 99.82% accuracy.
Pane et al. [27] report results on an EEG-based public emotion dataset with four categories: happy, sad, angry, and relaxed. The most effective classifier in ensemble learning is valence lateralization random forest (VL-RF); the RF model's parameters were fine-tuned using grid search optimization. The authors compared RF with SVM and LDA, two popular EEG methods. Utilizing three pairs of asymmetry channels (T7–T8, C3–C4, and O1–O2) improved the accuracy of emotion classification dramatically compared with no lateralization. Among the three methods tested, RF achieved the best classification accuracy (75.6% vs. 69.8% for SVM and 60.4% for LDA).
Gong et al. [30] present a new model for EEG emotion identification that considers the brain's hemispheric asymmetry along with the multi-domain temporal, spectral, and spatial characteristics of EEG signals, with the goal of enhancing emotion detection. Two matrix models train a convolutional neural network that extracts depth characteristics in two streams, spatial and spectral. In comprehensive trials, the model achieved average results of 98.33%/2.46% on SEED, 92.15%/5.13% on SEED-IV, and 97.50%/1.68% (valence) and 97.58%/1.42% (arousal) on the DEAP public dataset.
Kamble et al. [31] suggest a noise-free method for obtaining the desired EEG frequency range for emotion identification using a dual-stage correlation and instantaneous frequency (CIF) threshold. The study provides evidence that ensemble machine learning (EML) classifiers outperform conventional machine learning (CML) classifiers. The most effective F1 scores, reported by random forest with differential entropy features, were 84.53% for arousal, 76.24% for valence, and 89.16% for dominance. Compared with the three CML classifiers, the average F1 scores of the three EML classifiers were around 2.30%, 7.49%, and 2.65% higher, respectively. The proposed CIF-based filtering approach thus aids EML classifiers in detecting emotional states.

3. Discriminant Input Processing Scheme

Emotion-Aware Intelligent Systems designed for healthcare facilities give patients self-assisted healthcare services and observation-based diagnoses. By relying on accumulated experience rather than cutting-edge technology alone, this system departs from the norm in meeting user requests. It supports stream differentiation, feature analysis, stream similarity, suggestion, state analysis, accuracy, and recorded input from the input device. All of the healthcare input devices share the recorded input. Healthcare-related emotion classification from data acquired by input devices, including wearable sensors, involves several phases before the data streams are differentiated for subsequent processing and analysis. Devices that receive signals relating to the user's activity and physiological state collect the raw data; multifunctional sensors, audio devices, and visual gadgets all fall under this category. Extracting relevant features from the preprocessed data is vital for differentiating between emotional states: features related to the body, sounds, and sight. Synchronizing data streams, for example by matching biological signals with their related aural and visual data, and merging the gathered characteristics into a single dataset both help. The recorded input, stream distinction, and similarity primarily determine the emotion-aware healthcare system properties shared by DIPS. The workflow of the suggested method is shown in Figure 1.
There are several stages in DIPS that are used to capture and preprocess emotions. Data storage and retrieval, data accuracy and falsity analysis, approximation techniques to simplify the analysis, and data used to produce insights and predictions for individualized healthcare services are all part of the process. Emotional signals can be recorded using sensors or monitoring devices. This procedure improves the services and diagnoses provided by self-assistance healthcare providers.
DIPS stores and retrieves data from the input device's recorded input. Accuracy, false data, approximation, analysis time, and data usage are all aspects of recorded-input-based emotion data analysis that the suggested scheme adheres to. $Q$ represents the recorded sequence input from the intelligent computing center's observation-based diagnosis, where information is updated. The input device records the patient's emotions and projects the data in a human-understandable format. The observation-based diagnostic center offers self-assisted healthcare services; the input device records these emotions, and the healthcare analysis monitors this recorded input through the device. Assume that $n$ stands for the patient's healthcare data analysis. This system detects the patient's physiological emotions using the input device. Individual patient ID numbers are used to separate each $n$. DIPS must group the patient ID with the approach list in order to provide care. The diagnostic center certifies only a few users of this method: a certified user is a patient registered with the diagnostic facility and assigned space under this patient ID.
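The bookkeeping described above (recorded inputs stored per patient ID, with access restricted to certified users) can be sketched as follows; the class and method names are hypothetical illustrations, not part of DIPS as published.

from collections import defaultdict

class RecordedInputStore:
    # Hypothetical store: patient IDs certified by the diagnostic center,
    # and recorded emotion inputs grouped per patient ID.
    def __init__(self):
        self.certified = set()
        self.records = defaultdict(list)

    def register(self, patient_id):
        self.certified.add(patient_id)

    def record(self, patient_id, emotion_input):
        if patient_id not in self.certified:
            return False  # deny uncertified access
        self.records[patient_id].append(emotion_input)
        return True

store = RecordedInputStore()
store.register("P-001")
assert store.record("P-001", {"signal": "EEG", "state": "calm"})
assert not store.record("P-999", {"signal": "EEG", "state": "angry"})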
In healthcare services and self-assistance diagnosis, DIPS tracks an individual’s emotional state using physiological emotions. Physiological responses to these feelings can include shifts in hormone levels, heart rate, blood pressure, and other vital signs. The patient’s general health, stress levels, and emotional reactions can be better understood when medical personnel analyze and interpret these feelings. As a result of this integration, care and treatment plans for patients are more precise and efficient. Table 2 represents the notations and their definitions.

3.1. Observation-Based Diagnosis

The diagnostic center requires a computing model that interoperates fairly with DIPS. All of these computations are saved by the input device, which is controlled by the analysis time $A_t$. The DIPS service provider builds on the stream differentiation $S_d$ and stream similarity $S_s$ features extracted from the segmented data $D_s$.

Preliminaries

With these segmented data $D_s$ for emotion analysis, the various streams converge through a forward-looking process based on segmented data retained by the observation-based diagnostic center. The two stream computations $P_c$, operated depending on $A_t$, are derived as
$$P_c = \begin{cases} \sum_{j=1}^{i} j\,(A_t)^i\,[E_k], & A_t > 1 \\[4pt] \sum_{j=1}^{i} A_t^{\,j}\,[(A_t)^i - 1]\,[E_k], & A_t \le 1 \end{cases} \tag{1}$$
In Equation (1), $i$ denotes the number of recorded inputs of $n$ to be saved before the functions start. If the recorded input and analysis time of the observation-based diagnosis and DIPS are the same, then $A_t > 1$; otherwise, $A_t = A_c / A_b$, where $A_b$ and $A_c$ are the recorded input and analysis times, respectively. DIPS processes the recorded input from the input device. The first instance of recording input involves the process $A_b > A_c$, and hence the input device follows $P_c$ as per the sequence of the $A_t \le 1$ condition. In a recording instance, the emotional data received, $E_k$, is derived as
$$E_k = \begin{cases} x\,C_p\,A_t^{\,j}, & \text{if } A_t < 0 \text{ or } A_t > 1 \\[4pt] y\,C_p\,A_t\,(j - A_t), & \text{if } 0 \le A_t \le 1 \end{cases} \tag{2}$$
In Equation (2), $x$ and $y$ are the counts of verified input sequences and varying emotion data sequences, respectively.
This occurrence of the emotion sequence is recorded to $R_i$ of the observation-based diagnostic center. Therefore, the emotion data function is computed as
$$n_u = h(x, n)\,Q[R_i]\,|x + j| \quad \text{and} \quad n_v = h(y, n)\,Q[R_i]\,|y + j - A_t| \tag{3}$$
In Equation (3), the variables $n_u$ and $n_v$ are the recorded emotional data for the input device and the recorded input for the update, respectively. In this manner, both conditions $[n_u S_s + x S_d] = [n_v S_s + x S_d]$ and $[n_u S_s + y S_d A_t] = [n_v S_s + y S_d A_t]$ are achieved. Therefore, the emotion data $n$ are said to be differentiated.
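Under the reconstructed reading of Equations (1) and (2) given above, the two piecewise computations can be sketched in plain Python; the symbols and branch conditions follow that reading and should be treated as an interpretation rather than the authors' reference implementation.

def compute_pc(a_t, i, e_k):
    # Two-branch stream computation P_c of Equation (1), as reconstructed.
    if a_t > 1:
        return sum(j * (a_t ** i) * e_k for j in range(1, i + 1))
    return sum((a_t ** j) * ((a_t ** i) - 1) * e_k for j in range(1, i + 1))

def compute_ek(a_t, j, c_p, x, y):
    # Emotion data term E_k of Equation (2), as reconstructed.
    if 0 <= a_t <= 1:
        return y * c_p * a_t * (j - a_t)
    return x * c_p * (a_t ** j)

e_k = compute_ek(a_t=1.5, j=2, c_p=0.8, x=4, y=6)
print(compute_pc(a_t=1.5, i=3, e_k=e_k))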
To make Algorithm 1 easier to understand, this version uses function calls and placeholders (such as compute_expression_1 and compute_emotion_data_1) for the complex mathematical expressions and notations. This keeps the intricacies of the calculations from becoming overwhelming and makes the pseudocode easier to read and comprehend. For conciseness, certain variable names have been abbreviated and unnecessary comments eliminated.
The input device forms this intelligent computation of emotional data alongside the healthcare system approach list. The record contains the patient's identification and the time of analysis. A patient's $n_u$ and $n_v$ observations are encrypted using a specific procedure. At this point, the detecting instance and patient ID are used to retrieve the security portion of the emotional data. Access is denied if the analysis duration is inflated by emotional data already stored in DIPS or if the data are accessed with a fraudulent patient ID. Figure 2 displays the feature-based diagnostic.
Algorithm 1. Observation-Based Diagnosis
Input: Initialize at the initial analysis time
  S_d: stream distinction feature
  S_s: stream similarity feature
  D_s: segmented data for emotion analysis
  P_c: initial computation value
Output: Compute P_c based on A_t; adjust E_k based on A_t; analyze the emotion data function;
swap P_c based on the feature-based analysis; deny access based on the S_D and S_A condition
Step 1: Compute P_c based on A_t
  if A_t > 1 then
    for j from 1 to i do
      P_c += compute_expression_1(j, A_t, E_k)
  else
    for j from 1 to (j − A_t) do
      P_c += compute_expression_2(j, A_t, E_k)
Step 2: Compute E_k based on A_t
  if A_t < 0 or A_t > 1 then
    E_k = compute_expression_3(x, C_p, A_t, j)
  else if 0 <= A_t <= 1 then
    E_k = compute_expression_4(y, C_p, A_t, j)
Step 3: Compute the emotion data function
  n_u = compute1(x, n, R_i, j)
  n_v = compute2(y, n, R_i, j, A_t)
Step 4: Feature-based analysis
  if A_b != A_c then
    P_c follows the 0 <= A_t <= 1 condition
  else if A_b == A_c and A_t == 1 then
    P_c is swapped from the x + y = 1 instance
Step 5: Access S_A based on the S_D and S_A condition
  if S_D > S_A then
    compute accuracy, false data, approximation, and data utilization ratio
  else
    deny access to emotional data
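To make the control flow concrete, the following Python sketch renders Algorithm 1 directly; the compute_expression_* bodies are trivial stand-ins (the paper defines them only through Equations (1)–(3)), and all names are illustrative.

def compute_expression_1(j, a_t, e_k):
    # stand-in for the A_t > 1 branch term of Equation (1)
    return j * (a_t ** 2) * e_k

def compute_expression_2(j, a_t, e_k):
    # stand-in for the A_t <= 1 branch term of Equation (1)
    return (a_t ** j) * e_k

def observation_based_diagnosis(a_t, i, e_k, s_d, s_a):
    # Step 1: accumulate P_c according to the analysis-time condition.
    p_c = 0.0
    branch = compute_expression_1 if a_t > 1 else compute_expression_2
    for j in range(1, i + 1):
        p_c += branch(j, a_t, e_k)
    # Step 5: grant metric computation only when differentiation dominates.
    if s_d > s_a:
        return {"p_c": p_c, "access": True}   # compute accuracy, false data, ...
    return {"p_c": p_c, "access": False}      # deny access to emotional data

print(observation_based_diagnosis(a_t=0.6, i=4, e_k=1.2, s_d=0.9, s_a=0.4))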
Adding characteristics or data points to an ML model does not necessarily improve its accuracy. Precision can drop for several reasons: overfitting and restricted generalizability are consequences of the curse of dimensionality, which occurs when the input features grow sparse as their number increases. Not every feature helps make predictions; unnecessary or duplicated features introduce noise that can degrade the model's accuracy. With more input features, the model risks overfitting and capturing noise instead of the actual pattern in the training data, and unseen data then perform poorly. The computational expense of learning and forecasting also rises with the number of input features, which might not be worthwhile if reliability does not rise as well.
The data functions for segregation are responsible for inducing the channels of input data streaming. In this process, $A_b$ and $A_c$ are identified using the similarity check process, and the similarity-verified data are agreed using the extracted features. In this feature extraction (as in Figure 2), a limited set of features is considered for verifying $0 \le A_t \le 1$. As discussed above, if $A_b \ne A_c$, then $P_c$ follows the $0 \le A_t \le 1$ condition. Alternatively, if $A_b = A_c$ and $A_t = 1$, then $P_c$ is swapped from the $x + y = 1$ instance such that $A_t$ is shared by $P_c$ between the observation-based diagnostic center and DIPS. This arrangement currently functions in an incrementing sequence of $x$ or $y$, depending on the $n_u$ and $n_v$ stream differentiation. The recorded input of the healthcare system offers stream differentiation based on $x$ and $y$. Let $S_D$ represent the stream differentiation under a multiple-feature study such that the emotion data analysis $S_A$ is analyzed. The emotion data analysis in the healthcare system operates from the input device, capturing the emotion and handling the sequence. Two conditions are used for classifying the data with features for the recommendation: either $S_D > S_A$ or $S_D \le S_A$. These two conditions are analyzed in varying instances to classify the emotion data at a better occurrence. When the $S_D > S_A$ condition is observed, the extracted feature analysis is given to the stream similarity function with respect to $P_c$ and $A_t$. The $S_D \le S_A$ condition is not extracted by the feature analysis and leads to inaccessibility of the emotion data; i.e., $P_c$ for $S_D > S_A$ and $S_D \le S_A$ is defined as
$$P_c^{\,u} = \begin{cases} \sum_{j=1}^{i} j\,(A_t)^i + [A_i \times S_{A_i}], & \text{if } S_D > S_A \\[4pt] \sum_{j=1}^{i} A_t^{\,j}\,[(A_t)^i - 1] + A_i\,[S_{D_i} - S_{A_i}], & \text{if } S_D \le S_A \end{cases} \tag{4}$$
In Equation (4), $P_c^{\,u}$ represents the feature analysis operated by a patient's emotion data. From this condition, approval access is operated at the analysis time $S_A$ on the basis of the stream differentiation and similarity $A_u$ and $D_u$. Encrypted patient data cannot be modified by a third party accessing the healthcare information; changes occur only with $S_A$. The similarity check validates $P_c^{\,u}$ access through the $A_u$ and $D_u$ recommendation. The modified emotion data shared between the healthcare system and the patient are recommended if $S_D > S_A$; this ensures accuracy, false data, approximation, and data utilization ratio access to $n$.
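The same piecewise structure applies to $P_c^{\,u}$ in Equation (4); the sketch below again follows the reconstructed reading, with $A_i$, $S_D$, and $S_A$ reduced to scalar stand-ins.

def compute_pcu(a_t, i, a_i, s_d, s_a):
    # Feature-analysis variable P_c^u of Equation (4), as reconstructed.
    if s_d > s_a:
        return sum(j * (a_t ** i) for j in range(1, i + 1)) + a_i * s_a
    return sum((a_t ** j) * ((a_t ** i) - 1) for j in range(1, i + 1)) + a_i * (s_d - s_a)

print(compute_pcu(a_t=1.2, i=3, a_i=0.5, s_d=0.7, s_a=0.9))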

3.2. Similarity Checking

DIPS is able to make precise assessments and suggestions because it employs similarity to compare and match patterns in recorded emotional data. Patterns can be better identified, recommendations can be more accurate, and data can be more effectively used in healthcare analysis with the help of similarity checks. Emotional data analysis, recommendation making, and self-assisted healthcare service efficiency and accuracy are all greatly enhanced by this approach.
Similarity checking is carried out per specific recommendation, and the information is readily available inside healthcare systems. This similarity check requires the least cost in analysis-time-based verification. In checking for similarity, the simultaneous factor serves as the first instance; this procedure is taken to be $P_c$ and is shared between the healthcare system and the observation-based diagnostic center. Let $\{D_1, D_2, \ldots, D_n\}$ be the sequence of the emotion data recommendations from the observation-based diagnostic center, generated by intelligent computing.
This instance of checking similarity in the healthcare system denotes either $n_u$ or $n_v$, such that the secured healthcare data are given as $n_u + [A_t - Q[R_i]]\,S_d\,[A_t(x - j)] = n_u + A_t[Q[R_i](x - j)] - Q[R_i][A_t(x - j)]$ or $n_v + [(A_t - Q[R_i])\,S_d\,[A_t \times y]] = n_v + A_t[Q[R_i] \times y]$. The above representation corresponds to the recommendation of $n_u$ or $n_v$. Therefore, the right-hand side is coordinated with the healthcare data properties $Q[R_i]$ and $S_d$. Hence, similarity verification experiences a change in sequence if $S_D > S_A$. Otherwise, if the change in recommendation is accessed, then
$$\begin{aligned} n_u + [A_t - Q[R_i]]\,C_p\,[A_t(x - j - S_D A_t)] &= n_u + A_t[Q[R_i](x - j - S_D A_t)] - Q[R_i][A_t(x - j - S_D A_t)] \\ n_v + [(A_t - Q[R_i])\,S_d\,[A_t \times y + (x - j)A_t]] &= n_v + A_t[Q[R_i] \times y + A_t(S_D - S_A)] \end{aligned} \tag{5}$$
According to the equations presented above, the similarity check and recommendation of the emotion data for the various instances $D_1$ to $D_n$ are computed for intelligent computing. Similarity verification is carried out in line with the approximation, and the data utilization ratio is determined by Equation (6) as follows:
$$\begin{aligned} S_D &= \frac{\rho(A_t) \cdot Q[R_i]}{\rho(S_D)} \sum_{j+1}^{i} \frac{[S_{D_i} - S_{A_i}]}{n_u} \\ S_A &= \frac{\rho(A_t) \cdot Q[R_i]}{\rho(S_A) \cdot P_c} \sum_{j+1}^{i} \frac{[A_t \times x - y + (x - j)A_t]}{\{[(A_t)^i - 1] \times \rho(Q[R_i])\}^{j}}\, n_v \end{aligned} \tag{6}$$
Equation (6) computes the recommendations observed in $S_D$ and $S_A$ for the current state $\rho(A_t)$, which is protected from the previous state of analysis. In the initial state of the recommendation process, the estimations of the $S_D$ and $S_A$ initial and final states, $\rho(S_D)$ and $\rho(S_A)$, are the inputs for transfer learning. This process is induced to identify similar data shared between multiple stream origins in the given input. The instance handling of stream similarity is used to find the false data in the processing between $A$ and $D$ for all $\rho(A_t)$. This transfer learning process is defined in the following section.
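To show where such a check sits in the flow, the sketch below swaps the scheme's Equations (5) and (6) for a plain cosine similarity between two segmented feature vectors; this is a deliberate simplification, not the paper's formulation.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

d1 = [0.2, 0.8, 0.5]     # segmented emotion features, stream 1
d2 = [0.25, 0.75, 0.55]  # segmented emotion features, stream 2
if cosine_similarity(d1, d2) > 0.95:
    print("streams similar: reuse the prior recommendation")
else:
    print("streams differ: recompute the recommendation")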

3.3. Transfer Learning for State Analysis

In the state analysis process, transfer learning is used to retain the previous recommendation $A$ or $D$ and the data feature for further data utilization $f$ in sequential data streams. This learning depends on previously saved recommendations from $A_t$, which ensures better state recommendation accuracy regardless of repeating recorded input data and similar characteristics. The sequence of varying instances through intelligent computing helps to separate $A_t$ for both states of the intervals and $n(Q[R_i])$ for all $A$. In this manner, the transfer learning process performs two types of stating: the recommendation state and the analysis state. In this state computation process, $A$ and $D$ are required to increase the accuracy of the emotion data $A_t$ in the healthcare system. In the state analysis, the sequence of observed instances of $A_t$ is retained to improve the $Z_t$ join with better recommendation accuracy and detection of false emotion data. In particular, the sequence of emotion data differentiation is $A_t$ and $Q[R_i]$. Figure 3 presents the transfer learning process for the recommendation state.
The state input is fed to similarity and approximation measures in two instances. The proposed state verifies the approximation for identifying $f$ such that data unavailability is precisely determined. The computation is varied for $S_{A_n}$ and the similarity sequence to provide recommendations as output (Figure 3). The computation of $A_t\,Q[R_i]$ is mapped under $S_D$ and $S_A$ depending on the following instance of the sequence. In this process, $A_t$ and the analysis time are categorized independently through state analysis.
Algorithm 2 defines the function calls, comments, variables, and keywords clearly. The three primary operations are as follows: 'TransferLearningForRecommendationState' calculates the recommendation state sequence, 'TransferLearningForStateAnalysis' calculates the analysis state sequence, and 'MainTransferLearning' combines the two to compute the final output. Detailed mathematical calculations and formulae are not included in this simplified rendering.
Algorithm 2. Transfer Learning for State Analysis
Input: data instances: data_instance_1, data_instance_2, …, data_instance_n
Output: S_D: recommendation state sequence; S_A: analysis state sequence
Step 1: function TransferLearningForRecommendationState(input_data)
  // Compute the recommendation state sequence
  for each data instance R_i in input_data do
    S_D[i] = compute_recommendation_state(R_i)
  return S_D
Step 2: function TransferLearningForStateAnalysis(input_data)
  // Compute the analysis state sequence
  for each data instance R_i in input_data do
    S_A[i] = compute_analysis_state(R_i)
  return S_A
Step 2a: Update the state analysis
  for each S_A[i] in S_A do
    if false_data_detected(S_A[i]) then
      S_A[i] = update_state_analysis(S_A[i])
  return S_A
Step 3: function MainTransferLearning(input_data)
  S_D = TransferLearningForRecommendationState(input_data)
  S_A = TransferLearningForStateAnalysis(input_data)
  return S_A, S_D
Step 4: Compute the final output
  final_output = compute_final_output(S_D, S_A)
  return final_output
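A compact runnable rendering of Algorithm 2 is sketched below. The compute_* and false_data_detected bodies are trivial stand-ins, since the paper defines them only through Equations (7) and (8).

def compute_recommendation_state(r_i):
    return 2 * r_i   # stand-in for the S_D recurrence of Equation (7)

def compute_analysis_state(r_i):
    return r_i + 1   # stand-in for the S_A recurrence of Equation (8)

def false_data_detected(s_a_i):
    return s_a_i < 0  # stand-in predicate

def main_transfer_learning(input_data):
    s_d = [compute_recommendation_state(r) for r in input_data]
    s_a = [compute_analysis_state(r) for r in input_data]
    # Step 2a: update analysis states wherever false data are detected.
    s_a = [abs(v) if false_data_detected(v) else v for v in s_a]
    return s_d, s_a

print(main_transfer_learning([0.4, -1.7, 2.3]))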
The state analysis processing is computed for $n(Q[R_i])$ and $(n[R_i])$, after which a precise transfer model helps to update the initial state. The state analysis sequence is estimated using Equation (7) as
$$\left.\begin{aligned} S_D &= Q[R_1] \\ S_{D_1} &= 2Q[R_1] + \frac{(n[R_i])_1}{\rho(Q[R_1])_1} \\ S_{D_2} &= 3Q[R_2] + \frac{2(n[R_i])_2}{\rho(Q[R_2])_2} \\ &\ \vdots \\ S_{D_n} &= \frac{n(Q[R_n]) + n[R_n]}{\rho(Q[R_1])_{n+1}} \end{aligned}\right\}\ \text{recommendation state} \qquad \left.\begin{aligned} S_A &= Q[R_1] \\ S_{A_2} &= 2(n[R_2]) + \rho(Q[R_2])_2 \\ S_{A_4} &= 4(n[R_4]) + \frac{\rho(Q[R_4])_2}{\rho(Q[R_4])_2} \\ &\ \vdots \\ S_{A_n} &= \frac{n[R_n] + \rho(n(Q[R_n]))_{n+1}}{\rho(n[R_n])_{n+2}}\, j \end{aligned}\right\}\ \text{analysis state} \tag{7}$$
The estimation of the state analysis generates two outputs, $Q[R_n]$ and $[R_n]$, from the sequence $S_{D_1}$ to $S_{D_n}$ and the updated instances $S_{A_2}$ to $S_{A_n}$, respectively. In particular, the mapping is performed under multiple streams based on the occurring sequence. In this case, the condition $Q[R_1] \in S_D$ is not equal to $Q[R_1] \in S_A$ in the mapping condition. Therefore, if the state occurrence of $S_{D_1}$ is the initial state of the emotion data, then $S_{A_2}$ is performed using $n[R_n]$; i.e., $Q[R_1]$ is divided as per the standard of the $n[R_n]$ instance, and $\frac{n[R_n] + \rho(n(Q[R_n]))_{n+1}}{\rho(n[R_n])_{n+2}}\, j$ is the consecutive update of the recommendation state instances. In this manner, the starting state is $(S_D, Q[R_n])$, from which $(S_A, n[R_n])$ is separated using the segmented data. In these segmented data, the segmentation of $S_D$ and $S_A$ is derived such that $n[R_n] = \{n[R_n] - \rho(n[R_n])\}$ and $Q[R_n] = \{S_D - \rho(n[R_n])\}$ are mapped independently. The analysis state of the update sequence is achieved in its first state, from which each consecutive state is mapped alone. In both states, the false data increase before the recommendation state is updated as
$$\begin{aligned} S_A &= 1(n[R_1]) + \frac{Q[R_{n+1}]}{n[R_1]} \\ S_{A_2} &= 2(n[R_2]) + \frac{Q[R_{n+2}]}{n[R_2]\,\rho(n[R_{n-1}])} \\ S_{A_4} &= 3(n[R_3]) + \frac{Q[R_{n+3}]}{n[R_3]\,\rho(n[R_{n-2}])} \\ &\ \vdots \\ S_{A_n} &= j(n[R_n]) + \frac{Q[R_{n+j}]}{n[R_n]\,\rho(n[R_{n-j-1}])} \end{aligned} \tag{8}$$
The state analysis update is estimated on the segmented data of all $n[R_n]$ or the previous start of the next $Q[R_n]$. The state analysis update depends on $S_{A_n} - n_u$ being updated. The overall streams of the state analysis handle the states $(S_{A_n}, n[R_n])$ and $(S_{A_n}, n_u)$. In this process, the first $S_{A_n}$ denotes the update for $S_D$, and the next state denotes $S_A$. Either of the states $S_{A_n}$ mapped under $S_D$ or $S_A$ is updated with the emotion data in the state analysis. From the state analysis, the recorded inputs $n[R_n]$, $n_u$, and $f$ are used for the consecutive assessment of the $S_{D_1}$ to $S_{D_n}$ sequence. Under this condition, the false data are derived as
$$f = \sum_{j+1}^{n} \frac{P_c\,A_t\,\rho(Q[R_n])\,(n_u) + n\,A_t\,(P_c + n)^t}{\rho(n[R_n]) + \rho(Q[R_n]) + (A_t)^i\,[(A_t)^i - 1]} \tag{9}$$
In Equation (9), the variable $m$ is the occurrence of $n_u$ in $Q[R_n]$ as divided. This false data analysis helps to compute the sequence of instances with the probability of mapping in either $n_u$ or $Q[R_n]$. Therefore, if $f = 0$, then the second instance of $\rho(Q[R_n])$ is performed under $Q[R_n]$. Similarly, if $n_u < n_v$, then $f = 0$ (i.e., the update sequence of $f = 0$), which means that the sequential data streams of $n[R_n]$ are achieved. The accuracy of $n[R_n]$ is valid until the above condition fails. Hence, the consecutive occurrence of $n$ sequentially varies both $n[R_n]$ and $Q[R_n]$ until the given state is discarded. Figure 4 presents the transfer learning process for state analysis.
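The false-data test around Equation (9) can be sketched as follows; the arithmetic mirrors the reconstruction above, and the parameter names are illustrative.

def false_data(p_c, a_t, rho_nr, rho_q, n_u, n, i, t=1):
    # Numerator/denominator structure follows the Equation (9) reconstruction.
    numer = p_c * a_t * rho_q * n_u + n * a_t * (p_c + n) ** t
    denom = rho_nr + rho_q + (a_t ** i) * ((a_t ** i) - 1)
    return numer / denom if denom else 0.0

f = false_data(p_c=0.9, a_t=1.1, rho_nr=0.3, rho_q=0.4, n_u=2, n=3, i=2)
if f == 0:
    print("f = 0: the update proceeds on Q[R_n]")  # the second instance, per the text
else:
    print("false data present: f = %.3f; recur until reduced" % f)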
Unlike recommendation learning, the state analysis relies on the sequence and $S_A$ for analyzing the streams, such that $S_{A_n}$ is identified under different utilization. The utilizations are distinguished between $f$ and state updates for preventing false data (Figure 4). A stream analysis is performed recurrently to improve accuracy if false data persist. The above process shows that self-assisted healthcare services and observation-based diagnosis for patients are derived through a sequence of estimations, per Equations (2), (3), (7), and (9), of either $n[R_n]$ or $Q[R_n]$ as follows:
$$n[R_n] = n_u\,\rho(Q[R_n]) + S_D\,(A_t)^i, \qquad R_n = \sum_{j} n_u \left[ n_v\,\rho(n[R_n]) + S_D\,(A_t)^i \right] \tag{10}$$
Equation (10) is the combined output of the state analysis and accuracy $(A_t)^i$ for all $n[R_n]$ instances. Hence, $Z_t$ in $Q[R_n]$ follows $\sum_j n_u [n_v\,\rho(n[R_n]) + S_D\,(A_t)^i]$; this is shared between the multiple streams of inputs $f$. Therefore, the data utilization ratio for the different patterns in the emotion stream inputs for the instance of $Q[R_n]$ is derived as
$$\begin{aligned} n(Q[R_n]) - \sum_{j} n_u\,(A_t)^i &= n[R_n] - n_u\,\rho(Q[R_n]) - \frac{S_D\,f}{j}(A_t)^i + n \cdot n[R_n] \\ \sum_{j} n_u\,(A_t)^i &= \frac{S_D\,f}{j}(A_t)^i + n_u\,\rho(Q[R_n]) - n \cdot n[R_n] \\ (A_t)^i &= n\left[\frac{S_D\,f}{j}(A_t)^i + n_u\,\rho(Q[R_n]) - n \cdot n[R_n]\right] \end{aligned} \tag{11}$$
The variable $S_D$ used in Equation (11) is computed from the consequences of $n[R_n]$ as in Equation (3), which is in turn computed from $Q[R_n]$ as in Equation (5), derived from its sequences of recorded inputs. The emotional data analysis of the healthcare system, i.e., $Z_t$ with the false data $f$ in $Q[R_n]$ in $n[\frac{S_D f}{j}(A_t)^i + n_u\,\rho(Q[R_n]) - n \cdot n[R_n]]$, is the final estimation of $A_t$. If $f$ and $S_A$ are not performed consecutively, then the entire state of $Z_t$ will be mapped under $f(Q[R_n])$, resulting in high accuracy.
Figure 5 presents an analysis of the approximation and ratio under different state sequences. The proposed scheme achieves less approximation by detecting the $S_A < S_D$ and $S_A \ge S_D$ conditions and by reducing $f$. This process is augmented using multiple-stream analysis, reducing the approximation. In the state-sequence-based data utilization, the change in $S_{A_n}$ and $S_{D_n}$ reduces $f$ through the previous state update. $S_{A_n}$ and $S_{D_n}$ are utilized for different computing instances, maximizing data utilization. Therefore, as data utilization increases, the false rate decreases based on the $n[R_n]$ and $n(Q[R_n])$ sequences. Table 3 presents the similarity ratio for different data input streams.
Table 3 analyzes the similarity ratio for different data input streams. The proposed scheme improves feature extraction by maximizing the exploitation of $E_k$. In the learning recommendation, variations are reduced by preventing $f$ in the $n[R_n]$- and $S_{A_n}$-based updates. This achieves fair data utilization regardless of the feature extraction ratio, which reduces $f$ and maximizes data utilization and similarity. Table 4 presents the false rate for different inputs.
Table 4 displays the input streams (120) derived from the patients' recorded inputs using the emotion data analysis and individual patient ID numbers. According to the diagnostic center, only a few certified users use this procedure: patients who signed up with the diagnostic center and received a space assignment under their patient ID.
Table 4 presents the false-rate values for different inputs. The change in state sequences is classified as high or low, such that high state sequences are experienced when unavailability is high. Conversely, unavailability results in a higher false rate. Therefore, $f$ is mitigated based on availability, avoiding an increase in the false rate.

4. Results and Discussion

4.1. Dataset Description

This section presents the performance analysis through a comparative discussion. The ASCERTAIN dataset [32] is utilized for evaluating and distinguishing various emotions. This dataset gives emotional data based on physiological responses recorded using EEG, ECG, and facial expression labeling. ASCERTAIN is a database that uses commercial physiological sensors to provide multimodal information for implicit personality and affect recognition. Using commercially available sensors, ASCERTAIN records the EEG, ECG, GSR, and facial activity of 58 users in real time as they watch emotionally charged movie clips, in addition to their Big Five personality traits and emotional self-ratings. The dataset authors first review users' emotional assessments and personality measures, followed by the linear and nonlinear physiological correlates of temperament and emotion. Their experimental results show that comparing user reactions with emotionally homogeneous movies is the best way to uncover personality differences, and both the affective and personality dimensions can be recognized above chance. Inputs are collected from 50 individuals utilizing five personality traits with 180 labels. Raw analysis is performed on the video data for emotion identification over 20 s. The average age of the 58 participants was 30 years, and they were exposed to 36 movie clips in the study. Electrocardiograms, galvanic skin responses, electroencephalography, and trajectories of face landmarks were synchronized. Annotations regarding the quality of all recorded data, Big Five personality trait scores, ratings of 50 descriptive adjectives, and self-reports for 36 videos from 58 individuals were all part of the data review. Examining how viewing videos affects character traits was the primary goal of the research. Accuracy, false rate, approximation, analysis time, and data consumption parameters are all subject to the comparative analysis. This study compares the proposed scheme with existing studies, namely, Valence Lateralization with Ensemble Learning (VL+EL) [27], the Fractal Pattern Feature-Based Emotion Recognition Approach (FPFEA) [20], and MSD [19].
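The paper reports accuracy and false rate but not their exact formulas; the stand-in definitions below (false rate taken as the complement of accuracy over labeled predictions) show how such metrics could be computed for the comparison.

def metrics(y_true, y_pred):
    # Fraction of correct predictions, and its complement as a false rate.
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    return {"accuracy": accuracy, "false_rate": 1.0 - accuracy}

y_true = ["happy", "sad", "angry", "relaxed", "happy"]
y_pred = ["happy", "sad", "happy", "relaxed", "happy"]
print(metrics(y_true, y_pred))  # {'accuracy': 0.8, 'false_rate': 0.2}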

4.2. Accuracy Comparison

The proposed scheme achieves fair accuracy for different feature extraction rates and inputs. In this scheme, similarity and $S_A$ verification achieve detection and analysis without errors. $A_b \ne A_c$ and $S_D \le S_A$ analyze the input for $P_c^{\,u}$ identification. This identification reduces the processing of false input, improving $n_u$. The state sequences are validated using $S_{A_n}$ by assigning different iterative computations. For the different inputs $S_D$, the feature extraction is maximized by providing $f$ in different $R_n$. Based on the features, $S_{D_n}$ and $S_{A_n}$ are detected, and $S_{D_n}$ can provide different outputs for the requesting users through services. This alone increases the accuracy. The analysis state, however, continues its mitigation to maximize accuracy. The recurrence in transfer state learning maximizes $n(Q[R_n])$ by reassigning the state sequences. Finally, the $Z_t$ mapping for $f(Q[R_n])$ maximizes accuracy for the consenting $n[R_n]$. Therefore, the accuracy is precisely high, as presented in Figure 6.
Adding traits or data points to a machine learning model's input does not necessarily make it more accurate. Standard methods for dealing with an abundance of input features, particularly when their number surpasses the number of training examples, include regularization, dimensionality reduction, and feature selection. When training a model, it is critical to balance the amount and quality of the characteristics and data points used as inputs. The suggested model uses feature analysis, which means that the model's accuracy in Figure 6 remains unaffected by the increased input.
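The standard remedies named above can be illustrated with a synthetic example: feature selection and dimensionality reduction ahead of a simple classifier. The dataset and pipeline are stand-ins, not the paper's actual model.

from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# 100 features, only 10 informative: a setting where raw feature count hurts.
X, y = make_classification(n_samples=200, n_features=100, n_informative=10,
                           random_state=0)
pipe = make_pipeline(SelectKBest(f_classif, k=20),  # keep the 20 best features
                     PCA(n_components=10),          # compress to 10 components
                     LogisticRegression(max_iter=1000))
print(cross_val_score(pipe, X, y, cv=5).mean())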

4.3. False Rate Comparison

A comparative analysis for different feature extraction rates and inputs is presented in Figure 7. The $S_A$ and $S_D$ processes are validated for identifying $f$, such that the sequences are analyzed as $A_t(S_D - S_A)$. In transfer learning, $\rho(n[R_n])$ is analyzed such that the instance $Q[R_1]$ is segregated for $(n+1)$ and $(n+2-j)\ \forall j \in Q$. This segregation is performed for identifying $n - A_t$ and $\rho(n[R_n])$. In this identification, the occurrence of $n_u$ in $Q[R_n]$ alone generates $f$, which is mitigated using recurrent analysis until the false state is reduced. This is extended to the sequences $n[R_n]$ and $R_n$, such that $f$ in any $R_n$ is reduced under different processing instances. The analysis is carried out until the maximum accuracy is reached. Therefore, $\sum_j n_u - f\ \forall R_n$ alone is used for computing the accuracy in identifying $j$. The results are unanimous for different inputs based on the extracted features.

4.4. Approximation Comparison

The proposed scheme identifies $f$-based approximation for maximizing accuracy. The proposed $P_c$ is verified based on similarity verification such that $S_A$ is improved. The state sequences are analyzed based on $S_{D_n}$ and $S_{A_n}$ for $n(Q[R_n])$. Based on $n_v$ and $n_u$, the sequences requiring approximation and those left out are identified. Through this identification, the learning and training iterations are classified. The sequence is determined for $[A_t(x - j)] = n_u$ such that different approximation levels are retained. This is required for validating the data sequence provided that $Q[R_i]\ \forall i \in n$ and $S_d$ are matched. Conversely, if $S_D > S_A$ is achieved, then $(x - j)A_t\ \forall j \in S_D$ is approximated independently to satisfy either $S_D > S_A$ or $S_D \le S_A$. Therefore, the approximation requirements are provided for the different $n_v$ and $n_u$ requisites. $\rho(A_t)$ is updated as $S_{A_2}$ to $S_{A_n}$ through different training iterations, reducing the approximation (refer to Figure 8).

4.5. Analysis Time Comparison

The analysis time for the different feature extraction rates and inputs is presented in Figure 9. In the proposed scheme, the analysis is segregated into $f$-based and $f$-confined instances. In the learning-aided process, $S_D$ and $S_A$ determine the utilization at the maximum level. The proposed scheme recommends the assigned data without requiring additional instances. $\rho(Q[R_i])\ \forall i \in n$ determines the instances requiring computation, preventing losses in the augmentations of $A_t$ and avoiding additional time. In the similarity-checking process, $n[R_n]$ and $(S_{A_n}, n[R_n])$ are segregated for independent analysis. Based on this process, the computations are distinguished without requiring additional time. The sequences $n[R_n]$ and $R_n$ are identified independently without maximizing the computation time. Therefore, the different processes in the data analysis are preserved without confining $S_D > S_A$, such that the time requirements are smaller. The process varies with the associated inputs and the extracted features, retaining unanimous results throughout.

4.6. Data Utilization Comparison

A comparative analysis of data utilization for different feature extraction rates and inputs is given in Figure 10. The inputs are utilized for accuracy maximization and $f$ reduction based on the transfer learning recommendations. $n[R_n]$ and $[R_n]$ determine the data requirements for further detection, provided that the approximation and $f$ are reduced. In the pursuit processes, $P_c^{\,u}$ is reduced by identifying different features recurrently. This identification defines $S_{D_n}$ and $S_{A_n}$ such that $n(Q[R_n])$ is maximized as $S_{D_n}$ in the first training phase. Conversely, the following $S_{A_n}$ is analyzed for $f$, from which the $S_A$-preceded data are used for computation. The incoming $P_c$ is defined using multiple $E_k$ such that $S_{A_n}$ is performed for $n(Q[R_n])$ in different $i \in n$. Therefore, the utilization in the first instance greatly reduces further requirements. As the feature extraction rate increases, the utilization pursues the maximum $E_k$ exploitation. This holds for all $S_D$ observed at different analysis times, requiring computational instances. These instances are responsible for maximizing data utilization. Table 5 presents the false rate and accuracy for different state sequences for five different emotions.
For the emotions above, we first examine the collected data; among them, "happy" or "smiling" frequently yields accurate results. Second, the data make it easy to identify mood changes, while the others are categorized as miscellaneous. The comparisons mentioned earlier are presented in Table 6 and Table 7.
Summary: The proposed DIPS maximizes accuracy and data utilization by 15.9% and 15.96%, respectively. It reduces false rate by 8.75%, approximation by 10.02%, and analysis time by 9.47%.
Summary: The proposed scheme achieves 18.8% high accuracy, 9.56% low false rate, 9.65% low approximation, 9.24% low analysis time, and 15.42% high data utilization.
According to the diagnostic facility, few certified users use this process: accredited patients are those who signed up with the diagnostic center and acquired a space assignment under their patient ID. In Figure 10, the data points are not very similar; patients' emotions differ, so the false rate, accuracy, and other metrics vary across data points. Machine learning models are not necessarily more accurate with more attributes or data. When the input features exceed the training examples, regularization, dimensionality reduction, and feature selection are used. The volume and quality of the input characteristics and data points must be balanced when training a model. Since the suggested model uses feature analysis, the additional input will not impair accuracy. The computational cost of learning and forecasting increases with the quantity of input features, which could be detrimental if the reliability does not improve simultaneously.

5. Conclusions and Future Study

DIPS makes integrating emotion analysis into self-assisted intelligent healthcare systems possible, which is encouraging. Input devices can save and retrieve emotional data, improving healthcare services’ precision and efficiency. Patient care and treatment outcomes can be improved by better understanding patients’ well-being and emotional responses through the system’s focus on physiological emotions. Limited certified users, skewed or inadequate data, inadequate computational resources, and the inability to generalize across various healthcare settings and demographics are some of the difficulties that DIPS encounters. Research into DIPS’s potential uses in certain healthcare domains or populations, optimization of computing efficiency, enhancement of data quality, and ongoing development to guarantee its utility and robustness in intelligent healthcare systems should all be priorities for future studies. These are the areas where DIPS can improve and become a more helpful tool for AI healthcare systems sensitive to patients’ emotions.

Author Contributions

Conceptualization, M.A. and H.M.; methodology, M.M., A.K.D. and S.A.; software, M.A. and H.M.; validation, M.M.; formal analysis, A.K.D.; resources, M.M., A.K.D., H.M. and S.A.; data curation, S.A.; writing—original draft preparation, M.A. and H.M.; writing—review and editing, A.K.D. and S.A.; visualization, M.M.; funding acquisition, H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2021R1F1A1055408).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are openly accessible at the following link: https://sensor.informatik.uni-mannheim.de/#dataset_realworld_subject7, accessed on 5 March 2024.

Acknowledgments

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through a large group research project under grant number RGP2/461/45. The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this research work through project number PSAU/2024/01/29802. The author Ashit Kumar Dutta would like to acknowledge the support provided by AlMaarefa University while conducting this research work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cao, S.; Liu, H.; Hou, Z.; Li, X.; Wu, Z. EEG-Based Hardware-Oriented Lightweight 1D-CNN Emotion Classifier. In Proceedings of the 15th International Conference on Intelligent Human-Machine Systems and Cybernetics, Hangzhou, China, 26–27 August 2023; pp. 210–213. [Google Scholar]
  2. Lyu, S.; Cheung, R.C. Efficient Multiple Channels EEG Signal Classification Based on Hierarchical Extreme Learning Machine. Sensors 2023, 23, 8976. [Google Scholar] [CrossRef] [PubMed]
  3. Kute, S.S.; Tyagi, A.K.; Aswathy, S.U. Industry 4.0 challenges in e-healthcare applications and emerging technologies. Intell. Interact. Multimed. Syst. e-Healthc. Appl. 2022, 265–290. [Google Scholar]
  4. Dobre, G.C. Social Interactions in Immersive Virtual Environments: People, Agents, and Avatars. Doctoral Dissertation, Goldsmiths, University of London, London, UK, 2023. [Google Scholar]
  5. Lv, Z.; Poiesi, F.; Dong, Q.; Lloret, J.; Song, H. Deep learning for intelligent human–computer interaction. Appl. Sci. 2022, 12, 11457. [Google Scholar] [CrossRef]
  6. Li, B.; Lima, D. Facial expression recognition via ResNet-50. Int. J. Cogn. Comput. Eng. 2021, 2, 57–64. [Google Scholar] [CrossRef]
  7. Zheng, J.; Huang, L.; Li, S.; Lajoie, S.P.; Chen, Y.; Hmelo-Silver, C.E. Self-regulation and emotion matter: A case study of instructor interactions with a learning analytics dashboard. Comput. Educ. 2021, 161, 104061. [Google Scholar] [CrossRef]
  8. Chuah, S.H.W.; Yu, J. The future of service: The power of emotion in human-robot interaction. J. Retail. Consum. Serv. 2021, 61, 102551. [Google Scholar] [CrossRef]
  9. Correia, A.I.; Castro, S.L.; MacGregor, C.; Müllensiefen, D.; Schellenberg, E.G.; Lima, C.F. Enhanced recognition of vocal emotions in individuals with naturally good musical abilities. Emotion 2022, 22, 894. [Google Scholar] [CrossRef] [PubMed]
  10. Zhang, J.X.; Dixon, M.L.; Goldin, P.R.; Spiegel, D.; Gross, J.J. The neural separability of emotion reactivity and regulation. Affect. Sci. 2023, 4, 617–629. [Google Scholar] [CrossRef]
  11. Murphy, J.M.; Bennett, J.M.; de la Piedad Garcia, X.; Willis, M.L. Emotion recognition and traumatic brain injury: A systematic review and meta-analysis. Neuropsychol. Rev. 2021, 32, 1–17. [Google Scholar] [CrossRef]
  12. Duriez, P.; Guy-Rubin, A.; Kaya Lefèvre, H.; Gorwood, P. Morphing analysis of facial emotion recognition in anorexia nervosa: Association with physical activity. Eat. Weight. Disord. -Stud. Anorex. Bulim. Obes. 2021, 27, 1053–1061. [Google Scholar] [CrossRef]
  13. Iwakabe, S.; Nakamura, K.; Thoma, N.C. Enhancing emotion regulation. Psychother. Res. 2023, 33, 918–945. [Google Scholar] [CrossRef] [PubMed]
  14. Nandwani, P.; Verma, R. A review on sentiment analysis and emotion detection from text. Soc. Netw. Anal. Min. 2021, 11, 81. [Google Scholar] [CrossRef] [PubMed]
  15. Jahangir, R.; Teh, Y.W.; Hanif, F.; Mujtaba, G. Deep learning approaches for speech emotion recognition: State of the art and research challenges. Multimed. Tools Appl. 2021, 80, 23745–23812. [Google Scholar] [CrossRef]
  16. Christ, N.M.; Elhai, J.D.; Forbes, C.N.; Gratz, K.L.; Tull, M.T. A machine learning approach to modeling PTSD and difficulties in emotion regulation. Psychiatry Res. 2021, 297, 113712. [Google Scholar] [CrossRef] [PubMed]
  17. Meng, W.; Cai, Y.; Yang, L.T.; Chiu, W.Y. Hybrid Emotion-aware Monitoring System based on Brainwaves for Internet of Medical Things. IEEE Internet Things J. 2021, 8, 16014–16022. [Google Scholar] [CrossRef]
  18. Dhote, S.; Baskar, S.; Shakeel, P.M.; Dhote, T. Cloud computing assisted mobile healthcare systems using distributed data analytic model. IEEE Trans. Big Data 2023, 1–12. [Google Scholar] [CrossRef]
  19. Li, M.; Xie, L.; Lv, Z.; Li, J.; Wang, Z. Multistep Deep System for Multimodal Emotion Detection with Invalid Data in the Internet of Things. IEEE Access 2020, 8, 187208–187221. [Google Scholar] [CrossRef]
  20. Tuncer, T.; Dogan, S.; Subasi, A. A new fractal pattern features a generation function-based emotion recognition method using EEG. Chaos Solitons Fractals 2021, 144, 110671. [Google Scholar] [CrossRef]
  21. Ahamed, F. Smart Aging: Utilization of Machine Learning and the Internet of Things for Independent Living. Doctoral Dissertation, Western Sydney University, Kingswood, Australia, 2021. [Google Scholar]
  22. Fei, Z.; Yang, E.; Li, D.D.-U.; Butler, S.; Ijomah, W.; Li, X.; Zhou, H. Deep convolution network-based emotion analysis towards mental health care. Neurocomputing 2020, 388, 212–227. [Google Scholar] [CrossRef]
  23. Du, Y.; Du, W.; Lin, D.; Ai, M.; Li, S.; Zhang, L. Recent progress on hydrogel-based piezoelectric devices for biomedical applications. Micromachines 2023, 14, 167. [Google Scholar] [CrossRef]
  24. Subasi, A.; Tuncer, T.; Dogan, S.; Tanko, D.; Sakoglu, U. EEG-based emotion recognition using tunable Q wavelet transform and rotation forest ensemble classifier. Biomed. Signal Process. Control 2021, 68, 102648. [Google Scholar] [CrossRef]
  25. Kao, F.-C.; Ho, H.-H.; Chiu, P.-Y.; Hsieh, M.-K.; Liao, J.-C.; Lai, P.-L.; Huang, Y.-F.; Dong, M.-Y.; Tsai, T.-T.; Lin, Z.H. Self-assisted wound healing using piezoelectric and triboelectric nanogenerators. Sci. Technol. Adv. Mater. 2022, 23, 1–16. [Google Scholar] [CrossRef] [PubMed]
  26. Dheeraj, K.; Ramakrishnudu, T. Negative emotions detection on online mental-health related patients texts using the deep learning with MHA-BCNN model. Expert Syst. Appl. 2021, 182, 115265. [Google Scholar] [CrossRef]
  27. Pane, E.S.; Wibawa, A.D.; Purnomo, M.H. Improving the accuracy of EEG emotion recognition by combining valence lateralization and ensemble learning with tuning parameters. Cogn. Process. 2019, 20, 405–417. [Google Scholar] [CrossRef] [PubMed]
  28. Anjum, M.; Shahab, S.; Dimitrakopoulos, G.; Guye, H.F. An In-Vehicle Behaviour-Based Response Model for Traffic Monitoring and Driving Assistance in the Context of Smart Cities. Electronics 2023, 12, 1644. [Google Scholar] [CrossRef]
  29. Upreti, K.; Mahaveerakannan, R.; Dinkar, R.R.; Maurya, S.; Reddy, V.; Thangadurai, N. An Experimental Evaluation of Hybrid Learning Methodology based Internet of Things Assisted Health Care Monitoring System. Res. Sq. 2021. preprint. [Google Scholar]
  30. Gong, L.; Chen, W.; Zhang, D. An Attention-Based Multi-Domain Bi-Hemisphere Discrepancy Feature Fusion Model for EEG Emotion Recognition. IEEE J. Biomed. Health Inform. 2024. online ahead of print. [Google Scholar] [CrossRef]
  31. Kamble, K.S.; Sengupta, J. Multi-channel EEG-based affective emotion identification using a dual-stage filtering approach. In Data Analytics for Intelligent Systems: Techniques and Solutions; IOP Publishing: Bristol, UK, 2024; pp. 1–3. [Google Scholar]
32. ASCERTAIN Dataset. Available online: https://ascertain-dataset.github.io/ (accessed on 5 March 2024).
Figure 1. Discriminant Input Processing Scheme’s workflow.
Figure 2. Feature-based analysis in DIPS.
Figure 3. Transfer learning for recommendation state.
Figure 4. Transfer learning for state analysis.
Figure 5. Approximation and ratio (data utilization and false rate) for different state sequences.
Figure 6. Accuracy comparisons.
Figure 7. False rate comparisons.
Figure 8. Approximation comparisons.
Figure 9. Analysis time comparisons.
Figure 10. Data utilization comparisons.
Table 1. Related works.

Author | Proposed Method | Application Used | Outcomes | Limitations
Meng et al. [17] | Emotion-aware healthcare monitoring system | Internet of Medical Things, EEG | High efficiency and accuracy of emotion recognition | Scalability and generalizability of the hybrid emotion-aware monitoring system, especially for diverse patient populations and real-world implementation
Dhote et al. [18] | Mobile healthcare apps using distributed cloud technologies | Distributed Data Analytics and Organization Model, federated learning | Effective service rollout, improved data organization | Early-stage service and recommendation problems
Li et al. [19] | Multistep deep (MSD) emotion detection system | Deep learning, imputation method | Improved performance by eliminating invalid data | Generalization of the multistep deep system across different IoT environments and additional validation of emotion detection under diverse conditions
Tuncer et al. [20] | Automatic emotion recognition system using EEG | EEG, facial patterns, iterative selector | High accuracy in emotion classification | Scalability and robustness of the fractal pattern feature generation method across different EEG data sources and neurological conditions
Ahamed [21] | Smart ageing with fall detection and dementia diagnosis | Biometric security, fall detection, dementia diagnosis | 99% accuracy in fall detection, 93% accuracy in dementia diagnosis | Generalizability and effectiveness of smart ageing solutions across diverse contexts and further validation of machine learning algorithms
Fei et al. [22] | Emotion analysis framework using a deep CNN | Analysis of the emotions of patients in healthcare | Increased accuracy in predicting emotions | Additional validation of the emotion analysis framework in clinical settings and its performance across demographic groups
Du et al. [23] | Hydrogel-based wearable and implantable devices | Hydrogel structure, piezoelectric capabilities | Flexible and stretchable devices for biomedical applications | Translating hydrogel-based piezoelectric devices to practical biomedical applications, including challenges related to stability and biocompatibility
Subasi et al. [24] | EEG-based emotion recognition with noise reduction | Discrete wavelet transform, tunable Q wavelet transform | Maximized classification accuracy | Generalization of EEG-based emotion recognition to real-world scenarios and diverse datasets
Kao et al. [25] | Piezoelectric and triboelectric nanogenerators for wound healing | Piezoelectric and triboelectric materials | Potential use in wound healing via an external electric field | Challenges in implementing self-assisted wound healing using nanogenerators, including biocompatibility and device performance
Dheeraj et al. [26] | Text-based emotion recognition using multi-head attention and BCNN | Multi-head attention, bidirectional convolutional neural network | Identification of negative text-based emotions, examination of mental-health-related questions | Generalizability of the deep learning model for negative emotion detection in mental-health-related texts, including biases in training data
Pane et al. [27] | Ensemble learning and lateralization approach | EEG-based emotion recognition, hybrid feature extraction, random forest | Improved emotion recognition accuracy | Generalization of the EEG emotion recognition method to diverse datasets and emotion categories
Anjum et al. [28] | Behavior-based response model for smart city traffic | Regression model, cloud computing | Real-time insights for drivers, congestion reduction | Scalability and real-world applicability of the behavior-based response model for traffic monitoring, including data privacy challenges
Upreti et al. [29] | IoT-assisted healthcare monitoring based on a hybrid learning methodology | Internet of Things, hybrid learning | Experimental evaluation of healthcare monitoring performance | Generalizability of the IoT-assisted healthcare monitoring system to diverse settings and need for further validation in clinical environments
Table 2. Notation and definitions.

Notation | Definition
A_t | Recorded input and analysis time
A_b | Recorded input from the input device
A_c | Analysis of the recorded input
E_k | Emotion data received
x | Count of the verified input sequence
y | Varying emotion data sequence
P_c | Process condition
S_A | Stream similarity
S_D | Stream distinction
S_An | State analysis
S_Dn | Feature extraction
f | Feature extraction rate
n | Iterative computations
Q | Recorded sequence input
Z_t | Mapping for maximizing accuracy
n_u | Input processing improvement
n[R_n] | Requesting users through services
f(Q[R_n]) | Feature-based accuracy maximization
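To make the notation concrete for implementers, the following minimal sketch maps the symbols of Table 2 onto simple data structures. It is illustrative only; the field names, types, and grouping are assumptions of this presentation, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch: field names mirror the notation in Table 2;
# the types and grouping are assumptions, not the authors' code.
@dataclass
class DIPSInput:
    A_b: List[float]   # recorded input from the input device
    A_c: List[float]   # analysis of the recorded input
    A_t: float         # recorded input and analysis time
    E_k: List[float]   # emotion data received
    x: int             # count of the verified input sequence
    y: int             # varying emotion data sequence

@dataclass
class DIPSState:
    P_c: bool          # process condition
    S_A: float         # stream similarity
    S_D: float         # stream distinction
    f: float           # feature extraction rate
    n: int             # iterative computations
```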
Table 3. Similarity ratio for different streams.

Streams | Feature Extraction | Data Utilization (%) | Similarity (%)
1 | 0.16 | 63.5 | 55.82
2 | 0.25 | 71.69 | 63.25
3 | 0.31 | 68.25 | 59.87
4 | 0.38 | 75.36 | 74.25
5 | 0.42 | 73.98 | 68.25
6 | 0.39 | 82.69 | 78.21
7 | 0.58 | 89.54 | 81.36
8 | 0.69 | 92.64 | 89.25
9 | 0.97 | 97.57 | 90.81
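Table 3 suggests that stream similarity tracks both the feature extraction rate and data utilization. As a quick, illustrative check (not part of the authors' pipeline), the tabulated values can be correlated directly; the snippet below assumes Python 3.10+ for statistics.correlation.

```python
from statistics import correlation  # available in Python 3.10+

# Values transcribed from Table 3 (streams 1-9).
feature_extraction = [0.16, 0.25, 0.31, 0.38, 0.42, 0.39, 0.58, 0.69, 0.97]
data_utilization = [63.5, 71.69, 68.25, 75.36, 73.98, 82.69, 89.54, 92.64, 97.57]
similarity = [55.82, 63.25, 59.87, 74.25, 68.25, 78.21, 81.36, 89.25, 90.81]

# Pearson correlation between each predictor and the similarity ratio.
print(f"r(feature extraction, similarity) = {correlation(feature_extraction, similarity):.3f}")
print(f"r(data utilization, similarity)   = {correlation(data_utilization, similarity):.3f}")
```

Both coefficients come out strongly positive, consistent with the monotone trend visible in the table.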
Table 4. False rate for different inputs.

Inputs | State Sequences | Unavailability | False Rate
20 | 39 | 0.073 | 0.04
40 | 96 | 0.096 | 0.08
80 | 128 | 0.15 | 0.21
120 | 153 | 0.22 | 0.38
Table 5. False rate and accuracy for different state sequences. Each cell gives accuracy (%) / false rate for the indicated state-sequence count.

Emotion | 40 | 80 | 120 | 160
Anger | 59.32 / 0.08 | 63.41 / 0.071 | 71.4 / 0.065 | 86.5 / 0.061
Sad/Crying | 61.3 / 0.07 | 66.47 / 0.062 | 70.06 / 0.06 | 77.5 / 0.052
Happy/Smiling | 67.3 / 0.063 | 74.6 / 0.059 | 78.2 / 0.056 | 90.07 / 0.054
Mood Change | 81.3 / 0.043 | 86.51 / 0.039 | 90.39 / 0.039 | 94.19 / 0.035
Miscellaneous | 80.4 / 0.058 | 83.62 / 0.051 | 86.1 / 0.048 | 90.39 / 0.048
Table 6. Comparison for feature extraction rate.

Metric | VL + EL | FPF | EAMSD | DIPS
Accuracy (%) | 69.68 | 78.34 | 89.47 | 95.059
False Rate | 0.365 | 0.255 | 0.158 | 0.0843
Approximation | 0.239 | 0.197 | 0.142 | 0.0925
Analysis Time (s) | 3.41 | 2.95 | 2.46 | 1.269
Data Utilization (%) | 72.78 | 81.23 | 91.01 | 97.624
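The margins in Table 6 can be restated as relative improvements of DIPS over the strongest baseline, EAMSD. The short script below is illustrative only, using values transcribed from the table; it is not part of the evaluation code.

```python
# Relative improvement of DIPS over EAMSD, from Table 6.
table6 = {
    "Accuracy (%)":         {"EAMSD": 89.47, "DIPS": 95.059, "higher_is_better": True},
    "False Rate":           {"EAMSD": 0.158, "DIPS": 0.0843, "higher_is_better": False},
    "Approximation":        {"EAMSD": 0.142, "DIPS": 0.0925, "higher_is_better": False},
    "Analysis Time (s)":    {"EAMSD": 2.46,  "DIPS": 1.269,  "higher_is_better": False},
    "Data Utilization (%)": {"EAMSD": 91.01, "DIPS": 97.624, "higher_is_better": True},
}

for metric, row in table6.items():
    base, dips = row["EAMSD"], row["DIPS"]
    if row["higher_is_better"]:
        change = (dips - base) / base * 100   # percentage gain
        print(f"{metric}: +{change:.1f}% vs. EAMSD")
    else:
        change = (base - dips) / base * 100   # percentage reduction
        print(f"{metric}: -{change:.1f}% vs. EAMSD")
```

Run as written, this reports roughly a 6% accuracy gain and reductions of more than a third in false rate, approximation, and analysis time.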
Table 7. Comparison for inputs.

Metric | VL + EL | FPF | EAMSD | DIPS
Accuracy (%) | 68.48 | 75.67 | 85.74 | 95.429
False Rate | 0.374 | 0.281 | 0.195 | 0.0921
Approximation | 0.231 | 0.187 | 0.152 | 0.0935
Analysis Time (s) | 3.31 | 2.89 | 2.25 | 1.254
Data Utilization (%) | 71.58 | 81.57 | 91.81 | 97.072
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
