Article

Sleep Analysis by Evaluating the Cyclic Alternating Pattern A Phases

by Arturo Alves 1,2, Fábio Mendonça 1,2,*, Sheikh Shanawaz Mostafa 2 and Fernando Morgado-Dias 1,2
1 Faculty of Exact Sciences and Engineering, University of Madeira, 9000-082 Funchal, Portugal
2 Interactive Technologies Institute (ITI/LARSyS and ARDITI), 9020-105 Funchal, Portugal
* Author to whom correspondence should be addressed.
Electronics 2024, 13(2), 333; https://doi.org/10.3390/electronics13020333
Submission received: 15 December 2023 / Revised: 8 January 2024 / Accepted: 11 January 2024 / Published: 12 January 2024

Abstract:
Sleep is a complex process divided into different stages, and a decrease in sleep quality can lead to adverse health-related effects. Therefore, diagnosing and treating sleep-related conditions is crucial. The Cyclic Alternating Pattern (CAP) is an indicator of sleep instability and can assist in assessing sleep-related disorders such as sleep apnea. However, manually detecting CAP-related events is time-consuming and challenging. Therefore, automatic detection is needed. Despite their usually higher performance, the utilization of deep learning solutions may result in models that lack interpretability. Addressing this issue can be achieved through the implementation of feature-based analysis. Nevertheless, it becomes necessary to identify which features can better highlight the patterns associated with CAP. Such is the purpose of this work, where 98 features were computed from the patient’s electroencephalographic signals and used to train a neural network to identify the CAP activation phases. Feature selection and model tuning with a genetic algorithm were also employed to improve the classification results. The proposed method’s performance was found to be among the best state-of-the-art works that use more complex models.

1. Introduction

Sleep is an important physiological process for the mind and body of each individual and constitutes about one-third of the human lifespan. There is a consensus that one’s quality of sleep is a significant contributor to a good quality of life [1]. Sleep deprivation may lead to long-term side effects, such as an increased rate of mortality due to an elevated probability of conditions such as obesity, heart failure, and stroke [2]. It has also been demonstrated that sleep can affect the consolidation of memories. Furthermore, inadequate sleep can lead to significantly lower performance in memory tests, particularly those involving the encoding of emotional information [3]. Sleep quality also seems to mediate the relationship between socioeconomic status and physical health [4]. For children and adolescents, there is also a correlation between school performance, sleep quality, and insufficient sleep [5].
There are a multitude of sleep-related disorders that contribute to a poor quality of sleep. These include conditions such as bruxism, Obstructive Sleep Apnea (OSA), insomnia, restless legs syndrome, somnambulism, and sleep terror [2].
OSA is particularly significant since it is one of the most prevalent sleep disorders, characterized by choking episodes during sleep that can lead to negative consequences such as sleep fragmentation [6]. The severity of this disorder can be assessed by the apnea-hypopnea index, which indicates the number of apneas and hypopneas that take place per hour of sleep [7]. Disturbed sleep can lead to changes in the brain’s electrical activity and, thus, can be diagnosed through approaches that involve the examination of these signals.
Electroencephalography (EEG) is a non-invasive, standardized tool for diagnosing multiple neurological problems, including sleep-related disorders [8]. This technique uses electrodes that are positioned along the scalp according to international standards, such as the 10–20 system [9]. Oscillations observed through the EEG signal can then be used to characterize the sleep process and provide an understanding of how sleep functions and is structured [10].
Sleep can be described in terms of both the macrostructure and the microstructure. The sleep macrostructure is composed of blocks of sleep activity that are longer in duration and appear in sequences that repeat over time. Each block corresponds to a different level of deepness, denoted as sleep stages. Frequency and amplitude are the main descriptors of the activity in different sleep stages. Regarding the frequency analysis, several standardized bands have been proposed to evaluate the segmented structure of sleep. The most common ones are delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), sigma (12–15 Hz), beta 1 (15–24 Hz), and beta 2 (24–30 Hz). The sleep stages include wake, rapid eye movement (REM), and non-rapid eye movement (N-REM) sleep, with N-REM being further subdivided into N1, N2, and N3 stages.
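For reference, the standardized bands above can be captured in a small lookup table. The following Python sketch is illustrative only and not part of the original analysis pipeline:

```python
# Conventional EEG frequency bands (Hz) as listed in the text (illustrative).
EEG_BANDS = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 12.0),
    "sigma": (12.0, 15.0),
    "beta1": (15.0, 24.0),
    "beta2": (24.0, 30.0),
}

def band_of(freq_hz):
    """Return the name of the band containing freq_hz, or None if outside all bands."""
    for name, (lo, hi) in EEG_BANDS.items():
        if lo <= freq_hz < hi:
            return name
    return None
```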
REM sleep is characterized by desynchronized low-amplitude waves and resembles wakefulness [11]. Dreaming typically occurs during REM sleep but can also appear during N-REM sleep [12]. As for N-REM sleep, each of its stages is associated with different patterns of brain activity [13].
In contrast, the sleep microstructure comprises shorter events associated with transient sleep features. These events include vertex-sharp transients, sleep spindles, k-complexes, k-alphas, intermittent alphas, delta bursts, and polyphasic bursts [14]. The concept of Cyclic Alternating Pattern (CAP) can be used to further examine the microstructure, providing a way to measure the stability of sleep during N-REM.
The CAP is a periodic activity detectable in the EEG signal that can sporadically occur during N-REM sleep (and more rarely during REM sleep), and it may indicate instability during sleep [15]. CAP cycles are composed of activation phases (A phases), where microstructure-related patterns emerge from the background activity, and B phases, which correspond to periods of return to the background activity between A phases that occur up to 60 s apart. A phases are divided into three subtypes: A1, where slower rhythms are most frequent; A2, containing a mix of slower and faster waves; and A3, where faster low-amplitude waves prevail.
Figure 1 displays an EEG sample containing an A phase, highlighted by a square. As depicted in the figure, A phases are characterized by fluctuations in the amplitude of the EEG signal. A phases typically last between 2 and 60 s; thus, they are short-lived events in the EEG signal. The example presented in Figure 1 lasted 3 s. For a CAP to be recognized, an A phase and the following B phase must each have a duration between 2 and 60 s. If the period between two A phases is shorter than 2 s, both phases are joined into one. A valid B phase must be bounded by two A phases that are up to 60 s apart. A CAP sequence is a group of two or more successive CAP cycles [14]. The ratio between the total CAP time and the total N-REM sleep time is known as the CAP rate. High values of this metric are associated with a lower quality of sleep [15].
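The scoring rules just described (merging A phases separated by less than 2 s, and requiring both the A phase and the following B phase to last between 2 and 60 s) can be sketched in Python. This is a simplified illustration of the rules, not the scoring code used in this work; phases are given as (start, end) times in seconds:

```python
def merge_close_phases(a_phases, min_gap=2.0):
    """Merge A phases separated by less than min_gap seconds (CAP scoring rule)."""
    merged = []
    for start, end in sorted(a_phases):
        if merged and start - merged[-1][1] < min_gap:
            merged[-1] = (merged[-1][0], end)  # join with the previous phase
        else:
            merged.append((start, end))
    return merged

def cap_cycles(a_phases, min_dur=2.0, max_dur=60.0):
    """Pair each valid A phase with the following B phase (the gap to the next
    A phase). Both must last between min_dur and max_dur seconds."""
    phases = merge_close_phases(a_phases)
    cycles = []
    for (s1, e1), (s2, _) in zip(phases, phases[1:]):
        a_dur, b_dur = e1 - s1, s2 - e1
        if min_dur <= a_dur <= max_dur and min_dur <= b_dur <= max_dur:
            cycles.append((s1, s2))  # one CAP cycle: A phase start to next A start
    return cycles
```

A CAP sequence would then be any run of two or more consecutive cycles returned by `cap_cycles`.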
The standard procedure for a CAP analysis is based on a visual inspection of sleep recordings. This procedure is time-consuming and requires substantial domain knowledge to identify specific patterns. The sessions carried out on the patients tend to be long in duration. Consequently, the analysis of the collected EEG recordings is a tedious process that is prone to mistakes. Furthermore, the inter-scorer agreement for the same EEG results ranges between 69% and 77.5%. Hence, automatic scoring algorithms are necessary to overcome these limitations [16].
Despite CAP having immense potential for clinical application, its use remains unfeasible until proper automation arises. A recent systematic review [17], covering works published between 1998 and January 2023, found that early approaches identified A phases by applying thresholds to mathematical models or features. These were followed by conventional machine learning approaches and, more recently, by deep learning models, which now dominate the field. Deep learning models generally achieve higher accuracy; however, they lack the explainability that is essential for clinical application. As CAP is associated with specific patterns, the hypothesis of this study is that a well-designed feature-based approach using a Machine Learning (ML) classifier can provide a suitable solution for automating the CAP analysis. However, it is currently unknown which features are optimal for this approach. This issue can be addressed by incorporating domain-specific expertise into the feature engineering procedure along with an exhaustive search approach.
While deep learning models could be utilized, they lack the interpretability that a feature-based approach can provide. Moreover, feature-based methods are typically more data-efficient than deep learning. This is particularly important for this problem, as there is only one available data set with possibly insufficient data to train very deep models, even with transfer learning. Therefore, this study focuses on feature engineering to address this problem, examining features that can highlight patterns in the signal’s amplitude and frequency.
This work intends to support the detection of CAP by assembling an automatic A phase detection algorithm while focusing on explainability. In a later iteration, this algorithm could be applied in a real-world scenario to patients or healthy individuals undergoing EEG. Moreover, this mechanism could be beneficial for the examination of sleep-related disorders, as well as for aiding in the estimation of sleep-associated metrics for sleep quality measurements.

2. State-of-the-Art Overview

Different authors have proposed several solutions concerning automatic sleep analysis through CAP. Mendonça et al. [17] identified three primary methods in the state of the art.
The first method involves tunable thresholds for the classification procedure, utilizing patterns identified by a mathematical model or features. Barcaro et al. [18] proposed an example of this process using the normalized mean amplitude measurement to identify the A phases. This metric was calculated for the different frequency ranges and compared to the background activity through a normalization process. Navona et al. [19] also suggested a similar approach, achieving an accuracy (Acc) of 77%, a specificity (Spe) of 90%, and a sensitivity (Sen) of 84% on the F4-C4 trace.
Niknazar et al. [20] and Machado et al. [21] implemented a threshold-based analysis, with the latter using the Teager Energy Operator (TEO). This operator was calculated for different frequency bands and compared with a macro-microstructure descriptor.
Ferri et al. [22], Fantozzi et al. [23], and Largo et al. [24,25] followed the threshold-based methodology. Specifically, Largo et al. applied a discrete wavelet transform to the EEG signal to determine the frequency band features as an alternative to the classical Fourier analysis. Furthermore, an initial study by Mariani et al. [26] tried to compute various features from the EEG signal to be used for threshold classification and to evaluate the significance of these descriptors. A subsequent study by Mariani et al. [27] used variable length windows, which outperformed the traditional fixed length windows, probably due to the intrinsic properties of A phases.
While interesting results were achieved, it has been reported that threshold-based approaches may face issues with generalization capability [28]. Thus, this methodology was not used in this work. The second approach identified in the state of the art involved utilizing shallow ML models fed with features. This work follows this methodology due to its superior interpretability potential, which is essential for clinical analysis, unlike the third state-of-the-art approach, which uses deep learning models [29]. The works that follow deep learning-based methodologies were reviewed by Mendonça et al. [17] and are out of the scope of this work.
Mariani et al. [30] tested the same features proposed by Mariani et al. [26] on four different classifiers. A similar approach was also followed by Mariani et al. [31] using a Feed Forward Neural Network (FFNN) and by Mariani et al. [32] using a Support Vector Machine (SVM). Mendez et al. [33] evaluated a K-Nearest Neighbors (KNN) approach with entropy, spectral, and other types of features to discriminate A phase subtypes.
Karimzadeh et al. [34] tried directly distinguishing the CAP from the background activity using entropy and complexity-based features on three classifiers. Feature selection was employed using Sequential Forward Selection (SFS). The SVM classifier provided the best results, proving the utility of entropy-based features for CAP detection. Dhok et al. [35] used the Wigner–Ville distribution for time-frequency analysis of two-second data segments, calculated Rényi entropy, and fed the results into an A phase classifier based on an SVM.
Mendonça et al. [36,37] analyzed different classifiers by extracting eleven features. FFNN obtained the best results. Power Spectral Density (PSD) on the beta band, Shannon entropy, and TEO were selected as the most relevant features through SFS. Sharma et al. [38] utilized an orthogonal filter bank and wavelet decomposition to divide EEG signals into six sub-bands. They computed wavelet entropy and three Hjorth parameters from each sub-band, resulting in 48 features. A similar approach was also used by Sharma et al. [39].
After analyzing the features employed in the state-of-the-art, it was determined that they focused on either studying fluctuations in the signal’s amplitude within a given period or patterns in the frequency domain. This research aimed to merge previously utilized state-of-the-art features that were identified as suitable for A phase analysis and incorporate novel features from both the time and frequency domains. Therefore, a set of statistics-based, entropy-based, and PSD-based features were examined.
Among the reviewed works, it was observed that Mendonça et al. [36] and Mariani et al. [31] attained balanced performances while using a simple shallow classifier. Therefore, a similar approach was followed in this work using an FFNN. The primary objective is not to pursue peak performance, as deep learning models do, but rather to attain a satisfactory performance level (within the range of specialist agreement) with an interpretable methodology. The study was conducted in a way that comprehensively explores the largest array of features ever investigated for A phase classification.

3. Materials and Methods

The diagram of the steps followed to create the proposed solution is shown in Figure 2. Two feature selection approaches were used, namely minimum redundancy maximum relevance (mRMR), and SFS, and the model was optimized using a genetic algorithm (GA).
All the steps required for the conception of the system were implemented in MATLAB. The developed code was made available in a public repository at https://github.com/SvelaT/arturo_sleep (accessed on 1 December 2023) to allow the reproducibility of the results.

3.1. Examined Data

The approach proposed in this study involved employing an FFNN to construct an algorithm for detecting A phases. Examining the used features provides insights into the rationale behind the black box classifications of the FFNN, aligning with the explainability focus of this work. For this to be achievable, a data set with CAP A phase labels for different patients is needed to train the algorithm. Therefore, recordings from the CAP Sleep Database [14], available at https://physionet.org/content/capslpdb/1.0.0/ (accessed on 10 January 2023), were used. Specifically, EEG signals from 19 patients, four of whom were diagnosed with OSA, were used to obtain samples to be employed in the training of the neural network.
The examined subjects were 10 males and 9 females, aged between 23 and 78 years old (mean 40.6), and the EEG derivation was either C3-A2 or C4-A1. The database labels the three A phase subtypes (A1, A2, and A3) at each second of sleep. Since the goal of this work is to examine the presence of an A phase, the subtypes' information was merged. The recordings were sampled at different frequencies (from 100 Hz to 512 Hz), and, in total, 562,311 signal samples were available for learning. Of these, 13.7% correspond to A phases, and the remainder to not-A phases.

3.2. Feature Creation

With the data at hand, numerical features were extracted from the EEG signal to be used in the classification problem. These features were proposed considering state-of-the-art work and problem knowledge. For each window of EEG samples, the following parameters were extracted: mean (first statistical moment); variance (second statistical moment); skewness (third statistical moment); kurtosis (fourth statistical moment); standard deviation; variation of the amplitude; amplitude range; Shannon entropy; log energy entropy; Renyi entropy; Tsallis entropy; autocovariance; autocorrelation; TEO; PSD on the delta band; PSD on the theta band; PSD on the alpha band; PSD on the sigma band; PSD on the beta 1 band; PSD on the beta 2 band.
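A few of these descriptors can be sketched in plain Python to make the computations concrete. This is an illustrative re-implementation (the original work used MATLAB), and the histogram-based entropy estimator is only one of several possible choices:

```python
import math

def statistical_moments(x):
    """First four statistical moments of a window of EEG samples."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in x) / (n * std ** 3) if std else 0.0
    kurt = sum((v - mean) ** 4 for v in x) / (n * var ** 2) if var else 0.0
    return mean, var, skew, kurt

def shannon_entropy(x, bins=16):
    """Shannon entropy of the amplitude histogram (one common estimator)."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / bins or 1.0  # avoid zero width for constant windows
    counts = [0] * bins
    for v in x:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    probs = [c / len(x) for c in counts if c]
    return -sum(p * math.log2(p) for p in probs)

def teager_energy(x):
    """Mean Teager Energy Operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    psi = [x[i] ** 2 - x[i - 1] * x[i + 1] for i in range(1, len(x) - 1)]
    return sum(psi) / len(psi)
```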
The mean represents the average amplitude, offering insights into overall signal level changes during A phases. Variance and standard deviation measure signal spread, helping to identify stability or instability. Skewness reveals waveform asymmetry, aiding A-phase identification. Kurtosis characterizes waveform peakedness, which is useful for sharpness assessment. Variations of amplitude and the amplitude range track amplitude changes, which are valuable for detecting A phases, particularly A1 subtypes. The studied entropies assess the signal complexity (a characteristic of the A phases), while autocovariance identifies repetitive patterns. Autocorrelation detects periodic components, and TEO reveals dynamic energy changes. PSD across different frequency bands provides insights into EEG frequency composition, linking to various sleep stages and microstructure activity patterns, which is crucial for CAP analysis.
PSD for the different frequency bands was estimated using two approaches, Welch's method and wavelet decomposition, which were both included as separate features. Wavelet decomposition splits the original signal into coefficients containing low- and high-frequency information. Furthermore, the first, second, third, and fourth statistical moments, together with the entropy measures, were applied to ten of the coefficients generated by the decomposition and added as features. This resulted in a total of 98 features.
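The spectral features can be illustrated with a naive periodogram-based band power estimate. This is a simplified stand-in for Welch's method (which additionally averages periodograms over overlapping windows), written in plain Python for clarity rather than efficiency:

```python
import math

def band_power(x, fs, lo, hi):
    """Power of signal x (sampled at fs Hz) in the band [lo, hi) Hz, via a
    naive DFT periodogram. Welch's method averages such periodograms over
    overlapping windows; this single-window version illustrates the idea."""
    n = len(x)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n  # frequency of DFT bin k
        if lo <= f < hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            # Double each positive-frequency bin to account for the mirrored half.
            power += 2 * (re * re + im * im) / (n * n)
    return power
```

For a unit-amplitude sine, the band containing its frequency receives a power close to 0.5 (the mean square of the sine), while the other bands stay near zero.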
The Wavelet Decomposition Coefficients (WDCs) used for the calculations were: c1 (0 to 50 Hz); c3 (0 to 25 Hz); c4 (25 to 50 Hz); c5 (0 to 12.5 Hz); c6 (12.5 to 25 Hz); c9 (0 to 6.25 Hz); c10 (6.25 to 12.5 Hz); c13 (0 to 3.125 Hz); and lastly, c14 (3.125 to 6.25 Hz).
Overlapping windows were used to calculate these descriptors, and each patient's data were scaled before the extraction. Each computed feature was then scaled using Z-score standardization, which remaps the data distribution such that its mean equals zero and its standard deviation equals one. Feature scaling improves classification since ML models are sensitive to the magnitude of their inputs.
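Z-score standardization of a feature column can be written as follows (a minimal sketch of the standard formula; the original MATLAB implementation presumably used a built-in routine):

```python
import math

def z_score(values):
    """Z-score standardization: remap a feature to zero mean and unit
    standard deviation, z = (v - mean) / std."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    if std == 0:  # constant feature: leave it centred at zero
        return [0.0] * n
    return [(v - mean) / std for v in values]
```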
The FFNN used to implement this procedure takes in the features and labels for classification and tries to predict output values (labels) for each combination of features. The value returned by each output on the classification neural network can be interpreted as the probability of that sample of inputs belonging to a certain class. Samples were divided into train, validation, and test sets using 5-fold cross-validation without splitting each subject’s data between the folds (to ensure subject-independent results).
The data reveal a strong imbalance between the classes: A phase samples are much less frequent than not-A samples. This is expected since the occurrence of an A phase is a relatively uncommon event. Due to intrinsic properties of the classification algorithm, training the model under these conditions would lead to a sub-optimal classification of A phases. Thus, balancing through standard cost-sensitive learning (using a ratio that considers the number of samples in each class and the total number of samples) was employed to overcome this hurdle without manipulating the data distribution.
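A standard inverse-frequency weighting scheme of this kind can be sketched as follows (an illustration of cost-sensitive weighting in general; the exact ratio used in this work may differ in detail):

```python
def class_weights(labels):
    """Inverse-frequency class weights, w_c = N / (K * N_c), where N is the
    total sample count, K the number of classes, and N_c the count of class c.
    Rare classes (here, A phases at ~13.7% of samples) get larger weights."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    k = len(counts)
    return {c: n / (k * m) for c, m in counts.items()}
```

These weights would then multiply each sample's contribution to the training cost, so misclassifying an A phase is penalized more heavily than misclassifying a not-A sample.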

3.3. Representative Features Selection

Initially, feature selection was employed with the use of the mRMR algorithm. This algorithm is not dependent on the used ML model, as it calculates scores based on the mutual information between features [40]. The identified best features were used in the subsequent optimization procedure. These can be seen as representative features, allowing optimization to be carried out with fewer features, and speeding up the procedure.
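The core idea of mRMR, greedily picking the feature with maximum relevance to the target and minimum redundancy with the already selected set (both measured via mutual information), can be sketched for discretized features as follows. This is a toy illustration; production implementations such as MATLAB's fscmrmr handle continuous features and estimator details:

```python
import math
from collections import Counter

def mutual_information(a, b):
    """Discrete mutual information I(A;B) in bits (plug-in estimator)."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * math.log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

def mrmr_select(features, target, k):
    """Greedy mRMR: repeatedly pick the feature maximizing relevance to the
    target minus mean redundancy with the already selected features."""
    remaining = set(features)
    selected = []
    while remaining and len(selected) < k:
        def score(name):
            rel = mutual_information(features[name], target)
            red = (sum(mutual_information(features[name], features[s])
                       for s in selected) / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```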
The structure of the network and the parameters associated with the learning process then had to be optimized by selecting the best set of values. This selection was made using a GA, a metaheuristic optimization algorithm that is frequently used when the search space is too large for exhaustive evaluation, making it suitable for this work.

3.4. Model Tuning

Before starting the tuning process, the cost function, the maximum number of failed validation checks, the minimum gradient value, and the maximum number of training epochs were set to fixed values. Such was carried out to reduce the number of tuning variables and thus cut the number of parameter combinations, which would, in turn, lower the amount of time necessary for tuning the model. The cost function was set to cross-entropy since it is the function most recommended for classification problems. The maximum number of failed validation checks was set to 10, the minimum gradient value was set to its default value on MATLAB, and the maximum number of training epochs was set to 5000.
Therefore, the rest of the model parameters need to be tuned, and for that, a set of possible values must first be settled for each parameter:
  • Training algorithm: Resilient Backpropagation (Rprop); scaled conjugate gradient backpropagation; BFGS quasi-Newton backpropagation; conjugate gradient backpropagation with Powell-Beale restarts; conjugate gradient backpropagation with Fletcher–Reeves updates; conjugate gradient backpropagation with Polak–Ribière updates; gradient descent with momentum and adaptive learning rate backpropagation; one-step secant backpropagation.
  • Number of neurons in the hidden layer: 10; 30; 50; 70; 90; 110; 130; 150; 170; 190; 210; 230; 250; 270; 290; 310.
  • Number of hidden layers: 1; 2.
  • Activation function: hyperbolic tangent; sigmoid.
  • Performance ratio: 0; 0.01; 0.04; 0.07.
The included training algorithms are the most common algorithms implemented in MATLAB, and their default values were used. Exactly eight algorithms were selected to fully occupy three bits of data. The number of neurons was selected in steps of 20 from 10 to 310, resulting in 16 possible values, which take precisely four bits of data to represent. The network could have either one or two hidden layers, since these are the most common amounts for shallow configurations, and the same rationale was applied to the possible activation functions. The performance ratio was maintained at low values.
With these options, there are a total of 2048 possible configurations. The information containing which tuning parameters to use for each individual was encoded in a string (chromosome) of 11 bits. Individuals were selected using a tournament selection, and the best individuals were used as parents for the two-point splitting crossover [41]. A mutation could be applied to the created offspring to increase the amount of exploration the algorithm does. Furthermore, elitism was also used.
The first score obtained for a given chromosome was maintained throughout the evolution. This means that if a chromosome appears in multiple generations, its score is not recalculated; instead, its first calculated value is reused, reducing the time necessary for tuning. The optimization procedure ran for a total of 50 generations, with 20 individuals per generation. The two best individuals were kept between generations, and the mutation chance was set to 5%. Each individual was scored by training the model with the parameters associated with its chromosome and collecting the Area Under the Curve (AUC) value as the scoring metric. However, to allow the GA to perform a minimization, the fitness score was calculated as 1 − AUC.
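The tuning loop can be illustrated with a compact genetic algorithm sketch. The bit layout of the chromosome below (3 bits for the training algorithm, 4 for the neuron count, 1 for the layer count, 1 for the activation, 2 for the performance ratio) is an assumption consistent with the option counts above, and the fitness function is left as a parameter, standing in for the 1 − AUC score obtained by actually training the network:

```python
import random

# Assumed 11-bit chromosome layout (consistent with the option counts above).
ALGOS = ["rprop", "scg", "bfgs", "cgb", "cgf", "cgp", "gdx", "oss"]

def decode(bits):
    """Map an 11-bit string to a candidate network configuration."""
    algo = ALGOS[int(bits[0:3], 2)]
    neurons = 10 + 20 * int(bits[3:7], 2)              # 10, 30, ..., 310
    layers = 1 + int(bits[7], 2)                       # 1 or 2 hidden layers
    activation = ["tanh", "sigmoid"][int(bits[8], 2)]
    perf_ratio = [0.0, 0.01, 0.04, 0.07][int(bits[9:11], 2)]
    return algo, neurons, layers, activation, perf_ratio

def evolve(fitness, pop_size=20, generations=50, n_bits=11,
           elite=2, mut_rate=0.05, seed=0):
    """Minimize fitness(chromosome) with tournament selection, two-point
    crossover, bit-flip mutation, elitism, and score caching (a chromosome
    seen in multiple generations keeps its first computed score)."""
    rng = random.Random(seed)
    cache = {}
    def score(c):
        if c not in cache:
            cache[c] = fitness(c)
        return cache[c]
    def tournament(pop):
        return min(rng.sample(pop, 3), key=score)
    pop = ["".join(rng.choice("01") for _ in range(n_bits))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score)
        nxt = pop[:elite]                              # elitism
        while len(nxt) < pop_size:
            p1, p2 = tournament(pop), tournament(pop)
            i, j = sorted(rng.sample(range(1, n_bits), 2))
            child = p1[:i] + p2[i:j] + p1[j:]          # two-point crossover
            child = "".join(b if rng.random() > mut_rate else "10"[int(b)]
                            for b in child)            # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=score)
```

In the actual procedure, `fitness` would train the FFNN decoded from the chromosome and return 1 − AUC on the validation data.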

3.5. Final Examination

A second feature selection was employed to select features using the configuration obtained from the tuning step. This last step used a more resource-intensive approach with SFS, where features are selected according to their performance on the model. Lastly, the performance of the models was examined using a leave-one-out cross-validation with subject independence. Specifically, data from 18 subjects was used for training, reserving the data from one subject for testing. To ensure subject independence, no data from the testing subject was used in the training process, and each subject was used once to compose the testing dataset. The analysis was repeated 19 times (once for each subject). This way, it is possible to assess the generalization capability of the model further.
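The evaluation scheme can be sketched as a simple generator over subject identifiers (illustrative; with the 19 subjects of this study it yields 19 subject-independent folds):

```python
def leave_one_subject_out(subject_ids):
    """Yield (train_subjects, test_subject) splits: each subject is held out
    exactly once, and none of its data appears in the matching train set."""
    for held_out in subject_ids:
        train = [s for s in subject_ids if s != held_out]
        yield train, held_out
```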

4. Results and Discussion

An initial feature selection step using the mRMR algorithm selected 13 of the 98 features. These features are the most dissimilar, but not necessarily the best-performing, ones and obtained an average AUC of 80.57% on the test set. The sequence of the selected features, from most to least relevant, was: Shannon entropy of the EEG signal; skewness of WDC c4; log energy entropy of WDC c4; skewness of WDC c14; Welch PSD of the theta band; kurtosis of WDC c13; variation of the amplitude; kurtosis of WDC c10; Welch PSD of the alpha band; Renyi entropy of WDC c13; mean of WDC c3; Welch PSD of the delta band; and Welch PSD of the beta 1 band. Notably, these automatically selected features cover both the time and frequency characteristics of the A phases.
It is crucial to delve into the rationale behind selecting these features as the most relevant, as they can offer new clinical perspectives on the phenomena associated with the A phases. Shannon entropy of the EEG signal might provide insights into the complexity and variability of CAP A phase patterns. Skewness of WDC c4 and c14 might reveal asymmetries in high- and low-frequency EEG components. These are valuable for identifying unique features of CAP A phases, particularly those linked with the A3 and A1 subtypes.
The log energy entropy of WDC c4 can possibly offer an alternative perspective to the other entropies, emphasizing higher-frequency characteristics linked to CAP A phases. Welch PSD in the theta, alpha, delta, and beta 1 bands likely enables the examination of dissimilar frequency-specific patterns for each of the A phase subtypes. Kurtosis of WDC c13 and c10 can possibly aid in examining the lower frequency associated with the A phases. Variation of amplitude can assess changes in EEG signal amplitude oscillations. At the same time, the Renyi entropy of WDC c13 can capture unique aspects of CAP A phases at a lower frequency not addressed by other entropy-based metrics. Also, the mean of WDC c3 can provide a summary statistic of all the frequencies linked to the A phases that are encompassed by c3.
These most representative features were then used on the model tuning step using a GA, which optimized the network structure. The reduced number of inputs helped shorten the time necessary for finding the optimized parameters. The algorithm achieved its minimum fitness value at generation 15, which is equivalent to an AUC of 81.02%. The network configuration that achieved these results comprised two hidden layers of 190 neurons each. The activation function in these layers was the sigmoid, the regularization parameter was set to 0.07, and the training algorithm was the Rprop [42].
The mean and best fitness value for each generation of the GA are represented on the plots in Figure 3. This figure illustrates the average fitness value of all models in each generation and the performance of the best-performing model in each generation. This figure confirms that the search procedure reached saturation after finding the best model and supports the decision to not continue the search after generation 50.
The tuned model was then trained through SFS to select a different set of features for the final model. It is important to note that all the simulations used models whose weights were randomly initialized. All the initial 98 features were taken into consideration during this step. The variations of the examined performance metrics as the number of used features increases are presented in Figure 4.
Having finished the training, a peak AUC value of 83.0% occurred at 16 features. The order of selected features (from first to last selected) was: Welch PSD of the delta band; variance of WDC c6; Tsallis entropy of WDC c4; Welch PSD of the beta 1 band; Tsallis entropy of WDC c14; variance of WDC c3; amplitude range; variance of the EEG signal; Welch PSD of the sigma band; kurtosis of WDC c10; Shannon entropy of WDC c9; Renyi entropy of WDC c9; kurtosis of WDC c4; log energy entropy of WDC c14; log energy entropy of WDC c6; and Renyi entropy of WDC c10.
The features identified as the most informative by mRMR may not necessarily be the optimal choices for model input, as illustrated by the SFS analysis, where a subset of other features exhibited superior performance. This suggests that while mRMR is effective at ranking features by their individual informativeness, it may not fully capture the synergistic interactions among features contributing to the model's overall performance. It therefore underscores the importance of a comprehensive feature selection approach that considers both individual feature relevance and the features' collective impact on model performance. Hence, it became crucial to understand why the selected 16 features outperformed when combined.
These selected features include the variance of specific high-frequency components (WDC c6 and c3) that exhibit varying patterns during A phases, thus providing insights into relevant high-frequency fluctuations. The amplitude range complements the variance by capturing amplitude oscillations indicative of signal amplitude changes during A phases. The Tsallis entropy applied to WDC c4 and c14 focuses on unique complexity aspects within low and high frequencies, while the Shannon entropy of WDC c9 quantifies information content possibly linked to subtype A1 patterns.
The Renyi entropy of WDC c9 and c10 offers complementary information within the frequency band, possibly associated with A1 and A2 subtypes. The kurtosis of WDC c4 examines characteristics within the higher frequencies. The Welch PSD of the sigma band includes frequencies related to subtype A2. Finally, the log energy entropy of WDC c14 and c6 assesses complexity across frequencies typically associated with all A phase subtypes. These diverse features collectively contribute to a more comprehensive understanding of CAP A phases in EEG data.
An example displaying the response of the selected features during the occurrence of the A phase subtypes is presented in Figure 5. Although it is not possible to extrapolate the general behavior from just three examples, they illustrate some commonly observed traits of the features. However, not all features have an individual interpretation, as previously observed with the mRMR-selected features, since it might be a combination of two or more features that highlights the pattern. Nevertheless, by examining the figure, it becomes apparent that the features respond differently to the three subtypes. This was expected, as each subtype has unique characteristics. Specifically, A1 is characterized by high-amplitude slow waves, while A3 exhibits the opposite pattern, and A2 represents an intermediate state between the two [14].
In these examples, notable trends emerge. Particularly, the variance of WDC c3 reveals a pronounced increase in the A1 subtype, while WDC c6 exhibits more variation in the A2 and A3 subtypes, which is consistent with the anticipated frequency response. Additionally, log energy entropy demonstrates an increase for WDC c6 across all subtypes, while the Shannon entropy of WDC c9 shows a clear response in subtype A1, indicating variations in the complexity of the signal.
Furthermore, the PSD of the three selected bands exhibits evident responses to subtype presence. These include a decrease in the delta band during A3, a variation in the beta 1 band during A1, and increases in the sigma band across all subtypes. These variations align with state-of-the-art findings. Lastly, both the variance and the amplitude range conspicuously increase in the presence of A1 subtypes, consistent with the high-amplitude variations characterizing this subtype. On the other hand, A3 subtypes exhibit the opposite trend in variance, as discernible in the feature responses, with A2 subtypes displaying intermediate behavior. This analysis underscores the explainability potential of the proposed features, aligning closely with the primary objective of this work.
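A band-power feature of the kind discussed above can be sketched with Welch's method as follows. The sampling rate, band edges, and window length are illustrative assumptions; the paper may use different values.

```python
# Sketch of Welch PSD band powers for the delta, sigma, and beta 1 bands.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 128                                        # assumed EEG sampling rate (Hz)
bands = {"delta": (0.5, 4.0), "sigma": (12.0, 15.0), "beta1": (15.0, 22.0)}

def band_powers(epoch, fs=fs):
    f, pxx = welch(epoch, fs=fs, nperseg=min(len(epoch), 2 * fs))
    out = {}
    for name, (lo, hi) in bands.items():
        mask = (f >= lo) & (f < hi)
        out[name] = trapezoid(pxx[mask], f[mask])   # integrate PSD over the band
    return out

epoch = np.random.default_rng(2).normal(size=fs)    # one-second epoch
powers = band_powers(epoch)
print({k: round(v, 4) for k, v in powers.items()})
```

For a real recording, a drop in delta power during A3 or a rise in sigma power during any subtype would show up directly as changes in these integrated band values.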
To draw more general conclusions, the full dataset was examined. For each subject, the variation of the output of the selected features was analyzed against the EEG unrelated to an A phase, including all sleep stages. The expression used was (Y − X)/X, where X is the average value of the feature over the one-second epochs unrelated to an A phase and Y is the average feature output during the subtype occurrence. The results for all examined subjects (each subject is a data point in the plot) are presented in Figure 6. A larger spread of results is expected since all sleep stages were included and the population comprises both healthy and sleep-disordered patients, thus allowing the global trends of the features' outputs to be observed.
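The relative-variation measure (Y − X)/X described above can be sketched directly, where X is the mean feature value over epochs unrelated to an A phase and Y the mean during a given subtype. The data below are synthetic, purely to show the computation.

```python
# Relative variation of a feature during subtype epochs vs. baseline epochs.
import numpy as np

def relative_variation(feature_values, is_subtype_epoch):
    x = feature_values[~is_subtype_epoch].mean()   # baseline (non-A-phase) mean
    y = feature_values[is_subtype_epoch].mean()    # mean during the subtype
    return (y - x) / x

rng = np.random.default_rng(3)
values = np.concatenate([rng.normal(1.0, 0.1, 900),    # non-A-phase epochs
                         rng.normal(1.5, 0.1, 100)])   # A-phase (subtype) epochs
labels = np.zeros(1000, dtype=bool)
labels[900:] = True
rv = relative_variation(values, labels)
print(round(rv, 3))                                    # ≈ 0.5 for this toy data
```

A positive value means the feature output increases during the subtype relative to baseline; computing this per subject yields one data point per subject, as plotted in Figure 6.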
By examining Figure 6, it is possible to corroborate the former analysis, as the drawn conclusions still hold for this general examination. Furthermore, it is clear that most features show a substantial increase in the median compared to the EEG signals unrelated to an A phase. Features with a higher median are likely more relevant for A phase detection, as the feature output reacts more strongly in the presence of an A phase than during periods unrelated to one. It is also interesting to note that some features (such as the Welch PSD of the beta 1 band) have a similar median value for all subtypes, while others (such as the kurtosis of WDC c10) seem more responsive to one of the subtypes. Lastly, some features have a low median but a higher variation; they were possibly selected by SFS because they provide complementary information to the other features through synergistic interactions.
These analyses underscore the significance of frequency-specific details in distinguishing A phases, aligning with the CAP scoring protocol, which emphasizes the presence of both amplitude-based and frequency-based components within these phases. The inclusion of various WDCs across different frequencies highlights the importance of capturing wavelet-derived attributes for effective A phase classification, suggesting that different types of features may excel at identifying information content and complexity at distinct frequencies. Additionally, incorporating statistical moments into the selected feature set highlights the interest in capturing statistical properties and amplitude fluctuations within the EEG signal. Furthermore, these analyses stress the interpretability potential of the proposed approaches, making it possible to explain the reasons that lead to an A phase classification, a fundamental characteristic for clinical analysis.
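Extracting frequency-localized components of the kind referred to as WDCs can be sketched with a multilevel wavelet filter bank. For self-containment, a plain Haar filter bank is implemented in NumPy below; the paper's wavelet family, decomposition depth, and component indexing (c1–c14) are assumptions not reproduced here.

```python
# Toy multilevel Haar decomposition: each level splits the remaining
# approximation into a low-pass half-band and a high-pass detail band.
import numpy as np

def haar_dwt(x):
    x = x[: len(x) // 2 * 2]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass half-band
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass half-band
    return approx, detail

def wavelet_components(signal, levels=4):
    comps, approx = [], signal.astype(float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        comps.append(detail)                     # one detail component per level
    comps.append(approx)                         # final approximation
    return comps

sig = np.random.default_rng(5).normal(size=256)
comps = wavelet_components(sig)
print([len(c) for c in comps])                   # 128, 64, 32, 16, 16
```

Each component covers a progressively lower frequency band, so statistics computed per component (variance, kurtosis, entropies) naturally yield the frequency-specific features the analysis relies on. The Haar transform is orthonormal, so the components jointly conserve the signal's energy.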
Additionally, the fact that features proposed in previous research [36] were also selected as relevant demonstrates the consistency and validity of the approach taken in this work. Furthermore, the selection of several newly proposed features, especially those based on wavelets, highlights the innovative potential of the proposed feature engineering techniques.
Table 1 compares the scoring metrics of the final model against the initial model with arbitrary parameters (used as a benchmark) and all 98 features. These results were attained using leave-one-out cross-validation with subject independence, showing the individual performance for each subject (the last four subjects correspond to the sleep-disordered patients) when that subject formed the test set. The mean, standard deviation (SD), minimum (Min), and maximum (Max) are also presented. Notably, the employed optimization and feature selection procedures (final model) substantially improved performance compared with the initial model, increasing the AUC by almost 3%. This also shows that the features used in this algorithm provide useful information for classifying CAP A phases, supporting their relevance for this field of study.
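The validation scheme behind Table 1 can be sketched as follows: each fold holds out every epoch of one subject, so the test data are fully subject-independent. The classifier and synthetic data are placeholder assumptions for illustration.

```python
# Subject-independent leave-one-out cross-validation: one held-out
# subject per fold, AUC reported per fold.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_subjects, epochs_per_subject = 5, 200
subject_id = np.repeat(np.arange(n_subjects), epochs_per_subject)
X = rng.normal(size=(n_subjects * epochs_per_subject, 6))
y = (X[:, 0] + rng.normal(scale=0.7, size=len(X)) > 0).astype(int)

aucs = []
for s in range(n_subjects):
    test = subject_id == s                      # hold out one whole subject
    clf = LogisticRegression().fit(X[~test], y[~test])
    aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))

print([round(a, 3) for a in aucs], "mean:", round(float(np.mean(aucs)), 3))
```

Randomly splitting epochs across subjects instead would leak each subject's signal characteristics into both sets, which is why the subject-wise split gives the more honest (and typically lower) performance estimate.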
For comparison purposes, Table 2 presents the final results obtained by the implemented solution alongside the results from state-of-the-art works that follow interpretable methodologies. The attained results are aligned with those of state-of-the-art works using comparable approaches, despite the present work relying on a simpler classifier.
Table 2. Comparison between the state-of-the-art results and the presented solution.
Work | Number of Subjects | Acc (%) | Sen (%) | Spe (%)
[29] | 14 | 67 | 55 | 69
[35] | 6 | 72 | 77 | 69
[26] | 8 | 72 | 52 | 76
[39] | 6 | 73 | 77 | 71
[37] | 14 | 75 | 78 | 74
[28] | 19 | 76 | 75 | 77
[19] | 10 | 77 | 84 | 90
[38] | 7 | 78 | 74 | -
[36] | 14 | 79 | 76 | 80
[31] | 4 | 81 | 76 | 83
[20] | 5 | 81 | 76 | 81
[33] | 5 | 82 | 87 | 74
[32] | 4 | 84 | 74 | 86
[30] | 8 | 85 | 73 | 87
[27] | 16 | 86 | 67 | 90
Proposed | 19 | 73 | 77 | 74
Note: - denotes a metric not reported in the corresponding study.

5. Conclusions

This study aimed to examine features that could effectively detect CAP A phases. To achieve this, a total of 98 features were analyzed, combining those presented in state-of-the-art studies with new features proposed in this study. Therefore, this study not only focused on performance but also on feature engineering, evaluating which features were the most appropriate.
Hence, techniques such as feature selection and GA-based model tuning proved essential for improving the performance metrics used to evaluate the feasibility of the method. Combining two separate selection steps improved the overall approach by reducing the time necessary to optimize the structure and selecting an optimal set of features for the final classification algorithm.
Regarding the examined features, it is notable that the ones selected by SFS span both the time and frequency domains. Furthermore, features proposed in the state of the art were selected alongside several of the newly proposed features, especially the wavelet-based ones. These results support the relevance of this work in the field of feature engineering, indicating which new features can be examined in future works on CAP.
Contrary to most state-of-the-art works, this work examined a population comprising both healthy subjects and patients with sleep-related disorders, using leave-one-out cross-validation to validate the results. Other studies, meanwhile, either did not report their validation procedures or used a simpler method (such as randomly dividing the data of all subjects, which eliminates subject independence). Furthermore, this study introduced a fully automatic method that did not require manual removal of any parts of the EEG signal, a step most state-of-the-art studies needed in order to isolate non-rapid eye movement sleep (making them less suitable for real-world deployment).
Therefore, this work provides a basis for future developments in CAP A phase analysis. It is important to emphasize that agreement among specialists in this field is difficult to achieve, reaching at most 77.5% [16], and decreases as the number of specialists involved in the analysis increases. The results of this work therefore fall within the range of agreement among the most experienced specialists, despite using a rigorous validation procedure (leave-one-out cross-validation with subject independence) and a fully automatic methodology.
The forthcoming steps of this study involve an in-depth analysis of a more diverse population to investigate how various sleep-related conditions may impact the features. It is conceivable that distinct disorders give rise to patterns more prominently discerned by specific sets of features. Identifying which features excel at capturing the nuances of A phase patterns in these various disorders holds considerable clinical significance. Additionally, a more detailed exploration of the A phase subtypes is needed to ascertain the relative importance of specific features for each subtype. Furthermore, exploring the integration of the proposed algorithm with cutting-edge sensors [43], methods for gaining new insights into brain activity [44], and compatible hardware platforms [28] is warranted for a comprehensive investigation.

Author Contributions

Conceptualization, A.A., F.M. and F.M.-D.; methodology, A.A. and F.M.; software, A.A.; validation, F.M., S.S.M. and F.M.-D.; formal analysis, A.A. and F.M.; investigation, A.A., F.M., S.S.M. and F.M.-D.; resources, F.M.; data curation, F.M.; writing—original draft preparation, A.A.; writing—review and editing, F.M., S.S.M. and F.M.-D.; visualization, A.A.; supervision, F.M. and F.M.-D.; project administration, F.M. and F.M.-D.; funding acquisition, F.M.-D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by LARSyS (Projeto UIDP/50009/2020, DOI: https://doi.org/10.54499/UIDP/50009/2020). It was also funded by the ARDITI-Regional Agency for the Development of Research, Technology, and Innovation, grant number M1420-09-5369-FSE-000002-Post-Doctoral Fellowship, co-financed by the Madeira 14-20 Program-European Social Fund. It was also funded by MITIExcell-EXCELENCIA INTERNACIONAL DE IDT&I NAS TIC, grant number M1420-01-0145-FEDER-000002, provided by the Regional Government of Madeira.

Data Availability Statement

The used data are available at https://physionet.org/content/capslpdb/1.0.0/ (accessed on 10 January 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zeitlhofer, J.; Schmeiser-Rieder, A.; Tribl, G.; Rosenberger, A.; Bolitschek, J.; Kapfhammer, G.; Saletu, B.; Katschnig, H.; Holzinger, B.; Popovic, R.; et al. Sleep and quality of life in the austrian population. Acta Neurol. Scand. 2000, 102, 249–257. [Google Scholar] [CrossRef] [PubMed]
  2. Chokroverty, S. Overview of sleep & sleep disorders. Indian J. Med. Res. 2010, 131, 126–140. [Google Scholar] [PubMed]
  3. Walker, M.P. The role of sleep in cognition and emotion. Ann. N. Y. Acad. Sci. 2009, 1156, 168–197. [Google Scholar] [CrossRef]
  4. Moore, P.J.; Adler, N.E.; Williams, D.R.; Jackson, J.S. Socioeconomic status and health: The role of sleep. Psychosom. Med. 2002, 64, 337–344. [Google Scholar] [CrossRef]
  5. Dewald, J.F.; Meijer, A.M.; Oort, F.J.; Kerkhof, G.A.; Bögels, S.M. The influence of sleep quality, sleep duration and sleepiness on school performance in children and adolescents: A meta-analytic review. Sleep Med. Rev. 2010, 14, 179–189. [Google Scholar] [CrossRef] [PubMed]
  6. Strollo, P.J., Jr.; Rogers, R.M. Obstructive sleep apnea. N. Engl. J. Med. 1996, 334, 99–104. [Google Scholar] [CrossRef] [PubMed]
  7. Ruehland, W.R.; Rochford, P.D.; O’Donoghue, F.J.; Pierce, R.J.; Singh, P.; Thornton, A.T. The new aasm criteria for scoring hypopneas: Impact on the apnea hypopnea index. Sleep 2009, 32, 150–157. [Google Scholar] [CrossRef]
  8. Hughes, J.R.; John, E.R. Conventional and quantitative electroencephalography in psychiatry. J. Neuropsychiatry Clin. Neurosci. 1999, 11, 190–208. [Google Scholar] [CrossRef]
  9. Jasper, H.H. The ten-twenty electrode system of the international federation. Electroencephalogr. Clin. Neurophysiol. 1958, 10, 370–375. [Google Scholar]
  10. Rechtschaffen, A. A manual for standardized terminology, techniques and scoring system for sleep stages in human subjects. Brain Inf. Serv. 1968, 20, 246–247. [Google Scholar]
  11. Matarazzo, L.; Foret, A.; Mascetti, L.; Muto, V.; Shaffii, A.; Maquet, P. A systems-level approach to human rem sleep. Rapid Eye Mov. Sleep Regul. Funct. 2011, 8, 71. [Google Scholar]
  12. Markov, D.; Goldman, M. Normal sleep and circadian rhythms: Neurobiologic mechanisms underlying sleep and wakefulness. Psychiatr. Clin. 2006, 29, 841–853. [Google Scholar] [CrossRef] [PubMed]
  13. Brinkman, J.E.; Sharma, S. Physiology of Sleep; StatPearls: Treasure Island, FL, USA, 2019. [Google Scholar]
  14. Terzano, M.G.; Parrino, L.; Sherieri, A.; Chervin, R.; Chokroverty, S.; Guilleminault, C.; Hirshkowitz, M.; Mahowald, M.; Moldofsky, H.; Rosa, A.; et al. Atlas, rules, and recording techniques for the scoring of cyclic alternating pattern (cap) in human sleep. Sleep Med. 2001, 2, 537–553. [Google Scholar] [CrossRef] [PubMed]
  15. Parrino, L.; Ferri, R.; Bruni, O.; Terzano, M.G. Cyclic alternating pattern (cap): The marker of sleep instability. Sleep Med. Rev. 2012, 16, 27–45. [Google Scholar] [CrossRef] [PubMed]
  16. Rosa, A.; Alves, G.R.; Brito, M.; Lopes, M.C.; Tufik, S. Visual and automatic cyclic alternating pattern (cap) scoring. Arq. De Neuro-Psiquiatr. 2006, 64, 578–581. [Google Scholar] [CrossRef] [PubMed]
  17. Mendonça, F.; Mostafa, S.S.; Morgado-Dias, F.; Ravelo-García, A.G.; Rosenzweig, I. Towards automatic eeg cyclic alternating pattern analysis: A systematic review. Biomed. Eng. Lett. 2023, 13, 273–291. [Google Scholar] [CrossRef]
  18. Barcaro, U.; Navona, C.; Belloli, S.; Bonanni, E.; Gneri, C.; Murri, L. A simple method for the quantitative description of sleep microstructure. Electroencephalogr. Clin. Neurophysiol. 1998, 106, 429–432. [Google Scholar] [CrossRef]
  19. Navona, C.; Barcaro, U.; Bonanni, E.; Di Martino, F.; Maestri, M.; Murri, L. An automatic method for the recognition and classification of the a-phases of the cyclic alternating pattern. Clin. Neurophysiol. 2002, 113, 1826–1831. [Google Scholar] [CrossRef]
  20. Niknazar, H.; Seifpour, S.; Mikaili, M.; Nasrabadi, A.M.; Banaraki, A.K. A novel method to detect the a phases of cyclic alternating pattern (cap) using similarity index. In Proceedings of the 2015 23rd Iranian Conference on Electrical Engineering, Tehran, Iran, 10–14 May 2015; IEEE: Piscataway Township, NJ, USA, 2015; pp. 67–71. [Google Scholar]
  21. Machado, F.; Sales, F.; Bento, C.; Dourado, A.; Teixeira, C. Automatic identification of cyclic alternating pattern (cap) sequences based on the teager energy operator. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; IEEE: Piscataway Township, NJ, USA, 2015; pp. 5420–5423. [Google Scholar]
  22. Ferri, R.; Bruni, O.; Miano, S.; Smerieri, A.; Spruyt, K.; Terzano, M.G. Inter-rater reliability of sleep cyclic alternating pattern (cap) scoring and validation of a new computer-assisted cap scoring method. Clin. Neurophysiol. 2005, 116, 696–707. [Google Scholar] [CrossRef]
  23. Fantozzi, M.P.T.; Faraguna, U.; Ugon, A.; Ciuti, G.; Pinna, A. Automatic cyclic alternating pattern (cap) analysis: Local and multi-trace approaches. PLoS ONE 2021, 16, e0260984. [Google Scholar]
  24. Largo, R.; Munteanu, C.; Rosa, A. Wavelet based cap detector with ga tuning. WSEAS Trans. Inf. Sci. Appl. 2005, 2, 576–580. [Google Scholar]
  25. Largo, R.; Munteanu, C.; Rosa, A. Cap event detection by wavelets and ga tuning. In Proceedings of the IEEE International Workshop on Intelligent Signal Processing, Philadelphia, PA, USA, 18–23 March 2005; IEEE: Piscataway Township, NJ, USA, 2005; pp. 44–48. [Google Scholar]
  26. Mariani, S.; Manfredini, E.; Rosso, V.; Mendez, M.O.; Bianchi, A.M.; Matteucci, M.; Terzano, M.G.; Cerutti, S.; Parrino, L. Characterization of a phases during the cyclic alternating pattern of sleep. Clin. Neurophysiol. 2011, 122, 2016–2024. [Google Scholar] [CrossRef]
  27. Mariani, S.; Grassi, A.; Mendez, M.O.; Milioli, G.; Parrino, L.; Terzano, M.G.; Bianchi, A.M. Eeg segmentation for improving automatic cap detection. Clin. Neurophysiol. 2013, 124, 1815–1823. [Google Scholar] [CrossRef] [PubMed]
  28. Mendonça, F.; Mostafa, S.S.; Morgado-Dias, F.; Ravelo-García, A.G. A portable wireless device for cyclic alternating pattern estimation from an eeg monopolar derivation. Entropy 2019, 21, 1203. [Google Scholar] [CrossRef]
  29. Mostafa, S.S.; Mendonça, F.; Ravelo-García, A.; Morgado-Dias, F. Combination of deep and shallow networks for cyclic alternating patterns detection. In Proceedings of the 2018 13th APCA International Conference on Automatic Control and Soft Computing (CONTROLO), Ponta Delgada, Portugal, 4–6 June 2018; IEEE: Piscataway Township, NJ, USA, 2018; pp. 98–103. [Google Scholar]
  30. Mariani, S.; Manfredini, E.; Rosso, V.; Grassi, A.; Mendez, M.O.; Alba, A.; Matteucci, M.; Parrino, L.; Terzano, M.G.; Cerutti, S.; et al. Efficient automatic classifiers for the detection of a phases of the cyclic alternating pattern in sleep. Med. Biol. Eng. Comput. 2012, 50, 359–372. [Google Scholar] [CrossRef] [PubMed]
  31. Mariani, S.; Bianchi, A.M.; Manfredini, E.; Rosso, V.; O Mendez, M.; Parrino, L.; Matteucci, M.; Grassi, A.; Cerutti, S.; Terzano, M.G. Automatic detection of a phases of the cyclic alternating pattern during sleep. In Proceedings of the 2010 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2010), Buenos Aires, Argentina, 31 August–4 September 2010; IEEE: Piscataway Township, NJ, USA. [Google Scholar]
  32. Mariani, S.; Grassi, A.; Mendez, M.O.; Parrino, L.; Terzano, M.G.; Bianchi, A.M. Automatic detection of cap on central and fronto-central eeg leads via support vector machines. In Proceedings of the 3rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; IEEE: Piscataway Township, NJ, USA, 2011. [Google Scholar]
  33. Mendez, M.O.; Alba, A.; Chouvarda, I.; Milioli, G.; Grassi, A.; Terzano, M.G.; Parrino, L. On separability of a-phases during the cyclic alternating pattern. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; IEEE: Piscataway Township, NJ, USA, 2014; pp. 2253–2256. [Google Scholar]
  34. Karimzadeh, F.; Seraj, E.; Boostani, R.; Torabi-Nami, M. Presenting efficient features for automatic cap detection in sleep eeg signals. In Proceedings of the 2015 38th International Conference on Telecommunications and Signal Processing (TSP), Prague, Czech Republic, 9–11 July 2015; IEEE: Piscataway Township, NJ, USA, 2015; pp. 448–452. [Google Scholar]
  35. Dhok, S.; Pimpalkhute, V.; Chandurkar, A.; Bhurane, A.A.; Sharma, M.; Acharya, U.R. Automated phase classification in cyclic alternating patterns in sleep stages using wigner-ville distribution based features. Comput. Biol. Med. 2020, 119, 103691. [Google Scholar] [CrossRef]
  36. Mendonça, F.; Fred, A.; Mostafa, S.S.; Morgado-Dias, F.; Ravelo-García, A.G. Automatic detection of cyclic alternating pattern. Neural Comput. Appl. 2018, 34, 11097–11107. [Google Scholar] [CrossRef]
  37. Mendonça, F.; Fred, A.; Shanawaz Mostafa, S.; Morgado-Dias, F.; Ravelo-García, A. Automatic detection of a phases for cap classification. In Proceedings of the 7th International Conference on Pattern Recognition Applications and Methods (ICPRAM), Madeira, Portugal, 16–18 January 2018. [Google Scholar]
  38. Sharma, J.T.M.; Patel, V.; Acharya, U. Automated characterization of cyclic alternating pattern using wavelet-based features and ensemble learning techniques with eeg signals. Diagnostics 2021, 11, 1380. [Google Scholar] [CrossRef]
  39. Sharma, A.B.M.; Acharya, U. An expert system for automated classification of phases in cyclic alternating patterns of sleep using optimal wavelet-based entropy features. Expert Syst. 2022, e12939. [Google Scholar] [CrossRef]
  40. Radovic, M.; Ghalwash, M.; Filipovic, N.; Obradovic, Z. Minimum redundancy maximum relevance feature selection approach for temporal gene expression data. BMC Bioinform. 2017, 18, 1–14. [Google Scholar] [CrossRef]
  41. Koza, J.R.; Koza, J.R. Genetic Programming: On the Programming of Computers by Means of Natural Selection; MIT Press: Cambridge, MA, USA, 1992; Volume 1. [Google Scholar]
  42. Riedmiller, M.; Braun, H. A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In Proceedings of the IEEE International Conference on Neural Networks, San Francisco, CA, USA, 25–29 October 1993; pp. 586–591. [Google Scholar]
  43. Zhao, T.; Fu, X.; Zhan, J.; Chen, K.; Li, Z. Vital signs monitoring using the macrobending small-core fiber sensor. Opt. Lett. 2021, 46, 4228–4231. [Google Scholar] [CrossRef] [PubMed]
  44. Jin, T.; Qi, W.; Liang, X.; Guo, H.; Liu, Q.; Xi, L. Photoacoustic Imaging of Brain Functions: Wide Field-of-View Functional Imaging with High Spatiotemporal Resolution. Laser Photonics Rev. 2022, 16, 2100304. [Google Scholar] [CrossRef]
Figure 1. EEG sample from the used data set containing an A phase lasting 3 s, as highlighted by the square, depicting the strong amplitude variations characteristic of this phase.
Figure 2. Diagram of the various steps necessary for the conception of the proposed solution.
Figure 3. Variation of the (a) mean and the (b) best fitness value for each generation of the GA.
Figure 4. Mean values for the examined performance metrics at each step of the SFS algorithm. The values obtained by the best features are plotted for each iteration of the tests.
Figure 5. Examples of three A phase subtype events, marked with the box, for the features selected by SFS.
Figure 6. Variation of the output of the features selected by SFS during all A subtypes of the used database.
Table 1. Leave-one-out cross-validation results of the initial and final models.
Patient | AUC (%) Initial/Final | Acc (%) Initial/Final | Sen (%) Initial/Final | Spe (%) Initial/Final
1 | 86.5/89.4 | 74.1/73.9 | 84.8/90.0 | 72.7/71.8
2 | 78.3/80.6 | 69.5/71.1 | 73.5/76.8 | 69.0/70.4
3 | 79.2/84.9 | 66.9/76.8 | 78.2/75.4 | 66.0/76.9
4 | 76.5/81.1 | 69.5/71.6 | 71.0/76.3 | 69.3/71.3
5 | 87.0/89.6 | 74.5/77.5 | 85.5/86.3 | 72.8/76.2
6 | 89.5/91.1 | 82.8/82.5 | 79.2/85.6 | 83.4/82.0
7 | 86.1/89.9 | 74.5/73.5 | 87.2/95.1 | 73.3/71.3
8 | 79.0/80.6 | 73.6/70.2 | 67.1/75.8 | 74.4/69.5
9 | 84.5/90.4 | 76.1/77.9 | 80.3/90.7 | 75.8/77.0
10 | 73.7/79.0 | 64.0/69.4 | 72.7/73.8 | 62.9/68.9
11 | 79.5/83.9 | 67.1/72.8 | 79.6/80.1 | 65.7/72.0
12 | 86.0/86.0 | 80.7/76.6 | 77.2/82.1 | 81.2/75.9
13 | 81.3/86.7 | 78.7/78.9 | 70.3/80.4 | 79.9/78.7
14 | 88.7/90.6 | 83.1/80.5 | 77.7/86.7 | 83.9/79.6
15 | 85.9/88.0 | 78.1/75.5 | 80.9/87.1 | 77.7/73.6
16 | 71.6/72.4 | 58.4/66.5 | 73.8/65.8 | 57.0/66.5
17 | 72.6/76.6 | 72.4/73.6 | 49.0/51.6 | 80.7/81.4
18 | 63.2/67.2 | 60.3/63.1 | 54.5/52.0 | 63.0/68.3
19 | 57.4/68.7 | 57.6/64.2 | 45.6/56.4 | 63.2/67.9
Mean | 79.3/83.0 | 71.7/73.5 | 73.1/77.3 | 72.2/73.6
SD | 10.5/7.6 | 9.3/6.6 | 12.9/14.6 | 9.5/7.5
Min | 57.4/67.2 | 57.6/63.1 | 45.6/51.6 | 57.0/66.5
Max | 89.5/90.4 | 83.1/80.5 | 87.2/95.1 | 83.9/82.0