Article

The Effect of Coupled Electroencephalography Signals in Electrooculography Signals on Sleep Staging Based on Deep Learning Methods

1 School of Information Science and Technology, Fudan University, Shanghai 200433, China
2 Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
3 Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
4 Human Phenome Institute, Fudan University, Shanghai 201203, China
* Authors to whom correspondence should be addressed.
Bioengineering 2023, 10(5), 573; https://doi.org/10.3390/bioengineering10050573
Submission received: 14 March 2023 / Revised: 20 April 2023 / Accepted: 26 April 2023 / Published: 10 May 2023

Abstract

The influence of the coupled electroencephalography (EEG) signal in electrooculography (EOG) on EOG-based automatic sleep staging has been ignored. Since the EOG and prefrontal EEG are collected at close range, it is unclear whether the EEG couples into the EOG, and whether the EOG signal can achieve good sleep staging results through its intrinsic characteristics alone. In this paper, the effect of a coupled EEG signal in an EOG signal on automatic sleep staging is explored. A blind source separation algorithm was used to extract a clean prefrontal EEG signal. The raw EOG signal and the clean prefrontal EEG signal were then processed to obtain EOG signals coupled with different EEG signal contents. Afterwards, the coupled EOG signals were fed into a hierarchical neural network, comprising a convolutional neural network and a recurrent neural network, for automatic sleep staging. Finally, an exploration was performed using two public datasets and one clinical dataset. The results showed that using a coupled EOG signal could achieve an accuracy of 80.4%, 81.1%, and 78.9% for the three datasets, slightly better than the accuracy of sleep staging using the EOG signal without coupled EEG. Thus, an appropriate content of coupled EEG signal in an EOG signal improved the sleep staging results. This paper provides an experimental basis for sleep staging with EOG signals.

1. Introduction

Good sleep helps the body to eliminate fatigue and maintain normal brain functioning [1]. In contrast, a lack of sleep can lead to depression, obesity, coronary heart disease, and other diseases [2,3,4,5]. However, sleep disorders are becoming an alarmingly common health problem, affecting the health status of thousands of people [6,7]. To assess sleep quality and diagnose sleep disorders, signals such as electroencephalography (EEG), electrooculography (EOG), and electromyography (EMG) collected through polysomnography (PSG) are usually used to stage sleep. An entire night’s sleep can be divided into wake (W), non-rapid eye movement (NREM; S1, S2, S3, and S4), and rapid eye movement (REM) stages, according to the Rechtschaffen and Kales (R&K) standard [8], or into wake (W), non-rapid eye movement (NREM; N1, N2, and N3), and rapid eye movement (REM) stages, according to the American Academy of Sleep Medicine (AASM) standard [9]. In the clinic, sleep staging is performed manually by experienced experts. The procedure is time-consuming and labor-intensive. Meanwhile, there is a subjective element in the judgment of experts, and different experts do not fully agree on the classification of sleep stages [10,11]. To relieve the burden on physicians and save medical resources, many studies have focused on automatic sleep staging using biosignals through machine learning approaches [12,13,14,15].
Automatic sleep staging methods can be divided into traditional machine learning-based methods and deep learning-based methods. Traditional machine learning-based methods usually consist of handcrafted feature extraction and traditional classification methods. Handcrafted feature extraction extracts the features of signals from the time domain, frequency domain, etc., based on medical knowledge. These extracted features are then fed into traditional classifiers, such as support vector machines (SVM) [16,17,18], random forests (RF) [19,20], etc., for automatic sleep staging. Instead of requiring medical knowledge as a prerequisite, the deep learning-based method uses networks to automatically extract features. Thus, it has been widely explored in recent research [21]. Some convolutional neural network (CNN) [22,23,24] and recurrent neural network (RNN) [25] models have achieved good results in automatic sleep staging. Furthermore, some studies have combined different network architectures to incorporate their respective advantages, such as the combination of CNN and RNN [26,27,28], the combination of RNN and RNN [29,30], and the combination of CNN and Transformer architectures [31,32]. With the development of machine learning methods, performance in sleep staging has been greatly improved, with excellent performance on certain public datasets [33,34,35,36]. However, both the traditional machine learning-based methods and deep learning-based methods apply an EEG signal as the main or only input signal. The process of acquiring EEG signals is very tedious and uncomfortable for the subject.
Taking into account the comfort of physiological signal acquisition, some studies have tried to use certain easy-to-collect signals for sleep staging, such as cardiopulmonary signals [37,38,39], acoustic signals [40,41], and EOG signals [42,43,44]. These signals are relatively easy and comfortable to acquire compared to EEG signals, but the sleep staging performance with cardiopulmonary and acoustic signals was not satisfactory for clinical application. Notably, the accuracy of sleep staging using a single-channel EOG signal was similar to that of a single-channel EEG signal in some studies [28,42]. This suggested that an EOG signal could also be used for sleep staging with good performance, allowing comfortable sleep monitoring. Despite the good results yielded by EOG signals in automatic sleep staging, the positions of the acquisition electrodes for the EOG signal and the prefrontal EEG signal are close to each other, which means that part of the EEG signal may be coupled in the EOG signal. Comparison of an EEG signal and an EOG signal in the N3 stage revealed slow wave signals with similar frequencies in the Fp1-O1 channel and in the E1-M2 and E2-M2 channels (Figure 1). The slow wave signal, as the main characteristic wave of the N3 sleep stage, appears in an EOG signal. Therefore, it is not clear whether the sleep staging ability of an EOG signal comes from the coupled EEG signal, and how this coupled EEG could affect the sleep staging results.
To explore the above issue, we conducted experiments applying data from two public datasets and one clinical dataset. First, we processed the EEG signal with a blind source separation algorithm named second-order blind identification (SOBI) [45] to obtain an EEG signal without EOG signals. Second, the raw EOG signal and the clean EEG signal were combined to obtain a clean EOG signal and EOG signals coupled with different contents of the EEG signal. Third, the coupled EOG signals with different EEG signal contents were fed into a hierarchical neural network named the two-step hierarchical neural network (THNN), which consists of a multi-scale CNN and a bidirectional gated recurrent unit (Bi-GRU), for automatic sleep staging. We also performed automatic sleep staging using the EEG signal with the THNN, to explore the difference in performance between the EOG signal and the EEG signal. Finally, we considered the impact of EEG signal coupling in EOG signals on sleep staging.

2. Materials and Methods

In this section, we introduce the subjects selected for this exploration, the blind source separation method used, as well as the specific structure, details, and training strategy of the THNN.

2.1. Subjects and PSG Recordings

In this work, we applied two widely used public datasets and one clinical dataset to conduct the experiment. The details of the three datasets are shown in Table 1.

2.1.1. Montreal Archive of Sleep Studies (MASS) Dataset

The MASS dataset was provided by the University of Montreal and the Sacred Heart Hospital in Montreal [36]. It consists of whole-night sleep recordings from 200 subjects aged 18 to 76 years (97 males and 103 females), divided into five subsets, SS1–SS5. The SS1 and SS3 subsets use 30 s epochs for each sleep stage, whereas the other subsets use 20 s epochs. Each epoch of the recordings in MASS was manually labeled by experts according to the AASM standard or the R&K standard. The amplifier system for MASS was the Grass Model 12 or 15 from Grass Technologies. The reference electrodes were CLE or LER. In this experiment, the SS3 subset was used.

2.1.2. Dreams Dataset

The DREAMS dataset was collected during the DREAMS project. It has eight subsets: subject database, patient database, artifact database, sleep spindles database, K-complex database, REM database, PLM database, and apnea database [35,46]. These recordings were annotated as microevents or as sleep stages by several experts. In this work, the subject database was applied. The subject database consists of 20 whole-night PSG recordings (16 females and 4 males) derived from healthy subjects, and the sleep stages were annotated according to both the R&K standard and the AASM standard. The data collection instrumentation for DREAMS was a digital 32-channel polygraph (Brainnet™ System of MEDATEC, Brussels, Belgium). The reference electrode was A1.

2.1.3. Huashan Hospital Fudan University (HSFU) Dataset

The HSFU dataset is a non-public database collected in Huashan Hospital, Fudan University, Shanghai, China, during 2019–2020. Twenty-six clinical PSG recordings were collected from people who had sleep disorders. The research was approved by the Ethics Committee of Huashan Hospital (ethical permit No. 2021-811). The PSG recordings were annotated by a qualified sleep expert according to the AASM standard. The specific information of each subject is described in Table A1. The data collection instrumentation for HSFU was a COMPUMEDICS GREAL HD PSG. The reference electrodes were M1 and M2.

2.2. Blind Source Separation Algorithm

Blind source separation methods are widely used when dealing with coupled signals. Some common blind source separation methods include fast independent component analysis (FastICA), information maximization (Infomax), and second-order blind identification (SOBI); the first two methods require that each channel of the input signal be independent of each other, whereas SOBI has no such requirement for the input signal. Meanwhile, the effectiveness of SOBI in processing mixed signals is not affected by the number of signal channels [47]. Thus, in this experiment, the SOBI method was applied to remove interference signals, due to its robustness. The SOBI algorithm was proposed by Belouchrani et al. in 1997 [45]. This algorithm achieves blind source separation by joint approximate diagonalization of the delayed correlation matrices. It is a stable method for blind source separation. SOBI uses second-order statistics, so it can estimate the components of the source signals with few data points. The pseudo-code of SOBI is as follows (Algorithm 1). Assume that the input signal X has M channels and each channel has N samples, i.e., X ∈ ℝ^{M×N}. After normalization and whitening of the input signal, joint approximate diagonalization is performed using the lagged covariance matrices of the signal, to obtain the coupling coefficients. Finally, the source signals are obtained using a matrix inverse operation.
Algorithm 1 SOBI
Input: Data X ∈ ℝ^{M×N}    Output: Data S ∈ ℝ^{M×N}
1: Normalization: X₀(t) ← X(t)
2: Whitening: Z(t) ← M(t)X₀(t)
3: Calculate the covariance matrix: R(τ) ← E[Z(t + τ)Zᵀ(t)]
4: while coefficients not converged or maximum iteration number not reached do
5:     Joint approximate diagonalization: find unitary U such that UᵀR(τ)U is (approximately) diagonal
6: end while
7: Calculate the source signal: S(t) ← UᵀZ(t)
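To illustrate the second-order idea behind Algorithm 1, the NumPy sketch below implements the single-lag special case (often called AMUSE): normalize, whiten, compute one lagged covariance, and diagonalize it. Full SOBI jointly diagonalizes many lags, so this simplified version and the function name `amuse_sobi` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def amuse_sobi(X, tau=1):
    """Simplified second-order separation (single-lag AMUSE, a special
    case of SOBI, which jointly diagonalizes many lagged covariances).
    X: (M, N) array of mixed signals. Returns (M, N) estimated sources."""
    # 1. Normalization: remove the mean of each channel
    X0 = X - X.mean(axis=1, keepdims=True)
    # 2. Whitening via eigendecomposition of the zero-lag covariance
    C0 = X0 @ X0.T / X0.shape[1]
    d, E = np.linalg.eigh(C0)
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T   # whitening matrix M(t)
    Z = W @ X0
    # 3. Lagged covariance R(tau), symmetrized for a stable eigenbasis
    R = Z[:, tau:] @ Z[:, :-tau].T / (Z.shape[1] - tau)
    R = (R + R.T) / 2
    # 4. "Joint diagonalization" of a single lag reduces to one
    #    eigendecomposition, which yields a unitary U
    _, U = np.linalg.eigh(R)
    # 5. Recover the sources: S(t) = U^T Z(t)
    return U.T @ Z
```

Because the sources come out of a whitened, unitary transform, their sample covariance is the identity, which is a quick sanity check on any implementation of this family of methods.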

2.3. Two-Step Hierarchical Neural Network

In this work, the THNN was applied to conduct automatic sleep staging. The specific structure of the THNN is presented in Figure 2, and the specific parameters are shown in Table A2. The THNN can be divided into two parts: the feature extraction module and the sequence learning module. The feature extraction module uses a multi-scale convolutional neural network with two scales to extract features at different scales. The sequence learning module uses a Bi-GRU network, which learns the temporal information in the feature matrix extracted by the feature extraction module. The feature extraction module consists of a two-scale CNN. The two CNN scales have convolutional kernels of different sizes for extracting large-time-span features and short-time-span features in EEG signals, respectively. Specifically, if the sampling rate of an EEG signal is 128 Hz and the convolutional kernel length of the small-scale CNN is 64, then each segment of the EEG signal covers 0.5 s of the sampled signal, which corresponds to 2 Hz. The large-scale CNN has a convolutional kernel length of 640; thus, each segment of the EEG signal covers 5 s of the sampled signal, which corresponds to 0.2 Hz. By designing convolutional kernels of different sizes, richer feature information can be extracted. Suppose the signal S ∈ ℝ^{L×P} is the input of the THNN, where L is the number of epochs and P is the length of each epoch. The process of feature learning is represented as follows:
F₁ = scale₁(S)
F₂ = scale₂(S)
F = concat(F₁, F₂)
where scale₁(·) is the small-scale branch of the CNN, scale₂(·) is the large-scale branch of the CNN, and concat(·) is the concatenation layer. The sequence learning part consists of a Bi-GRU, which handles time-dependent sequence signals well and has a fast operation speed among RNN networks [48]. Although the GRU is fast, it still takes a long time to train when running serially. Therefore, we added a residual structure to the sequence learning module to speed up training [49]. Finally, the probability of each sleep stage is output through the softmax layer. The process of the sequence learning part is as follows:
H = GRU(F)
O = residual(H, F)
Y = Dense(O)
where GRU(·) is the Bi-GRU network, H is the temporal feature of each sleep stage output by the Bi-GRU, O is the feature after superposition by the residual module, and Y is the final sleep stage probability of each epoch.
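The two-branch feature extraction described above can be sketched in plain NumPy. The 64- and 640-tap kernel lengths come from the text; the random kernel weights and the strides below are assumptions standing in for learned weights and hyperparameters given in Table A2.

```python
import numpy as np

def conv1d(x, kernel, stride):
    """Valid-mode strided 1-D convolution with a single filter."""
    k = len(kernel)
    n_out = (len(x) - k) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + k], kernel)
                     for i in range(n_out)])

def two_scale_features(epoch, fs=128):
    """Toy version of the THNN's two CNN branches: a small kernel
    (0.5 s) for short-time-span features and a large kernel (5 s) for
    large-time-span features, with the outputs concatenated.
    Random kernels stand in for learned weights; strides are assumed."""
    rng = np.random.default_rng(0)
    small = rng.standard_normal(fs // 2)  # 64 taps = 0.5 s at 128 Hz
    large = rng.standard_normal(fs * 5)   # 640 taps = 5 s at 128 Hz
    f1 = conv1d(epoch, small, stride=fs // 4)   # F1 = scale1(S)
    f2 = conv1d(epoch, large, stride=fs * 2)    # F2 = scale2(S)
    return np.concatenate([f1, f2])             # F = concat(F1, F2)
```

A real implementation would learn the kernels, stack multiple filters per scale, and feed F into the Bi-GRU; this sketch only shows how the two temporal scales yield complementary feature vectors from one 30 s epoch.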

2.4. Data Preprocessing and Experiment Scheme

In this work, we adopted the Fp1 channel EEG signal, Fp2 channel EEG signal, left EOG signal, and right EOG signal from the MASS, DREAMS, and HSFU datasets to conduct the experiments. All the signals used were filtered with a 50 Hz/60 Hz notch filter and a 0.3–35 Hz band-pass filter, and then resampled to 128 Hz to fit the network and to reduce the computational complexity. Afterwards, the SOBI method was used to remove the interference signals in the EEG signals and to obtain a clean EEG signal without the EOG component. Next, the raw EOG signal and the clean EEG signal were processed to obtain a clean EOG signal and EOG signals coupled with different contents of the EEG signal. The steps are shown in Figure 3. The specific calculation procedure for the coupled EOG signal is shown in Equation (7).
coupledEOG = rawEOG + a · cleanEEG    (7)
where cleanEEG is the EEG signal without the EOG component, coupledEOG is the EOG signal coupled with the EEG signal, and a is the superposition factor. The content of the EEG signal in the EOG signal was calculated using the correlation coefficient between the coupled EOG signal and the clean EEG signal on the same side.
correlation = corrcoef(cleanEEG, coupledEOG)
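The superposition and the content measurement above translate directly into NumPy. The function names are ours and the signals in the usage check are synthetic; in practice, the superposition factor a would be swept until the measured correlation reaches the desired content.

```python
import numpy as np

def couple_eog(raw_eog, clean_eeg, a):
    """coupledEOG = rawEOG + a * cleanEEG, with superposition factor a."""
    return raw_eog + a * clean_eeg

def eeg_content(coupled_eog, clean_eeg):
    """EEG 'content' of an EOG signal, measured as the correlation
    coefficient between the coupled EOG and the same-side clean EEG."""
    return np.corrcoef(clean_eeg, coupled_eog)[0, 1]
```

With a = 0 the coupled signal is the clean EOG and the measured content is near zero; increasing a monotonically raises the correlation toward 1, which is how target contents such as 0.1 to 0.5 can be dialed in.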
We performed experiments using EOG signals coupled with different contents of EEG signal, and the correlation coefficients were set as 0.0, 0.1, 0.2, 0.3, and 0.5. We fed each of the five EOG signals into the network for automatic sleep staging. In addition, we used the leave-one-subject-out (LOSO) method for the validation.
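The leave-one-subject-out protocol can be sketched as follows, assuming each epoch carries a subject identifier (a bookkeeping detail not specified in the paper); every subject serves exactly once as the held-out test set.

```python
def loso_splits(subject_ids):
    """Leave-one-subject-out cross-validation.

    subject_ids: per-epoch subject label, e.g. ["s1", "s1", "s2", ...].
    Yields (held_out_subject, train_indices, test_indices), with each
    subject held out exactly once."""
    for held_out in sorted(set(subject_ids)):
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test
```

Splitting by subject rather than by epoch prevents epochs from one recording from leaking between the training and test sets, which would otherwise inflate accuracy estimates.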

3. Results

In this experiment, we first performed a quantitative analysis of the coupled EOG signal. Then we performed automatic sleep staging with THNN, using EOG signals coupled with different contents of EEG signals.

3.1. Quantitative Analysis of EOG Signals

The mean absolute error (MAE) was used to evaluate the degree of change between the coupled EOG signal and the raw EOG signal. Table 2 shows the MAE values between the raw EOG signals and the coupled EOG signals. In addition, the correlation coefficients of the raw EOG signal and the clean EEG signal are presented in Table 3. It can be seen that the collected EOG signal was indeed coupled with the EEG signal. The method used in this experiment allowed the content of the EEG signal coupled in the EOG signal to be increased or decreased. In addition, the correlation coefficient between the left eye EOG signal and the EEG signal and that between the right eye EOG signal and the EEG signal were not consistent. This was particularly evident for the DREAMS dataset, in which the correlation coefficient between the raw right eye EOG signal and the EEG signal was close to 0, demonstrating that there was almost no correlation between these two signals. This might have been due to the position of the reference electrode during the measurements. In general, the quantitative analysis of the EOG signal indicated that a portion of the EEG signal was indeed coupled in the EOG.
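The MAE used for this comparison is a one-line computation; the helper below is a hypothetical illustration of the metric, not the paper's code.

```python
import numpy as np

def mae(x, y):
    """Mean absolute error between two equally long signals,
    e.g. mae(raw_eog, coupled_eog) quantifies how much the
    superposition changed the EOG waveform."""
    return float(np.mean(np.abs(np.asarray(x) - np.asarray(y))))
```

For example, mae([0, 2], [1, 4]) averages the pointwise deviations |0-1| and |2-4| to give 1.5.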

3.2. Sleep Staging Performance Using Coupled EOG Signals with THNN

Table 4 shows the detailed sleep staging performance of the different coupled EOG signals with THNN, including the accuracy, kappa coefficient, F1 score, specificity, and the precision of each sleep stage. With the two public datasets, the highest accuracy of automatic sleep staging using EOG signals was over 80%, and for the clinical dataset HSFU, the highest accuracy of automatic sleep staging using EOG signals was 78.9%. These results indicated that using the EOG signal for automatic sleep staging could yield good results. Meanwhile, the accuracy of automatic sleep staging using EOG signals without coupled EEG signals was also above 77% with the three datasets.
Moreover, the experimental results showed that the EEG signal coupled in the EOG signal enhanced the EOG-based automatic sleep staging results. However, increasing the coupled EEG signal content in the EOG signal did not necessarily improve the automatic sleep staging results. Specifically, in the MASS dataset, the best sleep staging results were obtained when the coupling coefficients of EOG and EEG were 0.3 (left) and 0.3 (right). In the DREAMS dataset, the sleep staging was best when the coupling coefficients were 0.3 (left) and 0.0 (right). In the HSFU dataset, the best sleep staging effect was found when the coupling coefficients were 0.3 (left) and 0.3 (right). The best sleep staging results were obtained when the amount of coupled EEG signal was moderate. Notably, these coupling coefficients were very close to the correlation coefficients of the raw EOG and EEG signals. This suggested that the raw EOG signal was a good choice for automatic sleep staging.
In addition, the classification precision for the N1, N2, and N3 stages was higher with EOG signals coupled with EEG than with EOG signals without EEG coupling. In Figure 1b, it can be observed that the EOG signal possessed some of the recognizable waveform features of an EEG signal, such as the slow wave signal at the N3 stage. Coupling introduced into the EOG signal a portion of the characteristic sleep waveforms that would otherwise be present only in the EEG signal, thus improving the classification accuracy of EOG-based automatic sleep staging for these sleep stages. In general, the EEG signal coupled in the EOG signal was helpful for automatic sleep staging based on the EOG signal, especially for the N1, N2, and N3 stages.
Moreover, we performed sleep staging with the raw EOG signal to show the sleep staging ability of the raw EOG signal. Table 5 presents the results obtained from automatic sleep staging using the raw left and right eye EOG signals in each dataset. The results showed that the raw EOG signal also obtained a good sleep staging performance.

3.3. Significance Analysis

In addition, we performed significance analysis for all types of signals used in the experiments, including clean EOG signals, EOG signals coupled with different contents of EEG signal, raw EOG signals, raw EEG signals, etc. We used a chi-square test to perform a significance analysis of the sleep stage and the features extracted by the network. The specific results are shown in Table 6, where the results of the significance analysis of the coupled EOG signals are the average of the results of the EOG signals with different coupling coefficients. The results showed that the p-values of all the features of the signals were less than 0.05. This indicated that there was a significant relationship between the features extracted from the signals and the classification target.
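A significance test of this kind can be sketched with SciPy's chi2_contingency. The contingency table below, which cross-tabulates a binned network feature against three sleep stages, is entirely hypothetical and serves only to show the mechanics of the test.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows bin a network-extracted feature
# into low/medium/high, columns are three sleep stages. The counts are
# invented for illustration and are not taken from the paper.
table = np.array([[30, 5, 3],
                  [4, 28, 6],
                  [2, 7, 25]])

chi2, p, dof, expected = chi2_contingency(table)
# p < 0.05 rejects the null hypothesis of no association between the
# binned feature and the sleep stage; a larger chi-square statistic
# indicates a stronger association.
```

In the paper's analysis, p-values below 0.05 for every signal type indicate that the learned features carry stage-discriminative information.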

4. Discussion

This paper investigated the question of whether the sleep staging ability of an EOG signal derives from the coupled EEG signal, and what effect the EEG signal coupled in the EOG signal has on sleep staging results. The results showed that good sleep staging results could be obtained using either EOG signals without coupled EEG or EOG signals coupled with EEG. The sleep staging capability of the EOG signal came from its own characteristic information. Moreover, the accuracy of sleep staging using an EOG signal differed from that using EEG signals by only approximately 2%. This result indicated that an EOG signal can be used for automatic sleep staging with good results.

4.1. The Influence of Coupled EEG Signals in EOG Signals on Sleep Staging

In manual sleep scoring, the EEG coupled in EOG is usually considered noise or an interference signal. However, automatic sleep staging methods map the coupled signals from the time domain to other spatial domains through feature extraction. This allows the coupled information to be used as additional features to further complement the sleep features included in the EOG signals. In this experiment, the results showed that compared with the clean EOG signal, the EOG signal coupled with the EEG signal had a better performance for the N1, N2, and N3 stages. Note that in manual sleep scoring, the EOG signal is mainly used to classify the wake and REM stages, whereas the N1, N2, and N3 stages are commonly classified according to the EEG signal [9]. The EEG signal coupled in the EOG signal may provide features that are not in the EOG signal but are in the EEG signal, leading to enhancement of the classification results. Figure 4 shows a correlation analysis of the EOG signals for subjects in the MASS dataset with coupling coefficients of 0.0 and 0.3 for the N1, N2, and N3 stages. A chi-square test was used to examine the correlation between the features extracted by the network and the classification targets, under the null hypothesis that they have no correlation; a larger chi-square value indicated a stronger association. The results of the correlation analysis showed that p < 0.05, i.e., the features extracted from the EOG signals with different coupling coefficients and the classification targets were significantly correlated. Moreover, comparing the chi-square test results of the EOG signals with coupling coefficients of 0.0 and 0.3, the chi-square values of the EOG signals coupled with EEG signals were significantly higher than those of the clean EOG signals at the N1 and N2 stages. At the N3 stage, the chi-square values of the EOG signal coupled with the EEG signal and the clean EOG signal were similar.
This suggested that the EOG signal coupled with the EEG signal had more effective features in the N1 and N2 stages to help the classification of sleep stage. Generally, the coupled EEG signal in an EOG signal can provide additional features that can enhance the classification accuracy of the N1, N2, and N3 stages, without affecting the classification of the wake and REM stages.
Additionally, except for the right eye EOG signal in the HSFU dataset, the best automatic sleep staging results were achieved when the MAE values between the coupled EOG signal and the raw EOG signal were the smallest, i.e., the content of the coupled EEG signal in the EOG signal was close to that in the raw EOG signal. We performed automatic sleep staging using the raw EOG signal to explore whether the raw EOG signal was sufficient to obtain a good sleep staging effect without additional EEG signal removal or addition. The results in Table 5 demonstrate that automatic sleep staging using raw EOG signals was also able to achieve good results. Fine-tuning the coupling coefficients of EEG and EOG yielded better sleep staging results, although this improvement was not significant compared to the results using raw EOG signals. This improvement would be even lower when the sleep stages are combined for a four-class or three-class classification task. Specifically, according to the rules of the AASM [9], stages N1 and N2 can be combined as light sleep and N3 can be considered deep sleep, thus yielding a four-class task (W, light sleep, deep sleep, REM). Therefore, when using the EOG signal for automatic sleep staging, the raw EOG signal can yield sufficient sleep staging results. Further fine-tuning of the EEG and EOG coupling coefficients did not significantly improve the accuracy of the automatic sleep staging results.
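The stage merging just described can be written as a simple mapping; the label strings below are assumptions about how the stages are encoded, not the paper's code.

```python
# Hypothetical label encoding; adjust to however stages are stored.
AASM_TO_FOUR_CLASS = {
    "W": "W",        # wake
    "N1": "Light",   # N1 and N2 merge into light sleep
    "N2": "Light",
    "N3": "Deep",    # N3 is deep sleep
    "REM": "REM",
}

def merge_stages(labels):
    """Map five-class AASM labels onto the four-class task
    (W, light sleep, deep sleep, REM)."""
    return [AASM_TO_FOUR_CLASS[label] for label in labels]
```

Merging this way means misclassifications between N1 and N2 no longer count as errors, which is why gains from fine-tuning the coupling coefficient shrink under the four-class evaluation.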

4.2. The Difference of the Sleep Staging Results Using EEG Signal and EOG Signal

The experimental results showed that the overall difference between automatic sleep staging using coupled EOG signals and using EEG signals was not significant. However, the difference in classification accuracy between the EOG and EEG signals at the N2, N3, and REM stages was notable. Figure 5 shows a comparison of the coupled EOG signal with a 0.3 coefficient and a raw EEG signal of a subject at the N2, N3, and REM stages in the MASS dataset. At the N2 stage, the amplitude of the EOG signal was lower compared to the EEG signal, which led to the values of the extracted features being smaller, resulting in a decrease in the classification accuracy. In the frequency domain, the energy of the EOG signal was also lower than that of the EEG signal. At the N3 stage, the waveforms of the EOG and EEG signals were relatively similar. In the frequency domain, the EOG signal covered a much lower frequency range and provided less information. In the REM stage, the EOG signal produced large amplitude changes in a short period of time, which is characteristic of REM stage eye movements. These eye movements greatly enhanced the classification accuracy of the EOG signal in the REM stage. In the frequency domain, the spectral energy of the EOG signal was also much larger than that of the EEG signal, which contributed to the high classification accuracy of the EOG signal in the REM stage. Overall, the EOG signal and the EEG signal both have advantages for sleep staging. The EOG signal contains features that make it better at classifying the wake and REM stages. The EEG signal is better at classifying the N1, N2, and N3 sleep stages. The EOG signal coupled with the partial EEG signal also obtained part of the sleep staging characteristics of the EEG signal, which provides a good basis for comfortable sleep staging, as well as sleep monitoring using the EOG signal.

4.3. Limitations and Future Works

Based on this experiment, we need to point out some limitations. First, we explored sleep staging with the EOG signal in normal subjects and in subjects with sleep disorders. However, these subjects were all adults. In future work, experiments with people of different ages could be attempted to extend the applicability of the findings of this paper. Second, the blind source separation method we used in this work was not the most advanced, and there are other methods such as artifact subspace reconstruction (ASR), morphological component analysis (MCA), and surrogate-based artifact removal (SuBAR). These methods could be tried in future works to remove interference signals. Third, we only used one network for sleep analysis of the EOG signals. There are many better network models available, such as XSleepNet [30], TransSleep [50], AttnSleep [31], etc. In future work, better networks could be tried to improve the sleep staging results of EOG signals. Fourth, in terms of signal acquisition, EMG signals are also easy to acquire. Meanwhile, the submental EMG signal is of great significance for the differential staging of the wake and REM stages (being especially important for the differential diagnosis of REM sleep behavior disorder) [51]. In future work, we could try to use both EOG and EMG modalities as inputs to the network, to explore a portable sleep monitoring method with a better staging effect. Finally, EOG signals and EEG signals are complementary to each other for sleep staging. They should be combined rather than separated. Therefore, in future work, we will work on combining EEG signals and EOG signals, to improve automatic sleep staging and to investigate comfortable and efficient sleep monitoring, based on the fact that both prefrontal EEG and EOG signals are easy to acquire.

5. Conclusions

In this paper, we investigated the effect of the EEG signal coupled in an EOG signal on sleep staging results. Two publicly available datasets and one clinical dataset were used for the experiment. The SOBI method was applied to obtain a clean EEG signal. The clean EEG signal and the raw EOG signal were used to obtain a clean EOG signal and EOG signals coupled with different contents of the EEG signal. Afterwards, a THNN was used to perform automatic sleep staging with the coupled EOG signals. The results showed that the sleep staging capability of the EOG signal was not derived from the coupled EEG signal but from its own feature information. Meanwhile, the EOG signal coupled with the EEG signal had better classification performance for the N1, N2, and N3 stages. The coupled EEG signal could complement the feature information lacking in the EOG signal, especially in the N1 and N2 stages. In addition, the optimal amount of coupled EEG signal was similar to the amount of EEG signal contained in the raw EOG signal; levels higher or lower than this could result in a certain reduction in the accuracy of the sleep staging results. This paper provided an exploratory experimental analysis of automatic sleep staging using EOG signals. Moreover, it is expected to provide an experimental basis for comfortable sleep analysis, home sleep monitoring, etc.

Author Contributions

Conceptualization, H.Z.; methodology, H.Z. and C.C.; software, H.Z.; validation, H.Z.; formal analysis, H.Z.; investigation, H.Z.; resources, H.Z., C.F. and H.Y.; data curation, C.F. and H.Y.; writing—original draft preparation, H.Z.; writing—review and editing, C.C. and W.C.; supervision, F.S., C.C. and W.C.; project administration, C.C. and W.C.; funding acquisition, C.C. and W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant No. 62001118, in part by the Shanghai Municipal Science and Technology International R&D Collaboration Project (Grant No. 20510710500), and in part by the Shanghai Committee of Science and Technology under Grant No. 20S31903900.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board (or Ethics Committee) of NAME OF INSTITUTE (protocol code No. 2021-811).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent was obtained from the patient(s) to publish this paper.

Data Availability Statement

The MASS dataset is available at http://www.ceams-carsm.ca/en/MASS (accessed on 8 June 2020), the DREAMS dataset is available at http://www.tcts.fpms.ac.be/%7Edevuyst/#Databases (accessed on 8 June 2020), and the HSFU dataset is not publicly available.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Details of the subjects in the HSFU dataset.
  No.GenderAgeDiagnosisWN1N2N3REMTotal Stages
1Female61AHI in REM621281824863938
2Female45Insomnia147704151762061014
3Female41Insomnia and OSA532100170131861019
4Female65RLS and insomnia782412357771044
5Female38OSA25944319157205984
6Male64PLMs1781574340188958
7Male50Insomnia and OSA174108381140181984
8Male42OSA and PLMs14911645584179983
9Male52RLS1299549165196976
10Male25OSA1487357111961972
11Male56OSA28617328543172961
12Male29OSA8184442197163967
13Male29OSA12548405185209972
14Male37RBD and OSA234199389131104938
15Male52Insomnia and OSA17725933772103948
16Male52OSA2471723051212021047
17Male62RLS and OSA3591293591041031055
18Male38OSA1721595061031951135
19Male38OSA2771023591351541029
20Male38OSA151873771672651047
21Male41Insomnia and OSA217101433941961041
22Male49OSA321194408153977
23Male52Insomnia and OSA159973683151801121
24Male49OSA264723811512061075
25Male57RLS and OSA5441611561748926
26Male58RLS and OSA329371972741651002
AHI, apnea–hypopnea index; REM, rapid eye movement; OSA, obstructive sleep apnea; RLS, restless leg syndrome; PLMs, periodic leg movements; RBD, rapid eye movement sleep behavior disorder.
Table A2. Parameters of the THNN.
| Layer/Block | Parameters | Input Shape | Output Shape |
| Input | - | - | 3840*1*1 |
| Conv1 | N: 128, k: 64, s: 6 | 3840*1*1 | 640*1*128 |
| Maxpool1 | k: 8, s: 8 | 640*1*128 | 80*1*128 |
| Dropout1 | A = 0.5 | 80*1*128 | 80*1*128 |
| Conv1 | N: 128, k: 6, s: 1 | 80*1*128 | 80*1*128 |
| Conv1 | N: 128, k: 6, s: 1 | 80*1*128 | 80*1*128 |
| Conv1 | N: 128, k: 6, s: 1 | 80*1*128 | 80*1*128 |
| Maxpool1 | k: 4, s: 4 | 80*1*128 | 20*1*128 |
| Flatten1 | - | 20*1*128 | 2560 |
| Conv2 | N: 128, k: 640, s: 64 | 3840*1*1 | 60*1*128 |
| Maxpool2 | k: 6, s: 6 | 60*1*128 | 10*1*128 |
| Dropout2 | A = 0.5 | 10*1*128 | 10*1*128 |
| Conv2 | N: 128, k: 10, s: 1 | 10*1*128 | 10*1*128 |
| Conv2 | N: 128, k: 10, s: 1 | 10*1*128 | 10*1*128 |
| Conv2 | N: 128, k: 10, s: 1 | 10*1*128 | 10*1*128 |
| Maxpool2 | k: 2, s: 2 | 10*1*128 | 5*1*128 |
| Flatten2 | - | 5*1*128 | 640 |
| Concat | - | 2560 + 640 | 3200 |
| Dense | - | 3200 | 800 |
| Dense | - | 800 | 400 |
| Bi-GRU | l: 2, h: 200, sl: 15 | 400 | 400 |
| Add | - | 400 | 400 |
| Dropout | 0.5 | 400 | 400 |
| Dense | - | 400 | 200 |
| Dense | Softmax | 200 | 3 |
THNN, two-step hierarchal neural network. Conv1, Maxpool1, SE block1: represents the convolutional layer, the pooling layer and the SE block in the first CNN branch. Conv2, Maxpool2, SE block2: represents the convolutional layer, the pooling layer, and the SE block in the second CNN branch. N: number of filters in each convolution. k: kernel size. s: stride. l: number of layers in Bi-GRU. h: number of hidden layer neurons of Bi-GRU. sl: sequence length in Bi-GRU.
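The shape chain in Table A2 can be checked with a short script. This is a sketch under one assumption not stated in the table: "same" padding, so that a convolution or pooling layer with stride s maps a sequence of length n to ceil(n/s). With that assumption, every intermediate length in the table is reproduced.

```python
import math

def out_len(n, stride):
    # With "same" padding, a conv/pool layer with this stride maps length n -> ceil(n / stride).
    return math.ceil(n / stride)

def branch_features(n, conv_stride, pool1_stride, pool2_stride, filters=128):
    n = out_len(n, conv_stride)    # wide first convolution
    n = out_len(n, pool1_stride)   # first max-pooling
    # the three stride-1 convolutions leave the sequence length unchanged
    n = out_len(n, pool2_stride)   # second max-pooling
    return n * filters             # flattened feature length

epoch_len = 3840  # samples per 30 s epoch, as in Table A2

fine = branch_features(epoch_len, 6, 8, 4)      # first CNN branch:  20 * 128 = 2560
coarse = branch_features(epoch_len, 64, 6, 2)   # second CNN branch:  5 * 128 = 640
concat = fine + coarse                          # 3200, matching the Concat row
```

The two branches differ only in kernel and stride scale, so the same helper reproduces both columns of the table; the concatenated 3200-dimensional vector is what the dense layers and Bi-GRU then consume.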

References

  1. Luyster, F.S.; Strollo, P.J.; Zee, P.C.; Walsh, J.K. Sleep: A Health Imperative. Sleep 2012, 35, 727–734.
  2. Owens, J.; Adolescent Sleep Working Group; Committee on Adolescence. Insufficient Sleep in Adolescents and Young Adults: An Update on Causes and Consequences. Pediatrics 2014, 134, e921–e932.
  3. St-Onge, M.P.; Grandner, M.A.; Brown, D.; Conroy, M.B.; Jean-Louis, G.; Coons, M.; Bhatt, D.L. Sleep Duration and Quality: Impact on Lifestyle Behaviors and Cardiometabolic Health: A Scientific Statement from the American Heart Association. Circulation 2016, 134, e367–e386.
  4. Itani, O.; Jike, M.; Watanabe, N.; Kaneita, Y. Short sleep duration and health outcomes: A systematic review, meta-analysis, and meta-regression. Sleep Med. 2017, 32, 246–256.
  5. Medic, G.; Wille, M.; Hemels, M. Short- and long-term health consequences of sleep disruption. Nat. Sci. Sleep 2017, 9, 151–161.
  6. Harding, K.; Feldman, M. Sleep Disorders and Sleep Deprivation: An Unmet Public Health Problem. J. Am. Acad. Child Adolesc. Psychiatry 2008, 47, 473–474.
  7. Krieger, A.C. Social and Economic Dimensions of Sleep Disorders. Sleep Med. Clin. 2017, 12, 1.
  8. Hobson, J.A. A manual of standardized terminology, techniques and scoring system for sleep stages of human subjects: A. Rechtschaffen and A. Kales (Editors). Electroencephalogr. Clin. Neurophysiol. 1969, 26, 644.
  9. Iber, C.; Ancoli-Israel, S.; Chesson, A.L.; Quan, S.F. The AASM Manual for the Scoring of Sleep and Associated Events: Rules, Terminology and Technical Specifications; American Academy of Sleep Medicine: Westchester, IL, USA, 2007.
  10. Rosenberg, R.S.; Van Hout, S. The American Academy of Sleep Medicine Inter-scorer Reliability Program: Respiratory Events. J. Clin. Sleep Med. 2014, 10, 447–454.
  11. van Gorp, H.; Huijben, I.A.M.; Fonseca, P.; van Sloun, R.J.G.; Overeem, S.; van Gilst, M.M. Certainty about uncertainty in sleep staging: A theoretical framework. Sleep 2022, 45, zsac134.
  12. Sekkal, R.N.; Bereksi-Reguig, F.; Ruiz-Fernandez, D.; Dib, N.; Sekkal, S. Automatic sleep stage classification: From classical machine learning methods to deep learning. Biomed. Signal Process. Control 2022, 77, 103751.
  13. Sarkar, D.; Guha, D.; Tarafdar, P.; Sarkar, S.; Ghosh, A.; Dey, D. A comprehensive evaluation of contemporary methods used for automatic sleep staging. Biomed. Signal Process. Control 2022, 77, 103819.
  14. Gong, S.; Xing, K.; Cichocki, A.; Li, J. Deep Learning in EEG: Advance of the Last Ten-Year Critical Period. IEEE Trans. Cogn. Dev. Syst. 2022, 14, 348–365.
  15. Loh, H.W.; Ooi, C.P.; Vicnesh, J.; Oh, S.L.; Faust, O.; Gertych, A.; Acharya, U.R. Automated Detection of Sleep Stages Using Deep Learning Techniques: A Systematic Review of the Last Decade (2010–2020). Appl. Sci. 2020, 10, 8963.
  16. An, P.; Yuan, Z.; Zhao, J. Unsupervised multi-subepoch feature learning and hierarchical classification for EEG-based sleep staging. Expert Syst. Appl. 2021, 186, 115759.
  17. Alickovic, E.; Subasi, A. Ensemble SVM Method for Automatic Sleep Stage Classification. IEEE Trans. Instrum. Meas. 2018, 67, 1258–1265.
  18. Zhu, G.; Li, Y.; Wen, P. Analysis and Classification of Sleep Stages Based on Difference Visibility Graphs From a Single-Channel EEG Signal. IEEE J. Biomed. Health Inform. 2014, 18, 1813–1821.
  19. Li, X.; Cui, L.; Tao, S.; Chen, J.; Zhang, X.; Zhang, G.Q. HyCLASSS: A Hybrid Classifier for Automatic Sleep Stage Scoring. IEEE J. Biomed. Health Inform. 2018, 22, 375–385.
  20. Memar, P.; Faradji, F. A Novel Multi-Class EEG-Based Sleep Stage Classification System. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 84–95.
  21. Fiorillo, L.; Puiatti, A.; Papandrea, M.; Ratti, P.-L.; Favaro, P.; Roth, C.; Bargiotas, P.; Bassetti, C.L.; Faraci, F.D. Automated sleep scoring: A review of the latest approaches. Sleep Med. Rev. 2019, 48, 101204.
  22. Perslev, M.; Darkner, S.; Kempfner, L.; Nikolic, M.; Jennum, P.J.; Igel, C. U-Sleep: Resilient high-frequency sleep staging. NPJ Digit. Med. 2021, 4, 72.
  23. Yang, Y.; Gao, Z.; Li, Y.; Wang, H. A CNN identified by reinforcement learning-based optimization framework for EEG-based state evaluation. J. Neural Eng. 2021, 18, 046059.
  24. Yang, B.; Zhu, X.; Liu, Y.; Liu, H. A single-channel EEG based automatic sleep stage classification method leveraging deep one-dimensional convolutional neural network and hidden Markov model. Biomed. Signal Process. Control 2021, 68, 102581.
  25. Phan, H.; Andreotti, F.; Cooray, N.; Chén, O.Y.; De Vos, M. Automatic Sleep Stage Classification Using Single-Channel EEG: Learning Sequential Features with Attention-Based Recurrent Neural Networks. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 1452–1455.
  26. Supratak, A.; Dong, H.; Wu, C.; Guo, Y. DeepSleepNet: A Model for Automatic Sleep Stage Scoring Based on Raw Single-Channel EEG. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1998–2008.
  27. Sun, C.; Chen, C.; Li, W.; Fan, J.; Chen, W. A Hierarchical Neural Network for Sleep Stage Classification Based on Comprehensive Feature Learning and Multi-Flow Sequence Learning. IEEE J. Biomed. Health Inform. 2020, 24, 1351–1366.
  28. Zhu, H.; Wu, Y.; Shen, N.; Fan, J.; Tao, L.; Fu, C.; Yu, H.; Wan, F.; Pun, S.H.; Chen, C.; et al. The Masking Impact of Intra-Artifacts in EEG on Deep Learning-Based Sleep Staging Systems: A Comparative Study. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 1452–1463.
  29. Phan, H.; Andreotti, F.; Cooray, N.; Chen, O.Y.; De Vos, M. SeqSleepNet: End-to-End Hierarchical Recurrent Neural Network for Sequence-to-Sequence Automatic Sleep Staging. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 400–410.
  30. Phan, H.; Chen, O.Y.; Tran, M.C.; Koch, P.; Mertins, A.; De Vos, M. XSleepNet: Multi-View Sequential Model for Automatic Sleep Staging. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 5903–5915.
  31. Eldele, E.; Chen, Z.; Liu, C.; Wu, M.; Kwoh, C.K.; Li, X.; Guan, C. An Attention-Based Deep Learning Approach for Sleep Stage Classification With Single-Channel EEG. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 809–818.
  32. Zhou, W.; Zhu, H.; Shen, N.; Chen, H.; Fu, C.; Yu, H.; Shu, F.; Chen, C.; Chen, W. A Lightweight Segmented Attention Network for Sleep Staging by Fusing Local Characteristics and Adjacent Information. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 238–247.
  33. Goldberger, A.L.; Amaral, L.A.N.; Glass, L.; Hausdorff, J.M.; Ivanov, P.C.; Mark, R.G.; Mietus, J.E.; Moody, G.B.; Peng, C.; Stanley, H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation 2000, 101, e215–e220.
  34. Quan, S.F.; Howard, B.V.; Iber, C.; Kiley, J.P.; Nieto, F.J.; O'Connor, G.T.; Rapoport, D.M.; Redline, S.; Robbins, J.; Samet, J.M.; et al. The Sleep Heart Health Study: Design, rationale, and methods. Sleep 1997, 20, 1077–1085.
  35. Devuyst, S.; Dutoit, T.; Ravet, T.; Stenuit, P.; Kerkhofs, M.; Stanus, E. Automatic Processing of EEG-EOG-EMG Artifacts in Sleep Stage Classification. In Proceedings of the 13th International Conference on Biomedical Engineering (IFMBE Proceedings); Lim, C.T., Goh, J.C.H., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 23, pp. 146–150.
  36. O'Reilly, C.; Gosselin, N.; Carrier, J.; Nielsen, T. Montreal Archive of Sleep Studies: An open-access resource for instrument benchmarking and exploratory research. J. Sleep Res. 2014, 23, 628–635.
  37. Fonseca, P.; den Teuling, N.; Long, X.; Aarts, R.M. Cardiorespiratory Sleep Stage Detection Using Conditional Random Fields. IEEE J. Biomed. Health Inform. 2017, 21, 956–966.
  38. Wei, R.; Zhang, X.; Wang, J.; Dang, X. The research of sleep staging based on single-lead electrocardiogram and deep neural network. Biomed. Eng. Lett. 2018, 8, 87–93.
  39. Anderer, P.; Ross, M.; Cerny, A.; Radha, M.; Fonseca, P. 0436 Deep Learning for Scoring Sleep Based on Signals Available in Home Sleep Apnea Test Studies: Cardiorespiratory Sleep Staging. Sleep 2020, 43, A167.
  40. Xue, B.; Deng, B.; Hong, H.; Wang, Z.; Zhu, X.; Feng, D.D. Non-Contact Sleep Stage Detection Using Canonical Correlation Analysis of Respiratory Sound. IEEE J. Biomed. Health Inform. 2020, 24, 614–625.
  41. Hong, J.; Tran, H.; Jeong, J.; Jang, H.; Yoon, I.Y.; Hong, J.K.; Kim, J.W. 0348 Sleep Staging Using End-to-End Deep Learning Model Based on Nocturnal Sound for Smartphones. Sleep 2022, 45, A156–A157.
  42. Fan, J.; Sun, C.; Long, M.; Chen, C.; Chen, W. EOGNET: A Novel Deep Learning Model for Sleep Stage Classification Based on Single-Channel EOG Signal. Front. Neurosci. 2021, 15, 573194.
  43. Rahman, M.M.; Bhuiyan, M.I.H.; Hassan, A.R. Sleep stage classification using single-channel EOG. Comput. Biol. Med. 2018, 102, 211–220.
  44. Sun, C.; Chen, C.; Fan, J.; Li, W.; Zhang, Y.; Chen, W. A hierarchical sequential neural network with feature fusion for sleep staging based on EOG and RR signals. J. Neural Eng. 2019, 16, 066020.
  45. Belouchrani, A.; Abed-Meraim, K.; Cardoso, J.F.; Moulines, E. A blind source separation technique using second-order statistics. IEEE Trans. Signal Process. 1997, 45, 434–444.
  46. Devuyst, S.; Dutoit, T.; Stenuit, P.; Kerkhofs, M. Automatic sleep spindles detection: Overview and development of a standard proposal assessment method. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 1713–1716.
  47. Romero, S.; Mañanas, M.A.; Barbanoj, M.J. A comparative study of automatic techniques for ocular artifact reduction in spontaneous EEG signals based on clinical target variables: A simulation case. Comput. Biol. Med. 2008, 38, 348–360.
  48. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv 2014, arXiv:1412.3555.
  49. Xie, S.; Girshick, R.; Dollar, P.; Tu, Z.; He, K. Aggregated Residual Transformations for Deep Neural Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5987–5995.
  50. Phyo, J.; Ko, W.; Jeon, E.; Suk, H.I. TransSleep: Transitioning-Aware Attention-Based Deep Neural Network for Sleep Staging. IEEE Trans. Cybern. 2022, 1–11.
  51. Liu, M.; Zhu, H.; Tang, J.; Chen, H.; Chen, C.; Luo, J.; Chen, W. Overview of a Sleep Monitoring Protocol for a Large Natural Population. Phenomics 2023.
Figure 1. Comparison of EOG signal and prefrontal EEG signal. (a) Comparison of EOG signal and EEG signal waveforms within one epoch. (b) One epoch of a PSG signal at stage N3, including EOG, EEG, and ECG signals.
Figure 2. The structure of the THNN. Features at different scales are first extracted from the input signal by the feature extraction part. The features are then concatenated and sent to the sequence learning module for further extraction of the temporal features in the signal.
Figure 3. The process of obtaining EOG signals coupled with different contents of EEG signal.
Figure 4. Results of the chi-square test for EOG signals with different coupling coefficients in the MASS dataset.
Figure 5. Comparison of the time and frequency domains of the coupled EOG signal with a 0.3 coupling coefficient (the best-performing setting) and the EEG signal in the MASS dataset. (a) Comparison in the N2 stage. (b) Comparison in the REM stage.
Table 1. Details of the MASS, DREAMS, and HSFU datasets.
| Dataset | Subjects | Age | Healthy | Sampling Rate | W | N1 | N2 | N3 | REM | Total |
| DREAMS | 20 | 20–65 | Yes | 200 Hz | 3551 | 1480 | 8251 | 3933 | 3019 | 20,234 |
| MASS | 62 | 18–76 | Yes | 256 Hz | 6442 | 4839 | 29,802 | 7653 | 10,581 | 59,317 |
| HSFU | 26 | 25–65 | No | 1024 Hz | 7278 | 2926 | 9507 | 3001 | 4082 | 27,194 |
Table 2. MAE value (uV) between the raw EOG signal and coupled EOG signal.
| Correlation Coefficient | MASS (L) | MASS (R) | DREAMS (L) | DREAMS (R) | HSFU (L) | HSFU (R) |
| 0.0 | 5.00 ± 1.24 | 2.98 ± 1.26 | 0.21 ± 0.08 | 0.01 ± 0.01 | 0.27 ± 0.06 | 0.32 ± 0.08 |
| 0.1 | 3.86 ± 1.19 | 2.18 ± 0.91 | 0.15 ± 0.08 | 0.02 ± 0.01 | 0.21 ± 0.06 | 0.26 ± 0.08 |
| 0.2 | 2.72 ± 1.21 | 1.42 ± 0.82 | 0.11 ± 0.07 | 0.04 ± 0.02 | 0.15 ± 0.06 | 0.20 ± 0.08 |
| 0.3 | 1.70 ± 1.13 | 0.98 ± 0.98 | 0.08 ± 0.06 | 0.06 ± 0.03 | 0.08 ± 0.06 | 0.13 ± 0.09 |
| 0.5 | 1.70 ± 1.26 | 2.59 ± 1.75 | 0.13 ± 0.09 | 0.11 ± 0.06 | 0.09 ± 0.06 | 0.08 ± 0.04 |
L, left eye; R, right eye.
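The MAE values in Table 2 follow the standard definition: the mean of the absolute sample-wise differences between the raw and coupled EOG traces, in the same units (uV). A minimal sketch with toy values (not taken from the datasets):

```python
def mae(x, y):
    # Mean absolute error between two equal-length signals.
    assert len(x) == len(y)
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

raw_eog = [10.0, -5.0, 3.0, 0.0]       # toy samples (uV), illustrative only
coupled_eog = [10.5, -4.0, 3.0, -0.5]

error = mae(raw_eog, coupled_eog)      # (0.5 + 1.0 + 0.0 + 0.5) / 4 = 0.5
```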
Table 3. Correlation coefficient between the raw EOG and clean EEG signal.
| Dataset | MASS (L) | MASS (R) | DREAMS (L) | DREAMS (R) | HSFU (L) | HSFU (R) |
| Correlation coefficient | 0.43 ± 0.11 | 0.30 ± 0.23 | 0.32 ± 0.12 | 0.01 ± 0.04 | 0.40 ± 0.08 | 0.46 ± 0.10 |
L, left eye; R, right eye.
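The values in Table 3 are Pearson correlation coefficients between the raw EOG and the separated clean EEG. The standard definition, covariance normalized by the product of the standard deviations, can be sketched in a few lines (toy inputs, illustrative only):

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linearly related toy pair yields a coefficient of 1.
r = pearson([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```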
Table 4. Results of the MASS, DREAMS, and HSFU datasets with THNN.
| Dataset | Signal | Cor | Acc | Kappa | F1 | Spec | Precision: Wake | N1 | N2 | N3 | REM |
| MASS | Right EOG | 0.0 | 0.790 | 0.786 | 0.691 | 0.885 | 0.654 | 0.308 | 0.825 | 0.797 | 0.841 |
| | | 0.1 | 0.790 | 0.786 | 0.689 | 0.880 | 0.626 | 0.353 | 0.848 | 0.811 | 0.847 |
| | | 0.2 | 0.799 | 0.795 | 0.700 | 0.889 | 0.674 | 0.361 | 0.868 | 0.805 | 0.848 |
| | | 0.3 | 0.804 | 0.796 | 0.688 | 0.883 | 0.654 | 0.397 | 0.848 | 0.862 | 0.785 |
| | | 0.5 | 0.765 | 0.759 | 0.669 | 0.866 | 0.605 | 0.391 | 0.823 | 0.748 | 0.831 |
| | Left EOG | 0.0 | 0.791 | 0.786 | 0.699 | 0.885 | 0.629 | 0.303 | 0.823 | 0.792 | 0.851 |
| | | 0.1 | 0.792 | 0.787 | 0.689 | 0.883 | 0.681 | 0.327 | 0.845 | 0.803 | 0.866 |
| | | 0.2 | 0.803 | 0.799 | 0.696 | 0.884 | 0.697 | 0.343 | 0.839 | 0.826 | 0.880 |
| | | 0.3 | 0.803 | 0.798 | 0.708 | 0.890 | 0.690 | 0.358 | 0.868 | 0.801 | 0.839 |
| | | 0.5 | 0.762 | 0.757 | 0.664 | 0.865 | 0.623 | 0.339 | 0.819 | 0.742 | 0.826 |
| | EEG | - | 0.817 | 0.806 | 0.688 | 0.898 | 0.665 | 0.398 | 0.901 | 0.741 | 0.794 |
| DREAMS | Right EOG | 0.0 | 0.811 | 0.808 | 0.749 | 0.896 | 0.882 | 0.518 | 0.794 | 0.817 | 0.856 |
| | | 0.1 | 0.809 | 0.806 | 0.749 | 0.895 | 0.891 | 0.578 | 0.802 | 0.738 | 0.744 |
| | | 0.2 | 0.807 | 0.803 | 0.743 | 0.893 | 0.874 | 0.526 | 0.798 | 0.777 | 0.883 |
| | | 0.3 | 0.797 | 0.793 | 0.736 | 0.888 | 0.869 | 0.524 | 0.784 | 0.784 | 0.850 |
| | | 0.5 | 0.791 | 0.787 | 0.725 | 0.885 | 0.826 | 0.498 | 0.784 | 0.793 | 0.839 |
| | Left EOG | 0.0 | 0.806 | 0.803 | 0.759 | 0.892 | 0.880 | 0.586 | 0.784 | 0.779 | 0.905 |
| | | 0.1 | 0.800 | 0.796 | 0.742 | 0.889 | 0.870 | 0.560 | 0.786 | 0.757 | 0.891 |
| | | 0.2 | 0.808 | 0.805 | 0.750 | 0.894 | 0.875 | 0.587 | 0.795 | 0.760 | 0.899 |
| | | 0.3 | 0.811 | 0.807 | 0.748 | 0.895 | 0.896 | 0.605 | 0.765 | 0.792 | 0.824 |
| | | 0.5 | 0.809 | 0.806 | 0.747 | 0.896 | 0.886 | 0.595 | 0.810 | 0.749 | 0.859 |
| | EEG | - | 0.820 | 0.816 | 0.743 | 0.903 | 0.887 | 0.599 | 0.840 | 0.811 | 0.756 |
| HSFU | Right EOG | 0.0 | 0.774 | 0.766 | 0.683 | 0.872 | 0.916 | 0.163 | 0.730 | 0.702 | 0.870 |
| | | 0.1 | 0.778 | 0.770 | 0.682 | 0.878 | 0.934 | 0.227 | 0.763 | 0.680 | 0.867 |
| | | 0.2 | 0.785 | 0.778 | 0.676 | 0.878 | 0.929 | 0.275 | 0.746 | 0.752 | 0.823 |
| | | 0.3 | 0.789 | 0.782 | 0.676 | 0.879 | 0.928 | 0.268 | 0.741 | 0.751 | 0.851 |
| | | 0.5 | 0.782 | 0.774 | 0.658 | 0.879 | 0.936 | 0.163 | 0.774 | 0.724 | 0.761 |
| | Left EOG | 0.0 | 0.774 | 0.764 | 0.654 | 0.878 | 0.934 | 0.136 | 0.761 | 0.645 | 0.800 |
| | | 0.1 | 0.777 | 0.769 | 0.676 | 0.880 | 0.939 | 0.246 | 0.790 | 0.633 | 0.843 |
| | | 0.2 | 0.788 | 0.780 | 0.684 | 0.884 | 0.925 | 0.252 | 0.790 | 0.656 | 0.867 |
| | | 0.3 | 0.789 | 0.781 | 0.683 | 0.881 | 0.941 | 0.333 | 0.782 | 0.719 | 0.816 |
| | | 0.5 | 0.768 | 0.759 | 0.657 | 0.873 | 0.900 | 0.229 | 0.781 | 0.665 | 0.786 |
| | EEG | - | 0.791 | 0.781 | 0.656 | 0.884 | 0.859 | 0.263 | 0.802 | 0.841 | 0.703 |
Cor, correlation; Acc, accuracy; Spec, specificity; EOG, electrooculography; EEG, electroencephalography.
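Acc and Kappa in Tables 4 and 5 are overall accuracy and Cohen's kappa. Cohen's kappa corrects the observed agreement for the agreement expected by chance from the two label distributions; a minimal pure-Python sketch (toy labels, not from the datasets):

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    # kappa = (p_obs - p_exp) / (1 - p_exp), where p_exp is the chance agreement
    # implied by the marginal label frequencies of the two raters.
    n = len(y_true)
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n        # observed agreement
    c_true, c_pred = Counter(y_true), Counter(y_pred)
    labels = set(c_true) | set(c_pred)
    p_exp = sum(c_true[l] * c_pred[l] for l in labels) / (n * n)   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

truth = ["W", "N2", "N2", "REM"]     # toy hypnogram fragment
scored = ["W", "N2", "N3", "REM"]

kappa = cohens_kappa(truth, scored)  # (0.75 - 0.25) / (1 - 0.25)
```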
Table 5. Results of sleep staging using raw EOG signals.
| Dataset | Accuracy (L) | Kappa (L) | F1-Score (L) | Accuracy (R) | Kappa (R) | F1-Score (R) |
| MASS | 0.799 | 0.793 | 0.705 | 0.800 | 0.793 | 0.705 |
| DREAMS | 0.809 | 0.806 | 0.749 | 0.807 | 0.803 | 0.739 |
| HSFU | 0.788 | 0.779 | 0.666 | 0.787 | 0.780 | 0.689 |
Table 6. The p-value of the chi-square test for different signals in different datasets.
| Dataset | Raw EOG (L) | Coupled EOG (L) | Raw EOG (R) | Coupled EOG (R) | Raw EEG |
| MASS | 0.011 | 0.010 | 0.010 | 0.010 | 0.006 |
| DREAMS | 0.015 | 0.013 | 0.014 | 0.013 | 0.009 |
| HSFU | 0.023 | 0.020 | 0.022 | 0.021 | 0.019 |
L, left eye; R, right eye.
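Table 6 reports p-values from chi-square tests; the exact contingency the authors tested is not restated here, but the Pearson chi-square statistic itself is standard. A minimal sketch for an arbitrary contingency table follows (toy counts, illustrative only). Converting the statistic to a p-value requires the chi-square survival function with the appropriate degrees of freedom (e.g., scipy.stats.chi2.sf), which is omitted to keep the sketch dependency-free.

```python
def chi_square_stat(table):
    # Pearson chi-square statistic: sum over cells of (observed - expected)^2 / expected,
    # with expected counts derived from the row and column totals.
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total
            stat += (obs - exp) ** 2 / exp
    return stat

# Toy 2x2 table with row and column totals of 30: every expected count is 15.
stat = chi_square_stat([[10, 20], [20, 10]])
```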
