Article

Improved Colony Predation Algorithm Optimized Convolutional Neural Networks for Electrocardiogram Signal Classification

1 School of Emergency Management, Institute of Disaster Prevention, Sanhe 065201, China
2 School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran 1417935840, Iran
3 Institute of Big Data and Information Technology, Wenzhou University, Wenzhou 325000, China
4 School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
* Authors to whom correspondence should be addressed.
Biomimetics 2023, 8(3), 268; https://doi.org/10.3390/biomimetics8030268
Submission received: 13 May 2023 / Revised: 18 June 2023 / Accepted: 18 June 2023 / Published: 21 June 2023
(This article belongs to the Special Issue Bionic Artificial Neural Networks and Artificial Intelligence)

Abstract: Swarm intelligence algorithms have received much attention recently because of their flexibility in solving complex real-world problems. Among them, the colony predation algorithm (CPA), inspired by the predatory habits of animal groups in nature, has recently been proposed. However, CPA suffers from poor exploratory ability and cannot always escape local optima. To improve the global search capability of CPA, this paper proposes an improved variant (OLCPA) that incorporates an orthogonal learning strategy. Then, exploiting the ability of swarm intelligence algorithms to escape local optima and approach the global optimum, a novel OLCPA-CNN model is proposed in which OLCPA tunes the parameters of a convolutional neural network. To verify the performance of OLCPA, comparison experiments were designed against traditional metaheuristics and advanced algorithms on the IEEE CEC 2017 benchmark functions. The experimental results show that OLCPA ranks first among the compared algorithms. Additionally, the OLCPA-CNN model achieves high accuracy rates of 97.7% and 97.8% in classifying the MIT-BIH Arrhythmia and European ST-T datasets.

1. Introduction

In recent years, research has shown that deep learning offers numerous benefits compared to conventional machine learning approaches [1,2]. Deep learning extracts features more efficiently than traditional machine learning, so many researchers tend to work with it [3,4]. These methods are robust against noise, can achieve superior accuracy, and scale straightforwardly to larger datasets, consequently reducing training time [5,6,7]. The main methods commonly used in deep learning are deep neural networks and generative adversarial networks. Among these networks, convolutional neural networks (CNNs) have made many contributions to the field of computer vision and are popular among researchers [8]. LeNet-5 [9], proposed by Yann LeCun in 1998 to solve the problem of handwriting recognition, is the pioneering CNN. Then, the AlexNet [10] network structure was introduced, winning the 2012 ImageNet competition. Since then, CNNs have received widespread and enthusiastic attention worldwide, and more new network structures have been proposed, such as VGG [11], GoogLeNet [12], ResNet [13], and DenseNet [14]. As these new network structures were created, the number of layers and parameters increased accordingly. However, tuning hyperparameters is a highly challenging task: because the number of parameters is very large, specialized personnel are required to select optimal values based on experience, which consumes considerable labor and material resources. Manual parameter tuning is therefore impractical when resources are limited.
Optimization techniques range from single-objective methods to multiobjective and many-objective techniques [15,16,17]. Each of these approaches has its own unique set of computational difficulties, making the process of optimization both challenging and rewarding [18,19]. The challenges of the optimization methods presented in the previous literature include high computational cost, difficulty in tackling multiple local optima, lack of robustness, premature convergence, and conditionality of global optima results [20,21,22]. As one of the main classes of optimization methods, swarm intelligence (SI) algorithms are generated from the activity patterns of various groups of organisms [23]. In recent years, SI algorithms have been able to solve complex optimization problems in the real world due to their excellent optimization capabilities [24,25]. Some famous algorithms are particle swarm optimization (PSO) [26], the Harris hawks optimization algorithm (HHO) [27], the slime mould algorithm (SMA) [28,29], hunger games search (HGS) [30], the Runge Kutta optimizer (RUN) [31], the weighted mean of vectors (INFO) [32], the colony predation algorithm (CPA) [33], and the rime optimization algorithm (RIME) [34]. They have been applied to many problems, such as bankruptcy prediction [35], economic emission dispatch [36], feature selection [37,38], constrained multiobjective optimization [39], dynamic multiobjective optimization [40], global optimization [41], medical image segmentation [42], feed-forward neural networks [43], scheduling optimization [44], large-scale complex optimization [45], multiobjective optimization [46], and numerical optimization [47]. Among them, CPA is a recently proposed algorithm based on prey predation by animal groups in nature, and it has a stronger optimization ability than PSO, MFO, and other algorithms [33]. Because swarm intelligence algorithms can solve complex practical problems, many researchers have proposed optimizing the parameters of deep learning network structures using SI. Researchers also need to assess the performance of swarm intelligence algorithms; the IEEE CEC2017 suite serves as a set of benchmark functions for this purpose and comprises four categories: unimodal functions, multimodal functions, hybrid functions, and composition functions.
Research on deep learning network structures currently follows two approaches: some researchers configure the network structure manually, while others use SI algorithms to optimize its hyperparameters. Pyakillya et al. [48] proposed a model for automatic classification using a deep learning architecture that consists of a one-dimensional convolutional layer and fully connected layers. The model achieved an accuracy of 86% on the validation dataset. Mathews et al. [49] proposed a deep learning model for electrocardiogram (ECG) classification that incorporates a restricted Boltzmann machine and a deep belief network. The experiments showed that this model performed better at low sampling rates. Sannino et al. [50] proposed a deep neural network with seven hidden layers for automatic classification and verified experimentally that this model outperforms other models in terms of accuracy, achieving a precision of 99.52%. Strodthoff et al. [51] proposed a deep learning-based time series classification algorithm that mainly uses the ResNet network model, and the experimental results proved that the performance of this algorithm is promising. Peimankar et al. [52] proposed a method for ECG signal detection composed of deep learning-based convolutional neural networks and long short-term memory networks. Different heartbeat waveforms were detected from the MITDB and QTDB datasets to test the method's effectiveness; the F1 scores obtained on the two datasets are 99.56% and 96.78%, respectively, indicating that this method is very effective for ECG signal detection. Hasan et al. [53] proposed the use of one-dimensional convolutional neural networks for the recognition of multiple heart diseases, and the accuracy of this method on the MIT-BIH, St.-Petersburg, and PTB datasets is 97.7%, 99.71%, and 98.24%, respectively. Acharya et al. [54] investigated a nine-layer convolutional neural network structure for identifying five different classes of heartbeats in ECG signals, and the experimental results showed an accuracy of 94.03%.
Compared with manual search, automatic search using SI iteratively improves candidate solutions to find suitable parameter values within a selected range and ultimately locate the most optimal values.
In the study of Houssein et al. [55], a convolutional neural network model based on an improved marine predators algorithm (IMPA-CNN) was proposed to exploit the classification strength of CNNs and to find their best hyperparameters. This model uses the improved marine predators algorithm to select the most suitable CNN parameters automatically, and the experimental results of testing it on different ECG datasets show that it is very effective. Khalifa et al. [56] proposed a method to optimize the parameters of a seven-layer CNN network; the first six layers of the CNN use gradient descent, and the last layer uses a particle swarm algorithm to find the optimal parameters. This model was compared with a standard CNN architecture on a handwritten character recognition dataset, and the results show that it has a higher accuracy, with a value of 96.67%. Yamasaki et al. [57] proposed a method to improve image recognition accuracy in which a particle swarm algorithm automatically finds the appropriate hyperparameters of a CNN; in image recognition experiments, the optimized AlexNet structure achieved higher accuracy than the standard AlexNet-CNN structure. Dey et al. [58] proposed a model integrating three network structures, VGG19, ResNet50, and DenseNet121, used to screen tuberculosis (TB) images from chest X-rays. To overcome the problem of manual tuning, the training part of the model uses an optimization algorithm to set the parameters of the model, and this method was proven effective for TB classification by testing on a TB dataset.
Using convolutional neural networks, Pathan et al. [59] investigated a method to automatically identify chest X-ray images affected by COVID-19. This method achieved 98.8% and 96% accuracy on dataset 1 and dataset 2, respectively, and the results of the experiments on COVID-19 images show that it can effectively screen out patients with the disease. The hyperparameters of the DenseNet121 architecture were optimized using the gravitational search algorithm in the work of Ezzat et al. [60]. This optimized model was compared with the Inception-v3 CNN architecture in experiments on COVID-19 detection, and the experimental results indicate that it performs exceptionally well in terms of accuracy, reaching 98.38%, significantly higher than its competitor. Most studies used optimization algorithms to tune the parameters of CNNs, while some works used optimization algorithms to tune the overall architecture of CNNs to select the most appropriate number of network layers. In the study by Singh et al. [61], a multi-level particle swarm algorithm was used to optimize the network structure and hyperparameters of CNNs in two stages. In the first stage, the multi-level particle swarm algorithm optimizes the CNN architecture and determines the number of network layers that best exploit the performance of the CNN. In the second stage, the hyperparameters are tuned based on this network structure. The final model was tested using an image dataset, and the results show that its performance is excellent. To speed up finding the layers and parameters of a CNN architecture, Fernandes et al. [62] proposed a new particle swarm velocity operator and used this new particle swarm algorithm to optimize the architecture and parameters of a CNN. Experimental tests show that this approach can find an optimized CNN model for any given dataset.
Although many researchers have studied this area, there are still many challenges to be tackled. CPA faces the same challenges as other swarm intelligence algorithms, such as falling into local optima and slow convergence. To solve these problems, CPA needs to be improved. Therefore, this motivates us to propose an improved CPA and use it to optimize the parameters of a CNN.
This paper proposes an improved CPA based on an orthogonal learning strategy (OLCPA). To demonstrate the effectiveness of OLCPA in optimizing CNN parameters, it was applied to the classification of arrhythmia in ECG datasets. The main contributions of this paper are as follows:
  • An OLCPA algorithm based on the orthogonal learning strategy is proposed, and it is compared with four traditional and seven advanced algorithms on the IEEE CEC 2017 benchmark functions.
  • This paper analyzes the scalability of OLCPA and CPA on different dimensions of the IEEE CEC2017 benchmark functions.
  • A CNN-based OLCPA-CNN model for identifying abnormal ECG signals is designed.
  • The OLCPA-CNN model is compared with other methods using the MIT-BIH and the European ST-T datasets.
The rest of this paper is organized as follows: Section 2 describes CPA and introduces background knowledge on CNNs. Section 3 describes the improved CPA algorithm (OLCPA). Section 4 describes the design of the OLCPA-CNN model. Section 5 describes the experimental design and results for OLCPA. Section 6 describes the application of OLCPA-CNN. Section 7 presents the discussion, and Section 8 concludes the paper.

2. Preliminary Work

2.1. Overview of Colony Predation Algorithm

The colony predation algorithm [33] was inspired by the fact that cooperative, communicative group predation among group-living animals increases the probability of successful predation.
Group-living animals pursue their prey by communicating with each other, and Equation (1) simulates this process.
$X_j^i(t+1) = X_j^i(t) + (1 - r) \cdot \frac{X_1(t) + X_2(t)}{2}$  (1)
where $X_j^i(t)$ is the position of the current individual $i$ in the $j$-th dimension ($j$ ranges from 1 to dim), $X_1$ and $X_2$ are the positions of the two individuals closest to the prey, $r$ is a random number between 0 and 1, and $X_j^i(t+1)$ is the position of the individual in the next iteration.
Two strategies are used in the pursuit process to increase the probability of successful predation: scattering the prey and surrounding the prey. Prey dispersal drives the prey in different directions to weaken the prey group, and Equation (2) simulates this process.
$X(t+1) = X_{best} - S \cdot (r_1 \cdot (ub - lb) + lb)$  (2)
where $S$ denotes the energy of the prey, whose value varies during the search; $lb$ and $ub$ are the lower and upper bounds of the search range; $r_1$ is a random number between 0 and 1; $X_{best}$ is the location of the prey; and $X(t+1)$ is the next position of the pursuer. The variation of $S$ is as follows:
$S_0 = a - \frac{t \cdot a}{N}$  (3)
$S = 2 \cdot S_0 \cdot r_2$  (4)
$a = e^{\,w - 2w\left(1 - \frac{t}{MaxFes}\right)}$  (5)
where $r_2$ is a random number between 0 and 1, $t$ is the current number of evaluations, and $N$ indicates the number of predators. $S_0$ varies with the number of evaluations, the value of $a$ is related to the number of evaluations, and $w$ is set to 9.
After the prey has been successfully dispersed, the predators use a siege attack on them, a process shown in Equation (6):
$X(t+1) = X_{best} - 2S \cdot D \cdot e^{l} \cdot \tan\left(\frac{\pi}{4} l\right)$  (6)
$D = |X_{best} - X(t)|$  (7)
where $D$ is the distance between the current individual and the prey. The probability of each of the two strategies being executed is shown in Equation (8).
$X(t+1) = \begin{cases} X_{best} - S \cdot (r_1 \cdot (ub - lb) + lb), & r \geq 0.5 \\ X_{best} - 2S \cdot D \cdot e^{l} \cdot \tan\left(\frac{\pi}{4} l\right), & r < 0.5 \end{cases}$  (8)
When the predators begin to chase prey, there are two strategies: if a predator finds prey nearby, it supports the predator closest to the prey, and Equation (9) simulates this process; if no prey is found nearby, the predator randomly chooses another location, as shown in Equation (11).
$X(t+1) = P_{nearest}$  (9)
$D_1 = 2 r_4 \cdot X_{rand} - X(t)$  (10)
$X(t+1) = X_{rand} - S \cdot D_1$  (11)
$X_{rand} = r_5 \cdot (ub - lb) + lb$  (12)
where $P_{nearest}$ denotes the position of the individual closest to the prey, $D_1$ denotes the distance moved by the random population, and $X_{rand}$ denotes the new position randomly generated by the prey individuals.
The probability of the above two strategies being executed is described in Equation (13). Algorithm 1 describes the process of implementing CPA.
$X(t+1) = \begin{cases} P_{nearest}, & |r_6| \leq 1 \\ X_{rand} - S \cdot D_1, & |r_6| > 1 \end{cases}$  (13)
Algorithm 1 Pseudo-code for CPA
Initialize the population size Num, the problem dimension dim, and the maximum number of evaluations MaxFes
While (t ≤ MaxFes)
  For i = 1: Num
    Calculate individual fitness values
    Update $X_{best}$
  End for
  For j = 1: dim
    Update $X_1$, $X_2$
    Calculate $X_j^i$ using Equation (1)
  End for
  For i = 1: Num
    Update S
    If $|S| < \frac{2}{3}a$
      Calculate the current agent's position by Equation (8)
    End if
    If $|S| \geq \frac{2}{3}a$
      Calculate the current agent's position by Equation (13)
    End if
  End for
  t = t + 1
End while
Return $X_{best}$
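To make the update rules above concrete, the following Python sketch implements one CPA iteration. It is an illustration under stated assumptions rather than the reference implementation: the distributions of the random factors l and r6 and the exact reading of Equation (5) are not fully specified in the text, so plausible choices are assumed and flagged in the comments.

import numpy as np

def cpa_step(X, fitness, X_best, t, max_fes, lb, ub, w=9.0):
    """One CPA iteration over a population X of shape (num, dim)."""
    num, dim = X.shape
    # Communication (Equation (1)): move toward the mean of the two
    # individuals closest to the prey (taken here as the two best by fitness).
    order = np.argsort(fitness)
    X1, X2 = X[order[0]], X[order[1]]
    r = np.random.rand(num, dim)
    X = X + (1.0 - r) * (X1 + X2) / 2.0
    # Prey energy schedule (Equations (3)-(5)); Eq. (5) reading is assumed,
    # and N in Eq. (3) is read as the evaluation budget to keep S bounded.
    a = np.exp(w - 2.0 * w * (1.0 - t / max_fes))
    S0 = a - t * a / max_fes
    S = 2.0 * S0 * np.random.rand()
    for i in range(num):
        if abs(S) < (2.0 / 3.0) * a:
            # Disperse or encircle the prey (Equation (8)).
            if np.random.rand() >= 0.5:
                r1 = np.random.rand(dim)
                X[i] = X_best - S * (r1 * (ub - lb) + lb)
            else:
                D = np.abs(X_best - X[i])                 # Eq. (7)
                l = np.random.uniform(-1.0, 1.0)          # assumed range of l
                X[i] = X_best - 2.0 * S * D * np.exp(l) * np.tan(np.pi / 4.0 * l)
        else:
            # Support the nearest hunter or search randomly (Equation (13)).
            r6 = np.random.randn()                        # assumed distribution
            if abs(r6) <= 1.0:
                X[i] = X[order[0]]                        # Eq. (9): P_nearest
            else:
                X_rand = np.random.rand(dim) * (ub - lb) + lb   # Eq. (12)
                D1 = 2.0 * np.random.rand() * X_rand - X[i]     # Eq. (10)
                X[i] = X_rand - S * D1                          # Eq. (11)
    return np.clip(X, lb, ub)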

2.2. Convolutional Neural Network

Yann LeCun of New York University proposed the convolutional neural network in 1998. It differs from the multilayer perceptron (MLP) and is used in various fields, such as image processing. Unlike an MLP, a CNN uses local connections and weight sharing: reducing the number of weights makes the network easier to optimize on the one hand and effectively reduces the complexity of the model on the other. A convolutional neural network is a deep neural network with a convolutional structure, and its overall architecture includes an input layer, convolutional layers, rectified linear unit (ReLU) layers, pooling layers, and fully connected layers. In practical applications, the convolutional stage combines the convolutional layer and the ReLU layer: the output of a convolution is usually passed through an activation function, the commonly used ones being Sigmoid, Tanh, and ReLU. A pooling layer is generally placed after a convolutional layer to reduce the network's parameters and computational cost. The role of the fully connected layer in the entire network is classification, and it is usually found in the last few layers of the CNN. The one-dimensional CNN structure used in this paper is shown in Figure 1.
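As a concrete illustration of this kind of architecture, the following PyTorch sketch builds a nine-layer 1D CNN with three convolution and pooling blocks followed by two fully connected layers, as described above. The filter counts, kernel sizes, input length, and four-class output are illustrative assumptions, not the tuned values reported later in the paper.

import torch
import torch.nn as nn

class ECGNet(nn.Module):
    """Minimal 1D CNN sketch: three conv + pool blocks, two FC layers."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(64),      # first fully connected layer (input size inferred)
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A batch of 8 heartbeat segments of 360 samples (shapes are assumptions).
logits = ECGNet()(torch.randn(8, 1, 360))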

3. The Improved CPA

CPA is an algorithm with good optimization performance. However, when faced with complex optimization problems, it tends to fall into local optima or converge slowly because it lacks strategies that can flexibly address these problems. To overcome these issues, we propose an improved CPA that incorporates an orthogonal learning strategy.

3.1. Orthogonal Learning Design

An orthogonal learning design [63] is a method that uses a small number of experiments to find the best solution. The required number of experiments is determined by two quantities of the orthogonal table $L_M(Q^K)$: the number of factors $K$ and the number of levels $Q$ for each factor. A full factorial design requires $Q^K$ experiments, but when the values of $Q$ and $K$ are large, it is impossible to complete this many experiments, so the orthogonal table is used to design the experiments. With an orthogonal table, only $M$ combinations need to be tested, where $M$ is much smaller than $Q^K$. The following orthogonal table $L_9(3^3)$ illustrates the idea.
$L_9(3^3) = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 2 \\ 1 & 3 & 3 \\ 2 & 1 & 2 \\ 2 & 2 & 3 \\ 2 & 3 & 1 \\ 3 & 1 & 3 \\ 3 & 2 & 1 \\ 3 & 3 & 2 \end{bmatrix}$  (14)
In Equation (14), $L_9$ indicates that an experiment designed with this orthogonal table needs to be executed only 9 times, whereas without the orthogonal design it would need to be executed $3^3 = 27$ times. The orthogonal design can therefore significantly reduce the number of experiments, and the method is even more effective when the values of $Q$ and $K$ are larger.
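The table in Equation (14) can be generated mechanically. The short Python sketch below uses the standard modular construction for a three-level orthogonal array (columns a, b, and (a + b) mod 3), which reproduces the nine rows above; the construction choice is a common one and is assumed here rather than taken from the paper.

import numpy as np

def l9_3levels() -> np.ndarray:
    """Build the L9(3^3) orthogonal table with levels reported as 1..3."""
    rows = []
    for a in range(3):
        for b in range(3):
            rows.append([a, b, (a + b) % 3])
    return np.array(rows) + 1

print(l9_3levels())  # nine runs instead of the 3**3 = 27 of a full design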

3.2. Orthogonal Learning Strategy

This study introduces the orthogonal learning (OL) strategy into CPA, using orthogonal tables to generate a new search mechanism that explores more regions and avoids becoming trapped in local optima. With this strategy, the original CPA generates M + 1 candidate search agents, which improves its exploration ability.

3.3. The Proposed OLCPA

This subsection proposes a novel CPA algorithm based on the orthogonal learning strategy. Equation (15) describes the selection process of OLCPA. The new algorithm exploits the orthogonal strategy's search mechanism to expand the search scope of solutions and find high-quality ones. The pseudo-code for OLCPA is given in Algorithm 2, and Figure 2 depicts the specific process of OLCPA.
$X(t+1) = \begin{cases} X_{new}, & F(X_{new}) < F(X_{old}) \\ X_{old}, & \text{otherwise} \end{cases}$  (15)
where $X_{new}$ represents the new search agent generated using the orthogonal strategy, $X_{old}$ is the search agent obtained without the orthogonal strategy, and $F$ represents the fitness function.
Algorithm 2 Pseudo-code for OLCPA
Initialize the population size Num, the problem dimension dim, and the maximum number of evaluations MaxFes
While (t ≤ MaxFes)
  For i = 1: Num
    Calculate individual fitness values
    Update $X_{best}$
  End for
  For j = 1: dim
    Update $X_1$, $X_2$
    Calculate $X_j^i$ by Equation (1)
  End for
  For i = 1: Num
    Update S
    If $|S| < \frac{2}{3}a$
      Calculate the current agent's position by Equation (8)
    End if
    If $|S| \geq \frac{2}{3}a$
      Calculate the current agent's position by Equation (13)
    End if
  End for
  Execute the orthogonal strategy
  Update the current search agent
  t = t + 1
End while
Return $X_{best}$
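The orthogonal strategy step in Algorithm 2 can be sketched as follows. Since the text does not specify how the factor levels are formed, this illustration assumes that each orthogonal-array column controls one group of dimensions and that the three levels are taken from X_old, X_best, and their midpoint; the greedy selection of Equation (15) then keeps a trial vector only if it improves the fitness.

import numpy as np

def orthogonal_learning(x_old, x_best, fit, lb, ub, oa):
    """Greedy orthogonal learning step (Equation (15)); level choices are assumed."""
    dim = x_old.size
    k = oa.shape[1]                               # number of factors in the OA
    groups = np.array_split(np.arange(dim), k)    # dimensions grouped per factor
    levels = [x_old, x_best, (x_old + x_best) / 2.0]
    best_x, best_f = x_old, fit(x_old)
    for run in oa:                                # M trial combinations
        trial = np.empty(dim)
        for col, g in enumerate(groups):
            trial[g] = levels[run[col] - 1][g]
        trial = np.clip(trial, lb, ub)
        f = fit(trial)
        if f < best_f:                            # keep X_new only if it improves
            best_x, best_f = trial, f
    return best_x, best_f

With the L9(3^3) table sketched in Section 3.1, each call evaluates nine trial vectors (M = 9) and returns the best of the M + 1 candidates.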

4. The Design of the OLCPA-CNN Model

This section describes the design of the OLCPA-CNN model. The classification error rate of the CNN is used as the fitness function; the parameters of the CNN are then set according to the optimal solution generated by OLCPA to obtain the OLCPA-CNN model, which is subsequently evaluated.

4.1. The Network Structure of CNN

The nine-layer CNN model used in this paper is displayed in Figure 1. The overall structure consists of three sets of convolutional and pooling layers and two fully connected layers, where each set consists of one convolutional layer and one pooling layer. Data fed to the input layer undergo convolution and pooling operations, which map the input through different functions while extracting useful features, and classification is then performed in the fully connected layers. Figure 3 depicts the design process of OLCPA-CNN. A search agent of OLCPA represents the parameters of the CNN, and the optimized parameters are obtained through iterative updates of the search agent's position.

4.2. Hyperparameter Optimization

Parameter tuning occupies a crucial place in deep learning classification [64,65]. OLCPA optimizes the parameters of the CNN structure by iteratively updating the solution. In this paper, the parameters to be optimized are concentrated in the three convolutional layers and the two fully connected layers. The optimized hyperparameters include the number and size of the convolutional kernels in each convolutional layer, the number of neurons in the first fully connected layer, the learning rate, the number of epochs, and the L2 regularization coefficient. The optimized parameters and their ranges are described in Table 1. The settings of OLCPA are as follows: the population size is 5, the maximum number of iterations is 20, and the dimensionality is set to 10. Because each dimension corresponds to one CNN parameter, its bounds match the allowed range of that parameter.
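To illustrate how a search agent maps onto the ten hyperparameters, the sketch below decodes a position vector in [0, 1]^10 into concrete CNN settings. The parameter names and ranges are hypothetical placeholders; the actual bounds are those listed in Table 1.

import numpy as np

PARAM_SPECS = [                     # (name, low, high, integer?)
    ("n_kernels_1", 8, 64, True), ("kernel_size_1", 3, 9, True),
    ("n_kernels_2", 8, 64, True), ("kernel_size_2", 3, 9, True),
    ("n_kernels_3", 8, 64, True), ("kernel_size_3", 3, 9, True),
    ("fc1_units", 32, 256, True), ("learning_rate", 1e-4, 1e-1, False),
    ("epochs", 5, 50, True), ("l2_lambda", 1e-5, 1e-2, False),
]

def decode(agent: np.ndarray) -> dict:
    """Map an agent in [0, 1]^10 to concrete hyperparameter values."""
    params = {}
    for x, (name, lo, hi, is_int) in zip(agent, PARAM_SPECS):
        v = lo + x * (hi - lo)
        params[name] = int(round(v)) if is_int else v
    return params

print(decode(np.random.rand(10)))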

4.3. Fitness Function

The fitness function is essentially an evaluation function that measures the quality of an algorithmic solution through the fitness of individuals in the population [66]. In this article, the fitness function is the classification error rate, which is directly tied to the classification accuracy (ACC). The correct classification rate is calculated from the confusion matrix. Table 2 shows the entries of the confusion matrix, where TP means that both the actual and predicted values are positive, TN means that both values are negative, FN means that the predicted value is negative while the real value is positive, and FP means that the predicted value is positive while the real value is negative. The formula for ACC is shown in Equation (16), and the corresponding fitness function is given in Equation (17). A smaller fitness value means a smaller classification error rate, which reflects a higher-quality individual solution and benefits the optimization of the CNN parameters.
$ACC = \frac{TP + TN}{TP + FP + FN + TN}$  (16)
$Fitness = 1 - ACC$  (17)
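Putting Equations (16) and (17) together, the fitness evaluation for one search agent might look like the following sketch, where decode() is the mapping sketched in Section 4.2, and build_cnn() and train_and_evaluate() are hypothetical helpers standing in for the training pipeline, which the paper does not spell out.

def fitness(agent, X_train, y_train, X_val, y_val):
    """Return the classification error rate (Equation (17)) for one agent."""
    params = decode(agent)                      # decode() as sketched above
    model = build_cnn(params)                   # hypothetical constructor
    acc = train_and_evaluate(model, params,     # hypothetical trainer
                             X_train, y_train, X_val, y_val)
    return 1.0 - acc                            # Fitness = 1 - ACC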

5. Experimental Design and Results of OLCPA

In this section, we compare the OLCPA algorithm with other algorithms on the IEEE CEC2017 benchmark functions to verify its performance. All algorithms were tested using MATLAB R2018b on Windows Server 2016 with 128 GB of RAM and an Intel(R) Xeon(R) Silver 4110 CPU. The assessment of AI-based approaches through fair procedures can advance replicability, transparency, research standards, and public confidence [67,68,69]. Comparing computational methods using the same criteria allows us to establish unbiased assessments [70,71]. We conducted our trials in a manner consistent with fair comparison principles. The parameters involved in the experiments are as follows: the population size N, the dimension D, the maximum number of evaluations MaxFes, and the number of independent runs of each algorithm Num. Table 3 shows the values of these parameters.

5.1. Benchmark Function

In this subsection, the IEEE CEC2017 benchmark functions [72] are used to test the performance of OLCPA. These functions are classified into the following categories: unimodal functions (F1–F3), multimodal functions (F4–F10), hybrid functions (F11–F20), and composition functions (F21–F30).

5.2. Scalability Test

This subsection tests OLCPA and CPA in 50 and 100 dimensions under the same experimental conditions. The scalability tests are mainly intended to verify the performance of OLCPA when coping with different dimensions. Table 4 shows the comparison results of the two algorithms, where Avg and Std denote the mean and standard deviation of the experimental results, respectively. The results show that, in both 50 and 100 dimensions, the quality of most solutions found by OLCPA is significantly better than that of CPA, which indicates that the improvement of CPA is effective and that OLCPA also performs well in high dimensions.

5.3. Comparison with Conventional and Advanced Algorithms

In this section, to test the performance of OLCPA, it is compared with 11 algorithms: CCMWOA [73], IGWO [74], CCMSCSA [75], BMWOA [76], CMFO [77], CESCA [78], GCHHO [79], DE [80], MFO [81], HGS [30], and CPA [33]. To ensure reliability, all experiments were conducted under the same conditions. The results of comparing OLCPA with the algorithms mentioned above are listed in Table 5. The last three columns of the table are Rank, the symbol "+/=/−", and Avg, where Rank represents the ranking obtained from the Friedman test, "+/=/−" represents the number of the 30 benchmark functions on which OLCPA is stronger than, equal to, or weaker than the other algorithm, and Avg represents the average of the benchmark function test results.
From the experimental data in Table 5, we can see that OLCPA ranks first, with a smaller mean value of 3.1322 compared to 3.6933 for CPA, which reflects the stronger effect of OLCPA. OLCPA outperformed CCMWOA and CESCA on all 30 functions of CEC 2017. Although the performance of CCMSCSA is close to that of OLCPA, OLCPA performs better on the multimodal and hybrid functions and ranks first among these competitors.
From the results of the Wilcoxon signed-rank test in Table 6, the p-values of most algorithms are less than 0.05, proving the statistically superior performance of OLCPA compared to other algorithms.
Figure 4 depicts the convergence curves of OLCPA and the other algorithms; these plots show that OLCPA has the best ability to find the optimal solution among the compared algorithms. Although the competition between HGS and OLCPA is fierce on functions F4, F12, and F19, OLCPA converges with increasing speed and accuracy in the later iterations and finally finds the optimal solution. Based on the trends of these convergence plots, the convergence speed and accuracy of OLCPA are better than those of its competitors.

6. Application in ECG Signal Classification

6.1. Test Datasets

This section focuses on the datasets used for training and testing. PhysioNet is a well-known research resource for studying complex physiological signals; it includes the MIT-BIH Arrhythmia Database [82] and the European ST-T Database [83], which are used here for training and testing.

6.1.1. MIT-BIH Arrhythmia Database

The data in the MIT-BIH database were obtained from ECG recordings of 47 patients, 60% of whom were inpatients and 40% outpatients, examined by the BIH Arrhythmia Laboratory between 1975 and 1979. The records consist of 48 half-hour dual-channel ECG recordings sampled at 360 Hz; the two channels are a modified limb lead (MLII) and one of the modified leads V1 to V5.

6.1.2. European ST-T Database

The European ST-T database mainly contains ST and T-wave variations. The data are derived from 90 ECG recordings of 79 subjects, men aged between 30 and 84 years and women aged between 55 and 71. Each recording spans 2 h and contains two signals, each sampled at 250 Hz.
Because an appropriate sample size had to be selected, the heartbeats were grouped following the AAMI standard classification approach, which is described in Table 7. Owing to the uneven sample distribution, four categories (N, S, Q, VEB) of the AAMI classification were selected from the MIT-BIH database for testing, and three (N, S, VEB) were selected from the European ST-T database.
In our experiments, 3000 samples from the European ST-T database were used, of which 2000 formed the training set and 1000 the test set, and 10,000 samples from the MIT-BIH database were used, of which 8000 formed the training set and 2000 the test set. Table 8 shows the number of samples per category for each dataset.

6.2. Metrics for Performance Evaluation

The evaluation metrics used in this section differ from those used in Section 5. In Section 5, the mean and standard deviation were used to evaluate the performance of the OLCPA algorithm on continuous problems. This section, however, focuses on using the OLCPA-CNN model to solve ECG signal classification, which is a categorical problem. Therefore, to effectively evaluate the performance of the OLCPA-CNN model and compare it with the methods of other researchers, common deep learning metrics are used: accuracy (ACC), precision (Pr), specificity (Sp), sensitivity (Se), and F-score (F1). They are calculated as follows:
$Se = \frac{TP}{TP + FN}$  (18)
$Sp = \frac{TN}{FP + TN}$  (19)
$Pr = \frac{TP}{TP + FP}$  (20)
$F1 = \frac{2TP}{2TP + FP + FN}$  (21)
where the meanings of TP, TN, FP, and FN are described in Section 4.3.
ACC indicates the rate of correct classification; a larger ACC reflects better classification performance. Sensitivity indicates the proportion of all truly positive samples that are predicted as positive, so the sensitivity value rises as the classification accuracy on positive samples rises. Specificity indicates the proportion of all truly negative samples that are predicted as negative. Precision indicates the proportion of samples predicted as positive that are truly positive. The F1 score evaluates the performance of a classifier as a whole; the larger the F1, the better the classifier.
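For reference, the following short sketch computes all of these metrics from the four confusion-matrix counts defined in Section 4.3; the example counts in the call are arbitrary.

def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Metrics of Equations (16) and (18)-(21) from confusion-matrix counts."""
    return {
        "ACC": (tp + tn) / (tp + fp + fn + tn),   # Equation (16)
        "Se":  tp / (tp + fn),                    # sensitivity, Eq. (18)
        "Sp":  tn / (fp + tn),                    # specificity, Eq. (19)
        "Pr":  tp / (tp + fp),                    # precision,   Eq. (20)
        "F1":  2 * tp / (2 * tp + fp + fn),       # F-score,     Eq. (21)
    }

print(classification_metrics(tp=880, tn=1020, fp=40, fn=60))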

6.3. Performance Analysis of OLCPA-CNN on Datasets

The proposed OLCPA-CNN model was trained and tested on two different datasets. Since OLCPA can tune the parameters of the CNN through iterative updates of the population, it allows the CNN to perform better in classification. The classification accuracy of the model on the two datasets (MIT-BIH, ST-T) is 97.9% and 97.8%, respectively, which shows that the performance of the OLCPA-CNN model is excellent.
Figure 5 depicts the average values of the evaluation metrics for OLCPA-CNN, CPA-CNN, and a randomly generated CNN on the MIT-BIH dataset; the metrics of OLCPA-CNN are higher than those of the CNN and CPA-CNN, indicating that OLCPA-CNN outperforms them. Figure 6 shows the average performance of the three models on the ST-T dataset, where OLCPA-CNN is again significantly stronger than CPA-CNN and the plain CNN, reflecting the good overall optimization of OLCPA-CNN. Figure 7 and Figure 8 show the accuracy and loss of OLCPA-CNN on the ST-T and MIT-BIH datasets, respectively; as the number of iterations increases, the accuracy approaches 100% and the loss converges to 0.
To highlight the effective classification performance of OLCPA-CNN on the MIT-BIH dataset, it was compared with other methods. These include the following: Li et al. [84] proposed a model that optimizes support vector machine classifiers using a genetic algorithm and applied it to the MIT-BIH dataset; Patro et al. [85] proposed optimizing machine learning classifiers with optimization algorithms and applied this to the MIT-BIH dataset, where the algorithms and classifiers involved include the support vector machine (SVM), random forest (RF), genetic algorithm (GA), and particle swarm optimization (PSO); and Acharya et al. [54] investigated a nine-layer convolutional neural network structure for identifying five different classes of heartbeats in ECG signals. The experimental data in Table 9 show that OLCPA-CNN is effective compared with these methods.

7. Discussion

The classification results demonstrate that OLCPA-CNN can automatically search for the best hyperparameters of a CNN, and the proposed OLCPA-CNN model can effectively address ECG classification tasks. Moreover, the ability to automatically extract features and perform classification is particularly valuable, considering the significant expense associated with manual feature annotation by specialized professionals. However, when optimizing complex network architectures and handling vast amounts of data, this technique has the drawback of a high time cost. In addition, there is currently no universally applicable solution to every problem, and the optimization algorithm and network architecture (including hyperparameters, number of neurons, and number of layers) must be chosen based on the specific problem being addressed. Therefore, to solve ECG classification problems, this study selected an improved CPA, namely the OLCPA algorithm, to optimize the hyperparameters of a CNN. In future work, it can also be applied to more cases, such as the optimization of machine learning models [86], fine-grained alignment [87], computational experiments [88,89], Alzheimer's disease identification [90], iris or retinal vessel segmentation [91,92], MRI reconstruction [93], service ecosystems [94], structured sparsity optimization [95], tensor recovery [96,97], medical image computing [98], computer-aided medical diagnosis [99], image denoising [100], renewable energy generation [101], and medical signals [102,103].

8. Conclusions and Future Works

This article presents an OLCPA algorithm based on an orthogonal learning strategy and proposes an OLCPA-CNN model that optimizes the hyperparameters of a CNN using OLCPA. The experimental results show that the OLCPA-CNN model achieves excellent classification performance, with an accuracy of 97.90% on the MIT-BIH dataset, outperforming other models proposed by researchers. As the MIT-BIH dataset is a type of time-series data, the OLCPA-CNN model presented in this paper can be used for ECG classification as well as other time-series datasets, such as geomagnetic data. However, the model has limitations and may not be suitable for every classification problem; specific algorithms should be designed and appropriate network structures optimized for specific problems.
In future work, reducing the running time will be the focus, since the parameter optimization process is time-consuming. Secondly, this model is also valuable for other problems, such as regression. Third, CPA can be combined with other network structures.

Declaration of AI and AI-Assisted Technologies in the Writing Process

During the revision of this work, the authors used ChatGPT to enhance the English grammar and paraphrase some sentences. After using this tool/service, the authors reviewed and edited the content as needed, and they take full responsibility for the content of the publication.

Author Contributions

X.H.: Conceptualization, Software, Data Curation; W.S.: Conceptualization, Investigation, Writing—Original Draft, Project Administration; R.Z.: Software, Formal Analysis, Data Curation; A.A.H.: Methodology, Writing—Original Draft, Writing—Review and Editing; H.C.: Methodology, Validation, Investigation, Supervision; Y.Z.: Validation, Formal Analysis, Writing—Review and Editing, Funding Acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Hebei Province (D2022512001); the National Natural Science Foundation of China (42164002); MRC, UK (MC_PC_17171); Royal Society, UK (RP202G0230); BHF, UK (AA/18/3/34220); Hope Foundation for Cancer Research, UK (RM60G0680); GCRF, UK (P202PF11); Sino-UK Industrial Fund, UK (RP202G0289); LIAS, UK (P202ED10, P202RE969); Data Science Enhancement Fund, UK (P202RE237); Fight for Sight, UK (24NN201); Sino-UK Education Fund, UK (OP202006); BBSRC, UK (RM32G0178B8).

Data Availability Statement

The numerical and experimental data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported in this paper.

References

  1. Li, L.; Wang, P.; Zheng, X.; Xie, Q.; Tao, X.; Velásquez, J.D. Dual-interactive fusion for code-mixed deep representation learning in tag recommendation. Inf. Fusion 2023, 99, 101862. [Google Scholar] [CrossRef]
  2. Liu, H.; Yue, Y.; Liu, C.; Spencer Jr, B.; Cui, J. Automatic recognition and localization of underground pipelines in GPR B-scans using a deep learning model. Tunn. Undergr. Space Technol. 2023, 134, 104861. [Google Scholar] [CrossRef]
  3. Zhao, K.; Jia, Z.; Jia, F.; Shao, H. Multi-scale integrated deep self-attention network for predicting remaining useful life of aero-engine. Eng. Appl. Artif. Intell. 2023, 120, 105860. [Google Scholar] [CrossRef]
  4. Deng, Y.; Lv, J.; Huang, D.; Du, S. Combining the theoretical bound and deep adversarial network for machinery open-set diagnosis transfer. Neurocomputing 2023, 548, 126391. [Google Scholar] [CrossRef]
  5. Deng, X.; Liu, E.; Li, S.; Duan, Y.; Xu, M. Interpretable Multi-modal Image Registration Network Based on Disentangled Convolutional Sparse Coding. IEEE Trans. Image Process. 2023, 32, 1078–1091. [Google Scholar] [CrossRef] [PubMed]
  6. Guan, Z.; Jing, J.; Deng, X.; Xu, M.; Jiang, L.; Zhang, Z.; Li, Y. DeepMIH: Deep invertible network for multiple image hiding. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 372–390. [Google Scholar] [CrossRef]
  7. Xu, J.; Pan, S.; Sun, P.Z.H.; Park, S.H.; Guo, K. Human-Factors-in-Driving-Loop: Driver Identification and Verification via a Deep Learning Approach using Psychological Behavioral Data. IEEE Trans. Intell. Transp. Syst. 2023, 24, 3383–3394. [Google Scholar] [CrossRef]
  8. Lu, H.; Zhu, Y.; Yin, M.; Yin, G.; Xie, L. Multimodal fusion convolutional neural network with cross-attention mechanism for internal defect detection of magnetic tile. IEEE Access 2022, 10, 60876–60886. [Google Scholar] [CrossRef]
  9. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  10. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  11. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  12. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
  13. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  14. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  15. Cao, B.; Zhao, J.; Gu, Y.; Fan, S.; Yang, P. Security-aware industrial wireless sensor network deployment optimization. IEEE Trans. Ind. Inform. 2019, 16, 5309–5316. [Google Scholar] [CrossRef]
  16. Zhang, K.; Wang, Z.; Chen, G.; Zhang, L.; Yang, Y.; Yao, C.; Wang, J.; Yao, J. Training effective deep reinforcement learning agents for real-time life-cycle production optimization. J. Pet. Sci. Eng. 2022, 208, 109766. [Google Scholar] [CrossRef]
  17. Cao, B.; Zhao, J.; Gu, Y.; Ling, Y.; Ma, X. Applying graph-based differential grouping for multiobjective large-scale optimization. Swarm Evol. Comput. 2020, 53, 100626. [Google Scholar] [CrossRef]
  18. Cao, B.; Zhao, J.; Lv, Z.; Gu, Y.; Yang, P.; Halgamuge, S.K. Multiobjective Evolution of Fuzzy Rough Neural Network via Distributed Parallelism for Stock Prediction. IEEE Trans. Fuzzy Syst. 2020, 28, 939–952. [Google Scholar] [CrossRef]
  19. Cao, B.; Li, M.; Liu, X.; Zhao, J.; Cao, W.; Lv, Z. Many-Objective Deployment Optimization for a Drone-Assisted Camera Network. IEEE Trans. Netw. Sci. Eng. 2021, 8, 2756–2764. [Google Scholar] [CrossRef]
  20. Cao, B.; Zhao, J.; Yang, P.; Gu, Y.; Muhammad, K.; Rodrigues, J.J.; de Albuquerque, V.H.C. Multiobjective 3-D topology optimization of next-generation wireless data center network. IEEE Trans. Ind. Inform. 2019, 16, 3597–3605. [Google Scholar] [CrossRef]
  21. Cao, B.; Fan, S.; Zhao, J.; Tian, S.; Zheng, Z.; Yan, Y.; Yang, P. Large-scale many-objective deployment optimization of edge servers. IEEE Trans. Intell. Transp. Syst. 2021, 22, 3841–3849. [Google Scholar] [CrossRef]
  22. Zhang, J.; Tang, Y.; Wang, H.; Xu, K. ASRO-DIO: Active Subspace Random Optimization Based Depth Inertial Odometry. IEEE Trans. Robot. 2022, 39, 1496–1508. [Google Scholar] [CrossRef]
  23. Li, R.; Wu, X.; Tian, H.; Yu, N.; Wang, C. Hybrid Memetic Pretrained Factor Analysis-Based Deep Belief Networks for Transient Electromagnetic Inversion. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5920120. [Google Scholar] [CrossRef]
  24. Shan, W.; He, X.; Liu, H.; Heidari, A.A.; Wang, M.; Cai, Z.; Chen, H. Cauchy mutation boosted Harris hawk algorithm: Optimal performance design and engineering applications. J. Comput. Des. Eng. 2023, 10, 503–526. [Google Scholar] [CrossRef]
  25. Shan, W.; Qiao, Z.; Heidari, A.A.; Chen, H.; Turabieh, H.; Teng, Y. Double adaptive weights for stabilization of moth flame optimizer: Balance analysis, engineering cases, and medical diagnosis. Knowl.-Based Syst. 2021, 214, 106728. [Google Scholar] [CrossRef]
  26. Tian, J.; Hou, M.; Bian, H.; Li, J. Variable surrogate model-based particle swarm optimization for high-dimensional expensive problems. Complex Intell. Syst. 2022, 1–49. [Google Scholar] [CrossRef]
  27. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2011, 29, 17–35. [Google Scholar] [CrossRef]
  28. Chen, H.; Li, C.; Mafarja, M.; Heidari, A.A.; Chen, Y.; Cai, Z. Slime mould algorithm: A comprehensive review of recent variants and applications. Int. J. Syst. Sci. 2022, 54, 204–235. [Google Scholar] [CrossRef]
  29. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  30. Yang, Y.; Chen, H.; Heidari, A.A.; Gandomi, A.H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 2021, 177, 114864. [Google Scholar] [CrossRef]
  31. Ahmadianfar, I.; Asghar Heidari, A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN Beyond the Metaphor: An Efficient Optimization Algorithm Based on Runge Kutta Method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  32. Ahmadianfar, I.; Asghar Heidari, A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An Efficient Optimization Algorithm based on Weighted Mean of Vectors. Expert Syst. Appl. 2022, 195, 116516. [Google Scholar] [CrossRef]
  33. Tu, J.; Chen, H.; Wang, M.; Gandomi, A.H. The Colony Predation Algorithm. J. Bionic Eng. 2021, 18, 674–710. [Google Scholar] [CrossRef]
  34. Su, H.; Zhao, D.; Asghar Heidari, A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A physics-based optimization. Neurocomputing 2023, 532, 183–214. [Google Scholar] [CrossRef]
  35. Zhang, Y.; Liu, R.; Heidari, A.A.; Wang, X.; Chen, Y.; Wang, M.; Chen, H. Towards augmented kernel extreme learning models for bankruptcy prediction: Algorithmic behavior and comprehensive analysis. Neurocomputing 2021, 430, 185–212. [Google Scholar] [CrossRef]
  36. Dong, R.; Chen, H.; Heidari, A.A.; Turabieh, H.; Mafarja, M.; Wang, S. Boosted kernel search: Framework, analysis and case studies on the economic emission dispatch problem. Knowl.-Based Syst. 2021, 233, 107529. [Google Scholar] [CrossRef]
  37. Xue, Y.; Cai, X.; Neri, F. A multi-objective evolutionary algorithm with interval based initialization and self-adaptive crossover operator for large-scale feature selection in classification. Appl. Soft Comput. 2022, 127, 109420. [Google Scholar] [CrossRef]
  38. Hu, H.; Shan, W.; Chen, J.; Xing, L.; Heidari, A.A.; Chen, H.; He, X.; Wang, M. Dynamic Individual Selection and Crossover Boosted Forensic-based Investigation Algorithm for Global Optimization and Feature Selection. J. Bionic Eng. 2023, 1–27. [Google Scholar] [CrossRef]
  39. Liang, J.; Qiao, K.; Yu, K.; Qu, B.; Yue, C.; Guo, W.; Wang, L. Utilizing the Relationship between Unconstrained and Constrained Pareto Fronts for Constrained Multiobjective Optimization. IEEE Trans. Cybern. 2022, 53, 3873–3886. [Google Scholar] [CrossRef]
  40. Yu, K.; Zhang, D.; Liang, J.; Chen, K.; Yue, C.; Qiao, K.; Wang, L. A Correlation-Guided Layered Prediction Approach for Evolutionary Dynamic Multiobjective Optimization. IEEE Trans. Evol. Comput. 2022, 1. [Google Scholar] [CrossRef]
  41. Deng, W.; Xu, J.; Gao, X.Z.; Zhao, H. An Enhanced MSIQDE Algorithm with Novel Multiple Strategies for Global Optimization Problems. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 1578–1587. [Google Scholar] [CrossRef]
  42. Chen, J.; Cai, Z.; Chen, H.; Chen, X.; Escorcia-Gutierrez, J.; Mansour, R.F.; Ragab, M. Renal Pathology Images Segmentation Based on Improved Cuckoo Search with Diffusion Mechanism and Adaptive Beta-Hill Climbing. J. Bionic Eng. 2023, 1–36. [Google Scholar] [CrossRef]
  43. Xue, Y.; Tong, Y.; Neri, F. An ensemble of differential evolution and Adam for training feed-forward neural networks. Inf. Sci. 2022, 608, 453–471. [Google Scholar] [CrossRef]
  44. Wen, X.; Wang, K.; Li, H.; Sun, H.; Wang, H.; Jin, L. A two-stage solution method based on NSGA-II for Green Multi-Objective integrated process planning and scheduling in a battery packaging machinery workshop. Swarm Evol. Comput. 2021, 61, 100820. [Google Scholar] [CrossRef]
  45. Huang, C.; Zhou, X.; Ran, X.; Liu, Y.; Deng, W.; Deng, W. Co-evolutionary competitive swarm optimizer with three-phase for large-scale complex optimization problem. Inf. Sci. 2023, 619, 2–18. [Google Scholar] [CrossRef]
  46. Zhao, C.; Zhou, Y.; Lai, X. An integrated framework with evolutionary algorithm for multi-scenario multi-objective optimization problems. Inf. Sci. 2022, 600, 342–361. [Google Scholar] [CrossRef]
  47. Li, C.; Sun, G.; Deng, L.; Qiao, L.; Yang, G. A population state evaluation-based improvement framework for differential evolution. Inf. Sci. 2023, 629, 15–38. [Google Scholar] [CrossRef]
  48. Pyakillya, B.; Kazachenko, N.; Mikhailovsky, N. Deep Learning for ECG Classification. J. Phys. Conf. Ser. 2017, 913, 012004. [Google Scholar] [CrossRef]
  49. Mathews, S.M.; Kambhamettu, C.; Barner, K.E. A novel application of deep learning for single-lead ECG classification. Comput. Biol. Med. 2018, 99, 53–62. [Google Scholar] [CrossRef]
  50. Sannino, G.; De Pietro, G. A deep learning approach for ECG-based heartbeat classification for arrhythmia detection. Future Gener. Comput. Syst. 2018, 86, 446–455. [Google Scholar] [CrossRef]
  51. Strodthoff, N.; Wagner, P.; Schaeffter, T.; Samek, W. Deep Learning for ECG Analysis: Benchmarks and Insights from PTB-XL. IEEE J. Biomed. Health Inf. 2021, 25, 1519–1528. [Google Scholar] [CrossRef]
  52. Peimankar, A.; Puthusserypady, S. DENS-ECG: A deep learning approach for ECG signal delineation. Expert Syst. Appl. 2021, 165, 113911. [Google Scholar] [CrossRef]
  53. Hasan, N.I.; Bhattacharjee, A. Deep Learning Approach to Cardiovascular Disease Classification Employing Modified ECG Signal from Empirical Mode Decomposition. Biomed. Signal Process. Control 2019, 52, 128–140. [Google Scholar] [CrossRef]
  54. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adam, M.; Gertych, A.; San Tan, R. A deep convolutional neural network model to classify heartbeats. Comput. Biol. Med. 2017, 89, 389–396. [Google Scholar] [CrossRef]
  55. Houssein, E.H.; Hassaballah, M.; Ibrahim, I.E.; AbdElminaam, D.S.; Wazery, Y.M. An automatic arrhythmia classification model based on improved Marine Predators Algorithm and Convolutions Neural Networks. Expert Syst. Appl. 2022, 187, 115936. [Google Scholar] [CrossRef]
  56. Khalifa, M.H.; Ammar, M.; Ouarda, W.; Alimi, A.M. Particle swarm optimization for deep learning of convolution neural network. In Proceedings of the 2017 Sudan Conference on Computer Science and Information Technology (SCCSIT), Elnuhood, Sudan, 17–19 November 2017; pp. 1–5. [Google Scholar]
  57. Yamasaki, T.; Honma, T.; Aizawa, K. Efficient optimization of convolutional neural networks using particle swarm optimization. In Proceedings of the 2017 IEEE Third International Conference on Multimedia Big Data (BigMM), Laguna Hills, CA, USA, 19–21 April 2017; pp. 70–73. [Google Scholar]
58. Dey, S.; Roychoudhury, R.; Malakar, S.; Sarkar, R. An optimized fuzzy ensemble of convolutional neural networks for detecting tuberculosis from Chest X-ray images. Appl. Soft Comput. 2022, 114, 108094.
59. Pathan, S.; Siddalingaswamy, P.C.; Ali, T. Automated Detection of Covid-19 from Chest X-ray scans using an optimized CNN architecture. Appl. Soft Comput. 2021, 104, 107238.
60. Ezzat, D.; Hassanien, A.E.; Ella, H.A. An optimized deep learning architecture for the diagnosis of COVID-19 disease based on gravitational search optimization. Appl. Soft Comput. 2021, 98, 106742.
61. Singh, P.; Chaudhury, S.; Panigrahi, B.K. Hybrid MPSO-CNN: Multi-level Particle Swarm optimized hyperparameters of Convolutional Neural Network. Swarm Evol. Comput. 2021, 63, 100863.
62. Fernandes Junior, F.E.; Yen, G.G. Particle swarm optimization of deep neural networks architectures for image classification. Swarm Evol. Comput. 2019, 49, 62–74.
63. Fisher, R.A. A Mathematical Examination of the Methods of Determining the Accuracy of an Observation by the Mean Error, and by the Mean Square Error. Mon. Not. R. Astron. Soc. 1920, 80, 758–770.
64. Zhuang, Y.; Chen, S.; Jiang, N.; Hu, H. An Effective WSSENet-Based Similarity Retrieval Method of Large Lung CT Image Databases. KSII Trans. Internet Inf. Syst. 2022, 16, 2359–2376.
65. Huang, C.-Q.; Jiang, F.; Huang, Q.-H.; Wang, X.-Z.; Han, Z.-M.; Huang, W.-Y. Dual-graph attention convolution network for 3-D point cloud classification. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–13.
66. Guo, F.; Zhou, W.; Lu, Q.; Zhang, C. Path extension similarity link prediction method based on matrix algebra in directed networks. Comput. Commun. 2022, 187, 83–92.
67. Liu, X.; He, J.; Liu, M.; Yin, Z.; Yin, L.; Zheng, W. A Scenario-Generic Neural Machine Translation Data Augmentation Method. Electronics 2023, 12, 2320.
68. Cheng, L.; Yin, F.; Theodoridis, S.; Chatzis, S.; Chang, T.-H. Rethinking Bayesian learning for data analysis: The art of prior and inference in sparsity-aware modeling. IEEE Signal Process. Mag. 2022, 39, 18–52.
69. Song, X.; Tong, W.; Lei, C.; Huang, J.; Fan, X.; Zhai, G.; Zhou, H. A clinical decision model based on machine learning for ptosis. BMC Ophthalmol. 2021, 21, 169.
70. Xie, X.; Huang, L.; Marson, S.M.; Wei, G. Emergency response process for sudden rainstorm and flooding: Scenario deduction and Bayesian network analysis using evidence theory and knowledge meta-theory. Nat. Hazards 2023, 1–23.
71. Wang, S.; Hu, X.; Sun, J.; Liu, J. Hyperspectral anomaly detection using ensemble and robust collaborative representation. Inf. Sci. 2023, 624, 748–760.
72. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; Technical Report; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2017.
73. Luo, J.; Chen, H.; Heidari, A.A.; Xu, Y.; Zhang, Q.; Li, C. Multi-strategy boosted mutative whale-inspired optimization approaches. Appl. Math. Model. 2019, 73, 109–123.
74. Long, W.; Liang, X.; Cai, S.; Jiao, J.; Zhang, W. A modified augmented Lagrangian with improved grey wolf optimization to constrained optimization problems. Neural Comput. Appl. 2017, 28, 421–438.
75. Shan, W.; Hu, H.; Cai, Z.; Chen, H.; Liu, H.; Wang, M.; Teng, Y. Multi-strategies Boosted Mutative Crow Search Algorithm for Global Tasks: Cases of Continuous and Discrete Optimization. J. Bionic Eng. 2022, 19, 1830–1849.
76. Heidari, A.A.; Aljarah, I.; Faris, H.; Chen, H.; Luo, J.; Mirjalili, S. An enhanced associative learning-based exploratory whale optimizer for global optimization. Neural Comput. Appl. 2020, 32, 5185–5211.
77. Wang, M.; Chen, H.; Yang, B.; Zhao, X.; Hu, L.; Cai, Z.; Huang, H.; Tong, C. Toward an optimal kernel extreme learning machine using a chaotic moth-flame optimization strategy with applications in medical diagnoses. Neurocomputing 2017, 267, 69–84.
78. Lin, A.; Wu, Q.; Heidari, A.A.; Xu, Y.; Chen, H.; Geng, W.; Li, Y.; Li, C. Predicting Intentions of Students for Master Programs Using a Chaos-Induced Sine Cosine-Based Fuzzy K-Nearest Neighbor Classifier. IEEE Access 2019, 7, 67235–67248.
79. Song, S.; Wang, P.; Heidari, A.A.; Wang, M.; Zhao, X.; Chen, H.; He, W.; Xu, S. Dimension decided Harris hawks optimization with Gaussian mutation: Balance analysis and diversity patterns. Knowl.-Based Syst. 2021, 215, 106425.
80. Price, K.V. Differential Evolution. In Handbook of Optimization; Springer: Berlin/Heidelberg, Germany, 2013.
81. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249.
82. Moody, G.B.; Mark, R.G. The impact of the MIT-BIH arrhythmia database. IEEE Eng. Med. Biol. Mag. 2001, 20, 45–50.
83. Taddei, A.; Distante, G.; Emdin, M.; Pisani, P.; Moody, G.B.; Zeelenberg, C.; Marchesi, C. The European ST-T database: Standard for evaluating systems for the analysis of ST-T changes in ambulatory electrocardiography. Eur. Heart J. 1992, 13, 1164–1172.
84. Li, H.; Yuan, D.; Wang, Y.; Cui, D.; Cao, L. Arrhythmia classification based on multi-domain feature extraction for an ECG recognition system. Sensors 2016, 16, 1744.
85. Patro, K.K.; Jaya Prakash, A.; Jayamanmadha Rao, M.; Rajesh Kumar, P. An Efficient Optimized Feature Selection with Machine Learning Approach for ECG Biometric Recognition. IETE J. Res. 2020, 68, 2743–2754.
86. Zhao, C.; Wang, H.; Chen, H.; Shi, W.; Feng, Y. JAMSNet: A Remote Pulse Extraction Network Based on Joint Attention and Multi-Scale Fusion. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 2783–2797.
87. Wang, S.; Wang, B.; Zhang, Z.; Heidari, A.A.; Chen, H. Class-aware sample reweighting optimal transport for multi-source domain adaptation. Neurocomputing 2023, 523, 213–223.
88. Xue, X.; Yu, X.-N.; Zhou, D.-Y.; Wang, X.; Zhou, Z.-B.; Wang, F.-Y. Computational Experiments: Past, Present and Future. arXiv 2022, arXiv:2202.13690.
89. Xue, X.; Yu, X.; Zhou, D.; Peng, C.; Wang, X.; Liu, D.; Wang, F.-Y. Computational Experiments for Complex Social Systems—Part III: The Docking of Domain Models. IEEE Trans. Comput. Soc. Syst. 2023, 1–15.
90. Yan, B.; Li, Y.; Li, L.; Yang, X.; Li, T.-q.; Yang, G.; Jiang, M. Quantifying the impact of Pyramid Squeeze Attention mechanism and filtering approaches on Alzheimer’s disease classification. Comput. Biol. Med. 2022, 148, 105944.
91. Chen, Y.; Gan, H.; Chen, H.; Zeng, Y.; Xu, L.; Heidari, A.A.; Zhu, X.; Liu, Y. Accurate iris segmentation and recognition using an end-to-end unified framework based on MADNet and DSANet. Neurocomputing 2023, 517, 264–278.
92. Li, Y.; Zhang, Y.; Cui, W.; Lei, B.; Kuang, X.; Zhang, T. Dual Encoder-Based Dynamic-Channel Graph Convolutional Network with Edge Enhancement for Retinal Vessel Segmentation. IEEE Trans. Med. Imaging 2022, 41, 1975–1989.
93. Lv, J.; Li, G.; Tong, X.; Chen, W.; Huang, J.; Wang, C.; Yang, G. Transfer learning enhanced generative adversarial networks for multi-channel MRI reconstruction. Comput. Biol. Med. 2021, 134, 104504.
94. Xue, X.; Li, G.; Zhou, D.; Zhang, Y.; Zhang, L.; Zhao, Y.; Feng, Z.; Cui, L.; Zhou, Z.; Sun, X. Research Roadmap of Service Ecosystems: A Crowd Intelligence Perspective. Int. J. Crowd Sci. 2022, 6, 195–222.
95. Zhang, X.; Zheng, J.; Wang, D.; Tang, G.; Zhou, Z.; Lin, Z. Structured Sparsity Optimization with Non-Convex Surrogates of ℓ2,0-Norm: A Unified Algorithmic Framework. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 6386–6402.
96. Zhang, X.; Wang, D.; Zhou, Z.; Ma, Y. Robust Low-Rank Tensor Recovery with Rectification and Alignment. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 238–255.
97. Zhang, X.; Zheng, J.; Zhao, L.; Zhou, Z.; Lin, Z. Tensor Recovery with Weighted Tensor Average Rank. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–15.
98. Nabavi, S.; Ejmalian, A.; Moghaddam, M.E.; Abin, A.A.; Frangi, A.F.; Mohammadi, M.; Rad, H.S. Medical imaging and computational image analysis in COVID-19 diagnosis: A review. Comput. Biol. Med. 2021, 135, 104605.
99. Faruqui, N.; Yousuf, M.A.; Whaiduzzaman, M.; Azad, A.K.M.; Barros, A.; Moni, M.A. LungNet: A hybrid deep-CNN model for lung cancer diagnosis using CT and wearable sensor-based medical IoT data. Comput. Biol. Med. 2021, 139, 104961.
100. Zhang, X.; Zheng, J.; Wang, D.; Zhao, L. Exemplar-Based Denoising: A Unified Low-Rank Recovery Framework. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 2538–2549.
101. Sun, X.; Cao, X.; Zeng, B.; Zhai, Q.; Guan, X. Multistage Dynamic Planning of Integrated Hydrogen-Electrical Microgrids under Multiscale Uncertainties. IEEE Trans. Smart Grid 2022, 1.
102. Dai, Y.; Wu, J.; Fan, Y.; Wang, J.; Niu, J.; Gu, F.; Shen, S. MSEva: A musculoskeletal rehabilitation evaluation system based on EMG signals. ACM Trans. Sens. Netw. 2022, 19, 1–23.
103. Zhou, J.; Zhang, X.; Jiang, Z. Recognition of Imbalanced Epileptic EEG Signals by a Graph-Based Extreme Learning Machine. Wirel. Commun. Mob. Comput. 2021, 2021, 5871684.
Figure 1. The architecture of a nine-layer CNN.
Figure 2. Flowchart of OLCPA.
Figure 3. The formation process of the OLCPA-CNN model.
Figure 4. Convergence curves of OLCPA and the other algorithms. On F4, OLCPA competes closely with HGS but ultimately attains the better value. On the remaining benchmark functions, OLCPA converges comparatively quickly, and its exploration strengthens as the iterations accumulate, allowing it to probe more promising regions and finally locate the optimum.
Figure 5. Overview of the average performance metrics of OLCPA-CNN, CPA-CNN, and the randomly generated CNN on the MIT-BIH dataset.
Figure 6. Overview of the average performance metrics of OLCPA-CNN, CPA-CNN, and the randomly generated CNN on the ST-T dataset.
Figure 7. The accuracy and loss curves of OLCPA-CNN on the ST-T dataset.
Figure 8. The accuracy and loss curves of OLCPA-CNN on the MIT-BIH dataset.
Table 1. Hyperparameter ranges of the CNN structure.
Architecture | Hyperparameter | Range
CNN | Number of convolution kernels | [1, 15]
CNN | Size of the convolution kernels | [1, 128]
CNN | Number of nodes of the first fully connected layer | [0, 5000]
CNN | Number of epochs | [1, 40]
CNN | Learning rate | [0.0001, 0.01]
CNN | L2 regularization | [0.001, 0.01]
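To connect Table 1 to the tuning loop, each OLCPA candidate can be viewed as a real-valued vector that is clipped to these ranges, with the integer-valued entries rounded, before a CNN is built and trained. The following is a minimal sketch under that assumption; the variable order, key names, and rounding rule are illustrative, not the authors' released code.

import numpy as np

# Bounds taken from Table 1, in a fixed (assumed) order:
# kernels, kernel size, FC nodes, epochs, learning rate, L2 regularization.
LOWER = np.array([1.0, 1.0, 0.0, 1.0, 0.0001, 0.001])
UPPER = np.array([15.0, 128.0, 5000.0, 40.0, 0.01, 0.01])
IS_INT = [True, True, True, True, False, False]
KEYS = ["num_kernels", "kernel_size", "fc_nodes", "epochs", "learning_rate", "l2_reg"]

def decode(solution):
    """Map a raw OLCPA solution vector onto valid CNN hyperparameters."""
    clipped = np.clip(solution, LOWER, UPPER)
    values = [int(round(v)) if flag else float(v) for v, flag in zip(clipped, IS_INT)]
    return dict(zip(KEYS, values))

# Example: one candidate produced during the search.
print(decode(np.array([7.6, 63.2, 1024.9, 25.4, 0.003, 0.005])))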
Table 2. Confusion matrix.
 | Actual Positive | Actual Negative
Predicted positive | TP | FP
Predicted negative | FN | TN
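The accuracy (Acc) and sensitivity (Se) values reported in Figures 5 and 6 and in Table 9 follow directly from these four counts. A minimal helper, with made-up counts for illustration:

def accuracy(tp, fp, fn, tn):
    # Proportion of all beats (positive and negative) classified correctly.
    return (tp + tn) / (tp + fp + fn + tn)

def sensitivity(tp, fn):
    # Se (recall): proportion of actual positive beats that are detected.
    return tp / (tp + fn)

# Illustrative counts: 978 of 1000 positive beats found, 22 missed,
# and 20 false alarms among 1000 negative beats.
print(accuracy(978, 20, 22, 980))  # 0.979
print(sensitivity(978, 22))        # 0.978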
Table 3. Parameters of the algorithm comparison experiments.
N (population size) | D (dimension) | MaxFEs (maximum function evaluations) | Num (independent runs)
30 | 30 | 300,000 | 30
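Under this protocol, every algorithm receives the same evaluation budget (MaxFEs) per run and is repeated Num times; the Avg/Std entries in Tables 4 and 5 are statistics over those runs. A minimal sketch of such a harness, with a placeholder random-search optimizer standing in for OLCPA and its competitors:

import numpy as np

def run_experiment(optimizer, objective, num_runs=30, dim=30, pop=30, max_fes=300_000):
    """Repeat an optimizer Num times under a fixed budget and summarize."""
    best = [optimizer(objective, dim, pop, max_fes) for _ in range(num_runs)]
    return float(np.mean(best)), float(np.std(best))

def random_search(objective, dim, pop, max_fes):
    # Placeholder baseline: uniform sampling in [-100, 100]^dim,
    # max_fes // pop generations of pop candidates each.
    gens = np.random.uniform(-100, 100, (max_fes // pop, pop, dim))
    return min(float(objective(x)) for gen in gens for x in gen)

sphere = lambda x: float(np.sum(x ** 2))  # stand-in for a CEC 2017 function
print(run_experiment(random_search, sphere, num_runs=3, max_fes=3_000))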
Table 4. Comparison of OLCPA and CPA in 50 and 100 dimensions.
F | Algorithm | Avg (Dim = 50) | Std (Dim = 50) | Avg (Dim = 100) | Std (Dim = 100)
F1 | CPA | 3.918 × 10^3 | 5.613 × 10^3 | 1.0134 × 10^4 | 1.3351 × 10^4
F1 | OLCPA | 2.045 × 10^3 | 2.522 × 10^3 | 7.4512 × 10^3 | 6.3808 × 10^3
F2 | CPA | 3.212 × 10^3 | 9.123 × 10^3 | 1.2924 × 10^13 | 6.9606 × 10^13
F2 | OLCPA | 2.804 × 10^2 | 1.873 × 10^2 | 2.7373 × 10^10 | 1.0467 × 10^11
F3 | CPA | 3.000 × 10^2 | 1.723 × 10^-6 | 2.3131 × 10^3 | 1.0879 × 10^3
F3 | OLCPA | 3.000 × 10^2 | 1.425 × 10^-9 | 3.0000 × 10^2 | 2.1451 × 10^-8
F4 | CPA | 5.069 × 10^2 | 5.491 × 10^1 | 6.2239 × 10^2 | 3.7274 × 10^1
F4 | OLCPA | 4.769 × 10^2 | 3.367 × 10^1 | 6.3116 × 10^2 | 5.2452 × 10^1
F5 | CPA | 7.157 × 10^2 | 2.635 × 10^1 | 1.0656 × 10^3 | 7.0536 × 10^1
F5 | OLCPA | 7.393 × 10^2 | 3.785 × 10^1 | 1.1193 × 10^3 | 5.8359 × 10^1
F6 | CPA | 6.000 × 10^2 | 2.072 × 10^-4 | 6.0000 × 10^2 | 3.5520 × 10^-3
F6 | OLCPA | 6.000 × 10^2 | 1.946 × 10^-13 | 6.0000 × 10^2 | 2.6953 × 10^-13
F7 | CPA | 9.926 × 10^2 | 4.689 × 10^1 | 1.4700 × 10^3 | 9.1916 × 10^1
F7 | OLCPA | 1.012 × 10^3 | 4.161 × 10^1 | 1.5541 × 10^3 | 1.2035 × 10^2
F8 | CPA | 1.017 × 10^3 | 3.768 × 10^1 | 1.3892 × 10^3 | 7.3342 × 10^1
F8 | OLCPA | 1.018 × 10^3 | 3.874 × 10^1 | 1.4321 × 10^3 | 6.7090 × 10^1
F9 | CPA | 6.632 × 10^3 | 1.662 × 10^3 | 1.6982 × 10^4 | 1.9060 × 10^3
F9 | OLCPA | 6.566 × 10^3 | 1.940 × 10^3 | 1.6940 × 10^4 | 1.8023 × 10^3
F10 | CPA | 5.857 × 10^3 | 6.643 × 10^2 | 1.2755 × 10^4 | 1.0274 × 10^3
F10 | OLCPA | 5.598 × 10^3 | 7.011 × 10^2 | 1.2856 × 10^4 | 1.2503 × 10^3
F11 | CPA | 1.226 × 10^3 | 3.445 × 10^1 | 1.5628 × 10^3 | 1.0329 × 10^2
F11 | OLCPA | 1.228 × 10^3 | 2.865 × 10^1 | 1.5100 × 10^3 | 1.1924 × 10^2
F12 | CPA | 3.216 × 10^6 | 2.379 × 10^6 | 8.5478 × 10^6 | 3.7663 × 10^6
F12 | OLCPA | 2.110 × 10^6 | 1.443 × 10^6 | 4.4447 × 10^6 | 2.1365 × 10^6
F13 | CPA | 8.034 × 10^3 | 8.404 × 10^3 | 6.9597 × 10^3 | 5.4080 × 10^3
F13 | OLCPA | 5.223 × 10^3 | 3.526 × 10^3 | 4.9575 × 10^3 | 3.4951 × 10^3
F14 | CPA | 4.017 × 10^4 | 2.045 × 10^4 | 1.0750 × 10^5 | 3.2124 × 10^4
F14 | OLCPA | 8.857 × 10^3 | 5.974 × 10^3 | 3.4510 × 10^4 | 6.5897 × 10^3
F15 | CPA | 8.161 × 10^3 | 5.507 × 10^3 | 4.0322 × 10^3 | 3.0081 × 10^3
F15 | OLCPA | 9.451 × 10^3 | 5.953 × 10^3 | 2.7858 × 10^3 | 1.2726 × 10^3
F16 | CPA | 3.647 × 10^3 | 4.486 × 10^2 | 6.1075 × 10^3 | 6.1143 × 10^2
F16 | OLCPA | 3.453 × 10^3 | 3.315 × 10^2 | 5.9226 × 10^3 | 5.8144 × 10^2
F17 | CPA | 3.034 × 10^3 | 3.329 × 10^2 | 4.9720 × 10^3 | 5.3734 × 10^2
F17 | OLCPA | 3.109 × 10^3 | 3.106 × 10^2 | 4.8910 × 10^3 | 6.0655 × 10^2
F18 | CPA | 1.320 × 10^5 | 2.658 × 10^4 | 2.5015 × 10^5 | 9.0799 × 10^4
F18 | OLCPA | 4.246 × 10^4 | 1.368 × 10^4 | 1.3636 × 10^5 | 2.4613 × 10^4
F19 | CPA | 2.097 × 10^4 | 9.306 × 10^3 | 5.9069 × 10^3 | 4.3932 × 10^3
F19 | OLCPA | 2.660 × 10^4 | 8.118 × 10^3 | 3.8745 × 10^3 | 1.7514 × 10^3
F20 | CPA | 3.055 × 10^3 | 3.027 × 10^2 | 5.1363 × 10^3 | 3.9298 × 10^2
F20 | OLCPA | 2.891 × 10^3 | 2.578 × 10^2 | 5.1443 × 10^3 | 4.6463 × 10^2
F21 | CPA | 2.515 × 10^3 | 3.993 × 10^1 | 2.8858 × 10^3 | 8.8322 × 10^1
F21 | OLCPA | 2.527 × 10^3 | 4.671 × 10^1 | 2.8781 × 10^3 | 7.3269 × 10^1
F22 | CPA | 7.882 × 10^3 | 1.718 × 10^3 | 1.6525 × 10^4 | 1.2773 × 10^3
F22 | OLCPA | 7.908 × 10^3 | 1.320 × 10^3 | 1.6529 × 10^4 | 9.8827 × 10^2
F23 | CPA | 2.979 × 10^3 | 4.502 × 10^1 | 3.1604 × 10^3 | 6.6818 × 10^1
F23 | OLCPA | 3.008 × 10^3 | 4.940 × 10^1 | 3.1599 × 10^3 | 7.3210 × 10^1
F24 | CPA | 3.498 × 10^3 | 1.719 × 10^2 | 3.8236 × 10^3 | 8.3982 × 10^1
F24 | OLCPA | 3.566 × 10^3 | 1.472 × 10^2 | 3.8711 × 10^3 | 8.9281 × 10^1
F25 | CPA | 3.048 × 10^3 | 4.905 × 10^1 | 3.2936 × 10^3 | 7.0202 × 10^1
F25 | OLCPA | 3.052 × 10^3 | 3.882 × 10^1 | 3.2933 × 10^3 | 7.0629 × 10^1
F26 | CPA | 4.078 × 10^3 | 2.100 × 10^3 | 1.2307 × 10^4 | 3.3495 × 10^3
F26 | OLCPA | 5.254 × 10^3 | 2.960 × 10^3 | 1.3663 × 10^4 | 2.7772 × 10^3
F27 | CPA | 3.519 × 10^3 | 1.029 × 10^2 | 3.5698 × 10^3 | 8.1696 × 10^1
F27 | OLCPA | 3.532 × 10^3 | 9.428 × 10^1 | 3.6442 × 10^3 | 8.7973 × 10^1
F28 | CPA | 3.296 × 10^3 | 2.671 × 10^1 | 3.3820 × 10^3 | 3.4844 × 10^1
F28 | OLCPA | 3.290 × 10^3 | 2.094 × 10^1 | 3.3647 × 10^3 | 4.4484 × 10^1
F29 | CPA | 4.233 × 10^3 | 2.991 × 10^2 | 6.7623 × 10^3 | 4.7534 × 10^2
F29 | OLCPA | 4.072 × 10^3 | 2.955 × 10^2 | 6.9277 × 10^3 | 5.1706 × 10^2
F30 | CPA | 9.734 × 10^5 | 2.383 × 10^5 | 1.4286 × 10^4 | 4.3046 × 10^3
F30 | OLCPA | 8.788 × 10^5 | 1.518 × 10^5 | 1.3530 × 10^4 | 4.6178 × 10^3
Table 5. Comparison of OLCPA with other algorithms.

Algorithm | F1 Avg | F1 Std | F2 Avg | F2 Std | F3 Avg | F3 Std
OLCPA | 2.6417 × 10^3 | 2.6210 × 10^3 | 2.0000 × 10^2 | 6.1889 × 10^-6 | 3.0000 × 10^2 | 3.3039 × 10^-10
CCMWOA | 2.0529 × 10^10 | 4.7896 × 10^9 | 3.9233 × 10^38 | 1.8806 × 10^39 | 7.7098 × 10^4 | 6.7169 × 10^3
IGWO | 1.6989 × 10^6 | 8.5027 × 10^5 | 2.0146 × 10^13 | 8.4520 × 10^13 | 1.4554 × 10^3 | 6.7162 × 10^2
CCMSCSA | 3.1354 × 10^3 | 3.1637 × 10^3 | 7.5324 × 10^10 | 2.3962 × 10^11 | 3.4410 × 10^2 | 3.2511 × 10^1
BMWOA | 2.0649 × 10^8 | 8.6683 × 10^7 | 5.7378 × 10^22 | 2.6681 × 10^23 | 7.0802 × 10^4 | 9.5290 × 10^3
CMFO | 2.0503 × 10^8 | 4.7086 × 10^8 | 3.8481 × 10^37 | 2.0987 × 10^38 | 1.1542 × 10^5 | 4.6442 × 10^4
CESCA | 5.7624 × 10^10 | 4.6938 × 10^9 | 5.0711 × 10^45 | 1.0293 × 10^46 | 1.0551 × 10^5 | 1.4524 × 10^4
GCHHO | 4.6746 × 10^3 | 5.9329 × 10^3 | 4.0569 × 10^5 | 9.1569 × 10^5 | 5.5167 × 10^2 | 1.6374 × 10^2
DE | 2.2872 × 10^3 | 3.7753 × 10^3 | 1.3602 × 10^21 | 3.6101 × 10^21 | 1.9826 × 10^4 | 4.4953 × 10^3
MFO | 1.3517 × 10^10 | 8.8111 × 10^9 | 1.1274 × 10^39 | 6.1676 × 10^39 | 1.1049 × 10^5 | 8.7936 × 10^4
HGS | 1.3861 × 10^7 | 7.5867 × 10^7 | 1.3521 × 10^16 | 5.1457 × 10^16 | 2.6774 × 10^3 | 5.6921 × 10^3
CPA | 5.3939 × 10^3 | 5.9328 × 10^3 | 2.0098 × 10^2 | 3.7267 × 10^0 | 3.0000 × 10^2 | 1.4672 × 10^-7

Algorithm | F4 Avg | F4 Std | F5 Avg | F5 Std | F6 Avg | F6 Std
OLCPA | 4.4457 × 10^2 | 3.6133 × 10^1 | 6.2701 × 10^2 | 2.3849 × 10^1 | 6.0000 × 10^2 | 3.5452 × 10^-13
CCMWOA | 3.7003 × 10^3 | 1.4526 × 10^3 | 8.3497 × 10^2 | 3.3283 × 10^1 | 6.7164 × 10^2 | 7.8115 × 10^0
IGWO | 5.0643 × 10^2 | 2.3656 × 10^1 | 6.1178 × 10^2 | 1.6784 × 10^1 | 6.2273 × 10^2 | 5.5501 × 10^0
CCMSCSA | 4.9943 × 10^2 | 2.7486 × 10^1 | 5.8212 × 10^2 | 2.3957 × 10^1 | 6.0043 × 10^2 | 2.9648 × 10^-1
BMWOA | 6.0019 × 10^2 | 3.8438 × 10^1 | 7.7892 × 10^2 | 5.5275 × 10^1 | 6.6611 × 10^2 | 1.1204 × 10^1
CMFO | 5.6429 × 10^2 | 6.6364 × 10^1 | 7.2496 × 10^2 | 5.0447 × 10^1 | 6.5202 × 10^2 | 9.3009 × 10^0
CESCA | 1.5015 × 10^4 | 2.3052 × 10^3 | 9.6832 × 10^2 | 2.4033 × 10^1 | 7.0496 × 10^2 | 5.2640 × 10^0
GCHHO | 4.9548 × 10^2 | 2.8019 × 10^1 | 7.1025 × 10^2 | 4.2271 × 10^1 | 6.5178 × 10^2 | 6.9046 × 10^0
DE | 4.9088 × 10^2 | 9.5252 × 10^0 | 6.0806 × 10^2 | 9.1168 × 10^0 | 6.0000 × 10^2 | 0.0000 × 10^0
MFO | 1.3689 × 10^3 | 8.5540 × 10^2 | 7.0259 × 10^2 | 6.0823 × 10^1 | 6.4206 × 10^2 | 1.1753 × 10^1
HGS | 4.7827 × 10^2 | 2.7144 × 10^1 | 6.3080 × 10^2 | 2.8629 × 10^1 | 6.0152 × 10^2 | 1.9161 × 10^0
CPA | 4.8346 × 10^2 | 2.5598 × 10^1 | 6.2974 × 10^2 | 2.6272 × 10^1 | 6.0000 × 10^2 | 1.0003 × 10^-7

Algorithm | F7 Avg | F7 Std | F8 Avg | F8 Std | F9 Avg | F9 Std
OLCPA | 8.5580 × 10^2 | 3.3574 × 10^1 | 9.0241 × 10^2 | 1.6026 × 10^1 | 2.6955 × 10^3 | 5.2050 × 10^2
CCMWOA | 1.2785 × 10^3 | 7.1917 × 10^1 | 1.0445 × 10^3 | 2.5360 × 10^1 | 7.7640 × 10^3 | 1.4347 × 10^3
IGWO | 9.0405 × 10^2 | 5.2744 × 10^1 | 8.9353 × 10^2 | 2.1787 × 10^1 | 2.8209 × 10^3 | 8.7020 × 10^2
CCMSCSA | 8.0574 × 10^2 | 1.7076 × 10^1 | 9.0305 × 10^2 | 2.9632 × 10^1 | 9.8573 × 10^2 | 7.2563 × 10^1
BMWOA | 1.1733 × 10^3 | 1.0644 × 10^2 | 1.0081 × 10^3 | 3.0517 × 10^1 | 7.2354 × 10^3 | 8.8766 × 10^2
CMFO | 1.2808 × 10^3 | 1.5349 × 10^2 | 9.5931 × 10^2 | 3.8407 × 10^1 | 4.7987 × 10^3 | 1.1702 × 10^3
CESCA | 1.5498 × 10^3 | 5.0905 × 10^1 | 1.1778 × 10^3 | 1.9507 × 10^1 | 1.5424 × 10^4 | 1.2474 × 10^3
GCHHO | 1.0821 × 10^3 | 1.0330 × 10^2 | 9.4369 × 10^2 | 2.0730 × 10^1 | 4.7578 × 10^3 | 5.8635 × 10^2
DE | 8.4129 × 10^2 | 1.0450 × 10^1 | 9.0777 × 10^2 | 8.5623 × 10^0 | 9.0000 × 10^2 | 1.0765 × 10^-13
MFO | 1.1498 × 10^3 | 2.0356 × 10^2 | 1.0177 × 10^3 | 4.6613 × 10^1 | 7.1329 × 10^3 | 1.7975 × 10^3
HGS | 8.9314 × 10^2 | 5.1096 × 10^1 | 9.0545 × 10^2 | 2.2953 × 10^1 | 3.5491 × 10^3 | 8.3458 × 10^2
CPA | 8.4269 × 10^2 | 2.7130 × 10^1 | 9.0484 × 10^2 | 2.1313 × 10^1 | 2.3193 × 10^3 | 6.1060 × 10^2

Algorithm | F10 Avg | F10 Std | F11 Avg | F11 Std | F12 Avg | F12 Std
OLCPA | 3.7605 × 10^3 | 3.8850 × 10^2 | 1.1756 × 10^3 | 3.4123 × 10^1 | 6.5992 × 10^5 | 4.6976 × 10^5
CCMWOA | 7.0372 × 10^3 | 6.1016 × 10^2 | 3.1558 × 10^3 | 5.8702 × 10^2 | 2.0126 × 10^9 | 1.4565 × 10^9
IGWO | 4.4687 × 10^3 | 5.9061 × 10^2 | 1.2642 × 10^3 | 2.8641 × 10^1 | 1.5414 × 10^7 | 1.5084 × 10^7
CCMSCSA | 4.6758 × 10^3 | 6.1466 × 10^2 | 1.1870 × 10^3 | 3.1743 × 10^1 | 1.1060 × 10^6 | 8.7007 × 10^5
BMWOA | 7.4949 × 10^3 | 5.9325 × 10^2 | 1.6517 × 10^3 | 1.6384 × 10^2 | 7.8078 × 10^7 | 5.9556 × 10^7
CMFO | 7.3777 × 10^3 | 1.2921 × 10^3 | 4.6678 × 10^3 | 3.4534 × 10^3 | 4.0157 × 10^7 | 1.2585 × 10^8
CESCA | 8.7430 × 10^3 | 2.2735 × 10^1 | 1.0664 × 10^4 | 1.6523 × 10^3 | 1.5622 × 10^0 | 1.5369 × 10^9
GCHHO | 5.1344 × 10^3 | 6.1384 × 10^2 | 1.2339 × 10^3 | 5.1150 × 10^1 | 9.6394 × 10^5 | 7.5930 × 10^5
DE | 5.9154 × 10^3 | 3.1146 × 10^2 | 1.1611 × 10^3 | 2.2327 × 10^1 | 1.6551 × 10^6 | 8.2025 × 10^5
MFO | 5.6084 × 10^3 | 8.5316 × 10^2 | 4.6351 × 10^3 | 4.6023 × 10^3 | 1.9357 × 10^8 | 3.4217 × 10^8
HGS | 3.9194 × 10^3 | 4.7298 × 10^2 | 1.2032 × 10^3 | 3.5853 × 10^1 | 7.1069 × 10^5 | 5.9578 × 10^5
CPA | 3.6850 × 10^3 | 4.6305 × 10^2 | 1.1700 × 10^3 | 3.4461 × 10^1 | 1.6148 × 10^6 | 1.2540 × 10^6

Algorithm | F13 Avg | F13 Std | F14 Avg | F14 Std | F15 Avg | F15 Std
OLCPA | 4.4219 × 10^3 | 2.9844 × 10^3 | 2.9128 × 10^3 | 1.0472 × 10^3 | 3.2767 × 10^3 | 2.4501 × 10^3
CCMWOA | 1.5038 × 10^8 | 2.1857 × 10^8 | 1.2576 × 10^6 | 8.9707 × 10^5 | 5.8975 × 10^6 | 9.3662 × 10^6
IGWO | 2.8168 × 10^5 | 3.9417 × 10^5 | 5.3978 × 10^4 | 3.4567 × 10^4 | 5.6870 × 10^4 | 2.9361 × 10^4
CCMSCSA | 1.3367 × 10^4 | 1.1171 × 10^4 | 1.6896 × 10^4 | 1.5715 × 10^4 | 3.0463 × 10^3 | 2.0005 × 10^3
BMWOA | 4.3710 × 10^5 | 7.0418 × 10^5 | 9.4482 × 10^5 | 8.0898 × 10^5 | 1.7030 × 10^5 | 2.7468 × 10^5
CMFO | 3.7917 × 10^7 | 1.9343 × 10^8 | 3.5943 × 10^5 | 8.2977 × 10^5 | 3.0292 × 10^4 | 3.2825 × 10^4
CESCA | 1.3433 × 10^10 | 3.9068 × 10^9 | 6.5888 × 10^6 | 2.8850 × 10^6 | 4.5428 × 10^8 | 1.8134 × 10^8
GCHHO | 1.2843 × 10^4 | 1.5165 × 10^4 | 3.4759 × 10^4 | 2.5482 × 10^4 | 6.3001 × 10^3 | 6.5459 × 10^3
DE | 2.9103 × 10^4 | 1.6893 × 10^4 | 4.9826 × 10^4 | 2.5793 × 10^4 | 8.6203 × 10^3 | 5.3792 × 10^3
MFO | 3.5810 × 10^6 | 1.3021 × 10^7 | 2.3452 × 10^5 | 6.2121 × 10^5 | 6.7501 × 10^4 | 6.6285 × 10^4
HGS | 2.6168 × 10^4 | 2.4091 × 10^4 | 5.3803 × 10^4 | 4.0736 × 10^4 | 1.7192 × 10^4 | 1.5459 × 10^4
CPA | 5.8096 × 10^3 | 1.1246 × 10^4 | 7.0490 × 10^3 | 4.9848 × 10^3 | 2.2795 × 10^3 | 9.4388 × 10^2

Algorithm | F16 Avg | F16 Std | F17 Avg | F17 Std | F18 Avg | F18 Std
OLCPA | 2.5673 × 10^3 | 3.5254 × 10^2 | 2.0234 × 10^3 | 1.5350 × 10^2 | 3.3044 × 10^4 | 1.4827 × 10^4
CCMWOA | 3.9526 × 10^3 | 6.7820 × 10^2 | 2.7756 × 10^3 | 3.8592 × 10^2 | 1.0532 × 10^7 | 1.0505 × 10^7
IGWO | 2.5650 × 10^3 | 3.5963 × 10^2 | 2.0210 × 10^3 | 1.4303 × 10^2 | 4.9309 × 10^5 | 4.1194 × 10^5
CCMSCSA | 2.5013 × 10^3 | 2.6859 × 10^2 | 2.0794 × 10^3 | 1.8933 × 10^2 | 1.6964 × 10^5 | 1.5010 × 10^5
BMWOA | 3.4890 × 10^3 | 5.2235 × 10^2 | 2.4671 × 10^3 | 2.1627 × 10^2 | 3.2300 × 10^6 | 3.6494 × 10^6
CMFO | 2.9376 × 10^3 | 5.1792 × 10^2 | 2.4441 × 10^3 | 3.0889 × 10^2 | 2.8221 × 10^6 | 5.1454 × 10^6
CESCA | 5.9581 × 10^3 | 4.8739 × 10^2 | 4.4216 × 10^3 | 4.3712 × 10^2 | 5.7643 × 10^7 | 2.6176 × 10^7
GCHHO | 2.7374 × 10^3 | 2.7503 × 10^2 | 2.3088 × 10^3 | 2.6833 × 10^2 | 2.5047 × 10^5 | 3.0345 × 10^5
DE | 2.0652 × 10^3 | 1.4220 × 10^2 | 1.8272 × 10^3 | 4.6232 × 10^1 | 3.2091 × 10^5 | 1.8003 × 10^5
MFO | 3.1336 × 10^3 | 4.4336 × 10^2 | 2.5667 × 10^3 | 3.1189 × 10^2 | 1.6242 × 10^6 | 3.0380 × 10^6
HGS | 2.6782 × 10^3 | 3.3225 × 10^2 | 2.2166 × 10^3 | 2.5020 × 10^2 | 2.8938 × 10^5 | 2.7168 × 10^5
CPA | 2.7639 × 10^3 | 2.9671 × 10^2 | 2.1603 × 10^3 | 2.6412 × 10^2 | 1.0822 × 10^5 | 6.6646 × 10^4

Algorithm | F19 Avg | F19 Std | F20 Avg | F20 Std | F21 Avg | F21 Std
OLCPA | 4.2596 × 10^3 | 2.0678 × 10^3 | 2.3366 × 10^3 | 1.1975 × 10^2 | 2.4210 × 10^3 | 2.6344 × 10^1
CCMWOA | 5.5422 × 10^6 | 9.1972 × 10^6 | 2.7663 × 10^3 | 1.8365 × 10^2 | 2.6152 × 10^3 | 6.4209 × 10^1
IGWO | 2.2651 × 10^5 | 2.5430 × 10^5 | 2.3539 × 10^3 | 1.2977 × 10^2 | 2.3977 × 10^3 | 2.4054 × 10^1
CCMSCSA | 6.5091 × 10^3 | 5.1531 × 10^3 | 2.3399 × 10^3 | 1.2859 × 10^2 | 2.3752 × 10^3 | 1.8620 × 10^1
BMWOA | 8.1030 × 10^5 | 1.1393 × 10^6 | 2.7627 × 10^3 | 1.8733 × 10^2 | 2.5221 × 10^3 | 5.0213 × 10^1
CMFO | 4.4672 × 10^4 | 7.9129 × 10^4 | 2.7796 × 10^3 | 1.8148 × 10^2 | 2.4958 × 10^3 | 3.7613 × 10^1
CESCA | 1.3527 × 10^9 | 4.4096 × 10^8 | 3.1735 × 10^3 | 1.3671 × 10^2 | 2.7653 × 10^3 | 3.4531 × 10^1
GCHHO | 6.1960 × 10^3 | 5.0850 × 10^3 | 2.5673 × 10^3 | 1.8589 × 10^2 | 2.4895 × 10^3 | 5.0529 × 10^1
DE | 8.0940 × 10^3 | 5.1894 × 10^3 | 2.1309 × 10^3 | 8.2781 × 10^1 | 2.4037 × 10^3 | 9.0200 × 10^0
MFO | 1.1628 × 10^7 | 3.7701 × 10^7 | 2.7001 × 10^3 | 2.2561 × 10^2 | 2.5056 × 10^3 | 4.5377 × 10^1
HGS | 1.2762 × 10^4 | 1.5843 × 10^4 | 2.4769 × 10^3 | 1.7556 × 10^2 | 2.4252 × 10^3 | 2.8533 × 10^1
CPA | 5.3098 × 10^3 | 1.9655 × 10^3 | 2.4748 × 10^3 | 1.4970 × 10^2 | 2.4060 × 10^3 | 7.0170 × 10^1

Algorithm | F22 Avg | F22 Std | F23 Avg | F23 Std | F24 Avg | F24 Std
OLCPA | 3.9509 × 10^3 | 1.9526 × 10^3 | 2.7595 × 10^3 | 3.2074 × 10^1 | 3.1311 × 10^3 | 9.8525 × 10^1
CCMWOA | 7.3798 × 10^3 | 1.3564 × 10^3 | 3.1950 × 10^3 | 1.1166 × 10^2 | 3.3448 × 10^3 | 1.1703 × 10^2
IGWO | 2.3179 × 10^3 | 3.7459 × 10^1 | 2.7712 × 10^3 | 3.0804 × 10^1 | 2.9433 × 10^3 | 3.3180 × 10^1
CCMSCSA | 2.3011 × 10^3 | 1.8012 × 10^0 | 2.7389 × 10^3 | 2.2920 × 10^1 | 2.9120 × 10^3 | 2.9458 × 10^1
BMWOA | 6.0884 × 10^3 | 3.1127 × 10^3 | 2.9482 × 10^3 | 7.9106 × 10^1 | 3.1150 × 10^3 | 7.4716 × 10^1
CMFO | 5.4333 × 10^3 | 2.9116 × 10^3 | 2.9734 × 10^3 | 7.2165 × 10^1 | 3.1313 × 10^3 | 1.1677 × 10^2
CESCA | 9.5457 × 10^3 | 5.7616 × 10^2 | 3.4764 × 10^3 | 4.8311 × 10^1 | 3.4817 × 10^3 | 3.3877 × 10^1
GCHHO | 4.1361 × 10^3 | 2.1478 × 10^3 | 2.9327 × 10^3 | 6.8914 × 10^1 | 3.0983 × 10^3 | 7.3773 × 10^1
DE | 3.7200 × 10^3 | 1.7703 × 10^3 | 2.7561 × 10^3 | 8.0338 × 10^0 | 2.9580 × 10^3 | 1.1083 × 10^1
MFO | 6.4721 × 10^3 | 1.7866 × 10^3 | 2.8414 × 10^3 | 3.5967 × 10^1 | 2.9872 × 10^3 | 3.6604 × 10^1
HGS | 4.6540 × 10^3 | 1.5057 × 10^3 | 2.7673 × 10^3 | 2.8088 × 10^1 | 3.0210 × 10^3 | 4.9364 × 10^1
CPA | 3.1890 × 10^3 | 1.6475 × 10^3 | 2.7663 × 10^3 | 3.4140 × 10^1 | 3.0664 × 10^3 | 6.5262 × 10^1

Algorithm | F25 Avg | F25 Std | F26 Avg | F26 Std | F27 Avg | F27 Std
OLCPA | 2.8894 × 10^3 | 7.7206 × 10^0 | 4.4389 × 10^3 | 1.1528 × 10^3 | 3.2423 × 10^3 | 1.9934 × 10^1
CCMWOA | 3.3774 × 10^3 | 1.1165 × 10^2 | 8.7361 × 10^3 | 9.8681 × 10^2 | 3.5982 × 10^3 | 1.5901 × 10^2
IGWO | 2.9064 × 10^3 | 1.6945 × 10^1 | 4.8594 × 10^3 | 2.8776 × 10^2 | 3.2367 × 10^3 | 1.3129 × 10^1
CCMSCSA | 2.9035 × 10^3 | 1.6558 × 10^1 | 3.5575 × 10^3 | 1.1866 × 10^3 | 3.2607 × 10^3 | 2.5012 × 10^1
BMWOA | 3.0206 × 10^3 | 3.3567 × 10^1 | 6.8057 × 10^3 | 1.2593 × 10^3 | 3.3084 × 10^3 | 5.7831 × 10^1
CMFO | 2.9541 × 10^3 | 3.6301 × 10^1 | 6.8104 × 10^3 | 7.5311 × 10^2 | 3.4320 × 10^3 | 1.5391 × 10^2
CESCA | 5.5207 × 10^3 | 4.9939 × 10^2 | 1.1158 × 10^4 | 5.9604 × 10^2 | 3.6926 × 10^3 | 7.6258 × 10^1
GCHHO | 2.8956 × 10^3 | 1.6050 × 10^1 | 6.0549 × 10^3 | 1.2718 × 10^3 | 3.2638 × 10^3 | 2.8447 × 10^1
DE | 2.8874 × 10^3 | 3.0941 × 10^-1 | 4.6573 × 10^3 | 7.3190 × 10^1 | 3.2061 × 10^3 | 3.4861 × 10^0
MFO | 3.2124 × 10^3 | 3.9290 × 10^2 | 5.8620 × 10^3 | 4.3155 × 10^2 | 3.2565 × 10^3 | 2.4574 × 10^1
HGS | 2.8915 × 10^3 | 1.3659 × 10^1 | 4.9433 × 10^3 | 3.3033 × 10^2 | 3.2306 × 10^3 | 1.5047 × 10^1
CPA | 2.8988 × 10^3 | 1.8851 × 10^1 | 4.3956 × 10^3 | 1.0423 × 10^3 | 3.2447 × 10^3 | 2.3567 × 10^1

Algorithm | F28 Avg | F28 Std | F29 Avg | F29 Std | F30 Avg | F30 Std
OLCPA | 3.1166 × 10^3 | 3.6219 × 10^1 | 3.5490 × 10^3 | 1.4289 × 10^2 | 7.7032 × 10^3 | 2.1026 × 10^3
CCMWOA | 4.5490 × 10^3 | 5.1054 × 10^2 | 5.3770 × 10^3 | 7.8372 × 10^2 | 7.2310 × 10^7 | 6.3627 × 10^7
IGWO | 3.2621 × 10^3 | 3.0754 × 10^1 | 3.8055 × 10^3 | 1.8503 × 10^2 | 3.8641 × 10^6 | 3.0498 × 10^6
CCMSCSA | 3.2293 × 10^3 | 2.6311 × 10^1 | 3.7148 × 10^3 | 1.9800 × 10^2 | 1.5266 × 10^4 | 8.5355 × 10^3
BMWOA | 3.3944 × 10^3 | 4.6263 × 10^1 | 4.7907 × 10^3 | 3.5255 × 10^2 | 5.9215 × 10^6 | 3.3072 × 10^6
CMFO | 3.3422 × 10^3 | 5.8653 × 10^1 | 4.5953 × 10^3 | 3.6267 × 10^2 | 1.8825 × 10^6 | 5.1423 × 10^6
CESCA | 7.1979 × 10^3 | 3.7582 × 10^2 | 6.0902 × 10^3 | 2.2499 × 10^2 | 2.5420 × 10^9 | 8.2166 × 10^8
GCHHO | 3.2262 × 10^3 | 2.5508 × 10^1 | 4.0266 × 10^3 | 2.1016 × 10^2 | 1.1463 × 10^4 | 4.0577 × 10^3
DE | 3.1752 × 10^3 | 4.9966 × 10^1 | 3.5037 × 10^3 | 6.6936 × 10^1 | 1.3382 × 10^4 | 3.7353 × 10^3
MFO | 4.5833 × 10^3 | 9.6836 × 10^2 | 4.2258 × 10^3 | 2.8249 × 10^2 | 9.2011 × 10^5 | 1.1203 × 10^6
HGS | 3.2078 × 10^3 | 5.5946 × 10^1 | 3.7668 × 10^3 | 1.5864 × 10^2 | 9.8961 × 10^4 | 1.2709 × 10^5
CPA | 3.1488 × 10^3 | 5.0178 × 10^1 | 3.7367 × 10^3 | 1.9795 × 10^2 | 1.1526 × 10^4 | 4.4883 × 10^3
Overall rank
Algorithm | Rank | Avg Rank | +/=/− (vs. OLCPA)
OLCPA | 1 | 3.1322 | ~
CCMSCSA | 2 | 3.6022 | 15/8/7
CPA | 3 | 3.6933 | 15/13/2
DE | 4 | 3.7878 | 12/9/9
HGS | 5 | 4.6933 | 19/9/2
IGWO | 6 | 5.4800 | 18/8/4
GCHHO | 7 | 5.7567 | 24/0/0
CMFO | 8 | 8.2333 | 29/1/0
MFO | 9 | 8.2611 | 29/0/1
BMWOA | 10 | 9.0422 | 29/1/0
CCMWOA | 11 | 10.409 | 30/0/0
CESCA | 12 | 11.909 | 30/0/0
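The "Avg Rank" column is each algorithm's rank on every benchmark function (1 = best) averaged over the 30 functions, and "+/=/−" counts the functions on which OLCPA is significantly better than, equivalent to, or worse than the listed competitor. A sketch of the mean-rank computation, with an illustrative 3 × 3 array standing in for the real 30 × 12 result matrix:

import numpy as np
from scipy.stats import rankdata

def average_ranks(avg_errors):
    # avg_errors[i, j]: mean error of algorithm j on function i.
    # Rank algorithms within each function (ties share the mean rank),
    # then average the ranks down each column.
    ranks = np.apply_along_axis(rankdata, 1, avg_errors)
    return ranks.mean(axis=0)

demo = np.array([[1.0, 2.0, 3.0],
                 [2.0, 1.0, 3.0],
                 [1.0, 3.0, 2.0]])
print(average_ranks(demo))  # [1.33 2.00 2.67]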
Table 6. The p-values of OLCPA compared with the other algorithms.

F | CCMWOA | IGWO | CCMSCSA | BMWOA | CMFO | CESCA
F1 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 7.1889 × 10^-1 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F2 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F3 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F4 | 1.7344 × 10^-6 | 4.2857 × 10^-6 | 4.7292 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F5 | 1.7344 × 10^-6 | 1.1748 × 10^-2 | 1.1265 × 10^-5 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F7 | 1.7344 × 10^-6 | 3.8811 × 10^-4 | 2.8786 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F8 | 1.7344 × 10^-6 | 3.8723 × 10^-2 | 8.9364 × 10^-1 | 1.7344 × 10^-6 | 2.3534 × 10^-6 | 1.7344 × 10^-6
F9 | 1.7344 × 10^-6 | 4.4052 × 10^-1 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 2.6033 × 10^-6 | 1.7344 × 10^-6
F10 | 1.7344 × 10^-6 | 3.4053 × 10^-5 | 9.3157 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F11 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7138 × 10^-1 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F12 | 1.7344 × 10^-6 | 1.9209 × 10^-6 | 3.1603 × 10^-2 | 1.7344 × 10^-6 | 4.7292 × 10^-6 | 1.7344 × 10^-6
F13 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 3.3173 × 10^-4 | 1.7344 × 10^-6 | 6.3391 × 10^-6 | 1.7344 × 10^-6
F14 | 1.7344 × 10^-6 | 1.9209 × 10^-6 | 8.4661 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F15 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 9.2626 × 10^-1 | 1.7344 × 10^-6 | 2.1630 × 10^-5 | 1.7344 × 10^-6
F16 | 1.7344 × 10^-6 | 5.8571 × 10^-1 | 4.1653 × 10^-1 | 3.5152 × 10^-6 | 5.7924 × 10^-5 | 1.7344 × 10^-6
F17 | 1.7344 × 10^-6 | 8.9364 × 10^-1 | 3.8203 × 10^-1 | 2.1266 × 10^-6 | 9.3157 × 10^-6 | 1.7344 × 10^-6
F18 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F19 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 3.8723 × 10^-2 | 1.7344 × 10^-6 | 5.2872 × 10^-4 | 1.7344 × 10^-6
F20 | 1.7344 × 10^-6 | 6.8836 × 10^-1 | 8.6121 × 10^-1 | 2.6033 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F21 | 1.7344 × 10^-6 | 2.4147 × 10^-3 | 1.2381 × 10^-5 | 1.7344 × 10^-6 | 2.1266 × 10^-6 | 1.7344 × 10^-6
F22 | 1.2381 × 10^-5 | 1.0201 × 10^-1 | 2.5637 × 10^-2 | 4.9916 × 10^-3 | 2.3038 × 10^-2 | 1.7344 × 10^-6
F23 | 1.7344 × 10^-6 | 1.5286 × 10^-1 | 2.8486 × 10^-2 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F24 | 6.9838 × 10^-6 | 2.1266 × 10^-6 | 1.7344 × 10^-6 | 2.9894 × 10^-1 | 6.5833 × 10^-1 | 1.7344 × 10^-6
F25 | 1.7344 × 10^-6 | 1.1499 × 10^-4 | 9.7110 × 10^-5 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F26 | 1.7344 × 10^-6 | 2.2102 × 10^-1 | 9.8421 × 10^-3 | 1.9729 × 10^-5 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F27 | 1.7344 × 10^-6 | 5.0383 × 10^-1 | 9.8421 × 10^-3 | 3.5152 × 10^-6 | 2.6033 × 10^-6 | 1.7344 × 10^-6
F28 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F29 | 1.7344 × 10^-6 | 1.4936 × 10^-5 | 6.6392 × 10^-4 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F30 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7988 × 10^-5 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6

F | GCHHO | DE | MFO | HGS | CPA
F1 | 4.1653 × 10^-1 | 1.5886 × 10^-1 | 1.7344 × 10^-6 | 8.1878 × 10^-5 | 4.0702 × 10^-2
F2 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.9209 × 10^-6
F3 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6
F4 | 3.4053 × 10^-5 | 1.9729 × 10^-5 | 1.7344 × 10^-6 | 1.1499 × 10^-4 | 4.4493 × 10^-5
F5 | 3.8822 × 10^-6 | 8.3071 × 10^-4 | 9.3157 × 10^-6 | 4.7795 × 10^-1 | 4.5281 × 10^-1
F6 | 1.7344 × 10^-6 | 4.3205 × 10^-8 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.0000 × 10^0
F7 | 1.7344 × 10^-6 | 7.5213 × 10^-2 | 1.7344 × 10^-6 | 6.6392 × 10^-4 | 1.4704 × 10^-1
F8 | 6.3391 × 10^-6 | 1.3591 × 10^-1 | 1.7344 × 10^-6 | 4.0483 × 10^-1 | 6.2884 × 10^-1
F9 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.6046 × 10^-4 | 1.7518 × 10^-2
F10 | 2.8786 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7791 × 10^-1 | 4.2843 × 10^-1
F11 | 1.3595 × 10^-4 | 8.2206 × 10^-2 | 1.7344 × 10^-6 | 8.2167 × 10^-3 | 3.0861 × 10^-1
F12 | 8.2206 × 10^-2 | 3.4053 × 10^-5 | 1.7344 × 10^-6 | 7.1889 × 10^-1 | 1.2866 × 10^-3
F13 | 1.2866 × 10^-3 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.9209 × 10^-6 | 5.3044 × 10^-1
F14 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 2.1266 × 10^-6 | 1.2381 × 10^-5
F15 | 1.9646 × 10^-3 | 8.9187 × 10^-5 | 1.7344 × 10^-6 | 1.2506 × 10^-4 | 1.1093 × 10^-1
F16 | 5.7096 × 10^-2 | 1.7344 × 10^-6 | 1.0570 × 10^-4 | 1.2044 × 10^-1 | 2.1827 × 10^-2
F17 | 5.2872 × 10^-4 | 1.3601 × 10^-5 | 2.1266 × 10^-6 | 4.6818 × 10^-3 | 1.1748 × 10^-2
F18 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 1.9209 × 10^-6 | 4.7292 × 10^-6
F19 | 2.4308 × 10^-2 | 1.3820 × 10^-3 | 3.5152 × 10^-6 | 1.1138 × 10^-3 | 2.0671 × 10^-2
F20 | 4.0715 × 10^-5 | 1.9209 × 10^-6 | 1.9209 × 10^-6 | 1.0357 × 10^-3 | 1.8910 × 10^-4
F21 | 4.7292 × 10^-6 | 2.9575 × 10^-3 | 2.1266 × 10^-6 | 4.4052 × 10^-1 | 1.9861 × 10^-1
F22 | 3.8203 × 10^-1 | 9.9179 × 10^-1 | 1.1499 × 10^-4 | 1.5886 × 10^-1 | 3.4908 × 10^-1
F23 | 1.7344 × 10^-6 | 5.5774 × 10^-1 | 2.1266 × 10^-6 | 1.4704 × 10^-1 | 4.5281 × 10^-1
F24 | 1.7138 × 10^-1 | 1.7344 × 10^-6 | 6.3391 × 10^-6 | 2.5967 × 10^-5 | 9.2710 × 10^-3
F25 | 6.5641 × 10^-2 | 6.1431 × 10^-1 | 2.3534 × 10^-6 | 9.5899 × 10^-1 | 3.5009 × 10^-2
F26 | 7.1570 × 10^-4 | 9.0993 × 10^-1 | 6.9838 × 10^-6 | 4.9498 × 10^-2 | 9.2626 × 10^-1
F27 | 2.4147 × 10^-3 | 1.9209 × 10^-6 | 3.5009 × 10^-2 | 1.7518 × 10^-2 | 7.1889 × 10^-1
F28 | 1.9209 × 10^-6 | 3.7172 × 10^-5 | 1.7344 × 10^-6 | 1.6394 × 10^-5 | 8.1574 × 10^-4
F29 | 1.9209 × 10^-6 | 1.5286 × 10^-1 | 1.7344 × 10^-6 | 8.1878 × 10^-5 | 1.7088 × 10^-3
F30 | 4.5336 × 10^-4 | 3.5152 × 10^-6 | 1.7344 × 10^-6 | 1.7344 × 10^-6 | 2.5967 × 10^-5
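The value 1.7344 × 10^-6 recurs throughout the table; it is the smallest two-sided p-value a Wilcoxon signed-rank test can return for 30 paired runs under the usual normal approximation, which suggests the table reports such paired tests over the 30 runs. Assuming that setup, a sketch with synthetic per-run results:

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
olcpa = rng.normal(2.6e3, 2.6e3, 30)              # illustrative per-run errors
rival = olcpa + np.abs(rng.normal(5e3, 1e3, 30))  # rival worse on all 30 runs

# Two-sided paired Wilcoxon signed-rank test; with n = 30 > 25, SciPy
# uses the normal approximation, whose floor is ~1.7344e-06.
stat, p = wilcoxon(olcpa, rival)
print(p)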
Table 7. Relationship between AAMI and MIT-BIH beat categories.
AAMI Class | MIT-BIH Classes
Supraventricular ectopic beat (S) | Aberrated atrial premature beat (a); Supraventricular premature beat (S); Atrial premature beat (A); Nodal (junctional) premature beat (J)
Normal (N) | Normal beat (N); Left bundle branch block beat (L); Right bundle branch block beat (R); Nodal (junctional) escape beat (j); Atrial escape beat (e)
Ventricular ectopic beats (VEBs) | Ventricular flutter wave (!); Ventricular escape beat (E); Premature ventricular contraction (V)
Unknown beat (Q) | Paced beat (/); Unclassifiable beat (Q)
Fusion (F) | Fusion of ventricular and normal beat (F)
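In code, this grouping is a many-to-one lookup from MIT-BIH annotation symbols to AAMI superclasses. The dictionary below restates Table 7; the fallback value for unmapped symbols is our own convention:

# Mapping of MIT-BIH beat symbols to AAMI classes, per Table 7.
MITBIH_TO_AAMI = {
    "N": "N", "L": "N", "R": "N", "j": "N", "e": "N",  # normal-type beats
    "a": "S", "S": "S", "A": "S", "J": "S",            # supraventricular ectopic
    "!": "VEB", "E": "VEB", "V": "VEB",                # ventricular ectopic
    "/": "Q", "Q": "Q",                                # paced / unclassifiable
    "F": "F",                                          # fusion
}

def to_aami(symbol):
    """Collapse an MIT-BIH beat label to its AAMI superclass ('?' if unmapped)."""
    return MITBIH_TO_AAMI.get(symbol, "?")

print(to_aami("L"), to_aami("V"))  # N VEB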
Table 8. Number of samples per category for the datasets.
Category | ST-T | MIT-BIH
N | 1000 | 2500
S | 1000 | 2500
VEB | 1000 | 2500
Q | – | 2500
Table 9. Comparison of OLCPA-CNN with other methods on the MIT-BIH dataset.
Reference | Method | Acc | Se
Acharya et al. [54] | CNN | 94.03% | 96.71%
Li et al. [84] | SVM | 97.30% | 97.40%
Patro et al. [85] | PSO, GA, SVM, and RF | 95.30% | 94.00%
Proposed | OLCPA-CNN | 97.90% | 97.90%