Article

Neural Network Algorithm with Dropout Using Elite Selection

School of Computer Science and Technology, Ocean University of China, Qingdao 266100, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(11), 1827; https://doi.org/10.3390/math10111827
Submission received: 26 April 2022 / Revised: 18 May 2022 / Accepted: 22 May 2022 / Published: 26 May 2022
(This article belongs to the Special Issue Evolutionary Computation 2022)

Abstract

A neural network algorithm is a meta-heuristic algorithm inspired by artificial neural networks; it has strong global search ability and can be used to solve global optimization problems. However, the neural network algorithm sometimes converges slowly when solving complex problems. In order to improve the convergence speed, this paper proposes the neural network algorithm with dropout using elite selection. In this algorithm, the neural network algorithm is viewed from the perspective of an evolutionary algorithm. In the crossover phase, the dropout strategy from neural networks is introduced: a certain proportion of the individuals that do not perform well are dropped and do not participate in the crossover process, which ensures the outstanding performance of the population. Additionally, in the selection stage, a certain proportion of the best-performing individuals of the previous generation are retained and directly enter the next generation. In order to verify the effectiveness of the improved strategy, the neural network algorithm with dropout using elite selection is tested on 18 well-known benchmark functions. The experimental results show that the introduced dropout strategy improves the optimization performance of the neural network algorithm. Moreover, the neural network algorithm with dropout using elite selection is compared with other meta-heuristic algorithms to illustrate that it is a powerful algorithm for solving optimization problems.

1. Introduction

Optimization algorithms are applied in many fields to obtain optimal results that improve performance or reduce cost. Deterministic approaches need a large amount of gradient information and are highly dependent on the selected initial point, which makes them prone to falling into local minima [1,2,3]. In contrast, meta-heuristic algorithms do not rely on gradient information and are less likely to get trapped in local optima [3,4,5], which gives them strong search ability.
A meta-heuristic algorithm combines a random algorithm with a local search algorithm. It mainly solves global optimization problems by simulating the laws of natural evolution or the collective intelligence of a group [6,7]. To a certain extent, it can search globally and find an approximation of the optimal solution. The process of a meta-heuristic algorithm is mainly divided into the following steps [8,9]. (1) Randomly generate candidate solutions as initial values. (2) Calculate the objective function values of the initial values. (3) According to the existing information, update the candidate solutions by crossover, mutation, and other methods to generate a new generation of candidate solutions. (4) The new candidate solutions enter the next iteration until the stopping criterion is met. It is an iterative generation process. Through random initialization and the crossover and mutation of candidate solutions, the whole search space can be explored and exploited, and the optimal solution can be gradually approached [10,11].
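To make these four steps concrete, the following is a minimal Python sketch (using NumPy) of a generic population-based meta-heuristic loop; the particular recombination and perturbation rule, the population size, and the evaluation budget are illustrative assumptions rather than any specific published algorithm:

import numpy as np

def generic_metaheuristic(fitness, dim, lb, ub, pop_size=50, max_evals=5000):
    # (1) Randomly generate candidate solutions as initial values
    pop = lb + (ub - lb) * np.random.rand(pop_size, dim)
    # (2) Calculate the objective function values of the initial values
    cost = np.apply_along_axis(fitness, 1, pop)
    evals = pop_size
    best_idx = np.argmin(cost)
    best_x, best_f = pop[best_idx].copy(), cost[best_idx]
    # (3)-(4) Update candidate solutions until the stopping criterion is met
    while evals < max_evals:
        # illustrative update: recombine each candidate with a random partner
        partners = pop[np.random.randint(pop_size, size=pop_size)]
        alpha = np.random.rand(pop_size, 1)
        new_pop = np.clip(alpha * pop + (1 - alpha) * partners
                          + 0.1 * (ub - lb) * np.random.randn(pop_size, dim), lb, ub)
        new_cost = np.apply_along_axis(fitness, 1, new_pop)
        evals += pop_size
        improved = new_cost < cost
        pop[improved], cost[improved] = new_pop[improved], new_cost[improved]
        if cost.min() < best_f:
            best_idx = np.argmin(cost)
            best_x, best_f = pop[best_idx].copy(), cost[best_idx]
    return best_x, best_f

# example: minimize the sphere function in 10 dimensions
print(generic_metaheuristic(lambda x: np.sum(x ** 2), 10, -100.0, 100.0))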
Classical meta-heuristic algorithms include genetic algorithm (GA) [12], simulated annealing (SA) [13], particle swarm optimization (PSO) [14,15,16], harmony search (HS) [17], differential evolution (DE) [18,19], ant colony optimization (ACO) [20], and artificial bee colony optimization (ABC) [21]. These algorithms follow the principle of a meta-heuristic algorithm. For example, in a genetic algorithm, each independent variable is represented by a gene, and each individual is represented by a chromosome. Starting from an initial population, new chromosomes are generated through the process of chromosome crossover and mutation. After calculating the fitness, the individuals with poor fitness are eliminated, so as to promote the population evolution to produce better and better approximate solutions.
No meta-heuristic algorithm is suitable for all types of optimization problems, so new meta-heuristic algorithms are constantly proposed, such as the neural network algorithm (NNA) [22], spotted hyena optimizer (SHO) [23], seagull optimization algorithm (SOA) [24], tunicate swarm algorithm (TSA) [25], elephant herding optimization (EHO) [26], sooty tern optimization algorithm (STOA) [27], chaotic neural network algorithm with competitive learning (CCLNNA) [28], monarch butterfly optimization (MBO) [29,30], earthworm optimization algorithm (EWA) [31], and moth search algorithm (MSA) [32]. Furthermore, the Forest Optimization Algorithm (FOA), inspired by natural processes in forests, is proposed in the literature [33], and it shows quite good accuracy compared with GA and PSO on the path-generating four-bar mechanism problem in [34]. Additionally, the virus optimization algorithm (VOA) is an iterative, population-based algorithm inspired by the behavior of viruses attacking a living cell [35], and it has been applied to the identification of elastoplastic properties of materials [36]. All these algorithms provide new solutions for solving different types of optimization problems.
Based on NNA, several improved algorithms are proposed and they are also applied to the practical problems. NNA is applied to improve the overall competitiveness of the single mixed refrigerant (SMR) process for synthetic natural gas (SNG) liquefaction, which saves energy and cost [37] and it is used to optimize parameters of the Fractional-Order-Proportional-Integral-Derivative (FOPID) controller [38]. An effective hybrid method named TLNNA, based on teaching–learning-based optimization (TLBO) is proposed to solve engineering optimization problems [39]. Grey wolf optimization with neural network algorithm (GNNA) is proposed by combining the improved grey wolf optimizer (GWO) and NNA, which significantly improves the performance [40]. A modified neural network algorithm (M-NNA) is adopted as the optimization algorithm in the Complex Fracture Network (CFN) optimization framework, with an optimization searching accuracy far better than the original algorithm [41]. A new methodology based on the combination of symbiosis organism search (SOS) and NNA is proposed for the optimal planning and operation of distributed generations (DGs) and capacitor banks (CBs) in the radial distribution networks (RDNs), with the results obtained helping to improve the annual energy loss mitigation and cost savings [42]. In the literature [43], a quasi-oppositional chaotic neural network algorithm (QOCNNA) is developed by combining NNA with chaotic local search (CLS) and quasi-oppositional-based learning (QOBL) approaches and it is more effective in improving the performance of RDNs.
This paper proposes a global optimization algorithm called the neural network algorithm with dropout using elite selection (DESNNA), which is a variant of NNA. NNA is an algorithm inspired by an artificial neural network. In NNA, each individual is regarded as a pattern solution. Firstly, the initial pattern solution and initial coefficient matrix are generated, and the coefficient matrix is applied to all pattern solutions each time, which is equivalent to the crossover operation between pattern solutions. However, the convergence speed of NNA is slow and sometimes falls into local optimization in complex situations. Therefore, the DESNNA is proposed to improve the convergence speed and performance of NNA. The main contributions of this paper are as follows:
(1)
NNA is analyzed from the perspective of an evolutionary algorithm, including crossover, mutation, and selection processes, which correspond to every step of NNA. It shows that NNA belongs to an evolutionary algorithm.
(2)
In the crossover stage of the DESNNA, similar to dropout in the neural network, the dropout strategy is applied to NNA: a certain proportion of the individuals are dropped and do not participate in the crossover process, which ensures the superiority of the individuals participating in the crossover process.
(3)
In the selection process of the DESNNA, some individuals who performed well in the previous generation are directly retained when updating the population, which increases the optimization ability of the algorithm without losing the diversity of the population.
The rest of this paper is organized as follows. The neural network algorithm is introduced in Section 2. The proposed DESNNA is introduced in detail in Section 3. The experiment and results of the DESNNA on the benchmark functions are presented in Section 4 and the conclusion and future work are stated in Section 5.

2. Neural Network Algorithm

2.1. Artificial Neural Network

The artificial neural network (ANN) is a complex structure based on biological neurons. An ANN consists of neurons, which are simple processing units, and weighted connections between these neurons. A typical structure is the multilayer perceptron (MLP), as shown in Figure 1. The ANN receives a data set, starts the training process, and adjusts the connection weights between neurons [44]. An artificial neural network with dropout makes the activation of a neuron stop working with a certain probability during forward propagation, which makes the model generalize better because it does not rely too much on particular local features [45,46,47]. The artificial neural network with dropout is shown in Figure 2, in which the dotted circles represent the dropped neurons. ANNs have many advantages in fields such as medicine, robotics, and image processing [48,49,50,51]. The NNA draws on the idea of ANN forward propagation, and its weight matrix is updated in each iteration [52,53].
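To make the dropout mechanism concrete, the following is a minimal NumPy sketch of (inverted) dropout applied in the forward pass of one fully connected layer; the layer sizes, activation, and drop probability are illustrative assumptions:

import numpy as np

def dense_forward_with_dropout(x, w, b, p_drop=0.5, training=True):
    # standard fully connected layer followed by ReLU
    h = np.maximum(0.0, x @ w + b)
    if training:
        # each neuron is kept with probability (1 - p_drop);
        # scaling keeps the expected activation unchanged (inverted dropout)
        mask = (np.random.rand(*h.shape) > p_drop) / (1.0 - p_drop)
        h = h * mask
    return h

# example: batch of 4 samples, 8 inputs, 16 hidden units
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w = rng.standard_normal((8, 16)) * 0.1
b = np.zeros(16)
print(dense_forward_with_dropout(x, w, b).shape)   # (4, 16)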

2.2. The Introduction of Neural Network Algorithm

According to the NNA [22], a pattern solution is an array of size $1 \times D$ defined as $x = [x_1, x_2, \ldots, x_D]$. The population of pattern solutions is a matrix of size $N \times D$, which can be defined as
$$X = \begin{bmatrix} x_1^1 & x_2^1 & \cdots & x_D^1 \\ x_1^2 & x_2^2 & \cdots & x_D^2 \\ \vdots & \vdots & \ddots & \vdots \\ x_1^N & x_2^N & \cdots & x_D^N \end{bmatrix}. \quad (1)$$
The cost of each pattern solution can be obtained by a fitness function. For example, the cost of the ith pattern solution is
$$C_i = f(x_1^i, x_2^i, \ldots, x_D^i). \quad (2)$$
In NNA, weights are a square matrix with the size N × N, defined as
$$W = [W_1, W_2, \ldots, W_N] = \begin{bmatrix} w_1^1 & w_1^2 & \cdots & w_1^N \\ w_2^1 & w_2^2 & \cdots & w_2^N \\ \vdots & \vdots & \ddots & \vdots \\ w_N^1 & w_N^2 & \cdots & w_N^N \end{bmatrix} = \begin{bmatrix} w_{11} & w_{21} & \cdots & w_{N1} \\ w_{12} & w_{22} & \cdots & w_{N2} \\ \vdots & \vdots & \ddots & \vdots \\ w_{1N} & w_{2N} & \cdots & w_{NN} \end{bmatrix}. \quad (3)$$
NNA is described in two stages as the following.
(1)
Initialization stage
Firstly, the number of the pattern solutions (N) and the maximum number of iterations are set. Then the initial population containing N pattern solutions is randomly generated between LB and UB. The cost of each individual in the population is obtained by the fitness function. The weight matrix is generated randomly, which satisfies the constraints:
$$\sum_{j=1}^{N} w_{ij} = 1, \quad i = 1, 2, \ldots, N, \quad (4)$$
$$w_{ij} \in U(0, 1), \quad i, j = 1, 2, \ldots, N. \quad (5)$$
According to the cost, the target solution (XTarget) and the corresponding target weight (WTarget) should be set.
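As an illustration, a minimal NumPy sketch of this initialization stage might look as follows; the row normalization used to enforce constraints (4) and (5) and the helper name nna_initialize are assumptions of this sketch, not code from the original NNA:

import numpy as np

def nna_initialize(fitness, n_pop, dim, lb, ub):
    # random initial pattern solutions between LB and UB
    X = lb + (ub - lb) * np.random.rand(n_pop, dim)
    cost = np.apply_along_axis(fitness, 1, X)
    # random weights in U(0, 1), each row normalized to sum to 1 (Eqs. (4)-(5))
    W = np.random.rand(n_pop, n_pop)
    W = W / W.sum(axis=1, keepdims=True)
    # target solution and target weight correspond to the best cost
    target = np.argmin(cost)
    x_target, w_target = X[target].copy(), W[target].copy()
    return X, W, cost, x_target, w_target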
(2)
Cycle stage
Similar to the crossover process, the weight matrix multiplying pattern solutions generates the new pattern solutions using the following equation:
$$X_j^{New}(t+1) = \sum_{i=1}^{N} w_{ij}(t) \times X_i(t), \quad j = 1, 2, \ldots, N, \quad (6)$$
$$X_i(t+1) = X_i(t) + X_i^{New}(t+1), \quad i = 1, 2, \ldots, N, \quad (7)$$
where t is an iteration index. Then the weight matrix should be updated, following Equation (8):
$$W_i(t+1) = W_i(t) + 2 \times rand \times (W^{Target}(t) - W_i(t)), \quad i = 1, 2, \ldots, N, \quad (8)$$
where the weight matrix (W) should always satisfy the constraints (4) and (5).
After the updating process, the bias condition is checked according to the modification factor β: either the bias operator is performed according to Equations (9) and (10), or the transfer function operator is performed according to Equation (11). Equation (9) is given below:
$$X(i, j) = LB + (UB - LB) \times rand, \quad i = 1, 2, \ldots, N_b, \quad (9)$$
where Nb = round(D × β) is the number of biased variables in the new pattern solutions of the population and j is a random integer between 0 and D. Equation (10) is given below:
$$W(i, j) = U(0, 1), \quad i = 1, 2, \ldots, N_{wb}, \quad (10)$$
where Nwb = round(N × β) is the number of biased variables in the updated weight matrix. The transfer function operator makes new pattern solutions transfer from their positions to another position to generate better solutions. The transfer function operator on pattern solutions is defined as:
$$X_i^*(t+1) = X_i(t+1) + 2 \times rand \times (X^{Target}(t) - X_i(t)), \quad i = 1, 2, \ldots, N. \quad (11)$$
The bias operator is similar to the mutation operator, which can prevent premature convergence.
The cost of every pattern solution for the population is calculated and the minimum is chosen as the optimal value. The target solution (XTarget) and the target weight (WTarget) corresponding to the optimal value should be updated. Finally, β is reduced according to Equation (12):
$$\beta(t+1) = 0.99 \times \beta(t), \quad t = 1, 2, \ldots, Max\_iteration. \quad (12)$$
If the stopping condition is satisfied, the NNA stops; otherwise, the algorithm returns to the beginning of the cycle stage. The process of NNA is summarized in Algorithm 1:
Algorithm 1. The implementation of the neural network algorithm (NNA).
01 Create random initial population X and weights W with constraints by Equations (4) and (5)
02 Calculate the cost of every pattern solution and set the target solution and target weight
03 For i = 1:max_iteration
04  Generate new pattern solutions Xt+1 by Equations (6) and (7)
05  Update the weights by Equation (8)
06   If rand ≤ β
07    Perform the bias operator for pattern solutions Xt+1 and weights Wt+1 by Equations (9) and (10)
08   Else
09    Perform the transfer function operator on Xt+1 by Equation (11)
10   End if
11  Calculate the cost of every pattern solution and find the optimal solution and weight
12  Reduce the modification factor β by Equation (12)
13 End for
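Putting Equations (6)-(12) together, the following Python sketch outlines one possible implementation of the cycle stage in Algorithm 1; it reuses the nna_initialize helper sketched above, and applying the bias/transfer choice per individual, renormalizing the weights, and clipping to the bounds are implementation assumptions of this sketch:

import numpy as np

def nna_iteration(fitness, X, W, cost, x_target, w_target, lb, ub, beta):
    n_pop, dim = X.shape
    # crossover: Eqs. (6)-(7), X_new = W^T X, then add to the old solutions
    X = X + W.T @ X
    # weight update: Eq. (8), rows renormalized to keep constraint (4)
    W = W + 2.0 * np.random.rand(n_pop, 1) * (w_target - W)
    W = np.abs(W) / np.abs(W).sum(axis=1, keepdims=True)
    for i in range(n_pop):
        if np.random.rand() <= beta:
            # bias operator, Eqs. (9)-(10)
            nb = max(1, int(round(dim * beta)))
            cols = np.random.permutation(dim)[:nb]
            X[i, cols] = lb + (ub - lb) * np.random.rand(len(cols))
            nwb = max(1, int(round(n_pop * beta)))
            rows = np.random.permutation(n_pop)[:nwb]
            W[i, rows] = np.random.rand(len(rows))
            W[i] = W[i] / W[i].sum()
        else:
            # transfer function operator, Eq. (11)
            X[i] = X[i] + 2.0 * np.random.rand() * (x_target - X[i])
    X = np.clip(X, lb, ub)
    cost = np.apply_along_axis(fitness, 1, X)
    best = np.argmin(cost)
    if cost[best] < fitness(x_target):
        x_target, w_target = X[best].copy(), W[best].copy()
    return X, W, cost, x_target, w_target, 0.99 * beta   # Eq. (12)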

3. The Neural Network Algorithm with Dropout Using Elite Selection

In order to improve the convergence performance of NNA, the DESNNA is proposed. This section is divided into three subsections, including viewing NNA from the perspective of an evolutionary algorithm, the introduced dropout strategy in the DESNNA, and the elite selection in the DESNNA.

3.1. NNA from the Perspective of Evolutionary Algorithm

Firstly, we view NNA from the perspective of an evolutionary algorithm. The main steps of an evolutionary algorithm include initialization, crossover, mutation, and selection. In NNA, each pattern solution is viewed as an individual of the population and, therefore, the pattern solution matrix is seen as a population.
Similar to the crossover process, when generating new pattern solutions, the weight matrix multiplies pattern solutions by Equation (6), which ties each pattern solution together. For instance, four pattern solutions generating the first new pattern solution can be expressed as:
$$X_1^{New}(t+1) = w_{11} X_1(t) + w_{21} X_2(t) + w_{31} X_3(t) + w_{41} X_4(t). \quad (13)$$
Equation (13) shows that each new pattern solution is generated as a linear combination of the individuals, weighted by the entries of the weight matrix, and this linear combination is regarded as a crossover process. Figure 3 presents how NNA generates its new population of pattern solutions.
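A quick numerical check of this view (a sketch with an arbitrary 4 × 2 population and a row-normalized 4 × 4 weight matrix): the first new pattern solution computed from Equation (13) coincides with the first row of the matrix product form of Equation (6).

import numpy as np

# four pattern solutions (rows) in a 2-dimensional search space (illustrative values)
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]])
W = np.random.rand(4, 4)
W = W / W.sum(axis=1, keepdims=True)          # rows satisfy constraint (4)

# Equation (13): explicit linear combination for the first new pattern solution
x1_new = W[0, 0] * X[0] + W[1, 0] * X[1] + W[2, 0] * X[2] + W[3, 0] * X[3]
# Equation (6) in matrix form: all new pattern solutions at once
X_new = W.T @ X
print(np.allclose(x1_new, X_new[0]))          # True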
Resembling the mutation process, after the bias condition is checked, either the bias operator is applied to the new pattern solutions or the updated weight matrix by Equation (9) or Equation (10), or the transfer function operator is applied to the pattern solutions by Equation (11); both perform a random offset operation, which is regarded as a mutation process. Similar to the selection process, after the cost of each pattern solution is calculated, the minimum is chosen as the optimal value. In conclusion, the whole process of NNA corresponds to the framework of an evolutionary algorithm.

3.2. The Introduced Dropout Strategy in the DESNNA

In order to overcome the problem of low convergence speed, the idea of dropout in neural networks is applied to the crossover process. The principle of dropout in deep learning is to set a probability when samples are fed into the neural network for training so that each neuron is deactivated with a certain probability and does not participate in the network training. Dropout is similar to sexual reproduction in biological evolution. The power of genes lies in their ability to mix rather than in the ability of a single gene. Sexual reproduction can not only pass down excellent genes but also reduce the co-adaptation between genes.
Inspired by dropout in neural networks, in the crossover process of the DESNNA, when new pattern solutions are calculated and generated, the linear combination of different individuals in the population produces the new pattern solutions. Then, 10% of the individuals that do not perform well are dropped and do not participate in this process. To achieve this, the pattern solutions that do not participate in the crossover process are set to zero. The concrete implementation is given by Equations (14) and (15):
$$X_{drop}(t) = 0, \quad (14)$$
$$X_j^{New}(t+1) = \sum_{i=1}^{N} w_{ij}(t) \times X_i(t), \quad j = 1, 2, \ldots, N, \quad (15)$$
where the set {X1, X2, ..., XN} includes the 10% of dropped solutions Xdrop that have been set to zero. Therefore, after applying the dropout strategy to NNA, the crossover process is changed from Equation (6) to Equations (14) and (15). In this way, the pattern solutions corresponding to the individuals with poor performance are set to zero, which is equivalent to them not participating in the crossover process. This ensures the superiority of the individuals participating in the crossover process and, in turn, the superiority of the whole population. Consequently, introducing the dropout strategy can improve the convergence speed of the algorithm.
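A minimal NumPy sketch of this dropout step is given below; the rounding of the number of dropped individuals and the way Equation (7) is applied afterwards are assumptions of this sketch:

import numpy as np

def dropout_crossover(X, W, cost, drop_rate=0.10):
    n_pop = X.shape[0]
    n_drop = max(1, int(round(drop_rate * n_pop)))
    # Eq. (14): the worst-performing pattern solutions are set to zero ...
    X_crossover = X.copy()
    worst = np.argsort(cost)[-n_drop:]        # largest cost = worst (minimization)
    X_crossover[worst] = 0.0
    # Eq. (15): ... so they contribute nothing to the linear combination
    X_new = W.T @ X_crossover
    # Eq. (7): add the new pattern solutions to the old ones
    return X + X_new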

3.3. The Elite Selection in the DESNNA

To improve the convergence speed, the elite selection strategy is applied to the selection process. When new pattern solutions are calculated and generated, the linear combination of different individuals in the population produces the new pattern solutions, and adding the new pattern solutions to the old pattern solutions yields a new population. The cost of every pattern solution of the new population is then calculated and sorted. In the selection process, the top 15% of individuals with the highest fitness are saved for the next generation. Therefore, some of the better individuals are preserved in every evolutionary step, which raises the overall quality of the population and improves the convergence performance of the algorithm. Meanwhile, the bias operator and the transfer function operator are performed as usual, so the diversity of the population is not lost. A sketch of this selection step is given below.
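A minimal NumPy sketch of this elite-selection step, assuming minimization (so the highest fitness corresponds to the lowest cost) and assuming that the retained elites replace the worst individuals of the new population:

import numpy as np

def elite_selection(X_old, cost_old, X_new, cost_new, elite_rate=0.15):
    n_pop = X_old.shape[0]
    n_elite = max(1, int(round(elite_rate * n_pop)))
    # indices of the best individuals of the previous generation
    elite = np.argsort(cost_old)[:n_elite]
    # assumption of this sketch: the elites replace the worst new individuals
    worst = np.argsort(cost_new)[-n_elite:]
    X_next, cost_next = X_new.copy(), cost_new.copy()
    X_next[worst] = X_old[elite]
    cost_next[worst] = cost_old[elite]
    return X_next, cost_next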
The process of the DESNNA is summarized in Algorithm 2:
Algorithm 2. The implementation of the neural network algorithm with dropout using elite selection (DESNNA).
01 Create random initial population X and weights W with constraints by Equations (4) and (5)
02 Calculate the cost of every pattern solution and set the target solution and target weight
03 For i = 1:max_iteration
04  Set the pattern solutions Xworst of the 10% of individuals with the worst fitness to 0 by Equation (14), and generate new pattern solutions Xt+1 by Equations (15) and (7)
05  Update the weights by Equation (8)
06   If rand ≤ β
07    Perform the bias operator for pattern solutions Xt+1 and weights Wt+1 by Equations (9) and (10)
08   Else
09    Perform the transfer function operator on Xt+1 by Equation (11)
10   End if
11  Calculate the cost of every pattern solution and find the optimal solution and weight
12  Sort the cost of each pattern solution in the new population
13  Save the top 15% of individuals with the highest fitness to the next generation
14  Reduce the modification factor β by Equation (12)
15 End for
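Combining the pieces sketched above, one possible simplified outline of the DESNNA main loop corresponding to Algorithm 2 is given below; it reuses the nna_initialize, dropout_crossover, and elite_selection helpers from the earlier sketches and omits the bias and transfer function operators for brevity:

import numpy as np

def desnna(fitness, dim, lb, ub, n_pop=50, max_iter=1000, beta=1.0):
    # steps 01-02: initialization (see the nna_initialize sketch above)
    X, W, cost, x_target, w_target = nna_initialize(fitness, n_pop, dim, lb, ub)
    for _ in range(max_iter):
        X_old, cost_old = X.copy(), cost.copy()
        # step 04: dropout crossover, Eqs. (14), (15) and (7)
        X = dropout_crossover(X, W, cost, drop_rate=0.10)
        # step 05: weight update, Eq. (8); the bias/transfer operators
        # from the NNA sketch are omitted here for brevity
        W = W + 2.0 * np.random.rand(n_pop, 1) * (w_target - W)
        W = np.abs(W) / np.abs(W).sum(axis=1, keepdims=True)
        X = np.clip(X, lb, ub)
        # step 11: evaluate the new population
        cost = np.apply_along_axis(fitness, 1, X)
        # steps 12-13: elite selection keeps the best 15% of the previous generation
        X, cost = elite_selection(X_old, cost_old, X, cost, elite_rate=0.15)
        # update the target solution and reduce beta, Eq. (12)
        if cost.min() < fitness(x_target):
            best = np.argmin(cost)
            x_target, w_target = X[best].copy(), W[best].copy()
        beta *= 0.99
    return x_target, fitness(x_target)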

4. DESNNA for Global Optimization

In order to verify the performance of the DESNNA in solving numerical optimization, 18 benchmark functions extracted from the literature [54] are used for the experiment. The experimental results from the DESNNA are compared with those from NNA, which reflects the effectiveness of our improved strategy. Additionally, the results are compared with other meta-heuristics optimization algorithms.

4.1. Benchmark Functions

The test functions provided in the literature [54,55] have been applied to the experimental research of meta-heuristic algorithms. The definitions of benchmark functions F1 to F11 are shown in Table 1 and the definitions of hybrid composition functions F12 to F18 are presented in Table 2, which have higher complexity compared to the functions F1 to F11. The detailed procedure used to hybridize the first function with the second function is shown in the literature [54]. The properties of the benchmark functions are shown in Table 3, and the optimal solutions of all benchmark functions are known.
In order to verify the performance of the DESNNA, we compare the DESNNA with six other algorithms on these functions, namely the NNA, CCLNNA, TSA, PSO, GA, and HS. The parameters of the applied algorithms are listed in Table 4. For a fair comparison, the dimension of the benchmark functions is set to 50 in this experiment. The maximum number of function evaluations (NFES) is used as the stopping condition and is set to 5000 times the dimension (D). Each algorithm is run independently 30 times on each benchmark function, and the best error, the average error, the worst error, and the error standard deviation are recorded. The population size is uniformly set to 50. All the optimizers in this paper are coded in MATLAB R2019b.
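The evaluation protocol described above can be expressed as a short Python sketch; the desnna function refers to the outline sketched at the end of Section 3, and the sphere placeholder stands in for the actual shifted benchmarks:

import numpy as np

def run_experiment(optimizer, fitness, dim=50, lb=-100.0, ub=100.0, runs=30):
    # NFES = 5000 * D is translated here into an iteration budget for a population of 50
    max_evals = 5000 * dim
    n_pop = 50
    max_iter = max_evals // n_pop
    errors = []
    for _ in range(runs):
        _, best_f = optimizer(fitness, dim, lb, ub, n_pop=n_pop, max_iter=max_iter)
        errors.append(best_f - 0.0)   # the known optimum of the benchmarks is 0
    errors = np.array(errors)
    return errors.min(), errors.mean(), errors.max(), errors.std()

# example with the (unshifted) sphere function as a placeholder benchmark
# print(run_experiment(desnna, lambda x: np.sum(x ** 2)))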

4.2. Comparison between Improved DESNNA and NNA

In order to verify the effectiveness of the improvement strategy, this section compares the optimization results of the NNA and the DESNNA. The experimental results of the NNA and DESNNA on the benchmark functions are shown in Table 5. The better results are highlighted in bold type. From Table 5, in terms of the best error, the average error, the worst error, and the error standard deviation, the DESNNA is superior to the NNA on all 18 benchmark functions. That is to say, the DESNNA has better search ability and stability than the NNA, which means the applied improvement strategy improves the performance of the NNA.
In addition, the convergence performance of the DESNNA and the NNA is compared. Several typical convergence curves on the functions F2, F9, F11, and F14 are provided in Figure 4. From Figure 4, the DESNNA converges faster than the NNA on these functions. The curves show that, in the initial stage, the DESNNA has better convergence performance than the NNA and tends to reach values closer to the optimum in fewer iterations. As the number of iterations increases, the DESNNA tends to converge to the optimal value. Therefore, the DESNNA has better convergence performance than the NNA.

4.3. Comparisons between the Improved DESNNA and Other Algorithms

The optimization performance of the DESNNA and five other algorithms, namely the CCLNNA, TSA, PSO, GA, and HS, is compared in this section. Table 6 shows the experimental results obtained by the DESNNA and the other methods.
From Table 6, in terms of the best error, the DESNNA outperforms CCLNNA on all functions except for F2. The DESNNA is superior to the TSA on functions F1, F3, F4, F5, F6, F9, F10, F11, F13, F14, F15, F16, F17, and F18. The DESNNA beats the PSO, GA, and HS on all functions. From Table 6, in terms of the average error, the DESNNA performs better than the CCLNNA on all functions except for F2, F4, and F13. TSA beats the DESNNA only on functions F2, F7, and F8. The DESNNA is superior to the PSO, GA, and HS on all functions. As for the worst error, the DESNNA outperforms CCLNNA on functions F1, F3, F5, F6, F7, F8, F9, F10, F11, F12, F13, F14, F15, and F16 and outperforms TSA on functions F1, F3, F4, F5, F6, F9, F10, F11, F12, F13, F14, F15, F16, F17, and F18. The DESNNA is superior to the PSO, GA, and HS on all functions. In terms of error standard deviation, the DESNNA outperforms CCLNNA on functions F1, F3, F5, F6, F7, F8, F9, F10, F11, F12, F14, F15, and F16. TSA beats the DESNNA only on functions F1, F2, F7, and F8, and GA beats the DESNNA only on F17. The DESNNA is superior to the PSO and HS on all functions. Clearly, the DESNNA shows better performance than other compared methods.

5. Conclusions and Future Work

In this paper, a new meta-heuristic algorithm called the neural network algorithm with dropout using elite selection (DESNNA) is proposed, which is a new variant of the neural network algorithm (NNA). In the DESNNA, when a new population is generated from the previous population, a certain percentage of the worst-performing individuals are dropped, and a certain proportion of the best-performing individuals of the previous generation are retained and directly enter the next generation to ensure the outstanding performance of the population. In order to verify the effectiveness of the improved strategy, the DESNNA is tested on 18 well-known benchmark functions. The experimental results show that the DESNNA outperforms the NNA on all 18 benchmark functions and beats each of the other compared algorithms on more than 80% of the benchmark functions. Therefore, the introduced dropout and elite selection strategies improve the optimization performance of the NNA, and the DESNNA is a powerful algorithm for solving optimization problems. This work can advance the state of the art. The improved DESNNA can be applied to the single mixed refrigerant process for synthetic natural gas liquefaction or to the optimal planning and operation of distributed generations and capacitor banks in radial distribution networks, which can help to improve annual energy loss mitigation and cost savings.
As for future research, because the NNA is inspired by the neural network, it can be optimized from the perspective of a neural network. Other new variants of the NNA should be proposed in future research, such as introducing back-propagation and gradient descent to update the weight matrix. The hybridization of the NNA with other meta-heuristic algorithms may form a better optimization algorithm. What is more, the DESNNA can be applied to solve real-world optimization problems in engineering, such as constrained engineering design problems.

Author Contributions

Conceptualization, investigation, G.W. and Y.W.; methodology K.W.; software, Y.W.; validation, G.W. and Y.W.; data curation, K.W.; writing—original draft preparation, K.W.; writing—review and editing, Y.W. and G.W.; supervision, G.W. and Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are thankful to the anonymous reviewers for their valuable suggestions during the review process.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

D: The dimension of the optimization problem
N: Population size
LB: The lower limit of variables
UB: The upper limit of variables
Max_iteration: The maximum number of iterations

References

  1. Sergeyev, Y.D.; Kvasov, D.E. A deterministic global optimization using smooth diagonal auxiliary functions. Commun. Nonlinear Sci. Numer. Simul. 2015, 21, 99–111. [Google Scholar] [CrossRef] [Green Version]
  2. Magoulas, G.D.; Vrahatis, M.N. Adaptive algorithms for neural network supervised learning: A deterministic optimization approach. Int. J. Bifurc. Chaos 2006, 16, 1929–1950. [Google Scholar] [CrossRef] [Green Version]
  3. Kvasov, D.E.; Mukhametzhanov, M.S. Metaheuristic vs. deterministic global optimization algorithms: The univariate case. Appl. Math. Comput. 2018, 318, 245–259. [Google Scholar] [CrossRef]
  4. Sergeyev, Y.D.; Kvasov, D.E.; Mukhametzhanov, M.S. Operational zones for comparing metaheuristic and deterministic one-dimensional global optimization algorithms. Math. Comput. Simul. 2017, 141, 96–109. [Google Scholar] [CrossRef]
  5. Ma, Y.; Wang, Z.; Yang, H.; Yang, L. Artificial intelligence applications in the development of autonomous vehicles: A survey. IEEE/CAA J. Autom. Sin. 2020, 7, 315–329. [Google Scholar] [CrossRef]
  6. Zhao, Z.; Liu, S.; Zhou, M.; Abusorrah, A. Dual-objective mixed integer linear program and memetic algorithm for an industrial group scheduling problem. IEEE/CAA J. Autom. Sin. 2020, 8, 1199–1209. [Google Scholar] [CrossRef]
  7. Zhang, Z.; Cao, Y.; Cui, Z.; Zhang, W.; Chen, J. A Many-Objective Optimization Based Intelligent Intrusion Detection Algorithm for Enhancing Security of Vehicular Networks in 6G. IEEE Trans. Veh. Technol. 2021, 70, 5234–5243. [Google Scholar] [CrossRef]
  8. Dokeroglu, T.; Sevinc, E.; Kucukyilmaz, T.; Cosar, A. A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 2019, 137, 106040. [Google Scholar] [CrossRef]
  9. Wang, G.-G.; Cai, X.; Cui, Z.; Min, G.; Chen, J. High performance computing for cyber physical social systems by using evolutionary multi-objective optimization algorithm. IEEE Trans. Emerg. Top. Comput. 2020, 8, 20–30. [Google Scholar] [CrossRef]
  10. Wang, G.-G.; Tan, Y. Improving metaheuristic algorithms with information feedback models. IEEE Trans. Cybern. 2019, 49, 542–555. [Google Scholar] [CrossRef]
  11. Wang, G.-G.; Gao, D.; Pedrycz, W. Solving multi-objective fuzzy job-shop scheduling problem by a hybrid adaptive differential evolution algorithm. IEEE Trans. Ind. Inform. 2022, 1. [Google Scholar] [CrossRef]
  12. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  13. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  14. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  15. Cui, Z.; Zhang, J.; Wu, D.; Cai, X.; Wang, H.; Zhang, W.; Chen, J. Hybrid many-objective particle swarm optimization algorithm for green coal production problem. Inf. Sci. 2020, 518, 256–271. [Google Scholar] [CrossRef]
  16. Zhang, W.; Hou, W.; Li, C.; Yang, W.; Gen, M. Multidirection Update-Based Multiobjective Particle Swarm Optimization for Mixed No-Idle Flow-Shop Scheduling Problem. Complex Syst. Model. Simul. 2021, 1, 176–197. [Google Scholar] [CrossRef]
  17. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  18. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  19. Gao, D.; Wang, G.-G.; Pedrycz, W. Solving fuzzy job-shop scheduling problem using DE algorithm improved by a selection mechanism. IEEE Trans. Fuzzy Syst. 2020, 28, 3265–3275. [Google Scholar] [CrossRef]
  20. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 1996, 26, 29–41. [Google Scholar] [CrossRef] [Green Version]
  21. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  22. Sadollah, A.; Sayyaadi, H.; Yadav, A. A dynamic metaheuristic optimization model inspired by biological nervous systems: Neural network algorithm. Appl. Soft Comput. 2018, 71, 747–782. [Google Scholar] [CrossRef]
  23. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70. [Google Scholar] [CrossRef]
  24. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
  25. Kaur, S.; Awasthi, L.K.; Sangal, A.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  26. Wang, G.-G.; Deb, S.; Coelho, L.d.S. Elephant herding optimization. In Proceedings of the 2015 3rd International Symposium on Computational and Business Intelligence (ISCBI), Bali, Indonesia, 7–9 December 2015; pp. 1–5. [Google Scholar]
  27. Dhiman, G.; Kaur, A. STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Eng. Appl. Artif. Intell. 2019, 82, 148–174. [Google Scholar] [CrossRef]
  28. Zhang, Y. Chaotic neural network algorithm with competitive learning for global optimization. Knowl.-Based Syst. 2021, 231, 107405. [Google Scholar] [CrossRef]
  29. Wang, G.-G.; Deb, S.; Cui, Z. Monarch butterfly optimization. Neural Comput. Appl. 2019, 31, 1995–2014. [Google Scholar] [CrossRef] [Green Version]
  30. Lakshminarayanan, S.; Abdulgader, M.; Kaur, D. Scheduling energy storage unit with GWO for smart home integrated with renewable energy. Int. J. Artif. Intell. Soft Comput. 2020, 7, 146–163. [Google Scholar]
  31. Wang, G.-G.; Deb, S.; Coelho, L.D.S. Earthworm optimisation algorithm: A bio-inspired metaheuristic algorithm for global optimisation problems. Int. J. Bio-Inspired Comput. 2018, 12, 1–22. [Google Scholar] [CrossRef]
  32. Wang, G.-G. Moth search algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Memetic Comput. 2018, 10, 151–164. [Google Scholar] [CrossRef]
  33. Ghaemi, M.; Feizi-Derakhshi, M.-R. Forest optimization algorithm. Expert Syst. Appl. 2014, 41, 6676–6687. [Google Scholar] [CrossRef]
  34. Grabski, J.K.; Walczak, T.; Buśkiewicz, J.; Michałowska, M. Comparison of some evolutionary algorithms for optimization of the path synthesis problem. In Proceedings of the AIP Conference Proceedings, Lublin, Poland, 13–16 September 2017; p. 020006. [Google Scholar]
  35. Liang, Y.-C.; Cuevas Juarez, J.R. A novel metaheuristic for continuous optimization problems: Virus optimization algorithm. Eng. Optim. 2016, 48, 73–93. [Google Scholar] [CrossRef]
  36. Grabski, J.K.; Mrozek, A. Identification of elastoplastic properties of rods from torsion test using meshless methods and a metaheuristic. Comput. Math. Appl. 2021, 92, 149–158. [Google Scholar] [CrossRef]
  37. Qadeer, K.; Ahmad, A.; Naquash, A.; Qyyum, M.A.; Majeed, K.; Zhou, Z.; He, T.; Nizami, A.-S.; Lee, M. Neural network-inspired performance enhancement of synthetic natural gas liquefaction plant with different minimum approach temperatures. Fuel 2022, 308, 121858. [Google Scholar] [CrossRef]
  38. Bhullar, A.K.; Kaur, R.; Sondhi, S. Design and Comparative Analysis of Optimized Fopid Controller Using Neural Network Algorithm. In Proceedings of the 2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS), Rupnagar, India, 26–28 November 2020; pp. 91–96. [Google Scholar]
  39. Zhang, Y.; Jin, Z.; Chen, Y. Hybrid teaching–learning-based optimization and neural network algorithm for engineering design optimization problems. Knowl.-Based Syst. 2020, 187, 104836. [Google Scholar] [CrossRef]
  40. Zhang, Y.; Jin, Z.; Chen, Y. Hybridizing grey wolf optimization with neural network algorithm for global numerical optimization problems. Neural Comput. Appl. 2020, 32, 10451–10470. [Google Scholar] [CrossRef]
  41. Zhang, H.; Sheng, J.J. Complex fracture network simulation and optimization in naturally fractured shale reservoir based on modified neural network algorithm. J. Nat. Gas Sci. Eng. 2021, 95, 104232. [Google Scholar] [CrossRef]
  42. Nguyen, T.P.; Nguyen, T.A.; Phan, T.V.-H.; Vo, D.N. A comprehensive analysis for multi-objective distributed generations and capacitor banks placement in radial distribution networks using hybrid neural network algorithm. Knowl.-Based Syst. 2021, 231, 107387. [Google Scholar] [CrossRef]
  43. Van Tran, T.; Truong, B.-H.; Nguyen, T.P.; Nguyen, T.A.; Duong, T.L.; Vo, D.N. Reconfiguration of Distribution Networks With Distributed Generations Using an Improved Neural Network Algorithm. IEEE Access 2021, 9, 165618–165647. [Google Scholar] [CrossRef]
  44. Marugán, A.P.; Márquez, F.P.G.; Perez, J.M.P.; Ruiz-Hernández, D. A survey of artificial neural network in wind energy systems. Appl. Energy 2018, 228, 1822–1836. [Google Scholar] [CrossRef] [Green Version]
  45. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  46. Bhandari, D.; Paul, S.; Narayan, A. Deep neural networks for multimodal data fusion and affect recognition. Int. J. Artif. Intell. Soft Comput. 2020, 7, 130–145. [Google Scholar]
  47. Agrawal, A.; Barratt, S.; Boyd, S. Learning Convex Optimization Models. IEEE/CAA J. Autom. Sin. 2021, 8, 1355–1364. [Google Scholar] [CrossRef]
  48. Hirasawa, T.; Aoyama, K.; Tanimoto, T.; Ishihara, S.; Shichijo, S.; Ozawa, T.; Ohnishi, T.; Fujishiro, M.; Matsuo, K.; Fujisaki, J. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer 2018, 21, 653–660. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Paoletti, M.; Haut, J.; Plaza, J.; Plaza, A. A new deep convolutional neural network for fast hyperspectral image classification. ISPRS J. Photogramm. Remote Sens. 2018, 145, 120–147. [Google Scholar] [CrossRef]
  50. Devin, C.; Gupta, A.; Darrell, T.; Abbeel, P.; Levine, S. Learning modular neural network policies for multi-task and multi-robot transfer. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 2169–2176. [Google Scholar]
  51. Parashar, S.; Senthilnath, J.; Yang, X.-S. A novel bat algorithm fuzzy classifier approach for classification problems. Int. J. Artif. Intell. Soft Comput. 2017, 6, 108–128. [Google Scholar] [CrossRef]
  52. Laudani, A.; Lozito, G.M.; Riganti Fulginei, F.; Salvini, A. On training efficiency and computational costs of a feed forward neural network: A review. Comput. Intell. Neurosci. 2015, 2015, 818243. [Google Scholar] [CrossRef] [Green Version]
  53. Cui, Z.; Xue, F.; Cai, X.; Cao, Y.; Wang, G.-G.; Chen, J. Detection of malicious code variants based on deep learning. IEEE Trans. Ind. Inform. 2018, 14, 3187–3196. [Google Scholar] [CrossRef]
  54. Herrera, F.; Lozano, M.; Molina, D. Test Suite for the Special Issue of Soft Computing on Scalability of Evolutionary Algorithms and Other Metaheuristics for Large Scale Continuous Optimization Problems. Available online: http://150.214.190.154/sites/default/files/files/TematicWebSites/EAMHCO/functions1-19.pdf (accessed on 25 April 2022).
  55. Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization. Comput. Intell. Lab. Zhengzhou Univ. Zhengzhou China Technol. Rep. Nanyang Technol. Univ. Singap. 2013, 635, 490. [Google Scholar]
Figure 1. Structure of an Artificial Neural Network.
Figure 2. Structure of an Artificial Neural Network with dropout.
Figure 3. Schematic view of generating new pattern solutions.
Figure 4. Several typical convergence curves obtained by the NNA and DESNNA. (a) F2. (b) F9. (c) F11. (d) F14.
Table 1. Benchmark functions F1 to F11.
Function | Name | Definition
F1 | Shifted Sphere Function | $\sum_{i=1}^{D} z_i^2 + f\_bias$, $z = x - o$
F2 | Shifted Schwefel Problem 2.21 | $\max_i\{|z_i|, 1 \le i \le D\} + f\_bias$, $z = x - o$
F3 | Shifted Rosenbrock's Function | $\sum_{i=1}^{D-1}\left(100(z_i^2 - z_{i+1})^2 + (z_i - 1)^2\right) + f\_bias$, $z = x - o$
F4 | Shifted Rastrigin's Function | $\sum_{i=1}^{D}\left(z_i^2 - 10\cos(2\pi z_i) + 10\right) + f\_bias$, $z = x - o$
F5 | Shifted Griewank's Function | $\sum_{i=1}^{D}\frac{z_i^2}{4000} - \prod_{i=1}^{D}\cos\left(\frac{z_i}{\sqrt{i}}\right) + 1 + f\_bias$, $z = x - o$
F6 | Shifted Ackley's Function | $-20\exp\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} z_i^2}\right) - \exp\left(\frac{1}{D}\sum_{i=1}^{D}\cos(2\pi z_i)\right) + 20 + e + f\_bias$, $z = x - o$
F7 | Schwefel's Problem 2.22 | $\sum_{i=1}^{D}|x_i| + \prod_{i=1}^{D}|x_i|$
F8 | Schwefel's Problem 1.2 | $\sum_{i=1}^{D}\left(\sum_{j=1}^{i} x_j\right)^2$
F9 | Extended f10 | $\left(\sum_{i=1}^{D-1} f_{10}(x_i, x_{i+1})\right) + f_{10}(x_D, x_1)$, where $f_{10}(x, y) = (x^2 + y^2)^{0.25}\left(\sin^2\left(50(x^2 + y^2)^{0.1}\right) + 1\right)$
F10 | Bohachevsky | $\sum_{i=1}^{D-1}\left(x_i^2 + 2x_{i+1}^2 - 0.3\cos(3\pi x_i) - 0.4\cos(4\pi x_{i+1}) + 0.7\right)$
F11 | Schaffer | $\sum_{i=1}^{D-1}(x_i^2 + x_{i+1}^2)^{0.25}\left(\sin^2\left(50(x_i^2 + x_{i+1}^2)^{0.1}\right) + 1\right)$
Table 2. Hybrid composition functions.
Function | First Function | Second Function | Weight Factor
F12 | F9 | F1 | 0.25
F13 | F9 | F4 | 0.25
F14 | F5 | F1 | 0.5
F15 | F3 | F4 | 0.5
F16 | F9 | F1 | 0.75
F17 | F9 | F3 | 0.75
F18 | F9 | F4 | 0.75
Table 3. Properties of F1 to F18.
Function | Range | Optimum | Unimodal/Multimodal | Separable | Shifted | f_bias
F1 | [−100, 100]^D | 0 | U | Y | Y | −450
F2 | [−100, 100]^D | 0 | U | N | Y | −450
F3 | [−100, 100]^D | 0 | M | Y | Y | 390
F4 | [−5, 5]^D | 0 | M | Y | Y | −330
F5 | [−600, 600]^D | 0 | M | N | Y | −180
F6 | [−32, 32]^D | 0 | M | Y | Y | −140
F7 | [−10, 10]^D | 0 | U | Y | N |
F8 | [−65.536, 65.536]^D | 0 | U | N | N |
F9 | [−100, 100]^D | 0 | U | N | N |
F10 | [−15, 15]^D | 0 | U | Y | N |
F11 | [−100, 100]^D | 0 | U | Y | N |
F12 | [−100, 100]^D | 0 | U | N | Y | −450
F13 | [−5, 5]^D | 0 | M | N | Y | −330
F14 | [−100, 100]^D | 0 | U | N | Y | −630
F15 | [−10, 10]^D | 0 | M | Y | Y | 60
F16 | [−100, 100]^D | 0 | U | N | Y | −450
F17 | [−100, 100]^D | 0 | M | N | Y | 390
F18 | [−5, 5]^D | 0 | M | N | Y | −330
Table 4. Optimal values of user parameters used in the reported optimizers.
Methods | Parameters | Optimal Values
GA | N | 50
GA | Pc | 0.8
GA | Pm | 0.2
PSO | N | 50
PSO | C1, C2 | 2
PSO | w | 0.9
HS | N | 50
HS | HMCR | 0.95
HS | PAR | 0.3
CCLNNA | N | 50
NNA | N | 50
DESNNA | N | 50
DESNNA | rateOfSelect | 0.15
DESNNA | rateOfDropt | 0.10
Table 5. Experimental results obtained by the NNA and DESNNA.
Function | Methods | Best Error | Average Error | Worst Error | Error Standard Deviation
F1 | NNA | 5.684 × 10^−14 | 5.684 × 10^−14 | 3.411 × 10^−13 | 1.339 × 10^−13
F1 | DESNNA | 0 | 0 | 1.137 × 10^−13 | 5.971 × 10^−14
F2 | NNA | 7.209 × 10^−2 | 3.251 × 10^−1 | 1.425 × 10^0 | 2.901 × 10^−1
F2 | DESNNA | 2.626 × 10^−2 | 3.152 × 10^−1 | 1.010 × 10^0 | 2.389 × 10^−1
F3 | NNA | 4.822 × 10^1 | 5.860 × 10^1 | 2.094 × 10^2 | 3.953 × 10^1
F3 | DESNNA | 4.822 × 10^1 | 4.822 × 10^1 | 4.822 × 10^1 | 3.493 × 10^−13
F4 | NNA | 4.547 × 10^−13 | 1.825 × 10^0 | 7.960 × 10^0 | 2.653 × 10^0
F4 | DESNNA | 0 | 5.978 × 10^−1 | 4.975 × 10^0 | 1.470 × 10^0
F5 | NNA | 5.684 × 10^−14 | 5.754 × 10^−4 | 9.865 × 10^−3 | 2.213 × 10^−3
F5 | DESNNA | 0 | 0 | 5.684 × 10^−14 | 3.077 × 10^−14
F6 | NNA | 2.154 × 10^−11 | 7.012 × 10^−11 | 1.863 × 10^−10 | 3.857 × 10^−11
F6 | DESNNA | 5.684 × 10^−14 | 2.842 × 10^−14 | 1.705 × 10^−13 | 5.853 × 10^−14
F7 | NNA | 3.132 × 10^−12 | 3.177 × 10^−11 | 1.101 × 10^−10 | 3.063 × 10^−11
F7 | DESNNA | 6.335 × 10^−15 | 6.297 × 10^−14 | 5.688 × 10^−13 | 1.072 × 10^−13
F8 | NNA | 9.448 × 10^−4 | 5.364 × 10^−3 | 2.151 × 10^−2 | 5.002 × 10^−3
F8 | DESNNA | 7.039 × 10^−5 | 1.794 × 10^−3 | 1.380 × 10^−2 | 2.701 × 10^−3
F9 | NNA | 3.105 × 10^−1 | 4.330 × 10^0 | 2.096 × 10^1 | 4.678 × 10^0
F9 | DESNNA | 1.418 × 10^−2 | 3.460 × 10^0 | 1.369 × 10^1 | 4.206 × 10^0
F10 | NNA | 0 | 3.701 × 10^−17 | 2.220 × 10^−16 | 8.417 × 10^−17
F10 | DESNNA | 0 | 0 | 0 | 0
F11 | NNA | 2.143 × 10^−1 | 5.928 × 10^0 | 2.416 × 10^1 | 6.384 × 10^0
F11 | DESNNA | 1.787 × 10^−2 | 2.126 × 10^0 | 1.356 × 10^1 | 3.941 × 10^0
F12 | NNA | 4.151 × 10^−5 | 2.217 × 10^−2 | 6.596 × 10^−1 | 1.204 × 10^−1
F12 | DESNNA | 5.449 × 10^−10 | 1.710 × 10^−8 | 8.094 × 10^−8 | 1.962 × 10^−8
F13 | NNA | 1.906 × 10^−5 | 1.727 × 10^0 | 9.720 × 10^0 | 2.649 × 10^0
F13 | DESNNA | 2.314 × 10^−8 | 1.017 × 10^0 | 7.469 × 10^0 | 2.367 × 10^0
F14 | NNA | 2.274 × 10^−13 | 8.102 × 10^−3 | 2.061 × 10^−1 | 3.772 × 10^−2
F14 | DESNNA | 0 | 3.196 × 10^−3 | 5.157 × 10^−2 | 1.120 × 10^−2
F15 | NNA | 2.346 × 10^1 | 2.356 × 10^1 | 2.645 × 10^1 | 5.449 × 10^−1
F15 | DESNNA | 2.346 × 10^1 | 2.346 × 10^1 | 2.346 × 10^1 | 5.357 × 10^−10
F16 | NNA | 3.345 × 10^−2 | 2.657 × 10^0 | 8.910 × 10^0 | 2.854 × 10^0
F16 | DESNNA | 1.699 × 10^−5 | 2.869 × 10^−1 | 4.255 × 10^0 | 8.419 × 10^−1
F17 | NNA | 1.096 × 10^1 | 4.957 × 10^1 | 2.714 × 10^2 | 5.833 × 10^1
F17 | DESNNA | 1.061 × 10^1 | 2.680 × 10^1 | 1.434 × 10^2 | 3.733 × 10^1
F18 | NNA | 1.152 × 10^−2 | 2.101 × 10^0 | 1.293 × 10^1 | 3.218 × 10^0
F18 | DESNNA | 7.856 × 10^−6 | 1.070 × 10^0 | 1.092 × 10^1 | 2.881 × 10^0
Table 6. Experimental results obtained by the DESNNA and other methods.
Function | Methods | Best Error | Average Error | Worst Error | Error Standard Deviation
F1 | DESNNA | 0 | 0 | 1.137 × 10^−13 | 5.971 × 10^−14
F1 | CCLNNA | 7.135 × 10^−9 | 3.697 × 10^−8 | 1.215 × 10^−7 | 2.172 × 10^−8
F1 | TSA | 5.684 × 10^−14 | 5.684 × 10^−14 | 1.137 × 10^−13 | 5.382 × 10^−14
F1 | PSO | 1.137 × 10^−13 | 1.137 × 10^−13 | 5.116 × 10^−13 | 2.635 × 10^−13
F1 | GA | 1.503 × 10^−8 | 3.612 × 10^−8 | 1.002 × 10^−7 | 1.858 × 10^−8
F1 | HS | 1.694 × 10^3 | 2.893 × 10^3 | 3.857 × 10^3 | 5.336 × 10^2
F2 | DESNNA | 2.626 × 10^−2 | 3.152 × 10^−1 | 1.010 × 10^0 | 2.389 × 10^−1
F2 | CCLNNA | 8.691 × 10^−3 | 2.016 × 10^−2 | 3.789 × 10^−2 | 6.979 × 10^−3
F2 | TSA | 1.070 × 10^−8 | 1.568 × 10^−6 | 7.838 × 10^−6 | 2.323 × 10^−6
F2 | PSO | 2.019 × 10^1 | 2.019 × 10^1 | 2.696 × 10^1 | 3.218 × 10^0
F2 | GA | 1.367 × 10^0 | 2.145 × 10^0 | 3.068 × 10^0 | 3.789 × 10^−1
F2 | HS | 4.165 × 10^1 | 4.724 × 10^1 | 5.265 × 10^1 | 2.566 × 10^0
F3 | DESNNA | 4.822 × 10^1 | 4.822 × 10^1 | 4.822 × 10^1 | 3.493 × 10^−13
F3 | CCLNNA | 4.822 × 10^1 | 5.123 × 10^1 | 1.308 × 10^2 | 1.507 × 10^1
F3 | TSA | 4.841 × 10^1 | 4.863 × 10^1 | 4.882 × 10^1 | 1.484 × 10^−1
F3 | PSO | 2.430 × 10^10 | 2.430 × 10^10 | 3.757 × 10^10 | 7.454 × 10^9
F3 | GA | 4.822 × 10^1 | 4.825 × 10^1 | 4.861 × 10^1 | 8.374 × 10^−2
F3 | HS | 8.008 × 10^7 | 1.502 × 10^8 | 2.711 × 10^8 | 4.366 × 10^7
F4 | DESNNA | 0 | 5.978 × 10^−1 | 4.975 × 10^0 | 1.470 × 10^0
F4 | CCLNNA | 1.020 × 10^−8 | 3.317 × 10^−2 | 9.950 × 10^−1 | 1.817 × 10^−1
F4 | TSA | 2.270 × 10^2 | 3.014 × 10^2 | 3.827 × 10^2 | 4.299 × 10^1
F4 | PSO | 5.530 × 10^2 | 5.530 × 10^2 | 6.947 × 10^2 | 5.642 × 10^1
F4 | GA | 8.955 × 10^0 | 2.030 × 10^1 | 5.373 × 10^1 | 9.087 × 10^0
F4 | HS | 2.478 × 10^2 | 2.783 × 10^2 | 3.106 × 10^2 | 1.740 × 10^1
F5 | DESNNA | 0 | 0 | 5.684 × 10^−14 | 3.077 × 10^−14
F5 | CCLNNA | 3.530 × 10^−8 | 7.538 × 10^−3 | 5.867 × 10^−2 | 1.363 × 10^−2
F5 | TSA | 2.842 × 10^−14 | 3.489 × 10^−3 | 2.495 × 10^−2 | 6.073 × 10^−3
F5 | PSO | 2.170 × 10^−1 | 2.170 × 10^−1 | 9.131 × 10^−1 | 3.363 × 10^−1
F5 | GA | 3.800 × 10^−10 | 1.802 × 10^−3 | 3.680 × 10^−2 | 6.969 × 10^−3
F5 | HS | 1.897 × 10^1 | 3.000 × 10^1 | 3.834 × 10^1 | 4.877 × 10^0
F6 | DESNNA | 5.684 × 10^−14 | 2.842 × 10^−14 | 1.705 × 10^−13 | 5.853 × 10^−14
F6 | CCLNNA | 1.605 × 10^−5 | 3.366 × 10^−5 | 5.397 × 10^−5 | 7.631 × 10^−6
F6 | TSA | 8.527 × 10^−14 | 1.060 × 10^0 | 3.385 × 10^0 | 1.436 × 10^0
F6 | PSO | 1.919 × 10^−7 | 1.919 × 10^−7 | 1.809 × 10^−6 | 3.368 × 10^−7
F6 | GA | 9.199 × 10^−5 | 1.027 × 10^0 | 1.945 × 10^0 | 6.774 × 10^−1
F6 | HS | 7.334 × 10^0 | 9.207 × 10^0 | 1.034 × 10^1 | 6.722 × 10^−1
F7 | DESNNA | 6.335 × 10^−15 | 6.297 × 10^−14 | 5.688 × 10^−13 | 1.072 × 10^−13
F7 | CCLNNA | 6.837 × 10^−5 | 1.053 × 10^−4 | 1.493 × 10^−4 | 2.521 × 10^−5
F7 | TSA | 1.162 × 10^−132 | 2.264 × 10^−127 | 6.129 × 10^−126 | 1.117 × 10^−126
F7 | PSO | 1.097 × 10^−10 | 1.097 × 10^−10 | 1.058 × 10^−9 | 1.998 × 10^−10
F7 | GA | 4.951 × 10^−1 | 2.597 × 10^0 | 4.642 × 10^0 | 1.114 × 10^0
F7 | HS | 1.917 × 10^1 | 2.156 × 10^1 | 2.353 × 10^1 | 1.276 × 10^0
F8 | DESNNA | 7.039 × 10^−5 | 1.794 × 10^−3 | 1.380 × 10^−2 | 2.701 × 10^−3
F8 | CCLNNA | 3.661 × 10^−2 | 8.557 × 10^−2 | 1.540 × 10^−1 | 2.742 × 10^−2
F8 | TSA | 6.565 × 10^−58 | 1.529 × 10^−34 | 3.362 × 10^−33 | 6.436 × 10^−34
F8 | PSO | 3.630 × 10^4 | 3.630 × 10^4 | 4.899 × 10^4 | 5.672 × 10^3
F8 | GA | 9.505 × 10^−1 | 9.095 × 10^0 | 4.847 × 10^1 | 1.405 × 10^1
F8 | HS | 3.420 × 10^4 | 5.157 × 10^4 | 6.727 × 10^4 | 9.005 × 10^3
F9 | DESNNA | 1.418 × 10^−2 | 3.460 × 10^0 | 1.369 × 10^1 | 4.206 × 10^0
F9 | CCLNNA | 7.099 × 10^0 | 1.795 × 10^1 | 3.166 × 10^1 | 4.477 × 10^0
F9 | TSA | 2.168 × 10^1 | 5.648 × 10^1 | 1.543 × 10^2 | 2.840 × 10^1
F9 | PSO | 4.944 × 10^2 | 4.944 × 10^2 | 5.464 × 10^2 | 3.333 × 10^1
F9 | GA | 2.670 × 10^1 | 3.818 × 10^1 | 4.814 × 10^1 | 4.947 × 10^0
F9 | HS | 1.469 × 10^2 | 1.863 × 10^2 | 2.114 × 10^2 | 1.564 × 10^1
F10 | DESNNA | 0 | 0 | 0 | 0
F10 | CCLNNA | 1.485 × 10^−8 | 4.157 × 10^−8 | 1.051 × 10^−7 | 2.222 × 10^−8
F10 | TSA | 0 | 4.246 × 10^0 | 3.191 × 10^1 | 9.976 × 10^0
F10 | PSO | 3.892 × 10^3 | 3.892 × 10^3 | 5.628 × 10^3 | 7.097 × 10^2
F10 | GA | 5.249 × 10^0 | 8.968 × 10^0 | 1.727 × 10^1 | 2.900 × 10^0
F10 | HS | 1.519 × 10^2 | 1.987 × 10^2 | 2.723 × 10^2 | 3.068 × 10^1
F11 | DESNNA | 1.787 × 10^−2 | 2.126 × 10^0 | 1.356 × 10^1 | 3.941 × 10^0
F11 | CCLNNA | 8.918 × 10^0 | 1.739 × 10^1 | 3.008 × 10^1 | 4.385 × 10^0
F11 | TSA | 1.440 × 10^1 | 5.089 × 10^1 | 1.683 × 10^2 | 3.334 × 10^1
F11 | PSO | 4.653 × 10^2 | 4.653 × 10^2 | 5.113 × 10^2 | 2.339 × 10^1
F11 | GA | 2.475 × 10^1 | 3.564 × 10^1 | 4.380 × 10^1 | 4.903 × 10^0
F11 | HS | 1.603 × 10^2 | 1.807 × 10^2 | 2.132 × 10^2 | 1.307 × 10^1
F12 | DESNNA | 5.449 × 10^−10 | 1.710 × 10^−8 | 8.094 × 10^−8 | 1.962 × 10^−8
F12 | CCLNNA | 8.100 × 10^−2 | 2.453 × 10^−1 | 1.333 × 10^0 | 2.410 × 10^−1
F12 | TSA | 1.137 × 10^−13 | 2.020 × 10^0 | 1.651 × 10^1 | 4.570 × 10^0
F12 | PSO | 3.922 × 10^4 | 3.922 × 10^4 | 7.287 × 10^4 | 1.119 × 10^4
F12 | GA | 7.491 × 10^−2 | 4.669 × 10^0 | 1.343 × 10^1 | 3.597 × 10^0
F12 | HS | 7.270 × 10^2 | 1.231 × 10^3 | 2.018 × 10^3 | 3.087 × 10^2
F13 | DESNNA | 2.314 × 10^−8 | 1.017 × 10^0 | 7.469 × 10^0 | 2.367 × 10^0
F13 | CCLNNA | 1.724 × 10^−2 | 3.492 × 10^−1 | 2.526 × 10^0 | 6.498 × 10^−1
F13 | TSA | 1.577 × 10^2 | 2.325 × 10^2 | 3.012 × 10^2 | 3.723 × 10^1
F13 | PSO | 4.077 × 10^2 | 4.077 × 10^2 | 5.031 × 10^2 | 4.723 × 10^1
F13 | GA | 1.084 × 10^1 | 2.201 × 10^1 | 3.664 × 10^1 | 5.255 × 10^0
F13 | HS | 1.602 × 10^2 | 1.989 × 10^2 | 2.269 × 10^2 | 1.666 × 10^1
F14 | DESNNA | 0 | 3.196 × 10^−3 | 5.157 × 10^−2 | 1.120 × 10^−2
F14 | CCLNNA | 6.372 × 10^−8 | 2.275 × 10^−2 | 9.816 × 10^−2 | 2.461 × 10^−2
F14 | TSA | 2.274 × 10^−13 | 7.897 × 10^−3 | 4.276 × 10^−2 | 1.055 × 10^−2
F14 | PSO | 2.045 × 10^4 | 2.045 × 10^4 | 3.515 × 10^4 | 7.255 × 10^3
F14 | GA | 2.863 × 10^−7 | 1.402 × 10^−2 | 5.987 × 10^−2 | 1.993 × 10^−2
F14 | HS | 7.064 × 10^1 | 1.534 × 10^2 | 2.943 × 10^2 | 6.223 × 10^1
F15 | DESNNA | 2.346 × 10^1 | 2.346 × 10^1 | 2.346 × 10^1 | 5.357 × 10^−10
F15 | CCLNNA | 2.346 × 10^1 | 2.347 × 10^1 | 2.356 × 10^1 | 1.750 × 10^−2
F15 | TSA | 8.121 × 10^1 | 1.363 × 10^2 | 2.428 × 10^2 | 3.715 × 10^1
F15 | PSO | 6.650 × 10^5 | 6.650 × 10^5 | 1.519 × 10^6 | 3.700 × 10^5
F15 | GA | 2.546 × 10^1 | 4.732 × 10^1 | 1.409 × 10^2 | 2.594 × 10^1
F15 | HS | 1.258 × 10^3 | 1.565 × 10^3 | 1.953 × 10^3 | 2.054 × 10^2
F16 | DESNNA | 1.699 × 10^−5 | 2.869 × 10^−1 | 4.255 × 10^0 | 8.419 × 10^−1
F16 | CCLNNA | 5.691 × 10^0 | 1.168 × 10^1 | 2.231 × 10^1 | 4.565 × 10^0
F16 | TSA | 8.421 × 10^0 | 4.459 × 10^1 | 1.386 × 10^2 | 2.386 × 10^1
F16 | PSO | 6.533 × 10^3 | 6.533 × 10^3 | 1.763 × 10^4 | 4.735 × 10^3
F16 | GA | 1.568 × 10^1 | 2.582 × 10^1 | 3.621 × 10^1 | 5.827 × 10^0
F16 | HS | 1.653 × 10^2 | 2.057 × 10^2 | 2.295 × 10^2 | 1.711 × 10^1
F17 | DESNNA | 1.061 × 10^1 | 2.680 × 10^1 | 1.434 × 10^2 | 3.733 × 10^1
F17 | CCLNNA | 1.610 × 10^1 | 3.373 × 10^1 | 8.777 × 10^1 | 1.650 × 10^1
F17 | TSA | 2.165 × 10^1 | 6.243 × 10^1 | 4.272 × 10^2 | 7.386 × 10^1
F17 | PSO | 2.160 × 10^9 | 2.160 × 10^9 | 9.648 × 10^9 | 2.406 × 10^9
F17 | GA | 3.540 × 10^1 | 5.233 × 10^1 | 9.555 × 10^1 | 1.336 × 10^1
F17 | HS | 5.139 × 10^2 | 2.355 × 10^3 | 9.487 × 10^3 | 2.902 × 10^3
F18 | DESNNA | 7.856 × 10^−6 | 1.070 × 10^0 | 1.092 × 10^1 | 2.881 × 10^0
F18 | CCLNNA | 5.046 × 10^−1 | 2.065 × 10^0 | 4.341 × 10^0 | 1.014 × 10^0
F18 | TSA | 8.064 × 10^1 | 1.093 × 10^2 | 1.443 × 10^2 | 1.659 × 10^1
F18 | PSO | 1.737 × 10^2 | 1.737 × 10^2 | 2.260 × 10^2 | 1.931 × 10^1
F18 | GA | 2.264 × 10^1 | 3.461 × 10^1 | 4.626 × 10^1 | 6.337 × 10^0
F18 | HS | 8.716 × 10^1 | 9.495 × 10^1 | 1.038 × 10^2 | 4.256 × 10^0
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
