*3.4. Genetic Algorithm-Back Propagation (GA-BP)*

The Back Propagation (BP) neural network, a classic Artificial Neural Network (ANN) model, was first proposed by Hecht-Nielsen et al. [51]. The network comprises an input layer, one or more hidden layers, and an output layer, with neurons in adjacent layers connected to one another. Each neuron's output depends on its input values, activation function, and threshold. Training a BP neural network consists of two steps: forward propagation of information and backward propagation of errors. Although the BP neural network has strong self-learning, adaptive, and self-organizing capabilities and can effectively handle nonlinear problems, it has several limitations. First, reducing error and improving accuracy requires selecting an appropriate number of hidden-layer neurons, yet there is no clear method for this selection. Second, the BP neural network generates its initial weights and thresholds randomly, which makes the adaptive and global approximation processes time-consuming and slows the network's convergence. Last, its use of gradient descent often causes it to become trapped in local minima.
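The two training steps can be sketched as follows. This is a minimal illustration, not the network used in the study: the one-hidden-layer architecture, sigmoid activation, learning rate, and XOR-style toy data are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (XOR-style, illustrative only)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights and thresholds (biases) -- the very
# randomness the text identifies as a source of slow convergence.
n_in, n_hidden, n_out = 2, 4, 1
W1 = rng.normal(0, 1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 1, (n_hidden, n_out))
b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(5000):
    # Step 1: forward propagation of information
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))

    # Step 2: backward propagation of errors
    # (gradient of the mean-squared error through each layer)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

Because gradient descent only follows the local error surface from whatever random starting point it is given, a different seed can land the same network in a different (possibly poor) local minimum, which is the weakness GA-BP targets.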

The Genetic Algorithm (GA) is a probabilistic global optimization search method based on the principles of biological inheritance and evolution [52]. The GA comprises three main operations: (1) Selection: an individual's probability of entering the next-generation population is determined by its fitness value; the higher the fitness, the greater the chance of being inherited. (2) Crossover: the key operation of the algorithm, in which two individuals selected from the population exchange a portion of their genes to produce fitter individuals in the new generation. (3) Mutation: an individual is randomly selected from the population and a mutation is performed at a certain locus of its chromosome, potentially producing a fitter individual. Combining the crossover and mutation operations balances exploitation of good solutions with exploration of the search space. The GA offers global search and parallel computation, but it lacks learning ability. Applying the GA to optimize the BP neural network therefore combines the GA's global search with BP's learning and nonlinear mapping abilities, and as a result the network's output accuracy improves.
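The three GA operations above, applied as in GA-BP to search for good initial network weights and thresholds, might be sketched as below. The population size, crossover and mutation rates, elitism step, and tiny fitness network are illustrative assumptions, not parameters from the original study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data and network shape; each chromosome encodes one full set of
# BP weights and thresholds as a flat real-valued vector.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
n_in, n_hidden, n_out = 2, 3, 1
n_genes = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(chrom):
    """Decode a chromosome into weights/thresholds and return network error."""
    i = 0
    W1 = chrom[i:i + n_in * n_hidden].reshape(n_in, n_hidden)
    i += n_in * n_hidden
    b1 = chrom[i:i + n_hidden]
    i += n_hidden
    W2 = chrom[i:i + n_hidden * n_out].reshape(n_hidden, n_out)
    i += n_hidden * n_out
    b2 = chrom[i:]
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return np.mean((out - y) ** 2)

pop = rng.normal(0, 1, (30, n_genes))
history = []
for _ in range(100):
    errors = np.array([mse(c) for c in pop])
    history.append(errors.min())
    fitness = 1.0 / (1e-9 + errors)  # lower error -> higher fitness

    # (1) Selection: fitness-proportional (roulette-wheel) sampling
    probs = fitness / fitness.sum()
    parents = pop[rng.choice(len(pop), size=len(pop), p=probs)]

    # (2) Crossover: pairs exchange the tail of their gene strings
    children = parents.copy()
    for a, b in zip(range(0, len(pop), 2), range(1, len(pop), 2)):
        if rng.random() < 0.8:
            point = rng.integers(1, n_genes)
            children[a, point:], children[b, point:] = (
                parents[b, point:].copy(), parents[a, point:].copy())

    # (3) Mutation: perturb a random locus of some chromosomes
    for c in children:
        if rng.random() < 0.1:
            c[rng.integers(n_genes)] += rng.normal(0, 0.5)

    # Elitism (an added assumption): keep the best individual unchanged
    children[0] = pop[np.argmin(errors)]
    pop = children

best = pop[np.argmin([mse(c) for c in pop])]
```

In the full GA-BP scheme, the chromosome `best` would then serve as the BP network's initial weights and thresholds, after which ordinary gradient-descent training proceeds from that globally searched starting point.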
