### *2.5. Design of the BPNN–PID Controller*

In this study, a BPNN–PID controller was designed according to the control requirements of the variable-rate liquid fertilizer application control system. Figure 5 shows a flow chart of the BPNN–PID control system.

**Figure 5.** BPNN–PID control system flow chart.

The BPNN is a three-layer feedforward network comprising an input layer, a hidden layer, and an output layer. The numbers of nodes in the input and output layers are determined by the dimensions of the input and output vectors, respectively. The hidden layer plays a significant role in both the capability and the structure of the network [21].

The BPNN–PID controller used these three layers to define neurons with proportional, integral, and derivative functions, integrating the tuning of the PID controller into the network. Through an unsupervised learning mode and online self-learning, the parameters could be adjusted according to the control effect to ensure that the system performed well.

The first layer was the input layer, whose input was the preset fertilizer demand. The second layer was the hidden layer, whose input and output were as follows (Equation (10)):

$$\begin{cases} net\_i^{(2)}(k) = \sum\_{j=1}^{3} \theta\_{ij}^{(2)} N\_j^{(1)}(k) \\\ N\_i^{(2)}(k) = f\left[net\_i^{(2)}(k)\right] \\\ (i = 1, 2, \dots, 9) \end{cases} \tag{10}$$

where $\theta\_{ij}^{(2)}$ is the hidden-layer weight factor and *f*[·] is the hidden-layer activation function, for which $f(x) = \tanh(x)$ was selected.

The third layer was the output layer, and its input and output are given by Equation (11):

$$\begin{cases} net\_c^{(3)}(k) = \sum\_{i=1}^{9} \theta\_{ci}^{(3)} N\_i^{(2)}(k) \\ N\_c^{(3)}(k) = h \left[ net\_c^{(3)}(k) \right] \\ \quad (c = 1, 2, 3) \\ N\_1^{(3)}(k) = k\_p \\ N\_2^{(3)}(k) = k\_i \\ N\_3^{(3)}(k) = k\_d \end{cases} \tag{11}$$

where $\theta\_{ci}^{(3)}$ is the output-layer weight factor and *h*[·] is the output-layer activation function, for which $h(x) = e^x/(e^x + e^{-x})$ was selected.
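As a minimal Python sketch of the forward pass in Equations (10) and (11), the following assumes a 3–9–3 network whose three outputs are read as the PID gains; all function names and weight values here are illustrative, not the authors' implementation.

```python
import math

N_IN, N_HID, N_OUT = 3, 9, 3  # assumed 3-9-3 topology

def f(x):
    # Hidden-layer activation, f(x) = tanh(x)
    return math.tanh(x)

def h(x):
    # Output-layer activation, h(x) = e^x / (e^x + e^-x), range (0, 1)
    return math.exp(x) / (math.exp(x) + math.exp(-x))

def forward(inputs, w_hidden, w_out):
    """inputs: the N_j^(1); w_hidden: N_HID x N_IN; w_out: N_OUT x N_HID."""
    # Equation (10): hidden-layer net input and output
    net_hid = [sum(w_hidden[i][j] * inputs[j] for j in range(N_IN))
               for i in range(N_HID)]
    out_hid = [f(v) for v in net_hid]
    # Equation (11): output-layer net input and output -> Kp, Ki, Kd
    net_out = [sum(w_out[c][i] * out_hid[i] for i in range(N_HID))
               for c in range(N_OUT)]
    kp, ki, kd = (h(v) for v in net_out)
    return kp, ki, kd

# Example with small uniform weights (illustrative values only)
w_hidden = [[0.1] * N_IN for _ in range(N_HID)]
w_out = [[0.1] * N_HID for _ in range(N_OUT)]
kp, ki, kd = forward([1.0, 0.5, -0.5], w_hidden, w_out)
```

Because *h*(·) maps into (0, 1), the network can only emit non-negative gains, which is one common reason this activation is chosen for the output layer.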

After the BPNN–PID controller output the parameters *KP*, *KI*, and *KD*, these were substituted into the incremental PID expression and calculated. Incremental PID control was used because it memorizes and maintains the system state at previous moments, minimizing the impact of possible errors. Therefore, the optimal control signal was obtained using incremental PID, as follows (Equation (12)):

$$\begin{cases} \Delta u(k) = K\_P [e(k) - e(k-1)] + K\_I e(k) \\ \qquad \qquad + K\_D [e(k) - 2e(k-1) + e(k-2)] \\ u(k) = u(k-1) + \Delta u(k) \end{cases} \tag{12}$$

where Δ*u*(*k*) is the control signal increment at time *k*; *KP* is the proportional factor, *KI* the integral factor, and *KD* the derivative factor.
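The incremental law in Equation (12) can be sketched directly; the gain values below are arbitrary placeholders, not tuned values from the study.

```python
def pid_increment(kp, ki, kd, e, e1, e2):
    """Equation (12) increment; e, e1, e2 are errors at times k, k-1, k-2."""
    return (kp * (e - e1)              # proportional term: error change
            + ki * e                   # integral term: current error
            + kd * (e - 2 * e1 + e2))  # derivative term: second difference

# u(k) = u(k-1) + Δu(k); illustrative gains and errors
u_prev = 0.0
u = u_prev + pid_increment(kp=1.2, ki=0.5, kd=0.05, e=0.4, e1=0.5, e2=0.7)
```

Because only the increment is computed, the controller needs no accumulated error sum, which is why the text notes that it only has to retain the state of the last few moments.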

### *2.6. Neural Network PID Control Algorithm Optimized by Genetic Algorithm*

The training procedure of the BPNN mainly involves constantly adjusting the weight factors of every network layer, then using these weights to compute the optimal control parameters. Hence, the BPNN must learn and continually update the weight matrix of every network layer.

#### 2.6.1. Updating the BPNN Weight Factors

Following the BPNN learning method, we define the objective cost function *L*:

$$L = \frac{1}{2}\varepsilon^2(k) = \frac{1}{2}[d(k) - O(k)]^2\tag{13}$$

Once the objective cost function *L* was determined, a search was performed in the direction of the negative gradient. The objective was to minimize *L*, with an inertial term added to accelerate the convergence of the search to the global minimum, as follows:

$$
\Delta\theta\_{ci}^{(3)}(k+1) = \beta\Delta\theta\_{ci}^{(3)}(k) - \eta \frac{\partial L(k)}{\partial\theta\_{ci}^{(3)}}\tag{14}
$$

where *η* is the learning rate and *β* is the inertia coefficient.

Thus, the update formula for the output-layer weight factor of the BPNN was further derived as:

$$\begin{cases} \Delta \theta\_{ci}^{(3)}(k+1) = \beta \Delta \theta\_{ci}^{(3)}(k) + \eta \gamma\_c^{(3)} N\_i^{(2)}(k) \\ \gamma\_c^{(3)} = e(k+1)\, \operatorname{sgn}\!\left( \frac{\partial y(k+1)}{\partial u(k)} \right) \cdot \frac{\partial u(k)}{\partial N\_c^{(3)}(k)} \cdot h'\!\left[ net\_c^{(3)}(k) \right] \\ \qquad (c = 1, 2, 3) \end{cases} \tag{15}$$

From the above calculation, the update formula for the hidden-layer weight factor was obtained as follows:

$$\begin{cases} \Delta \theta\_{ij}^{(2)}(k+1) = \beta \Delta \theta\_{ij}^{(2)}(k) + \eta \gamma\_i^{(2)} N\_j^{(1)}(k) \\ \qquad \gamma\_i^{(2)} = f'\!\left[net\_i^{(2)}(k)\right] \cdot \sum\_{c=1}^{3} \gamma\_c^{(3)} \theta\_{ci}^{(3)}(k) \\ \qquad \quad (i = 1, 2, \dots, 9) \end{cases} \tag{16}$$

where we set:

$$h'(x) = 2h(x)[1 - h(x)]; \quad f'(x) = 1 - f^2(x) \tag{17}$$
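A minimal sketch of the momentum-based updates in Equations (14)–(16) follows; the learning rate, inertia factor, and all numeric inputs are illustrative assumptions, and only a single weight is updated for clarity.

```python
import math

def update_output_weight(theta, d_theta_prev, gamma_c, n_i, eta=0.02, beta=0.9):
    """Equation (15): Δθ(k+1) = β·Δθ(k) + η·γ_c^(3)·N_i^(2)(k)."""
    d_theta = beta * d_theta_prev + eta * gamma_c * n_i
    return theta + d_theta, d_theta

def hidden_delta(gammas_out, thetas_out_col, net_i):
    """Equation (16): γ_i^(2) = f'[net_i^(2)] · Σ_c γ_c^(3)·θ_ci^(3)."""
    f_prime = 1.0 - math.tanh(net_i) ** 2  # f'(x) = 1 - f²(x) for f = tanh
    return f_prime * sum(g * t for g, t in zip(gammas_out, thetas_out_col))

# One illustrative update step (all values hypothetical)
theta, d_theta = update_output_weight(theta=0.1, d_theta_prev=0.0,
                                      gamma_c=0.3, n_i=0.5)
gamma_hidden = hidden_delta([0.3, 0.2, 0.1], [0.5, 0.4, 0.3], 0.1)
```

The inertia term β·Δθ(k) simply re-applies a fraction of the previous step, which smooths the gradient search as described above Equation (14).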

Therefore, the BPNN–PID control procedure may be summarized as follows: initialize the parameters of the three-layer BPNN, together with the inertia factor and learning rate; input the measured liquid fertilizer flow into the algorithm and process the resulting error value with the BPNN; calculate the input and output of every layer via Equations (10) and (11) to obtain the three PID parameters *kp*, *ki*, and *kd*; compute the output control signal *u*(*k*) from Equation (12); and continuously update *u*(*k*) and the network weight factors on the basis of the updated error value. A schematic of the control strategy is displayed in Figure 6.
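The summarized loop can be sketched end to end; here the BPNN is stubbed out as a function returning fixed gains, and the plant is a toy first-order lag, so every numeric value is a placeholder rather than the fertilizer system's model.

```python
def bpnn_gains(error):
    # Stand-in for the BPNN forward pass; a real controller would
    # recompute these gains online from the network outputs.
    return 0.8, 0.3, 0.05

setpoint = 1.0        # preset fertilizer flow (arbitrary units)
y, u = 0.0, 0.0       # plant output and control signal
e1 = e2 = 0.0         # errors at times k-1 and k-2
for _ in range(50):
    e = setpoint - y
    kp, ki, kd = bpnn_gains(e)
    # Equation (12): incremental PID update of the control signal
    u += kp * (e - e1) + ki * e + kd * (e - 2 * e1 + e2)
    e2, e1 = e1, e
    y += 0.5 * (u - y)  # toy first-order plant response
```

Under these assumed gains and plant, the output settles near the setpoint within a few dozen steps, mirroring the closed loop of Figure 6 with the learning step omitted.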

**Figure 6.** Control schematic diagram of the BPNN–PID controller.

#### 2.6.2. BPNN–PID Algorithm Optimized Using a Genetic Algorithm

A genetic algorithm screens individuals based on a chosen fitness function through selection, crossover, and mutation within the genetic process, retaining individuals with good fitness values and eliminating poorly adapted ones. Each new generation inherits information from the previous generation and outperforms it. This loop is repeated until the termination condition is met.
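One generation of this loop can be sketched as follows for tuning a (Kp, Ki, Kd) triple; the fitness function, population size, and operators are all illustrative assumptions, not the study's actual GA configuration.

```python
import random

random.seed(0)  # reproducible illustration

def fitness(ind):
    # Illustrative fitness: prefer gains near a notional target (1.0, 0.5, 0.1)
    kp, ki, kd = ind
    return -((kp - 1.0) ** 2 + (ki - 0.5) ** 2 + (kd - 0.1) ** 2)

def evolve(pop, mut_rate=0.1):
    pop = sorted(pop, key=fitness, reverse=True)
    survivors = pop[: len(pop) // 2]                  # selection: keep fitter half
    children = []
    while len(survivors) + len(children) < len(pop):
        a, b = random.sample(survivors, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover: arithmetic mean
        if random.random() < mut_rate:                # mutation: small perturbation
            child[random.randrange(3)] += random.uniform(-0.1, 0.1)
        children.append(child)
    return survivors + children

pop = [[random.uniform(0, 2), random.uniform(0, 1), random.uniform(0, 0.5)]
       for _ in range(10)]
for _ in range(30):                                   # loop until condition met
    pop = evolve(pop)
best = max(pop, key=fitness)
```

Because the fitter half survives unchanged each generation, the best fitness is non-decreasing, which is the "new generation is better than the former" property the text describes.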
