2.2.1. Simulated Annealing Algorithm
The simulated annealing (SA) algorithm, initially proposed by Metropolis, draws inspiration from the annealing process of solid materials and addresses general combinatorial optimization problems. This versatile optimization algorithm adopts an iterative solution strategy akin to the Monte Carlo method, introducing an acceptance probability that allows the solution process to escape locally optimal solutions and converge toward the globally optimal solution.
Figure 2 shows the flow chart of the simulated annealing algorithm. First, the model hyperparameters are set and initialized, and then the problem is solved iteratively. The physical annealing process consists of three key steps:
- (1)
Heating step: This phase intensifies the thermal movement of particles, disrupting the system’s equilibrium. With a sufficiently high temperature, the solid transitions into a liquid state, effectively eliminating the original system inhomogeneities.
- (2)
Isothermal step: At a constant temperature, the system’s state evolves in response to free energy, ultimately reaching equilibrium at the minimum free energy value.
- (3)
Cooling step: This phase decelerates the thermal motion of particles, reducing the system’s energy and yielding a crystalline structure.
When the simulated annealing algorithm is used, the main control parameters, such as the cooling rate q, initial temperature T0, terminal temperature Tend, and chain length L, need to be set. Its control core is the Metropolis criterion: if the objective optimization function is f(S), the result of the current solution is f(S1), the result of the new solution is f(S2), and the difference between the results is df = f(S2) − f(S1), then the Metropolis criterion is as follows:
If df < 0, then the new result is accepted with probability 1; otherwise, the new result is accepted with probability P = exp(−df/T). The cooling is performed using the cooling rate q, i.e., T = qT; if T falls below the terminal temperature Tend, the iteration is stopped and the current state is output; otherwise, the iteration continues. In CT-TDLAS 2D reconstruction, the simulated annealing algorithm solves directly for the temperature field T and the concentration field X as the unknown quantities, hence its name as a nonlinear solving method.
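The Metropolis acceptance rule and cooling loop described above can be sketched as follows; the function name, the default neighbor move, and the example objective are illustrative, not part of the original text:

```python
import math
import random

def simulated_annealing(f, s0, T0=100.0, T_end=1e-3, q=0.95, L=50, seed=0):
    """Minimize f starting from state s0 using the Metropolis criterion.

    T0: initial temperature, T_end: terminal temperature,
    q: cooling rate (T = q*T), L: chain length per temperature.
    """
    rng = random.Random(seed)
    # Illustrative neighbor move: small random perturbation of a scalar state.
    neighbor = lambda s: s + rng.uniform(-1.0, 1.0)
    s, T = s0, T0
    while T > T_end:
        for _ in range(L):                       # isothermal step of length L
            s_new = neighbor(s)
            df = f(s_new) - f(s)                 # df = f(S2) - f(S1)
            # Metropolis criterion: accept if better, else with prob exp(-df/T)
            if df < 0 or rng.random() < math.exp(-df / T):
                s = s_new
        T = q * T                                # cooling step
    return s

# Example: minimize a 1-D quadratic with its minimum at x = 3.
best = simulated_annealing(lambda x: (x - 3.0) ** 2, s0=10.0)
```

At high T the exponential acceptance probability lets the chain climb out of local minima; as T shrinks, the rule degenerates to greedy descent, which is the escape mechanism the text describes.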
2.2.2. Harris’s Hawk Optimization Algorithm
Harris’s Hawks Optimization (HHO) is a population-based optimization algorithm proposed by Heidari et al., which simulates the predatory behavior of the Harris’s Hawk (a raptor of southern Arizona, USA) and is divided into an exploration phase, an exploration-to-exploitation transition phase, and an exploitation phase [22].
- (1)
Exploration phase
The Harris’s Hawk relies on its keen eyesight to track and spot prey, although there are instances where the prey remains elusive. Consequently, the hawk dedicates hours to waiting, observing, and monitoring the desert landscape. Within the algorithmic framework, the Harris’s Hawk is conceptualized across various scenarios, with the iterative search for prey representing the optimal or near-optimal strategy for each state. These hawks establish random perches in trees within their territory and choose between two perching strategies with equal probability, governed by the random number q: for q < 0.5, the hawk selects a perch considering the positions of the other family members and the prey; conversely, for q ≥ 0.5, the hawk randomly perches on a tall tree, as modeled in the algorithm.
where X(t + 1) is the location of the Harris’s Hawk in the next iteration, Xrabbit(t) represents the location of the optimal solution (i.e., the prey) in the current state, X(t) is the location of the Harris’s Hawk in the current iteration, r1, r2, r3, r4, and q are random numbers in (0, 1), LB and UB represent the lower and upper bounds of the algorithm’s unknowns, Xrand is the randomly chosen location of a Harris’s Hawk in the current iteration, and Xm(t) is the average location of the Harris’s Hawks in the current iteration.
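The exploration update described above (its equations were lost from the extracted text) can be sketched per the standard HHO formulation of Heidari et al.; the function name and array shapes here are illustrative:

```python
import numpy as np

def exploration_step(X, X_rabbit, lb, ub, rng):
    """One HHO exploration update for the whole population (standard
    HHO formulation; reproduced from Heidari et al., not verbatim from
    the paper's omitted equation).

    X        : (n_hawks, dim) current positions
    X_rabbit : (dim,) best (prey) position so far
    lb, ub   : (dim,) lower/upper bounds of the unknowns
    """
    n, dim = X.shape
    X_mean = X.mean(axis=0)                       # X_m(t)
    X_new = np.empty_like(X)
    for i in range(n):
        q, r1, r2, r3, r4 = rng.random(5)         # random numbers in (0, 1)
        if q >= 0.5:
            # Perch on a random tall tree: move relative to a random hawk.
            X_rand = X[rng.integers(n)]
            X_new[i] = X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
        else:
            # Perch based on the family's and the prey's positions.
            X_new[i] = (X_rabbit - X_mean) - r3 * (lb + r4 * (ub - lb))
    return np.clip(X_new, lb, ub)                 # keep hawks inside [LB, UB]

# Illustrative usage with a 6-hawk population in 3 dimensions.
rng = np.random.default_rng(0)
X0 = rng.uniform(-5.0, 5.0, (6, 3))
X1 = exploration_step(X0, np.zeros(3), np.full(3, -5.0), np.full(3, 5.0), rng)
```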
- (2)
Transition from exploration to exploitation
The HHO algorithm switches from ‘exploration’ to ‘exploitation’ according to the escape energy of the ‘prey’. As the prey flees, its energy decreases considerably. The escape energy of the prey is calculated as follows:
E = 2E0(1 − t/T)
where E denotes the escape energy of the prey (rabbit), T is the maximum number of iterations of the iterative process, t is the current iteration number, and E0 is the initial energy of the prey.
The initial energy E0 varies randomly in the iterative process within the range (−1, 1). When the value of the initial energy decreases from 0 to −1, the prey is physically weakening, while when it increases from 0 to 1, the rabbit strengthens its active state. Overall, the escape energy decreases during the iterative process. When |E| ≥ 1, the Harris’s Hawk performs a global search to determine the coordinates where the rabbit is located, and the algorithm therefore enters the exploration phase; when |E| < 1, the algorithm searches the neighborhood of the current solution in the exploitation phase. In summary, E controls the switch between exploration and exploitation.
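The escape-energy schedule and the phase switch it drives can be sketched as follows (function names are illustrative):

```python
def escape_energy(E0, t, T):
    """Escape energy E = 2*E0*(1 - t/T), with E0 drawn from (-1, 1)."""
    return 2.0 * E0 * (1.0 - t / T)

def phase(E):
    """|E| >= 1 -> exploration (global search); |E| < 1 -> exploitation."""
    return "exploration" if abs(E) >= 1.0 else "exploitation"

# |E| <= 2*(1 - t/T) shrinks toward 0 as t -> T, so early iterations can
# still explore while late iterations always exploit.
E_early = escape_energy(E0=0.9, t=5, T=100)    # 2 * 0.9 * 0.95 = 1.71
E_late = escape_energy(E0=0.9, t=95, T=100)    # 2 * 0.9 * 0.05 = 0.09
```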
- (3)
Exploitation stage
In this stage, the predator attacks the target prey identified in the previous stage. However, the prey usually manages to escape. Therefore, different types of chasing behavior may arise in real-life environments. Based on the different predation strategies of Harris’s hawks and the behavioral patterns of rabbits, four possible strategies for simulating the predation phase are proposed.
The prey always tries to escape from danger. Let r be the chance of the prey escaping before a surprise attack, with success and failure equally likely; r ≥ 0.5 represents a failure to escape. Regardless of what the prey does, the Harris’s Hawks always encircle it, gently or aggressively, from different directions, depending on the remaining physical strength of the prey. The hawks hunt intelligently, gradually approaching the prey’s position based on their judgment of the current situation and cooperating with their companions to make the kill; they then intensify the encirclement so that the exhausted prey can be easily captured. The parameter E models this choice of soft or hard siege depending on the situation: when |E| ≥ 0.5, the soft siege is performed; when |E| < 0.5, the hard siege is performed.
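The selection among the four resulting strategies can be written as a small dispatch on r and |E| (the function name and strategy labels are illustrative):

```python
def siege_strategy(E, r):
    """Select among the four HHO exploitation strategies:
    r >= 0.5 -> the prey fails to escape (plain siege),
    r <  0.5 -> the prey may escape (siege with progressive rapid dives);
    |E| >= 0.5 -> soft siege, |E| < 0.5 -> hard siege.
    """
    soft = abs(E) >= 0.5
    if r >= 0.5:
        return "soft siege" if soft else "hard siege"
    return ("soft siege with progressive rapid dives" if soft
            else "hard siege with progressive rapid dives")
```

For example, `siege_strategy(0.7, 0.6)` returns `"soft siege"`: the prey fails to escape but still has enough energy (|E| ≥ 0.5) to warrant a gentle encirclement.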
The primary advantage of the Harris’s Hawks Optimization (HHO) algorithm is its independence from external parameters during the optimization process, allowing it to accurately identify the optimal solution for the objective function. The steps for optical path optimization based on the HHO algorithm are as follows:
- (1)
Initialization: Define initial parameters such as the population size, number of iterations, and the objective function. Randomly generate variables and calculate the objective value of the initial solution. The algorithm then iterates, generating random variables again to update the global optimal solution and storing iteration data for subsequent comparison.
- (2)
Exploration: Update the random light path coordinate positions according to the algorithm’s rules and perturb these coordinates based on the computation’s scale.
- (3)
Exploitation: Perform hard and soft sieges as required, updating the coordinate positions and target values in each iteration according to the computational formulas.
- (4)
Progressive Fast Swooping Siege Strategy: Update the target value and corresponding coordinates as per the formula. At the end of each iteration, update the global best solution, recording the best solution for each generation. Repeat the iterations for the specified number of times to ultimately obtain the optimal optical path and the highest reconstruction accuracy.
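The steps above can be condensed into one loop. The sketch below is a simplified version under stated assumptions: it uses the standard HHO update formulas of Heidari et al., omits the progressive rapid-dive variants for brevity, and the function name and the sphere-function example are illustrative:

```python
import numpy as np

def hho_minimize(obj, lb, ub, n_hawks=20, n_iter=200, seed=0):
    """Simplified HHO loop (soft/hard sieges only; progressive rapid
    dives omitted). `obj` maps a (dim,) vector to a scalar."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = rng.uniform(lb, ub, (n_hawks, lb.size))      # (1) initialization
    fit = np.apply_along_axis(obj, 1, X)
    best = X[fit.argmin()].copy()
    for t in range(n_iter):
        E0 = rng.uniform(-1, 1, n_hawks)
        E = 2 * E0 * (1 - t / n_iter)                # escape energy
        for i in range(n_hawks):
            if abs(E[i]) >= 1:                       # (2) exploration
                if rng.random() >= 0.5:
                    Xr = X[rng.integers(n_hawks)]
                    X[i] = Xr - rng.random() * np.abs(
                        Xr - 2 * rng.random() * X[i])
                else:
                    X[i] = (best - X.mean(0)) - rng.random() * (
                        lb + rng.random() * (ub - lb))
            elif abs(E[i]) >= 0.5:                   # (3) soft siege
                J = 2 * (1 - rng.random())           # prey jump strength
                X[i] = (best - X[i]) - E[i] * np.abs(J * best - X[i])
            else:                                    # (3) hard siege
                X[i] = best - E[i] * np.abs(best - X[i])
            X[i] = np.clip(X[i], lb, ub)
            if obj(X[i]) < obj(best):                # (4) update global best
                best = X[i].copy()
    return best

# Example: 2-D sphere function with its minimum at the origin.
best = hho_minimize(lambda x: float(np.sum(x ** 2)), lb=[-5, -5], ub=[5, 5])
```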
For the iteration of the whole model, which in this simulation is essentially a regression problem within the optimization model, the loss function L is as follows:
where N is the total number of grids after the one-dimensional unfolding of a single temperature matrix, Tm is the temperature calculated at each iteration of the optimization algorithm, and Tr is the actual set temperature field.
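Since the loss formula itself is not reproduced in the extracted text, the sketch below assumes a root-mean-square form over the N unfolded grid temperatures; the function name and the assumed form are illustrative only:

```python
import numpy as np

def reconstruction_loss(T_m, T_r):
    """Loss over the N unfolded grid temperatures. The exact form is not
    shown in the text; a root-mean-square error is assumed here.

    T_m : temperatures from the current iterate of the optimizer
    T_r : the actual set temperature field
    """
    T_m = np.ravel(T_m)                  # one-dimensional unfolding, length N
    T_r = np.ravel(T_r)
    N = T_r.size
    return float(np.sqrt(np.sum((T_m - T_r) ** 2) / N))

# Illustrative 2x2 temperature matrices (values in kelvin are made up).
loss = reconstruction_loss([[300.0, 310.0], [305.0, 320.0]],
                           [[300.0, 312.0], [303.0, 318.0]])
```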