This section outlines the integration of a Dual-Stream Multi-Dependency Graph Neural Network (DMGNN) with the Groupers and Moray Eels Optimization (GMEO) algorithm for optimizing PID controller parameters. GMEO is used to optimize the PID gains (K_p, K_i, and K_d), while the DMGNN predicts and locally adjusts these parameters to enhance performance. GMEO explores large solution spaces, maintains diversity to avoid local optima, and handles nonlinear systems, improving PID controller performance by enhancing stability, reducing overshoot, and optimizing response time. DMGNN further refines the optimization by capturing complex dependencies and learning both global and local patterns, which accelerates convergence and improves performance in dynamic systems such as buck-boost converters. Combining GMEO's global search with DMGNN's local adjustments optimizes the PID parameters more efficiently, improving system stability, response time, and adaptability while ensuring faster convergence in complex dynamic systems.
Figure 3 depicts the flowchart of the GMEO-DMGNN approach.
4.1. Optimization Using Groupers and Moray Eels (GMEO)
In this section, the GMEO algorithm is described [30] and utilized to optimize the PID controller gains K_p, K_i, and K_d. GMEO offers a robust global search mechanism, efficiently optimizing the PID parameters to enhance performance in nonlinear dynamic systems and ensuring improved stability, faster response, and better adaptability in buck-boost converters. It increases overall system stability and performance by improving the PID controller's capacity to adjust to changing operating conditions, lowering overshoot, settling time, and steady-state error. GMEO was chosen for its ability to handle the complexities of nonlinear dynamic systems through a balanced global search, optimizing the PID parameters for improved control of buck-boost converters.
Step 1: Initialization
Set the input variables to their initial values. In this instance, the input variables are the PID parameters, which are specified as K_p, K_i, and K_d.
Step 2: Random Generation
In matrix form, the input variables were generated at random:

\[ R = \begin{bmatrix} P_{1,1} & \cdots & P_{1,n} \\ \vdots & \ddots & \vdots \\ P_{m,1} & \cdots & P_{m,n} \end{bmatrix} \]

where R indicates the random generation, P indicates the system parameters, and n indicates the count of decision variables.
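The random generation step can be sketched in Python as follows; the population size and the bounds below are illustrative assumptions, not values taken from the paper:

```python
import random

def random_population(m, n, lb, ub, seed=0):
    """Generate an m-by-n matrix of candidate solutions: m candidate
    PID parameter sets, each with n decision variables (here n = 3,
    for Kp, Ki, Kd), drawn uniformly from [lb, ub]."""
    rng = random.Random(seed)
    return [[lb + rng.random() * (ub - lb) for _ in range(n)]
            for _ in range(m)]

population = random_population(m=5, n=3, lb=0.0, ub=10.0)
```

Fixing the seed makes each optimization run reproducible, which is useful when comparing controller tunings.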
Step 3: Fitness Function
The fitness was evaluated using the Integral of Time-Weighted Absolute Error (ITAE):

\[ \mathrm{ITAE} = \int_{0}^{T} t \, \lvert e(t) \rvert \, dt \]

where ITAE refers to the Integral of Time-Weighted Absolute Error, t specifies the time variable, and e(t) specifies the error signal at time t.
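The ITAE criterion can be approximated numerically from sampled error values; a minimal sketch using a Riemann sum:

```python
def itae(times, errors):
    """Approximate ITAE = integral of t * |e(t)| dt with a Riemann sum
    over sampled (t, e(t)) pairs."""
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        total += times[i] * abs(errors[i]) * dt
    return total
```

Because the error is weighted by time, a response that settles quickly accumulates little late-time penalty, which is why ITAE favors fast settling with small residual error.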
Step 4: Primary Search (PS) Phase
GMEO agents explore the search space for the optimal PID controller parameters (K_p, K_i, and K_d), mimicking the zigzag swimming pattern of groupers hunting prey. This random exploration ensures thorough coverage of the solution space, aiming to find the controller gains that minimize performance errors such as transient response deviations, steady-state error, and overshoot.
The initial positions are generated uniformly within the bounds:

\[ X_i^j = lb_j + r \cdot (ub_j - lb_j), \qquad i = 1, \dots, N, \; j = 1, \dots, D \]

Here, X_i^j specifies the initial location of search agent i in dimension j, ub and lb specify the search space's upper and lower bounds, D specifies the overall count of dimensions, N specifies the number of search agents, and r specifies a random vector that follows a uniform distribution, with values ranging from 0 to 1.
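The steps of this phase can be sketched as uniform initialization plus one zigzag-style exploration move; the perturbation rule below is a generic stand-in, since the exact update equations are given in [30]:

```python
import random

def init_agents(n_agents, lb, ub, rng):
    """Uniform initialization: X_i^j = lb_j + r * (ub_j - lb_j)."""
    return [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(len(lb))]
            for _ in range(n_agents)]

def zigzag_move(pos, lb, ub, rng, step_scale=0.2):
    """Perturb each coordinate by a random signed step (the 'zigzag'
    exploration) and clip the result back into the bounds."""
    return [min(ub[j], max(lb[j],
                x + step_scale * (2 * rng.random() - 1) * (ub[j] - lb[j])))
            for j, x in enumerate(pos)]

rng = random.Random(42)
agents = init_agents(4, lb=[0.0, 0.0, 0.0], ub=[10.0, 10.0, 10.0], rng=rng)
moved = [zigzag_move(a, [0.0, 0.0, 0.0], [10.0, 10.0, 10.0], rng)
         for a in agents]
```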
Step 5: Pair Association (PA) Phase
In this phase, the best-performing agents (groupers) collaborate with other high-quality agents (moray eels) to improve search efficiency. This cooperative interaction enhances the exploration of promising regions in the solution space. By leveraging the strengths of both agents, the search process becomes more targeted, accelerating convergence toward the optimal PID parameters. The agents dynamically adjust their positions based on the most promising solutions, ensuring a balance between global exploration and local refinement for improved accuracy in optimizing the PID controller.
Step 6: Encircling or Extended Search (ES) Phase
Agents refine their search by adaptively adjusting their positions toward promising regions. This phase enhances local exploration, allowing agents to dynamically focus on areas with higher potential for optimal PID parameters. The cooperative movement mimics the coordinated behavior of groupers and moray eels, ensuring a balance between exploitation and exploration. This adaptive search approach improves the likelihood of finding the global optimum and helps avoid premature convergence.
Here, P denotes the coordinates of the prey in each dimension, X_G specifies the location of a grouper, X_E specifies the location of an eel, D_G specifies the separation between the prey and the grouper, and D_E specifies the distance between the grouper and the eel.
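The cooperative encircling move can be illustrated as follows; the convex-combination rule is an assumption standing in for the exact equations of [30]:

```python
def distance(a, b):
    """Euclidean distance, as used for the prey-grouper and
    grouper-eel separations (D_G and D_E)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def encircle(prey, grouper, eel, alpha=0.5):
    """Move the grouper toward the prey while staying coordinated with
    the eel: the new position is a convex combination of the prey and
    the grouper/eel midpoint (illustrative stand-in)."""
    mid = [(g + e) / 2 for g, e in zip(grouper, eel)]
    return [alpha * p + (1 - alpha) * m for p, m in zip(prey, mid)]
```

With alpha between 0 and 1, each move strictly reduces the distance to the prey, which is the local-refinement behavior this phase describes.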
Step 7: Attacking and Catching Phase
Agents converge on the best solution by intensifying the search around the optimal PID gains. This phase improves convergence accuracy by gradually reducing the search radius, ensuring precise identification of the optimal controller parameters. The shrinking mechanism enables a finer search around the most promising solution, refining the PID gains for better system performance. This stage also helps to reduce the steady-state error and enhances system stability by continuously updating the solution based on the best-performing agents.
where s specifies the shrinking ratio and r refers to the radius.
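The shrinking mechanism can be sketched directly; the geometric decay form is an assumption consistent with the behavior described above:

```python
import random

def shrink_radius(r0, s, k):
    """Radius after k shrink steps: r_k = r0 * s**k, with shrinking
    ratio 0 < s < 1, so the search tightens around the best solution."""
    return r0 * (s ** k)

def sample_near(best, radius, rng):
    """Draw a candidate uniformly inside a cube of half-width `radius`
    centred on the current best solution."""
    return [b + radius * (2 * rng.random() - 1) for b in best]
```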
Step 8: Termination Criteria
The procedure terminates if the solution is optimal; otherwise, it returns to Step 3 for fitness evaluation and continues through the subsequent steps until the best solution is found. Thus, GMEO effectively optimizes the PID controller gains. The flowchart of GMEO is illustrated in
Figure 4.
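Steps 1 through 8 can be condensed into a single loop; the sketch below simplifies the pair-association and encircling rules of [30] to greedy perturbations around the best agent, and the toy objective stands in for the ITAE of a simulated converter:

```python
import random

def gmeo_sketch(fitness, lb, ub, n_agents=10, n_iters=50, seed=1):
    """Skeleton of the GMEO loop: random initialization (Steps 1-2),
    fitness evaluation (Step 3), exploration around the best agent
    with a shrinking radius (Steps 4-7), and a fixed iteration budget
    as the termination criterion (Step 8)."""
    rng = random.Random(seed)
    dim = len(lb)
    agents = [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(dim)]
              for _ in range(n_agents)]
    best = min(agents, key=fitness)
    radius = 1.0
    for _ in range(n_iters):
        for i, agent in enumerate(agents):
            # Candidate near the current best, clipped to the bounds.
            cand = [min(ub[j], max(lb[j],
                        best[j] + radius * (rng.random() - 0.5)
                        * (ub[j] - lb[j])))
                    for j in range(dim)]
            if fitness(cand) < fitness(agent):
                agents[i] = cand
        best = min(agents + [best], key=fitness)
        radius *= 0.95  # shrinking search radius (Step 7)
    return best

# Hypothetical "true" gains, used only to build a toy objective.
target = [2.0, 0.5, 0.1]
obj = lambda k: sum((a - b) ** 2 for a, b in zip(k, target))
gains = gmeo_sketch(obj, lb=[0.0, 0.0, 0.0], ub=[10.0, 10.0, 10.0])
```

In the actual method, `fitness` would be the ITAE obtained by simulating the buck-boost converter under the candidate gains.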
4.2. Dual-Stream Multi-Dependency Graph Neural Network (DMGNN)
In this section, prediction using the Dual-Stream Multi-Dependency Graph Neural Network (DMGNN) is discussed [31]. DMGNN enhances the optimization by efficiently predicting and adjusting the PID parameters, capturing complex dependencies in dynamic systems. It was chosen for its ability to model both global and local patterns through its dual-stream architecture, making it well suited for optimizing PID parameters in nonlinear and time-sensitive systems such as buck-boost converters. This capability ensures faster convergence and significantly improves performance. The trainable weight matrices for the feature transformation are specified by W_1 and W_2.
DMGNN captures the complex dependencies and relationships between the PID parameters and the system’s dynamic behavior. Its dual-stream architecture learns both global patterns (long-range dependencies) and local patterns (short-range dependencies) between the system states, improving PID parameter adjustment.
Here, the learnable transformation matrix and bias are specified by W and b, respectively, while σ indicates the activation function. The DMGNN refines the parameters K_p, K_i, and K_d by adjusting them locally, ensuring that the PID controller performs optimally under different load conditions and varying operational environments.
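The local refinement step can be sketched as a single learnable transformation σ(Wx + b) whose output nudges the GMEO gains; the correction-based scheme and the tanh activation are illustrative assumptions, not the exact architecture of [31]:

```python
import math

def transform(x, W, b):
    """sigma(W x + b) with sigma = tanh (assumed activation)."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bias)
            for row, bias in zip(W, b)]

def refine_gains(gains, W, b, scale=0.1):
    """Locally adjust (Kp, Ki, Kd): the network output is treated as a
    small bounded correction added to the GMEO-optimized gains, so the
    refinement cannot move far from the global-search result."""
    delta = transform(gains, W, b)
    return [g + scale * d for g, d in zip(gains, delta)]
```

Bounding the correction (tanh output in [-1, 1] times a small scale) is one way to keep the locally refined gains close to the globally optimized ones.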
where λ specifies a pre-defined hyper-parameter that is reduced to zero as the training proceeds, the features from the two branches are specified by F_g and F_l, and ‖ specifies the concatenation operation. The adjusted PID parameters are used to control the buck-boost converter, ensuring that the system's output voltage is maintained at the desired set point while adapting dynamically to load variations.
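The fusion of the two branches can be sketched as scaling the global stream by λ before concatenation; this specific weighting is an assumption consistent with the description of λ decaying during training, not the exact fusion rule of [31]:

```python
def fuse(global_feats, local_feats, lam):
    """Scale the global-branch features F_g by lam, then concatenate
    with the local-branch features F_l. As lam decays toward zero over
    training, the fused representation leans on the local branch."""
    return [lam * g for g in global_feats] + list(local_feats)
```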
where x and y denote variables related to the system's performance and state, respectively, while f(·) represents the computation process of hazard rates. DMGNN dynamically adjusts the ideal PID parameters K_p, K_i, and K_d based on system conditions by learning both global and local patterns in the system's behavior.
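To see the pipeline in miniature, the sketch below runs a PID loop on a toy first-order plant (a stand-in for the buck-boost converter, whose full model is outside this snippet) and scores it with ITAE; well-tuned gains should yield a lower ITAE than weak ones:

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, steps=200, dt=0.01):
    """Simulate a PID loop on the toy plant dy/dt = -y + u and return
    the ITAE of the tracking error."""
    y, integ, prev_err, score = 0.0, 0.0, setpoint, 0.0
    for k in range(1, steps + 1):
        t = k * dt
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv   # PID control law
        y += (-y + u) * dt                       # Euler step of the plant
        score += t * abs(err) * dt               # accumulate ITAE
        prev_err = err
    return score
```

In the full GMEO-DMGNN scheme, GMEO would search over (kp, ki, kd) to minimize this ITAE, and the DMGNN would then apply its small local corrections as operating conditions change.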