1. Introduction
Recently, there has been a noteworthy global upswing in the adoption of renewable energy sources. This increased interest is a proactive response to economic, political, and social factors, all directed toward diminishing dependence on conventional fossil fuels [1,2]. The efficiency of power systems that encompass a substantial share of renewable energy sources is shaped by various factors. Challenges arise from the inherent unpredictability associated with renewable energy sources. Consequently, the use of energy storage systems plays a pivotal role in effectively bringing renewable energy systems into commercial viability [3].
In recent years, there has been a marked uptick in interest surrounding supercapacitors (SCs). They are being viewed as a viable supplementary power source owing to their outstanding power density and relatively high energy density [4]. The successful incorporation of supercapacitors into energy storage systems has been observed across diverse industrial applications, such as electric vehicles and solar energy systems [5,6]. The integration of SCs provides significant advantages in maintaining stability within electrical power systems, achieved by supplementing the energy supplied by batteries and buffering the unpredictable output of renewable resources.
To optimize the integration of supercapacitors into energy storage systems, it is crucial to establish a precise dynamic representation that effectively captures the static and dynamic properties of supercapacitors. The precise definition and modeling of the system’s characteristics is a vital step in enhancing and managing any energy storage system. SC dynamic modeling is used to identify and characterize electrical and thermal performance, to support condition diagnostics and monitoring, to estimate the state of charge (SOC), state of power (SOP), and state of health (SOH), and to design control mechanisms [7,8,9]. Supercapacitors are modeled based on the characteristics to be monitored, with five key model families: equivalent circuit models, electrochemical models, thermal models, fractional-order models, and intelligent models [7,10].
Different models have been developed to explain the behavior of supercapacitors; however, electrical equivalent circuit models are a convenient and common way to simulate the electrical behavior of SCs. To characterize and simulate the electrical behavior of an SC, equivalent circuit models utilize parameterized RC networks defined by ordinary differential equations (ODEs). Electrical models enable the assessment of supercapacitor capacitance, including its variation with bias voltage, the voltage drop and power loss attributed to internal resistance, and self-discharge and leakage current effects. Additionally, these models account for the electric dynamic behavior influenced by ion diffusion [11,12].
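For a single RC branch with a voltage-dependent capacitance, of the kind used in the Zubieta-style models discussed below, the governing relations take the following form (a minimal sketch, with C0, KV, and R0 as generic branch parameters rather than values from any cited model):

```latex
i(t) = \frac{\mathrm{d}Q}{\mathrm{d}t} = C(v_c)\,\frac{\mathrm{d}v_c}{\mathrm{d}t},
\qquad C(v_c) = C_0 + K_V\, v_c,
\qquad v_t(t) = v_c(t) + R_0\, i(t),
```

where \(v_c\) is the internal capacitor voltage and \(v_t\) the terminal voltage. The linear term \(K_V v_c\) is what makes the resulting ODE nonlinear in \(v_c\).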
Modeling and characterization of supercapacitors have been extensively investigated using electrochemical impedance spectroscopy (EIS) approaches [13,14,15,16] or time-response experiments and simulations [17,18,19,20]. Both techniques aim to provide a variety of equivalent circuit representations that are required for characterizing the state of the SC cell in the course of operation. Hence, when analyzing the system in the time domain, various circuit models have been proposed. Classical models, multi-stage ladder models, and dynamic models are the three categories of equivalent circuit models. In addition, a wide range of nonlinear models have been formulated for supercapacitors, including fractional-order models, along with methodologies for determining their parameters, thereby demonstrating the effectiveness of these models [21,22]. Nevertheless, the procedure of determining parameters for these models sometimes entails multiple stages of linearization and extensive experimental investigation.
Methods such as adaptive filter algorithms, metaheuristic optimization methods, and artificial intelligence techniques have been deployed in the literature to find the optimum parameters for the different SC equivalent models. The recursive least squares (RLS) method was used to identify the parameters of the classical equivalent circuit model in [23,24,25]. In [25], the authors used a voltage-dependent capacitor in the first branch instead of a constant capacitance, making it simpler to identify the time-variant parameters. Similarly, in [26], a nonlinear least squares method was used to identify the parameters of a supercapacitor model; the authors provided a framework for branch-number selection criteria and order reduction. The total least squares method was used in [27] to identify the three parameters (C0, KV, and C2) of the reduced two-branch Zubieta model [28]. The first branch resistor, R0, was assumed to be given by the manufacturer, and the second branch resistor, R2, and the leakage resistor, Rlea, were found using the circuit analysis method. The voltage response, the input current, and their first- and second-order derivatives were used to define the estimation problem. The derivatives of the electric variables were obtained using filter differentiation.
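As a rough sketch of such filter differentiation, the derivative can be formed by finite differences followed by low-pass smoothing; the scheme and the smoothing constant below are illustrative choices, not those used in [27]:

```python
import numpy as np

def filtered_derivative(y, dt, alpha=0.1):
    """First derivative via finite differences followed by a first-order
    low-pass filter (exponential smoothing) to suppress noise amplification.
    `alpha` is an illustrative smoothing constant."""
    dy = np.gradient(y, dt)          # raw finite-difference derivative
    out = np.empty_like(dy)
    out[0] = dy[0]
    for k in range(1, len(dy)):      # out[k] = out[k-1] + alpha*(dy[k] - out[k-1])
        out[k] = out[k - 1] + alpha * (dy[k] - out[k - 1])
    return out

# Example: the filtered derivative of a noisy ramp recovers its slope (2 V/s).
t = np.linspace(0.0, 10.0, 1001)
dt = t[1] - t[0]
rng = np.random.default_rng(0)
v = 2.0 * t + 0.01 * rng.standard_normal(t.size)
dv = filtered_derivative(v, dt)
```

A second application of the same routine to `dv` would give a smoothed second-order derivative, as required by the estimation problem described above.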
Moreover, Kalman filters have been used to identify supercapacitor dynamic model parameters online [29]. The authors used the extended Kalman filter to automate the estimation of the error bound and proved that the proposed estimator can accurately depict the voltage behavior of the SC under various scenarios.
Optimization remains a challenging computational task, and numerous algorithms have been proposed to address it. Two questions must be answered to ensure the best possible solution: how to identify global and local optima and how to preserve them until the end of the search. Swarm intelligence methods, which include particle swarm optimization (PSO) [30], the grey wolf optimizer (GWO) [31], the whale optimization algorithm (WOA) [32], the ant-lion optimizer (ALO) [33], the genetic algorithm (GA) [34], the artificial ecosystem-based optimizer (AEO) [35], and many others, evolved as a result of these questions over the last two decades. These metaheuristic algorithms are widely used to solve and optimize various real-world problems. However, few researchers in the field of energy storage have used metaheuristic optimization algorithms as an appealing tool for identifying electrical circuit model parameters.
In 2022, a study [36] employed a straightforward and reliable methodology, using the bald eagle search (BES) optimization algorithm to identify the parameters of the Zubieta supercapacitor equivalent circuit model [20]. The BES algorithm mimics bald eagle hunting behavior, modeling the consequences of each stage of the hunt. The robustness of the BES approach was assessed against other metaheuristic algorithms for two supercapacitor modules of 470 F and 1500 F. The authors attained an MSSE of 1.32 × 10⁻⁸ for the 1500 F SC when employing the BES. In contrast, PSO, the closest competitor based on the obtained results, yielded a higher MSSE of 6.69 × 10⁻⁶.
In [37], a similar approach was used with the interior search algorithm (ISA), a notable metaheuristic optimizer proposed by Gandomi [38]. The mean square error (MSE) for parameterization of a 470 F capacitor using the ISA was 0.004487%, compared with 0.0310895% for the GWO, 0.0045% for the GA, and 0.03109% for the WOA. These findings suggest that the ISA can be used to optimize the parameters of the Zubieta model, with results comparable to those of the genetic algorithm.
In [39], a real-time modeling approach based on the weighting bat algorithm (WBA) was proposed. The model was used for parameter identification in the reduced Zubieta model, enabling real-time power management in supercapacitor-powered embedded systems, as well as predicting supercapacitor behavior. Within 50 iterations, the WBA converged to a fitness value (0.4328%) comparable to that of the genetic algorithm (0.43815%) while requiring less computational time.
Furthermore, the study in [40] introduced an electrical model based on the optimization of parameters for a passive electrical circuit model. Multiple methodologies exist for the parameter identification of supercapacitor models, including the analytical method [20], segmentation optimization [41], the binary quadratic equation fitting method [42], universal adaptive stabilization and optimization [11], particle swarm optimization (PSO) [43], the recursive least squares method [44], and other alternative techniques.
Over the past few years, there has been a notable influx of research focusing on the development and application of advanced optimizers for parameter identification, particularly in the field of energy storage systems. Metaheuristic algorithms have continued to dominate the landscape of parameter estimation, offering innovative solutions to address challenges associated with accuracy and convergence rates.
One prominent optimizer in recent literature is the sine cosine algorithm (SCA), introduced by Mirjalili [45]. The SCA has shown promise in optimizing complex and nonlinear functions, making it suitable for parameter identification tasks. Studies have applied the SCA to various energy storage system models, demonstrating its effectiveness in achieving accurate parameter estimates within a reduced computational time.
Another development, as mentioned earlier, is the grey wolf optimizer (GWO), proposed by Mirjalili et al. [31]. The GWO draws inspiration from the social hierarchy of grey wolves and has been successfully applied to parameter identification tasks in renewable energy systems. Its ability to strike a balance between exploitation and exploration makes it well suited for handling complex and high-dimensional optimization problems.
Other noteworthy contributions are the dandelion optimization algorithm, used for the parameter estimation of proton exchange membrane fuel cells (PEMFCs) [46], and the mountain gazelle optimizer, used for parameter estimation of single- and double-diode photovoltaic cell models [47]. These new optimization strategies demonstrated improved performance compared with traditional algorithms, showcasing the potential of metaheuristic optimization paradigms to address the complexities associated with renewable energy system models.
Nevertheless, the existing literature points out limitations in the application of metaheuristic optimizers to identify the parameters of the SC equivalent electrical circuit model. Additionally, the currently employed algorithms have their own constraints, highlighting notable gaps that warrant further investigation. Existing studies often lack a standardized approach, leading to inconsistencies in parameter estimation. Challenges include the absence of a universally accepted model and the need for improved convergence rates and accuracy in optimization algorithms. Moreover, hybrid optimization strategies, which could potentially enhance the efficiency of parameter identification for supercapacitors, remain underexplored. Addressing these gaps is crucial to advancing the reliability and performance of supercapacitors in energy storage systems and to promoting a more standardized and effective methodology for parameter identification. Consequently, to build an accurate model for SCs, it is vital to use an efficient optimization technique that effectively addresses the limitations of current optimizers.
In this study, the authors propose a more straightforward and reliable approach by employing the metaheuristic gradient-based optimization (MGBO) algorithm [48,49] to ascertain the parameters of the SC electrical circuit model, specifically the Zubieta model. A precise model is crucial for accurately characterizing the behavior of SCs, thereby facilitating further research in the field. Moreover, to enhance the effectiveness of PSO and address its underlying constraints, numerous PSO variants have been developed in the literature, including adaptive inertia weight PSO, constriction PSO, and hybrid PSO [50,51]. These variants incorporate additional strategies such as dynamic manipulation of the inertia weight, self-adjustment of the acceleration constants, and integration with other optimization techniques [52].
Hence, this research develops a modified metaheuristic gradient-based optimization (M-MGBO) algorithm and two versions of the particle swarm optimization (PSO) algorithm. The motivation for doing this is as follows:
- Gradient-based algorithms such as the MGBO can suffer from problems related to computing or approximating the gradient. Particle swarm optimization (PSO) and other metaheuristic approaches have been used to circumvent such gradient-related issues.
- Both MGBO and PSO approaches may be affected by solutions being trapped in local optima. The literature offers several randomized approaches to avoid such issues.
- However, such randomized approaches often disregard, or do not explicitly incorporate, checking for parameter bounds, and they do not necessarily verify whether a randomized candidate solution actually helped achieve the objective of escaping a local optimum.
- Furthermore, depending on the nature of the optimization problem, the algorithm, or both, entrapment in local optima and parameter-bound constraint violations may occur more frequently during a certain stage (early or late iterations) of the iterative optimization cycle. Encoding such information in the optimization algorithm itself may help achieve better results.
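The bound checking and acceptance test called for above can be sketched as follows; this is a generic illustration under assumed uniform perturbations, not the LCEO routine developed in this paper:

```python
import numpy as np

def guarded_escape(x, cost, bounds, scale=0.5, rng=None):
    """Randomized escape move that (i) clips the candidate to the parameter
    bounds and (ii) keeps it only if it actually improves the cost."""
    rng = rng or np.random.default_rng()
    lo, hi = bounds[:, 0], bounds[:, 1]
    candidate = x + scale * (hi - lo) * rng.uniform(-1, 1, x.shape)
    candidate = np.clip(candidate, lo, hi)     # explicit bound check
    # Accept only if the random jump helped; otherwise stay put.
    return candidate if cost(candidate) < cost(x) else x

# Usage: escape from a point on a multimodal 1D cost function.
bounds = np.array([[-4.0, 4.0]])
cost = lambda p: float(np.sum(p**2 + 2.0 * np.sin(3.0 * p)))
rng = np.random.default_rng(1)
x = np.array([1.5])
for _ in range(200):
    x = guarded_escape(x, cost, bounds, rng=rng)
```

By construction the cost never increases, and every accepted candidate respects the parameter bounds, which are exactly the two guarantees the randomized schemes criticized above tend to omit.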
In addition to improving parameter estimation accuracy, which requires a reduced number of algorithm iterations, alleviating the above-mentioned issues is the objective of this research. To this end, this work develops the M-MGBO algorithm, which incorporates an appropriate local escaping operator (LCEO), the details of which are available in Section 3.3.1. Further, results in the literature show the benefits of hybrid versions of the PSO algorithm [53]. Therefore, this work combines the MGBO algorithm with PSO and also combines the LCEO routine mentioned earlier with the PSO algorithm, using both hybrids for supercapacitor parameter estimation. The relevant details are available in Section 3.3.2 and Section 3.3.3.
4. Results and Discussion
In this study, the Zubieta model parameters were determined for an SC bank comprising six capacitors in series, each with a rating of 2300 F, resulting in a total bank capacitance of 383.33 F. The estimation process involved three different input signals: a step input with amplitudes of 10 A (Figure 4a) and 4 A (Figure 4b), along with a variable-frequency input (Figure 4c), corresponding to Profiles 1, 2, and 3, respectively. For each input, four distinct algorithms (M-MGBO, PSO, PSO-LCEO, and PSO-MGBO) were executed in three separate runs. The Zubieta model, characterized by three branches, requires the estimation of a total of eight parameters: R1, C1, and KV for the voltage-dependent first branch; R2 and C2 for the second branch; R3 and C3 for the third branch; and the leakage resistance Rlea. Each of the algorithms requires initial guesses, which are formed using the analytical Zubieta model parameters and any initial-guess selection rules the algorithms may have; these are detailed in the listings of each algorithm. For example, certain algorithms use random initialization of the parameter values within the given bounds, while the bounds themselves are derived from the parameter values obtained with Zubieta’s analytical method. In each iteration of the respective algorithm, the parameters are estimated following the algorithmic procedures outlined in Section 3. These estimated parameters are then used to estimate the supercapacitor terminal voltage at the given value of current, following Zubieta’s model equations. The mean square error between the measured and estimated supercapacitor terminal voltage, obtained using the estimated parameters, is used as the cost/fitness function to evaluate the fitness of the estimated parameters.
It is worth noting that the existence of a voltage-dependent capacitance in one of the Zubieta model’s branches makes this a nonlinear model. It is this nonlinear model that is used to evaluate the fitness of the estimated model parameters. Moreover, the MGBO algorithm used incorporates both gradient and population-based techniques to determine the search direction using Newton’s method. This search direction is based on a linear estimation of the gradient of the cost/fitness function. However, such approximation can make the algorithm prone to being stuck in a local optimum. To avoid this, the local escaping operator (LCEO)-based routine is used to augment the MGBO algorithm into the M-MGBO algorithm.
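As a rough illustration of the nonlinear model described above (not the exact formulation of Equation (7)), the three-branch model can be simulated by forward-Euler integration; the parameter names and values below are placeholders for the estimated Zubieta parameters:

```python
import numpy as np

def zubieta_voltage(i_in, dt, p):
    """Forward-Euler simulation of a three-branch Zubieta-style model.
    p holds R1, C1, KV (voltage-dependent first branch), R2, C2 (delayed
    branch), R3, C3 (long-term branch), and Rlea (leakage resistance).
    Returns the terminal voltage for the input current i_in (A)."""
    v1 = v2 = v3 = 0.0                       # branch capacitor voltages
    out = np.empty(len(i_in))
    g = 1/p["R1"] + 1/p["R2"] + 1/p["R3"] + 1/p["Rlea"]
    for k, i in enumerate(i_in):
        # Node equation: the input current splits among the parallel branches.
        vt = (i + v1/p["R1"] + v2/p["R2"] + v3/p["R3"]) / g
        i1 = (vt - v1) / p["R1"]
        i2 = (vt - v2) / p["R2"]
        i3 = (vt - v3) / p["R3"]
        v1 += dt * i1 / (p["C1"] + p["KV"] * v1)   # voltage-dependent C
        v2 += dt * i2 / p["C2"]
        v3 += dt * i3 / p["C3"]
        out[k] = vt
    return out

# Usage with placeholder parameters: 10 A charging step for 60 s.
params = {"R1": 2e-3, "C1": 380.0, "KV": 50.0, "R2": 1.0, "C2": 50.0,
          "R3": 50.0, "C3": 40.0, "Rlea": 5000.0}
v = zubieta_voltage(np.full(6000, 10.0), 0.01, params)
```

The `KV * v1` term is the voltage-dependent capacitance that makes the model nonlinear, which is why the fitness evaluation cannot be reduced to a linear least-squares fit.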
4.1. Parameter Estimation Results
The evaluation of the algorithms’ performance considered various criteria, including the MSE and the computation time. Optimization took place on a personal computer equipped with an Intel(R) Xeon(R) E-2124 central processing unit operating at 3.30 GHz and 16 GB of random-access memory.
The cost function for the optimization problem was set to minimize the difference between the simulated SC voltage from the derived model and the actual measured voltage:

MSE = (1/N) Σ [V_est(t) − V_meas(t)]²

where V_est(t) is the capacitor terminal voltage given by Equation (7), obtained using the estimated model parameters, V_meas(t) is the measured supercapacitor voltage, and N is the number of samples.
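This cost can be computed directly from the sampled waveforms; the function below is a generic MSE implementation, with the voltage estimate assumed to come from whichever routine implements Equation (7):

```python
import numpy as np

def mse_cost(v_est, v_meas):
    """Mean square error between estimated and measured terminal voltage."""
    v_est, v_meas = np.asarray(v_est, float), np.asarray(v_meas, float)
    return float(np.mean((v_est - v_meas) ** 2))

# Sanity check on synthetic waveforms: identical signals give zero cost.
t = np.linspace(0.0, 1.0, 101)
assert mse_cost(t, t) == 0.0
```

In the optimization loop, each candidate parameter vector is scored by simulating the terminal voltage and passing it to this function together with the measurement.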
The M-MGBO algorithm demonstrated superior overall performance compared with the other algorithms. In the case of Profile 1 (Table 3), M-MGBO and PSO-LCEO exhibited comparable MSE values of 0.003927 and 0.003493, respectively. Notably, M-MGBO achieved the shortest computational time at 20,141.028 s, outperforming PSO-LCEO (26,782.93 s) and the hybrid PSO-MGBO (77,735.494 s). Additionally, the parameters obtained by the M-MGBO algorithm were closer to the analytically derived parameters (Table 2); for instance, the first branch capacitor C1 had a value of 0.7723 F with M-MGBO compared with 14.607195 F with PSO-LCEO.
The convergence curve plays a vital role in assessing optimization algorithms, offering insights into the speed and efficiency of their convergence to the optimal solution. Notably, the convergence curve for the M-MGBO algorithm stands out for its swift and stable convergence with minimal oscillation. While the PSO-LCEO algorithm also showed stable convergence, it did so at a slower pace than the M-MGBO algorithm. For Profile 1, Figure 9a illustrates the convergence curve of the best run for all four algorithms, showcasing the M-MGBO algorithm’s rapid convergence, while Figure 10a presents the voltage response for the same input.
Table 4 demonstrates the performance of the four optimization algorithms in estimating the parameters of the Zubieta supercapacitor model using Profile 2 as input. The M-MGBO algorithm exhibited the lowest MSE of 0.001001 and the shortest computational time of 31,320.07 s. In contrast, the PSO-LCEO algorithm yielded a very similar MSE of 0.001017 but a longer execution time of 36,297.98 s.
The hybrid PSO-MGBO and standard PSO algorithms exhibited higher MSE values of 0.01342 and 0.009034, respectively. In terms of average MSE, the M-MGBO algorithm outperformed the others with a value of 0.001001, followed by the PSO-LCEO algorithm at 0.003169. The hybrid PSO-MGBO and regular PSO algorithms had higher average MSE values of 0.103347 and 0.107982, respectively. It is noteworthy that the LCEO operator employed in the PSO-LCEO algorithm serves as a method for escaping local optima during the optimization process. Despite this feature, the PSO-LCEO algorithm could not provide superior parameter estimates compared with the M-MGBO algorithm in this study. For the convergence curve of the best run of the four algorithms, refer to Figure 9b, while Figure 10b presents the corresponding voltage response.
The parameters obtained using the Profile 3 input are presented in Table 5. The M-MGBO algorithm exhibited an MSE of 0.0042315, while the PSO-LCEO showed comparable performance with an MSE of 0.004231. The hybrid PSO-MGBO had a similar MSE of 0.004231 and outperformed the PSO-LCEO in terms of computational time, completing the task in 32,206 s compared with 39,252 s. Notably, the M-MGBO algorithm showed the best simulation time, with a runtime of 28,622 s.
It can be noted from the results that the M-MGBO and PSO-LCEO algorithms showed the best performance in terms of MSE, while the PSO-MGBO algorithm had a good balance between performance and simulation time. The PSO algorithm had the lowest computational time, but it had a significantly worse performance in terms of MSE. The convergence curve for the best run of the four algorithms is presented in Figure 9c, and the voltage response is presented in Figure 10c.
4.2. M-MGBO Results Validation
The results of 30 runs of the M-MGBO method used to estimate the parameters of the Zubieta supercapacitor model with the variable-frequency input are presented in Table 6. The primary objective of this analysis is to assess the algorithm’s performance in terms of the consistency of the parameter estimations, the accuracy of the resulting resistance and capacitance values, and any other noteworthy observations obtained from the table.
It is also important to consider the parameters’ standard deviation while analyzing the algorithm’s performance, as it provides a measure of the algorithm’s consistency. The standard deviation is a statistical measure used to assess the variability in the parameter estimations across the 30 runs, with lower values indicating a higher level of consistency. It is worth mentioning that certain parameters exhibit remarkably low standard deviation values, specifically 0.0009 and 0.02. These low standard deviations suggest that the M-MGBO algorithm exhibits a consistent pattern of convergence, yielding parameter estimations that are highly similar across runs. This level of consistency is of the utmost importance for practical applications, as it ensures the reliability and predictability of outcomes.
4.3. MGBO and M-MGBO Comparison
To verify the reliability and assess the effectiveness of the modifications applied to the original MGBO algorithm, three separate runs were executed using the unaltered MGBO algorithm with the variable-frequency input. The MGBO algorithm produced an average MSE of 0.0064, with a standard deviation of 5.7735 × 10⁻⁷, over the course of these three runs. Table 7 provides a comparative analysis between the M-MGBO and the MGBO, and Figure 11 illustrates the convergence curves of the best-performing runs.
For the parameter KV, the MGBO yielded a value of 53.0087 F/V, while the M-MGBO resulted in 51.7453 F/V. Despite the standard MGBO exhibiting a greater MSE than the M-MGBO, it demonstrated a lower standard deviation for this parameter: 0.0702 for the standard MGBO compared with 0.2311 for the M-MGBO. A similar pattern holds for other parameters.
4.4. Parameter Testing
As demonstrated earlier, the M-MGBO method provides a more precise estimation of parameters in terms of accuracy and MSE. To test the parameters from the four methods further, the optimal parameters acquired from the three runs (as presented in Table 4, Table 5, and Table 6) undergo further evaluation by interchanging them and applying them to the remaining inputs. For example, the parameters obtained using the Profile 1 input are tested using Profile 2 and Profile 3 as inputs, and their mean squared error (MSE) is recorded.
Table 8 presents the scenario where parameters initially estimated via Profile 1 were subsequently evaluated with the Profile 2 and Profile 3 inputs. In the experiment with Profile 2, the parameters of the M-MGBO algorithm yielded the lowest MSE of 1.4291, while the PSO, PSO-MGBO, and PSO-LCEO had similar MSE values: the PSO-LCEO demonstrated the second-lowest MSE of 1.6312, followed by the PSO-MGBO, with an MSE of 1.6396, and the PSO, with an MSE of 1.6442. When using the parameters with Profile 3, the M-MGBO algorithm again exhibited the lowest MSE of 0.1195. Subsequently, the PSO-LCEO showed an MSE of 0.1469, while the PSO-MGBO yielded an MSE of 0.1482. The PSO method demonstrated the largest MSE value, measuring 0.4614.
Table 9 presents the results of the parameter testing conducted using the parameters estimated with Profile 3 but tested with Profile 1 and Profile 2. The M-MGBO parameters yielded the lowest MSE in all tests: 0.7448 for the Profile 2 input and 0.1754 for the Profile 1 input. The parameters of the PSO-LCEO algorithm exhibited the second-lowest MSE for both tests: 0.7464 for the Profile 2 (4 A step) input and 0.2001 for the Profile 1 (10 A step) input.
The PSO-MGBO algorithm parameters were the third-best performing, whereas the PSO algorithm exhibited the greatest mean MSE for both test cases. In a similar manner, when employing the parameters estimated using Profile 2 and tested with Profile 1 as input (Table 10), the same pattern emerged. The M-MGBO algorithm parameters yielded the lowest MSE value of 1.4149, followed by the PSO-LCEO and the PSO-MGBO, with MSE values of 1.5438 and 1.5349, respectively. In the instance of the Profile 3 input, the PSO-LCEO parameters yielded the lowest MSE of 0.9877, while the M-MGBO resulted in a slightly higher MSE of 0.9902. Furthermore, the PSO method yielded the highest MSE in both test scenarios.
To perform a more exhaustive evaluation of the algorithms’ parameters, two supplementary tests were conducted. These tests involved using a step input of 8 A (see Figure 12a) and a controlled current sequence as inputs. The optimal run parameters derived from the prior experiments were used in both tests. Additionally, the performance was also assessed against the analytical technique in order to validate the suitability of the parameters derived from the four algorithms.
As evident from Table 11, concerning the Profile 2 parameters, the M-MGBO algorithm demonstrates the lowest MSE at 0.9619, followed by the PSO-LCEO, with the second-lowest MSE of 0.9793, and subsequently the PSO-MGBO algorithm. In the case of the Profile 1 parameters, the PSO-LCEO showed the lowest MSE of 0.1418, followed by the PSO-MGBO algorithm, while the M-MGBO ranked third, with an MSE of 0.2699. The Profile 3 parameters exhibited closely aligned responses for the M-MGBO, PSO-LCEO, and PSO-MGBO, displaying minimal deviation in MSE values. Conversely, the PSO algorithm performed the least favorably across the various inputs, with higher MSE values implying lower accuracy and reliability in the estimated parameters.
The controlled current sequence profile used for the following tests is illustrated in Figure 12b. According to the data presented in Table 12, the M-MGBO algorithm exhibited superior performance compared with the other algorithms and the analytical approach in terms of the Profile 2 parameters, achieving an MSE value of 1.5037.
In the case of the Profile 1 parameters, the M-MGBO algorithm again exhibited superior performance, achieving an MSE of 0.0562, while the PSO-LCEO achieved an MSE of 0.2005 and the PSO-MGBO an MSE of 0.5573. The PSO-MGBO algorithm thus had the lowest performance in this particular test, as even the PSO algorithm outperformed it, with an MSE value of 0.3547. Moreover, using the Profile 3 parameters with the controlled current sequence profile, the M-MGBO algorithm again demonstrated superior performance, achieving an MSE of 0.0483, while the PSO, PSO-LCEO, and PSO-MGBO algorithms demonstrated similar performance, with minimal deviation in MSE values.