Article

Estimation of Small-Scale Kinetic Parameters of Escherichia coli (E. coli) Model by Enhanced Segment Particle Swarm Optimization Algorithm ESe-PSO

by Mohammed Adam Kunna Azrag 1,*, Jasni Mohamad Zain 1, Tuty Asmawaty Abdul Kadir 2, Marina Yusoff 1, Aqeel Sakhy Jaber 3, Hybat Salih Mohamed Abdlrhman 4, Yasmeen Hafiz Zaki Ahmed 4 and Mohamed Saad Bala Husain 4

1 Institute for Big Data Analytics & Artificial Intelligence (IBDAAI), Komplek AI-Khawarizmi, Universiti Teknologi Mara, Shah Alam 40450, Selangor, Malaysia
2 Faculty of Computing, Universiti Malaysia Pahang, Pekan 26600, Pahang, Malaysia
3 Department of Electrical Power Engineering Techniques, Al-Ma’moon University College, Baghdad 10013, Iraq
4 Faculty of Chemical Engineering, Universiti Malaysia Pahang, Gambang 26300, Pahang, Malaysia
* Author to whom correspondence should be addressed.
Processes 2023, 11(1), 126; https://doi.org/10.3390/pr11010126
Submission received: 26 September 2022 / Revised: 31 October 2022 / Accepted: 13 November 2022 / Published: 1 January 2023
(This article belongs to the Special Issue Computational Biology Approaches to Genome and Protein Analyzes)

Abstract
The ability to create “structured models” of biological simulations is becoming more and more commonplace. Although computer simulations can be used to estimate the model, they are restricted by the lack of experimentally available parameter values, which must be approximated. In this study, an Enhanced Segment Particle Swarm Optimization (ESe-PSO) algorithm that can estimate the values of small-scale kinetic parameters is described and applied to E. coli’s main metabolic network as a model system. The glycolysis, phosphotransferase system, pentose phosphate, TCA cycle, gluconeogenesis, glyoxylate, and acetate formation pathways of Escherichia coli are represented by a Differential Algebraic Equations (DAE) system for the metabolic network. The algorithm uses segments to organize particle movements and a dynamic inertia weight (ω) to increase the algorithm’s exploration and exploitation potential. This adjustment improves estimation accuracy relative to state-of-the-art algorithms. The numerical findings indicate good agreement between the observed and predicted data. In this regard, the ESe-PSO algorithm achieved superior accuracy compared with the Segment Particle Swarm Optimization (Se-PSO), Particle Swarm Optimization (PSO), Genetic Algorithm (GA), and Differential Evolution (DE) algorithms. Based on this approach, we conclude that the parameters of small-scale and even whole-cell kinetic models can be estimated.

1. Introduction

Computing simulations and optimization are important subjects in systems biology and bioinformatics, where they play an important role in mathematical approaches to the reverse engineering of biological systems and managing uncertainty in that context. The large amount of computation required for the simulation, calibration, and analysis of models has prompted a number of researchers to propose various parallelization schemes in an attempt to accelerate these activities [1]. The development of dynamic (kinetic) models at a smaller scale has been the focus of recent research, with the eventual goal of producing whole-cell models. Model calibration has gained a lot of interest, especially with reference to global optimization metaheuristics and hybrid approaches [2].
Thus, kinetic models are being developed so that the dynamics of biological processes can be described in a quantitative manner. These models consider the stoichiometry of reactions and the kinetic expressions associated with each enzyme. In vitro experiments are used to determine the kinetic properties of enzymes by exposing isolated enzymes to optimal conditions. These conditions are not the same as those found around enzymes inside living cells; thus, when compared to in vivo-measured concentrations, the use of in vitro parameters in kinetic models can result in incorrect predictions of intracellular metabolite concentrations [3]. The dynamics of cell metabolism can be fine-tuned by perturbing a culture and measuring fluxes, enzyme levels, and intra- and extracellular metabolite concentrations as a function of time. Advances in experimental techniques have paved the way for developing dynamic models for metabolic networks that can estimate microbial behavior [4].
Various Escherichia coli (E. coli) models were created and simulated in order to better understand the model’s behavior and produce specific products such as those reported in [5,6,7,8,9]. In [5], the researchers simulate and estimate the kinetic parameters of two pathways while ignoring the other pathways (gluconeogenesis and glyoxylate). In [6], glutamine/aspartate metabolism and fructose consumption are incorporated into the model by utilizing the phosphotransferase (PTS) system. However, the model does not consider the entire pathway model or E. coli cell growth. The researchers in [7] generate the experimental time courses of extracellular glucose and biomass in the E. coli model but do not consider the overall production of the pathways. As reported in [8], Kotte used the Monod equations to simulate glucose uptake without estimating the specific growth rate based on a molecular process in E. coli; therein, he did not consider estimating the entire E. coli main metabolic pathway. Neglecting the entire main metabolic pathway may result in an incorrect prediction of the simulation result. As a result, small-scale kinetic parameters must be comprehensively investigated. The study of a complete model can only be comprehensively achieved by including the entire cell system. This is because the feedback loop of several metabolites, such as PEP, OAA, PYR, and others, may affect other enzymes in the main metabolic pathway, causing the concentration of other metabolites to change dynamically over time [10,11].
Recently, three different strategies were compared for estimating the kinetic parameters of a dynamic model of central carbon metabolism in Escherichia coli, i.e., the modified simplex method [12,13], simulated annealing, and differential evolution. According to the authors, differential evolution produced the best results. Moreover, the researchers in [14] revised the central carbon metabolism model and estimated kinetic parameters for the glycolytic enzymes from [5]. The parameter estimation problem was solved using MATLAB and a weighted least squares objective function.
As a result, researchers are increasingly using metaheuristic optimization [15] algorithm methods to estimate the kinetic parameters of the E. coli model and other biological models because of the difficulties in calculating kinetic parameters. Several of these metaheuristic methods have been utilized to estimate the kinetic parameters utilized in [16,17,18,19], and some of these algorithms used experimental data derived from [5,20]. Furthermore, there are hundreds or even thousands of parameters in biological kinetic models that make parameter estimates difficult because of the large search space that must be investigated. High-dimensional kinetic models (with hundreds or thousands of different kinetic parameters) are difficult to compute and, as a result, the above algorithms’ performance is negatively affected, resulting in reduced accuracy [21].
Differential Evolution (DE) is an example of a method that is commonly used and explored for parameter estimation in metabolic models. The DE method’s key shortcoming is its time consumption, which makes it difficult for the DE algorithm to set its parameters when a high number of processors and numerous local searches are involved. Evolutionary algorithms, such as the Genetic Algorithm (GA), share many of the same principles as DE. This methodology is often used to estimate metabolic model parameters. When compared to DE, Particle Swarm Optimization (PSO), and other methods, this algorithm’s key flaw is the time it takes to compute [22,23].
Many other fields, including those referenced in [24,25,26], have used the PSO algorithm since its inception in 1995. Birds and fish finding their food were modeled using the method described in [27]. They share information with each other as they proceed through the search process, and then hunt for their destination randomly and autonomously. Therefore, the search space is filled with different courses and directions for the particles to follow.
Additionally, the Enhanced Segment Particle Swarm Optimization (ESe-PSO) method was conceived and developed based on the Segment Particle Swarm Optimization (Se-PSO) algorithm with the aim of performing deeper searches while maintaining the accuracy of Se-PSO. This progression is predicated on the understanding of Se-PSO’s local and global point initialization [2]. Segmentation separates the particles into groups in this scenario, allowing them to work together toward the ideal solution. This approach was modified to identify small-scale kinetic parameters in an E. coli model [9] and governor–turbine models in a single-area power plant [28,29]. Because the model’s inertia weight (ω) is linear [28], it directly affects the particles’ exploration and exploitation: extensive exploration is required at the start of the algorithm’s execution and minimal exploitation near the end. This is performed to avoid the local optima trap and thus increase the efficiency of estimating kinetic parameters with the goal of minimizing model distances in a reasonable amount of time.
In this regard, the model under study [9] was formulated to simulate the main metabolic pathway of E. coli. This model has six pathways, including glycolysis, pentose phosphate, the TCA cycle, gluconeogenesis, and glyoxylate, in addition to acetate formation. This model was chosen in this study rather than the aforementioned models due to its ability to simulate the main metabolic pathways of E. coli with small-scale kinetic parameters. Moreover, as a result of the lack of real experimental data, the nonlinearity of the model, and the small-scale kinetic parameters, the model response metabolites must be investigated depending on estimating small-scale kinetic parameters [30]. The experimental dataset used in this study [20] consists of many metabolites and is used in many studies for kinetic parameters, such as [16,17,21,31]. The purpose of this work was to adopt the ESe-PSO algorithm to estimate small-scale kinetic parameters. The sensitive kinetics were obtained for estimation purposes [4]. The remainder of the paper is organized as follows. The introduction is presented in Section 1. The problem statement is outlined in Section 2. The approach is described in Section 3. Section 4 discusses the outcome. In the final section, the conclusion is summarized. As a result, we believe that this novel technique can help to estimate small-scale dynamic models in systems biology.

2. Problem Formulation

Kinetic metabolic models describe enzymatic reactions through systems of ODE functions. Erroneous metabolic kinetic models can hamper analysis of the model’s behavior and procedure design. As a result of the nonlinearity of the model, obtaining accurate results is a challenging task because the kinetics are often gathered from multiple laboratories and under differing conditions [32,33,34].
In this regard, the main kinetic metabolic model of E. coli simulated in [9] contains a large set of kinetic parameters distributed across six pathways. This model had a great impact on E. coli model simulations in terms of understanding and simulating its behavior. However, the researchers stated that the model requires further investigation focused on its kinetic parameter estimations and responses, and that further comparisons with real experimental data with small-scale kinetic parameters are necessary. Thus, further study to this end is of significant importance. The process of parameter estimation involves looking for the parameter values in a mathematical model (formulated using ODEs) that best fit the experimental data. This can be accomplished by minimizing the scalar distance between the model prediction and experimental data, considering experimental errors. This problem is multimodal, continuous, and single-objective. The objective function of kinetic parameter estimation considered in this work is described as follows:
f = |(y_{r,1} − y_{s,1}) + (y_{r,2} − y_{s,2}) + ⋯ + (y_{r,m} − y_{s,m})|   (1)
where f is the objective function, y_{r,m} is the actual model output r for metabolite m, and y_{s,m} is the simulated model output s for metabolite m.
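As a concrete illustration, the objective function of Equation (1) can be sketched in Python as follows. This is a minimal sketch: the function name and the use of NumPy arrays are our own choices, and the formula is implemented exactly as written above, i.e., the residuals are summed before the absolute value is taken.

```python
import numpy as np

def objective(y_actual, y_sim):
    """Objective function f of Equation (1): absolute value of the summed
    residuals between actual (y_r) and simulated (y_s) outputs over m
    metabolites. Note that the residuals are summed first and the absolute
    value taken last, exactly as the formula is written; a per-metabolite
    sum of squared errors would be a common alternative."""
    y_actual = np.asarray(y_actual, dtype=float)
    y_sim = np.asarray(y_sim, dtype=float)
    return abs(np.sum(y_actual - y_sim))
```

A perfect fit gives f = 0; any systematic bias between model and data increases f.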
Because biological problems are nonlinear and many local minima exist, estimating the best parameters is difficult; most optimization algorithms become easily trapped in local minima, resulting in slow convergence.
The kinetic parameters are then estimated using the methods described in Section 3.

3. Materials and Methods

This section describes the model structure and the DE, GA, PSO, Se-PSO, and ESe-PSO algorithms used to estimate the small-scale kinetic parameters.

3.1. The Structure of the E. coli Kinetic Model

The dynamic model of the main metabolic pathway of E. coli formulated in [9] was used as a benchmark and is described in Figure 1. This model consists of glycolysis, pentose phosphate, the TCA cycle, gluconeogenesis, and glyoxylate pathways, in addition to acetate formation and the phosphotransferase system. The model has 23 metabolites, 28 enzymatic reactions, 10 co-factors (e.g., adenosine triphosphate (ATP), coenzyme A (COA), nicotinamide adenine dinucleotide phosphate (NADPH)), and 172 kinetic parameters. Equation (2) gives the rate at which the concentration of each metabolite in the considered model changes.
dC_i/dt = Σ_j R_{ij} v_j − μC_i   (2)
where C_i is the concentration of metabolite i, R_{ij} is the stoichiometric coefficient of metabolite i in reaction j, v_j is the rate of reaction j, and the term μC_i accounts for the dilution effect at growth rate μ = 0.1 h⁻¹ due to the increase in cell volume as the cell grows [9].
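To make Equation (2) concrete, the mass balances for all metabolites can be written as a single matrix expression. The sketch below assumes the rate vector v has already been evaluated from the kinetic rate laws of Table 2; the function name and argument layout are illustrative, not taken from the original model code.

```python
import numpy as np

def metabolite_rhs(C, v, R, mu=0.1):
    """Right-hand side of Equation (2): dC_i/dt = sum_j R_ij * v_j - mu * C_i.

    C  : vector of metabolite concentrations (length n)
    v  : vector of reaction rates (length r), evaluated from the kinetic
         rate laws (Table 2) at the current concentrations
    R  : n x r stoichiometric matrix
    mu : dilution rate, 0.1 h^-1 in the model of [9]
    """
    C = np.asarray(C, dtype=float)
    v = np.asarray(v, dtype=float)
    R = np.asarray(R, dtype=float)
    return R @ v - mu * C
```

In practice this right-hand side would be passed to an ODE integrator to simulate the metabolite time courses.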
The model mass balance equations (Table 1) and the kinetic rate equations (Table 2) for the investigated model described in Figure 1 are listed below.
After the model structure is described, the algorithms used to estimate the kinetics are stated below.

3.2. GA Algorithm

One of the most well-known heuristic search algorithms is the Genetic Algorithm, which is based on the process of natural evolution. In recent decades, GA has received a lot of attention in engineering design optimization. GA was first introduced in computer science in the 1960s, when a group of biologists attempted to implement the process of evolution in nature in computer code [35]. GA refers to any population-based algorithm that finds the best solution by using selection, crossover, and mutation across chromosomes. A member of the population is referred to as a chromosome/genotype, which is a binary or real-valued string. Several types of GA have been introduced in optimization studies following Barricelli [35]. However, a given optimization problem can be simply defined in GA by following three main steps, which include [36]:
Step 0: initialization;
Step 1: generation;
Step 2: selection;
Step 3: stopping criteria.
The first step in the GA algorithm is the generation of initial chromosomes (genotypes) in step 0. In most practical problems, genotypes are generated at random, and the goodness of each chromosome is assessed using an objective function and the constraints that go with it.
Step 1: Introduce the generation operator and apply it to the current population to generate an intermediate one. In the first step, the initial and intermediate populations are the same (step 0). However, the imposition of the generation operator forms the intermediate population in subsequent iterations;
Step 2: To create the next population, crossover-mutation operators are applied to the results of Step 1. When the generator (operator) creates a chromosome by combining the properties of two parental chromosomes, the term crossover is used. However, the term mutation is used when a new chromosome is formed by making minor changes to the properties of a single parent chromosome [37].
The algorithm design is influenced by experience, model specifications, and the association of experimental results with various heuristic search algorithms. Thus, the algorithm used in this work is described in Algorithm 1 below.
Algorithm 1. The GA Adoption
  • Generate an initial population with m chromosomes at random;
  • Define the kinetic boundaries;
  • Determine the fitness, f(m), of each chromosome in m ;
  • Build the evolved population using the following criteria;
  • Use proportional fitness selection to choose chromosomes m1 and m2;
  • Use the crossover function on m1 and m2 to create a new chromosome (m3);
  • Use the mutation function on m3 to generate m′;
  • Include m′ in the next population;
  • Replace the old population with the new population;
  • If the stopping criteria are not met, repeat step 2 of the procedure.
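The steps of Algorithm 1 can be sketched as a short real-coded GA. This is an illustrative implementation, not the authors’ code: tournament selection stands in for proportional fitness selection, arithmetic blending is used for crossover, and all parameter defaults are our own choices.

```python
import random

def ga_minimize(f, bounds, pop_size=20, generations=100,
                crossover_rate=0.9, mutation_rate=0.1, seed=0):
    """Minimal real-coded GA following Algorithm 1.
    bounds: list of (low, high) pairs, one per kinetic parameter."""
    rng = random.Random(seed)

    def rand_chrom():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def clip(x):  # keep genes inside the kinetic boundaries
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    pop = [rand_chrom() for _ in range(pop_size)]
    best = min(pop, key=f)
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # selection: fitter of two random chromosomes (tournament)
            m1 = min(rng.sample(pop, 2), key=f)
            m2 = min(rng.sample(pop, 2), key=f)
            # crossover: arithmetic blend of the two parents -> m3
            if rng.random() < crossover_rate:
                a = rng.random()
                m3 = [a * u + (1 - a) * w for u, w in zip(m1, m2)]
            else:
                m3 = list(m1)
            # mutation: small Gaussian perturbation of single genes -> m'
            m3 = [v + rng.gauss(0, 0.1 * (hi - lo))
                  if rng.random() < mutation_rate else v
                  for v, (lo, hi) in zip(m3, bounds)]
            new_pop.append(clip(m3))
        pop = new_pop  # replace the old population
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = cand
    return best, f(best)
```

For example, minimizing a two-dimensional sphere function over [−5, 5]² drives the fitness close to zero within the default budget.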

3.3. DE Algorithm

More than two decades ago, Storn and Price [38] presented Differential Evolution (DE), a novel optimization method designed to handle nondifferentiable, nonlinear, and multimodal objective functions. To meet this requirement, DE was designed as a stochastic parallel direct search method that borrows concepts from the broad class of evolutionary algorithms while requiring only a few easily chosen control parameters. Early experimental results show that DE outperforms other well-known evolutionary algorithms in terms of convergence [38,39].
The combination of randomly chosen vectors produces new individuals (vectors) in each population. In our context, this operation is known as mutation. The resulting vectors are then mixed with another predetermined vector—the target vector—in a process known as recombination. This operation produces the trial vector. If and only if the trial vector reduces the value of the objective function f, it is accepted for the next generation. This operation is known as selection. The following is a high-level description of the aforementioned operators (for one generation). Thus, the algorithm used in this work is described in Algorithm 2 below.
Algorithm 2. The DE Adoption
  • Initialization operation: generate the initial individuals x_i^0, i = 1, 2, …, NP, in S^0. Determine the mutation probability F, the crossover probability CR, and the maximal number of generations G_m. Set the current generation G = 0;
  • Initialize the kinetic boundaries;
  • For each individual x_i^G, i = 1, 2, …, NP, perform steps 3–5 to produce the population for the next generation G + 1;
  • Mutation operation: a perturbed individual x̃_i^{G+1} is generated as follows: x̃_i^{G+1} = x_{r1}^G + F·(x_{r2}^G − x_{r3}^G);
  • Crossover operation: the perturbed individual x̃_i^{G+1} = [x̃_{i1}^{G+1}, x̃_{i2}^{G+1}, …, x̃_{in}^{G+1}] and the current individual x_i^G = [x_{i1}^G, x_{i2}^G, …, x_{in}^G] exchange components according to a binomial distribution to generate the offspring x_i^{G+1};
  • Evaluation operation: the offspring x_i^{G+1} competes one-to-one with its parent x_i^G;
  • G = G + 1;
  • Repeat steps 2–7 as long as the number of generations is smaller than the allowable maximum G_m and the best individual has not been obtained.
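Algorithm 2 corresponds to the classic DE/rand/1/bin scheme, which can be sketched as follows. The parameter defaults are illustrative, and the boundary clipping of trial vectors is our own addition to keep candidates inside the kinetic boundaries.

```python
import random

def de_minimize(f, bounds, NP=20, F=0.8, CR=0.9, Gm=100, seed=0):
    """Minimal DE/rand/1/bin loop following Algorithm 2.
    bounds: list of (low, high) pairs for each kinetic parameter."""
    rng = random.Random(seed)
    n = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    fit = [f(x) for x in pop]
    for _ in range(Gm):
        for i in range(NP):
            # mutation: x~ = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 != i
            r1, r2, r3 = rng.sample([k for k in range(NP) if k != i], 3)
            mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                      for d in range(n)]
            # binomial crossover; j_rand guarantees one gene from the mutant
            jrand = rng.randrange(n)
            trial = [mutant[d] if (rng.random() < CR or d == jrand)
                     else pop[i][d] for d in range(n)]
            trial = [min(max(v, lo), hi)
                     for v, (lo, hi) in zip(trial, bounds)]
            # selection: one-to-one competition with the parent
            ft = f(trial)
            if ft < fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(NP), key=lambda i: fit[i])
    return pop[best], fit[best]
```

On smooth low-dimensional test problems such as the sphere function, this loop converges rapidly, which matches the convergence behavior reported for DE in [38,39].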

3.4. PSO Algorithm

Kennedy, a social psychologist, and Eberhart, an electrical engineer, developed the idea of particle swarms to create computational intelligence by exploiting already existing natural interacting systems [40,41].
The PSO method was developed in the middle of the 1990s while attempting to mimic the elegant, well-choreographed motion of a flock of birds as part of a social cognition study looking into the idea of collective intelligence in biological populations. Soon after its creation, it was recognized as an evolutionary technique [42]. This technique has received extensive study and in-depth reflection due to its incredibly effective problem-solving capabilities in a variety of technical and scientific applications.
The earliest simulations [42] modelled flocks of birds foraging for grain and were driven by social behavior. This quickly became a potent optimization technique, called Particle Swarm Optimization [43,44] (PSO).
The collection of particles in the search space in the PSO algorithm aims to optimize a fitness function, similar to the movement of flocks of birds in the natural environment in search of food. The particles are randomly placed in the search space and their quality or fitness at that position is evaluated. Then, after a predetermined number of iterations, each particle moves to a new location that is a better fit than the previous one. With some random perturbations, this movement is based on the history of the particle’s best and current locations with those of the best positions attained by other particles in the swarm. Thus, with a fixed number of particles working together, the swarm achieves the most optimal solution to the fitness function in the problem space in subsequent iterations [44,45]. The fitness or objective function in the PSO algorithm is a performance evaluation criterion that is dependent on the algorithm’s application area. A performance criterion is typically defined by a mathematical formulation that quantifies system performance via a performance index. Thus, the algorithm used in this work is described in Algorithm 3 below.
Algorithm 3. The PSO Adoption
  • Initialize particles of PSO;
  • Initialize the parameters;
  • Initialize the velocities and positions of the swarm;
  • For each particle;
  • Iteration i (bird step): i++;
  • Initialize the kinetics and their boundaries;
  • Calculate the fitness using Equation (1);
  • Find the particle with the best fitness in the neighborhood;
  • Calculate the velocity of each particle using: v_i(t + 1) = ω·v_i(t) + c_1 r_1 (p_i(t) − x_i(t)) + c_2 r_2 (G_i(t) − x_i(t));
  • Update the position of each particle using: x_i(t + 1) = x_i(t) + v_i(t + 1);
  • If the fitness value is better than the best fitness p_i(t), set the current value as the new G_i(t);
  • Otherwise, modify steps 1 and 2 and repeat steps 3–11;
  • Print the best fitness G_i(t) of each particle;
  • End for;
  • End.
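The steps of Algorithm 3 can be condensed into a short global-best PSO sketch. The constant inertia weight and the clipping of positions to the kinetic boundaries are illustrative choices, not necessarily those used in the paper.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100,
                 w=0.9, c1=1.2, c2=1.2, seed=0):
    """Minimal global-best PSO following Algorithm 3, using
    v = w*v + c1*r1*(p - x) + c2*r2*(g - x) and x = x + v."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds]
         for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [list(x) for x in X]                 # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = list(P[g]), pbest[g]          # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
                lo, hi = bounds[d]
                X[i][d] = min(max(X[i][d], lo), hi)
            fx = f(X[i])
            if fx < pbest[i]:                # update personal best
                P[i], pbest[i] = list(X[i]), fx
                if fx < gbest:               # and the global best
                    G, gbest = list(X[i]), fx
    return G, gbest
```

Because gbest only ever decreases, the returned fitness is at most the best fitness found in the random initial swarm.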

3.5. Se-PSO Algorithm

The Se-PSO algorithm is derived from a combination of segmentation and the Particle Swarm Optimization (PSO) algorithm, wherein segmentation is utilized to address the local and global optimal point problems of PSO. Depending on the dimension, each parameter can be divided into two or more segments. The concept of parameter segmentation is theoretically illustrated in Figure 2. Assuming that we have three kinetic parameters that are initialized with search space boundaries, parameters 1 and 2 are divided into two segments, whereas parameter 3 forms a single segment. Then, on the basis of the objective function, a group of particles moving in the search space is initiated. Each search iteration displays the local optimum position in each segment parameter. This scenario is shown in Figure 2 [28].
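The segmentation idea itself is simple to express in code: each kinetic parameter’s search interval is split into sub-intervals, and the swarm search then treats each sub-interval as its own boundary. The helper below is our own minimal sketch; the function name and the equal-width splitting are assumptions.

```python
def segment_bounds(lo, hi, n_segments):
    """Split a kinetic parameter's search interval [lo, hi] into
    n_segments equal-width sub-intervals, as in the Se-PSO
    segmentation concept illustrated in Figure 2."""
    step = (hi - lo) / n_segments
    return [(lo + k * step, lo + (k + 1) * step)
            for k in range(n_segments)]
```

For instance, a parameter bounded on [0, 1] with two segments yields the sub-intervals (0, 0.5) and (0.5, 1), each of which is searched for its own local optimum before the best segment is selected.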
The following parameters are set in this optimization identification case: the bird steps for PSO = 30, c_1 = 1.2, c_2 = 1.2, and ω = 0.9. According to the experimental results, it is better to initially set the inertia weight to a higher value and gradually reduce it to obtain refined solutions. To improve the algorithm, a new inertia weight is proposed that is set by the damping process as a linearly decreasing time-varying function.

3.6. ESe-PSO Algorithm

The primary parameters of the PSO algorithm (c_1, c_2, and ω) help the algorithm obtain the best possible result. When dealing with high-dimensional difficult situations, PSO suffers from early convergence to local optima [26]. As a result, the three parameters indicated above must be carefully chosen for a robust PSO [46]. The first is the cognitive learning factor that pulls particles toward their personal optimum position, while the second is the social learning component that pushes particles toward the global optimum position. The inertia weight retains the prior velocity of the particles, preventing them from converging prematurely to local minima. In the basic PSO utilized in Se-PSO, the inertia weight (ω) was set to 0.9 [28], indicating the necessity to adjust it in order to produce a better estimation in this work.
This approach was created using the fundamental PSO algorithm used in PSO segmentation. This advancement is related to the inertia weight ( ω ), which determines the influence that the last iteration speed has on the current speed and allows the particles to explore larger regions in the beginning and exploit neighboring areas at lower speeds in later stages. The development is carried out by initializing the inertia weight and damping parameter, which is determined throughout the iteration process to improve control convergence and enable the particle to search for a global solution.
According to the PSO algorithm, a particle with a higher fitness value is thought to be nearer to the global optimum than one with a lower fitness value. Stronger local exploration capabilities may be necessary for the particle with a higher fitness to seek through its immediate surroundings for the best solution. On the other hand, a particle with a lower fitness requires stronger global exploration capabilities to move fast to the particles with higher fitness. This improves the likelihood that the particle discovers the global optimum and speeds up the PSO’s convergence. Lower inertia weights can be employed for particles with superior fitness, which helps to accelerate the PSO’s convergence. Higher inertia weights are used for particles that are less fit and far from the optimal particle position, which can improve their capacity for global exploration and help these particles escape from local maxima [24].
The process of damping is subsequently executed before the end of the loop iteration process. This process supports the search by recalculating the inertia weight after each iteration until the iteration is finished. This method supports the particles through control of convergence, thereby balancing the local and global search by increasing the exploration and exploitation of the particles to locate the optimal values [46,47]. The inertia weight was initialized on the interval ω ∈ [1, 0.01], and the damping value was set to ω_damp = 0.99. Subsequently, the damping process damp is calculated in a decreasing manner in the iteration loop, using Equations (3) and (4), until the iteration is finished:
damp = ω · ω_damp   (3)
ω = (ω_max − ω_min)/(iter_max − iter_min) + ω_min   (4)
where damp is the damping process, ω_damp is the damping value, ω_max = 1 is the maximum value of ω, ω_min = 0.01 is the minimum value of ω, and iter_max and iter_min are the maximum and minimum iterations, respectively.
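One plausible reading of this damping schedule is sketched below. The exact interplay of Equations (3) and (4) is not fully specified in the text, so this is an assumption: ω starts at ω_max, is multiplied by the damping value ω_damp each iteration, and is floored at ω_min.

```python
def damped_inertia(omega_max=1.0, omega_min=0.01, iters=100,
                   omega_damp=0.99):
    """Sketch of a decreasing inertia-weight schedule consistent with
    Equations (3)-(4): omega is damped multiplicatively by omega_damp
    at every iteration and never drops below omega_min. Returns the
    per-iteration inertia weights."""
    omega = omega_max
    schedule = []
    for _ in range(iters):
        schedule.append(omega)
        omega = max(omega * omega_damp, omega_min)
    return schedule
```

Such a schedule gives large inertia (wide exploration) early on and small inertia (local exploitation) near the end of the run, which is the stated goal of the damping process.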
In this regard, this process explores wider areas in the beginning and exploits nearby areas in the later stages at a reduced speed. The change is added to Se-PSO, which thus combines segmentation with the inertia weight (ω) modification. This modification helps the basic Se-PSO reach the best optimum solution more effectively and increases exploration and exploitation. After the damping process is added, the velocity update of Se-PSO [28] is modified according to the damping changes as described in Equation (5):
v_i(t + 1, j) = damp·v_i(t, j) + c_1 r_1 (p_i(t, j) − x_i(t, j)) + c_2 r_2 (G_i(t, j) − x_i(t, j))   (5)
where x_i(t, j) is the position of particle i, v_i(t, j) is the velocity of particle i, t is the iteration index, j is the segment index, damp is the damping process, c_1 and c_2 are the acceleration coefficients, and r_1 and r_2 are random numbers between 0 and 1. The p_i term is the local optimum position of particle i, and G_i is the global optimum position.
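The velocity update of Equation (5) differs from standard PSO only in that the inertia weight is replaced by the damping term. A one-particle sketch is given below; the function name and list-based vectors are illustrative.

```python
import random

def esepso_velocity(v, x, p, g, damp, c1=1.2, c2=1.2, rng=random):
    """Velocity update of Equation (5) for one particle in one segment:
    v <- damp*v + c1*r1*(p - x) + c2*r2*(g - x),
    where p is the particle's local optimum position and g is the
    global optimum position of its segment."""
    r1, r2 = rng.random(), rng.random()
    return [damp * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
            for vi, xi, pi, gi in zip(v, x, p, g)]
```

When a particle already sits at both its local and global optimum positions, the attraction terms vanish and the new velocity is simply the damped previous velocity.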
After the ω is modified, the ESe-PSO algorithm proceeds according to the process illustrated by Algorithm 4 below:
Algorithm 4. The ESe-PSO Adoption
  • Set Se-PSO parameters, the problem dimension, and the kinetic boundaries;
  • Initialize v(t, j) and x(t, j);
  • Set the segment_number and the segment_length [2,28] for each kinetic parameter;
  • For k = 1 to the number of segments;
  • Evaluate f;
  • Select the best G_i(t, j);
  • Set iteration i = 1;
  • Update ω, damp, v(t, j), and x(t, j);
  • Evaluate f;
  • If f_new < f;
  • Update G_i(t, j);
  • x_{i,j}(t + 1) = G_i(t, j);
  • If f_new > f, return to step 1 until the iteration i = iteration finishes or a good solution is discovered. If f_new < f, then print G_i(t, j);
  • Set x_{i,j}(t + 1) = G_i(t, j);
  • Set the optimal segment of each particle;
  • New optimal_segment(kp) = current_position(kp);
  • Apply the PSO [11] algorithm to the new optimal values.
The ESe-PSO adoption can be described based on the parameters and steps in the above pseudocode. First, all ESe-PSO parameters, the problem dimensions, and the kinetic boundaries are set. Then, according to the kinetic sensitivity, the number of segments and the length of each segment are determined for each kinetic parameter. Starting with segment 1 and continuing until the last segment, the objective function is evaluated and the global optimum position is selected for each segment. Thereafter, the iteration counter is set and the damping process damp is included, after which the inertia weight ω, the velocity v(t, j), and the position x(t, j) are updated. Once the update is complete, the fitness function f is evaluated. If the new fitness f_new is better than the current fitness f, the global optimum position G(t, j) of the particle segment is updated. If, on the other hand, f_new is worse than the present fitness f, the algorithm returns to the beginning and makes the necessary modifications; this repeats until a sufficiently good solution is found or the iteration budget is exhausted. In either case, the global best of each particle segment is reported and used to set the current position of that segment, G(t, j) = current_position(kp). The PSO algorithm then searches around these optimal segments until it finds the best solution.

4. Results

4.1. Algorithms Estimation Result

A comparison of inertia weights is shown in Table 3. Five (5) different inertia weight schemes were each tested 30 times under a minimization update scheme, with the lowest fitness value taken as the best solution, as described in [48,49,50]. The purpose was to enhance the Se-PSO algorithm's exploration and exploitation through the inertia weight and thereby increase accuracy. These schemes were investigated because the original Se-PSO algorithm uses a constant inertia weight, which makes exploration and exploitation difficult for the particles on a highly nonlinear problem; applying different inertia weight ω strategies can therefore enhance the algorithm. Table 3 records the five best inertia weights ω in terms of the accuracy and efficiency of the ESe-PSO algorithm when estimating small-scale kinetic parameters.
Figure 3 depicts the five inertia weight (ω) scenarios used to determine the best and worst solutions before starting the estimation.
Estimating small-scale kinetic parameters requires a sensitivity analysis to assess each kinetic's efficacy. The sensitivity analysis shows how strongly the model outputs are affected by changes to each kinetic parameter, thereby identifying the most sensitive kinetics. The sensitive kinetic parameters from [4,10] were used for estimation, and their sensitivity percentages are shown in Table 4. Optimizing all 172 kinetics simultaneously is impractical, so the sensitivity analysis was used to identify the most sensitive parameters and reduce the estimation cost. All kinetics were perturbed by up to 200%. During the simulation, 7 of the 172 kinetic parameters were identified as the most influential and designated for estimation, while the remaining kinetics were left at their original values because their efficacy stayed below 20% even at a 200% perturbation. During the ESe-PSO execution, one segment was added to each kinetic parameter to raise the likelihood of finding an accurate solution, allowing the algorithm to search a broad space. The kinetic limits, with their upper and lower values, were initialized in small increments. The optimal segment was determined on the basis of the objective function, as shown in Table 5. After updating the inertia weight via the damping process, the position, velocity, and objective function are updated as in the PSO algorithm. The best segment is then used as the new boundary, and PSO searches around it. The damping process is intended to boost the particles' exploration and exploitation so that an optimal solution can be found; its factor is 0.99, and the inertia weight ranges over [1, 0.01].
As a result, the damping factor decreases the inertia weight in each iteration until the final iteration is reached.
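The damping schedule just described (factor 0.99, inertia weight bounded within [1, 0.01]) can be written as a short generator; the function name is ours.

```python
def inertia_schedule(iters, w_max=1.0, w_min=0.01, damp=0.99):
    """Yield the inertia weight for each iteration: w starts at w_max,
    is multiplied by the damping factor, and never drops below w_min."""
    w = w_max
    for _ in range(iters):
        yield w
        w = max(w * damp, w_min)

# The weight decreases monotonically from 1.0 and is floored at 0.01.
ws = list(inertia_schedule(1000))
```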
Notably, the ESe-PSO approach used in this work precisely minimizes the distance of the model responses, and this minimization covers 15 metabolites. This is due to the kinetic parameter segmentation and the damping procedure added to the Se-PSO algorithm. Table 4 and Table 5 illustrate the segmentation of the kinetic parameters and the estimated kinetic parameters.
The segmentation was based on the effect of each kinetic parameter in the sensitivity analysis results. Only two kinetic parameters influence 43 or more metabolites and enzymes, accounting for more than 80% of the total; because of these highly significant effects on the model output, each of these was assigned two segments, while the others were treated as a single segment. Each segment count was then increased by one to maximize the likelihood of discovering an optimal solution.
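The segment-assignment rule above can be sketched as follows; the 80% threshold and the extra segment per kinetic follow the text, while the function name and the example sensitivity values are purely illustrative.

```python
def assign_segments(sensitivity_pct, threshold=80.0):
    """Assign two segments to kinetics whose sensitivity meets the
    threshold and one segment otherwise, then add one extra segment to
    each to widen the search space."""
    base = {k: (2 if s >= threshold else 1) for k, s in sensitivity_pct.items()}
    return {k: n + 1 for k, n in base.items()}

# Hypothetical sensitivity percentages for three kinetics:
segs = assign_segments({"vmax_pyk": 85.0, "n_pk": 40.0, "icdh": 92.0})
# segs == {"vmax_pyk": 3, "n_pk": 2, "icdh": 3}
```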
The kinetic parameter estimation serves to minimize the distance of the model responses of [9] from the experimental data of [20]. From Table 6, it is evident that the simulated concentrations are remarkably close to the real experimental data for the model under study, measured in mM (10^−3 mol L^−1).
As presented in Table 6, using the experimental data from [20], the estimation result of the ESe-PSO algorithm improved on the results of the Se-PSO and PSO algorithms. The estimated results are highlighted. In the ESe-PSO results, 15 metabolites (GLc, G6P, F6P, DHAP/GAP, FDP, PEP, PYR, 6PG, Ru5P, R5P, E4P, AcCoA, OAA, 2KG, and Ace) were well-optimized, with the error minimized to about 16.81%.
Only ICIT was not properly minimized. In the Se-PSO results, seven (7) metabolites (DHAP/GAP, FDP, PEP, 6PG, Ru5P, 2KG, and Ace) were well-optimized and minimized, with an error of about 26.29%. In the PSO results, the same seven (7) metabolites (DHAP/GAP, FDP, PEP, 6PG, Ru5P, 2KG, and Ace) were well-optimized and minimized, with an error of approximately 37.09%. In the DE results, six (6) model outputs (DHAP/GAP, FDP, 6PG, Ru5P, and 2KG) were well-optimized, with the distance minimized to about 33.9%. In the GA results, six (6) metabolites (GLc, DHAP/GAP, FDP, PEP, 6PG, Ru5P, and 2KG) were well-optimized, with the distance minimized to 35.34%. These results compare favorably with the model under study, which has a distance of 57.16% [9].
This indicates that all these algorithms can provide a decent result for kinetic parameter estimation, but the enhanced algorithm produces a more accurate estimate than the others. Some metabolites were not fully minimized because of the involvement of other pathways, the model's complexity, and the lumping together of various metabolites [9].
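The paper does not state its exact distance formula, but a common choice consistent with percentage errors such as 16.81% is the mean absolute relative error between simulated and experimental concentrations; a sketch under that assumption, with placeholder numbers rather than values from the paper:

```python
def mean_relative_error(simulated, experimental):
    """Mean absolute relative error (%) between simulated and experimental
    metabolite concentrations; lower means the model sits closer to the data."""
    assert len(simulated) == len(experimental) and experimental
    errs = [abs(s - e) / abs(e) for s, e in zip(simulated, experimental)]
    return 100.0 * sum(errs) / len(errs)

# Hypothetical concentrations (mM) for three metabolites:
sim = [1.10, 0.45, 2.80]
exp = [1.00, 0.50, 3.00]
err = mean_relative_error(sim, exp)
# mean of 10%, 10%, and 6.67%, i.e. about 8.89%
```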
A comparative experiment is shown in Table 7. The ESe-PSO achieved the best objective function means (7.04 × 10^−5 and 7.41 × 10^−5) compared with the Se-PSO (0.000603 and 0.00379), PSO (0.003893 and 0.00549), GA (0.11476 and 0.269007), and DE (0.049185 and 0.280478), respectively, for the two experimental datasets, including the Hoque dataset [20]. Overall, the ESe-PSO can be adopted to effectively estimate small-scale kinetic parameters and obtain accurate and acceptable results.
Notably, the ESe-PSO was superior to the original Se-PSO, PSO, and other state-of-the-art approaches in terms of distance minimization, and its smallest objective function values produced appropriate fits to two sets of experimental data. This is because the ESe-PSO algorithm adds a damping process that increases exploration and exploitation of the search space, supporting the particles in finding a global optimum solution. This modification facilitates accurate determination of the optimal solution; the inertia weight ω was varied between its maximum and minimum through the damping process.
As stated in [32,51], the mean, STD (standard deviation), distance minimization, and F-test can be used to assess result accuracy. The STD is a well-known measure of how widely the values are spread. The distance minimization shows how much the algorithm moves the estimate towards the real experimental data. An F-test is any statistical test in which the test statistic follows an F-distribution under the null hypothesis; it is most often used to compare statistical models fitted to a dataset in order to identify the model that best fits the population from which the data was sampled. Accordingly, the algorithms were implemented on the experimental dataset to minimize the distance between the simulation results and the results in [20].
The hypothesis of this study is based on the results from the six estimations as follows:
H_0: STD_E^2 ≥ STD_D^2
H_1: STD_E^2 < STD_D^2
where S T D E 2 is the standard deviation of the optimized result E , S T D D 2 is the standard deviation of the model under study D , and n E , n D are the number of variables for the optimized and model result, respectively.
To ensure that the final simulated results were statistically consistent with the experimental results in Table 5, the F-test [51] was applied to the ESe-PSO results against the model under study and the experimental data. Using the dataset of Hoque et al., 2005 [20], all metabolites achieved an STD close to the mean and to 0, demonstrating that the ESe-PSO results are consistent with Equation (9). The hypothesis for the results in Table 5 was evaluated using Equations (8) and (9).
F_test = STD_E^2 / STD_D^2 = 0.7381 / 2.0941 = 0.3525
F_{1−0.05} = 1 / F_{0.05, n_E, n_D} = 1 / F_{0.05, 15, 15} = 1 / 2.4034 = 0.4161
The hypothesis test in Table 5 concerns minimizing the distance of the model under study. Since F_test = 0.3525 is below the lower critical value 0.4161, H_0 is rejected and H_1 is accepted as a reasonable result. The model simulation after estimation is presented in Table 5.
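The decision rule above is a simple left-tailed variance-ratio test and can be reproduced in a few lines; the lower critical value 1/2.4034 = 0.4161 is the F-table lookup quoted in the text, and the helper name is ours.

```python
def f_test_reject_h0(var_e, var_d, f_crit_lower):
    """Left-tailed F-test: reject H0 (STD_E^2 >= STD_D^2) when the
    variance ratio falls below the lower critical value."""
    f_stat = var_e / var_d
    return f_stat, f_stat < f_crit_lower

# Values from the text: STD_E^2 = 0.7381, STD_D^2 = 2.0941, and the lower
# 5% critical value 1 / F(0.05, 15, 15) = 1 / 2.4034.
f_stat, reject = f_test_reject_h0(0.7381, 2.0941, 1 / 2.4034)
# f_stat ≈ 0.3525 < 0.4161, so H0 is rejected in favour of H1
```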
After the kinetic parameters were estimated and the model outputs under study were minimized, an observable increase or decrease in the simulated pathway outputs was noted relative to the model under investigation, as shown in Figure 4. In the glycolysis pathway, the simulated responses of GLCex, FDP, GAP/DHAP, PEP, and PYR decreased, while G6P and F6P increased owing to the pts system and the slight consumption of GLCex. In the pentose phosphate pathway, the simulated outputs of 6PG, Ru5P, R5P, Xu5P, and E4P increased, while S7P decreased owing to the increase in G6P and the involvement of F6P and GAP/DHAP.
In the TCA cycle, the simulated outputs of OAA, FUM, and GOX increased owing to the involvement of the gluconeogenesis/anaplerotic pathways and the effect of the mez, pck, and ppc enzymes, as well as the increases in PEP and PYR, whereas the metabolites ICIT, 2KG, SUC, and MAL decreased. Moreover, acetate formation had a certain impact on the model response, increasing ACP and decreasing ACE and AcCoA; this was due to AcCoA's involvement in the TCA cycle and the glyoxylate pathway. In the glyoxylate pathway, the metabolites ICIT, SUC, and MAL decreased while GOX increased, which is attributed to the involvement of the TCA cycle, the anaplerotic pathway, and AcCoA. In the anaplerotic pathway, the metabolites PEP, PYR, and MAL decreased while OAA increased owing to the involvement of the TCA cycle and the anaplerotic pathway.
On the contrary, the other metabolites moved slightly towards the experimental data with small errors. These changes occurred due to other metabolites’ participation, model complexity, glucose depletion, and the lumping together of various metabolites to simplify the model.

4.2. ESe-PSO Algorithm with Different Optimization Problems

The performance of the enhanced segment particle swarm optimization (ESe-PSO) algorithm was compared with that of the original segment particle swarm optimization (Se-PSO) algorithm. The test problems were six benchmark functions (Sphere, Rosenbrock, Rastrigin, Griewank, Shubert, and Booth) with asymmetric initial range settings (upper and lower boundary values). The experimental results indicate that the ESe-PSO method outperformed the original Se-PSO algorithm in convergence speed under all test conditions, establishing ESe-PSO as a potentially useful optimization algorithm in a variety of other fields.
Nonlinear functions are used as a comparison here. The first function is the Sphere function, which is represented by equation ( f ( x ) ) , as follows:
f(x) = ∑_{i=1}^{n} X_i^2
where X = ( X 1 ,   X 2 , ,   X n ) is an n-dimensional real valued vector. The second function is the Rosenbrock function as described by equation ( f 1 ( x ) ) :
f_1(x) = ∑_{i=1}^{n−1} (100(X_{i+1} − X_i^2)^2 + (X_i − 1)^2)
The third function is the generalized Rastrigrin function as described by equation ( f 2 ( x ) ) :
f_2(x) = 10d + ∑_{i=1}^{d} [x_i^2 − 10 cos(2π x_i)]
The fourth function is the generalized Griewank function as described by equation ( f 3 ( x ) ) :
f_3(x) = (1/4000) ∑_{i=1}^{n} X_i^2 − ∏_{i=1}^{n} cos(X_i/√i) + 1
The fifth function is the generalized Shubert function as described by equation ( f 4 ( x ) ) :
f_4(x) = (∑_{i=1}^{5} i cos((i + 1)x_1 + i)) (∑_{i=1}^{5} i cos((i + 1)x_2 + i))
The sixth function is the generalized Booth function as described by equation ( f 5 ( x ) ) :
f_5(x) = (x_1 + 2x_2 − 7)^2 + (2x_1 + x_2 − 5)^2
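The six benchmark functions can be written directly from their definitions above. These are the standard textbook forms; the paper's dimensions, bounds, and swarm settings (Table 8) are not reproduced here.

```python
import math

def sphere(x):
    return sum(v * v for v in x)

def rosenbrock(x):
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2 * math.pi * v)
                               for v in x)

def griewank(x):
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1.0

def shubert(x1, x2):
    a = sum(i * math.cos((i + 1) * x1 + i) for i in range(1, 6))
    b = sum(i * math.cos((i + 1) * x2 + i) for i in range(1, 6))
    return a * b

def booth(x1, x2):
    return (x1 + 2 * x2 - 7) ** 2 + (2 * x1 + x2 - 5) ** 2

# Known minima: sphere, rastrigin, and griewank are 0 at the origin,
# rosenbrock is 0 at (1, ..., 1), and booth is 0 at (1, 3).
```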
As shown in Table 8, the maximum number of iterations for each function in both algorithms was set to 50, 100, and 150. The number of birds (particles) was set to 20, 40, 60, and 80. Each algorithm was run 10 times to determine the mean global optimum position.
Furthermore, as demonstrated in Table 9 and Table 10, the ESe-PSO convergence speed towards the optimal values was faster than that of Se-PSO. It is worth noting, however, that ESe-PSO converged swiftly on all functions but slowed while scanning a large space for the global optimum location before settling on the optimum.
The ESe-PSO took 0.194 s to attain the optimum global position, whereas the Se-PSO took 0.213 s. The self-time column in Table 9 and Table 10 shows that ESe-PSO took 0.004 s to determine the global optimum position, while Se-PSO took 0.013 s. Furthermore, as indicated in the calls column (4920 and 650), ESe-PSO searched the vast space nearly twice as fast as Se-PSO.
When compared to Se-PSO, the global optimum position of ESe-PSO for the Sphere function produced a far superior outcome in a short period of time (Table 11).
Furthermore, as shown in Table 12 and Table 13, ESe-PSO's convergence speed towards the optimal values was faster than Se-PSO's, although it slowed while scanning a broad space for the global optimum location before settling on the optimum. The ESe-PSO took 0.240 s to obtain the optimum global position, whereas the Se-PSO took 0.363 s. The self-time column in Table 12 and Table 13 shows that ESe-PSO took 0.0009 s to determine the global optimum position, while Se-PSO took 0.001 s. Furthermore, as indicated in the calls column (4920), ESe-PSO searched the vast space nearly twice as fast as Se-PSO.
In Table 14, the global optimum position of ESe-PSO exhibited a considerably improved outcome in a short time as compared to Se-PSO in the Rastrigin function.
Furthermore, as shown in Table 15 and Table 16, ESe-PSO's convergence speed towards the optimal values was faster than Se-PSO's, although it slowed while scanning a broad space for the global optimum location before settling on the optimum. The ESe-PSO took 0.135 s to obtain the optimum global position, whereas the Se-PSO took 0.158 s. The self-time column in Table 15 and Table 16 shows that ESe-PSO took 0.001 s to determine the global optimum position, while Se-PSO took 0.004 s. Furthermore, as indicated in the calls column (4920), ESe-PSO searched the vast space nearly twice as fast as Se-PSO.
In Table 17, the global optimum position of ESe-PSO exhibited a considerably improved outcome in a short time as compared to Se-PSO in the Rosenbrock function.
Furthermore, as shown in Table 18 and Table 19, ESe-PSO's convergence speed towards the optimal values was faster than Se-PSO's, although it slowed while scanning a broad space for the global optimum location before settling on the optimum. The ESe-PSO took 0.363 s to obtain the optimum global position, while the Se-PSO took 0.048 s. The self-time column in Table 18 and Table 19 shows that ESe-PSO took 0.001 s to determine the global optimum position, while Se-PSO took 0.013 s. Furthermore, as indicated in the calls column (4920), ESe-PSO searched the vast space nearly twice as fast as Se-PSO.
In Table 20, the global optimum position of ESe-PSO exhibited a considerably improved outcome in a short time as compared to Se-PSO in the Griewank function.
Furthermore, as shown in Table 21 and Table 22, ESe-PSO's convergence speed towards the optimal values was faster than Se-PSO's, although it slowed while scanning a broad space for the global optimum location before settling on the optimum. The ESe-PSO took 0.25 s to obtain the optimum global position, while the Se-PSO took 0.032 s. The self-time column in Table 21 and Table 22 shows that ESe-PSO took 0.002 s to determine the global optimum position, while Se-PSO took 0.003 s. Furthermore, as indicated in the calls column (4920), ESe-PSO searched the vast space nearly twice as fast as Se-PSO.
In Table 23, the global optimum position of ESe-PSO exhibited a considerably improved outcome in a short time as compared to Se-PSO in the Shubert function.
Furthermore, as shown in Table 24 and Table 25, ESe-PSO's convergence speed towards the optimal values was faster than Se-PSO's, although it slowed while scanning a broad space for the global optimum location before settling on the optimum. The ESe-PSO took 0.363 s to obtain the optimum global position, while the Se-PSO took 0.048 s. The self-time column in Table 24 and Table 25 shows that ESe-PSO took 0.001 s to determine the global optimum position, while Se-PSO took 0.004 s. Furthermore, as indicated in the calls column (4920), ESe-PSO searched the vast space nearly twice as fast as Se-PSO.
In Table 26, the global optimum position of ESe-PSO exhibited a considerably improved outcome in a short time as compared to Se-PSO in the Booth function.

5. Conclusions

In this study, ESe-PSO was developed and assessed against a number of state-of-the-art algorithms. Particle segmentation was used to direct the motion of the particles toward the global optimal location, and the algorithm's inertia weight and damping procedure were modified to boost exploration and exploitation and thus discover the global optimum more quickly. We demonstrated the broad applicability of the adopted technique by successfully applying it to small-scale kinetic parameters. The E. coli model's small-scale kinetic parameters were estimated using the ESe-PSO, Se-PSO, PSO, DE, and GA algorithms. Small-scale models can benefit greatly from the ESe-PSO method because of its high estimation efficiency. The seven kinetic parameters (vmax_pyk, n_pk, icdh, k_icdh_f, k_icdh_napd, k_icdh_nadpm, and vmax_icl) were effectively estimated, and the F-test, the mean, and the STD confirmed that the results moved closer to the real experimental data.

Author Contributions

Conceptualization, M.A.K.A.; methodology, M.A.K.A.; software, M.A.K.A. and A.S.J.; validation, T.A.A.K. and A.S.J.; formal analysis, M.A.K.A.; investigation, H.S.M.A., Y.H.Z.A. and M.S.B.H.; resources, M.Y.; data curation, M.A.K.A.; writing—original draft preparation, M.A.K.A.; writing—review and editing, J.M.Z.; visualization, A.S.J.; supervision, T.A.A.K.; project administration, J.M.Z.; funding acquisition, T.A.A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors gratefully acknowledge the support from the Institute for Big Data Analytics and Artificial Intelligence (IBDAAI), Universiti Teknologi MARA (UiTM), and the Universiti Malaysia Pahang (UMP).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mason, J.C.; Covert, M.W. An energetic reformulation of kinetic rate laws enables scalable parameter estimation for biochemical networks. J. Theor. Biol. 2019, 461, 145–156. [Google Scholar] [CrossRef] [PubMed]
  2. Kunna, M.A.; Kadir, T.A.A.; Remli, M.A.; Ali, N.M.; Moorthy, K.; Muhammad, N. An enhanced segment particle swarm optimization algorithm for kinetic parameters estimation of the main metabolic model of Escherichia coli. Processes 2020, 8, 963. [Google Scholar] [CrossRef]
  3. Villaverde, A.F.; Henriques, D.; Smallbone, K.; Bongard, S.; Schmid, J.; Cicin-Sain, D.; Crombach, A.; Saez-Rodriguez, J.; Mauch, K.; Balsa-Canto, E.; et al. BioPreDyn-bench: A suite of benchmark problems for dynamic modelling in systems biology. BMC Syst. Biol. 2015, 9, 1–5. [Google Scholar] [CrossRef] [Green Version]
  4. Kunna, M.A.; Kadir, T.A.; Jaber, A.S. Sensitivity Analysis in Large-Scale of Metabolic Network of E. Coli. In Proceedings of the 2013 International Conference on Advanced Computer Science Applications and Technologies, Kuching, Malaysia, 23–24 December 2013; pp. 346–351. [Google Scholar]
  5. Chassagnole, C.; Noisommit-Rizzi, N.; Schmid, J.W.; Mauch, K.; Reuss, M. Dynamic modeling of the central carbon metabolism of Escherichia coli. Biotechnol. Bioeng. 2002, 79, 53–73. [Google Scholar] [CrossRef] [PubMed]
  6. Usuda, Y.; Nishio, Y.; Iwatani, S.; Van Dien, S.J.; Imaizumi, A.; Shimbo, K.; Kageyama, N.; Iwahata, D.; Miyano, H.; Matsui, K. Dynamic modeling of Escherichia coli metabolic and regulatory systems for amino-acid production. J. Biotechnol. 2010, 147, 17–30. [Google Scholar] [CrossRef] [PubMed]
  7. Matsuoka, Y.; Shimizu, K. Catabolite regulation analysis of Escherichia coli for acetate overflow mechanism and co-consumption of multiple sugars based on systems biology approach using computer simulation. J. Biotechnol. 2013, 168, 155–173. [Google Scholar] [CrossRef] [PubMed]
  8. Oliver, K.; Zaugg, J.B.; Heinemann, M. Bacterial adaptation through distributed sensing of metabolic fluxes. Mol. Syst. Biol. 2010, 6, 355. [Google Scholar]
  9. Kadir, T.A.; Mannan, A.A.; Kierzek, A.M.; McFadden, J.; Shimizu, K. Modeling and simulation of the main metabolism in Escherichia coli and its several single-gene knockout mutants with experimental verification. Microb. Cell Factories 2010, 9, 1–21. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Azrag, M.A.; Kadir, T.A.; Jaber, A.S. Segment particle swarm optimization adoption for large-scale kinetic parameter identification of Escherichia coli metabolic network model. IEEE Access 2018, 6, 78622–78639. [Google Scholar] [CrossRef]
  11. Kunna, M.A.; Kadir, T.A.; Jaber, A.S.; Odili, J.B. Large-scale kinetic parameter identification of metabolic network model of E. coli using PSO. Adv. Biosci. Biotechnol. 2015, 6, 120. [Google Scholar] [CrossRef] [Green Version]
  12. Ceric, S.; Kurtanjek, Z.; Ceric, S.; Kurtanjek, Z. Model identification, parameter estimation, and dynamic flux analysis of E. coli central metabolism. Chem. Biochem. Eng. Q. 2006, 20, 243–253. [Google Scholar]
  13. Nelder, J.A.; Mead, R. A simplex method for function minimization. Comput. J. 1965, 7, 308–313. [Google Scholar] [CrossRef]
  14. Won, W.; Park, C.; Lee, S.Y.; Lee, K.S.; Lee, J. Parameter estimation and dynamic control analysis of central carbon metabolism in Escherichia coli. Biotechnol. Bioprocess Eng. 2011, 16, 216–228. [Google Scholar] [CrossRef]
  15. Qin, H.; Ma, X.; Herawan, T.; Zain, J.M. MGR: An information theory based hierarchical divisive clustering algorithm for categorical data. Knowl.-Based Syst. 2014, 67, 401–411. [Google Scholar] [CrossRef] [Green Version]
  16. Tohsato, Y.; Ikuta, K.; Shionoya, A.; Mazaki, Y.; Ito, M. Parameter optimization and sensitivity analysis for large kinetic models using a real-coded genetic algorithm. Gene 2013, 518, 84–90. [Google Scholar] [CrossRef]
  17. di Maggio, J.; Paulo, C.; Estrada, V.; Perotti, N.; Ricci, J.C.D.; Diaz, M.S. Parameter estimation in kinetic models for large scale biotechnological systems with advanced mathematical programming techniques. Biochem. Eng. J. 2014, 83, 104–115. [Google Scholar] [CrossRef]
  18. Villaverde, A.F.; Fröhlich, F.; Weindl, D.; Hasenauer, J.; Banga, J.R. Benchmarking optimization methods for parameter estimation in large kinetic models. Bioinformatics 2019, 35, 830–838. [Google Scholar] [CrossRef] [Green Version]
  19. Sagar, A.; LeCover, R.; Shoemaker, C.; Varner, J. Dynamic Optimization with Particle Swarms (DOPS): A meta-heuristic for parameter estimation in biochemical models. BMC Syst. Biol. 2018, 12, 1–5. [Google Scholar] [CrossRef] [Green Version]
  20. Hoque, M.A.; Ushiyama, H.; Tomita, M.; Shimizu, K. Dynamic responses of the intracellular metabolite concentrations of the wild type and pykA mutant Escherichia coli against pulse addition of glucose or NH3 under those limiting continuous cultures. Biochem. Eng. J. 2005, 26, 38–49. [Google Scholar] [CrossRef]
  21. Azrag, M.A.K.; Kadir, T.A.A.; Ismail, M.A. A Review of Large-Scale Kinetic Parameters in Metabolic Network Model of Escherichia coli. Adv. Sci. Lett. 2018, 24, 7512–7518. [Google Scholar] [CrossRef] [Green Version]
  22. Egea, J.A.; Rodríguez-Fernández, M.; Banga, J.R.; Marti, R. Scatter search for chemical and bio-process optimization. J. Glob. Optim. 2007, 37, 481–503. [Google Scholar] [CrossRef] [Green Version]
  23. Baker, S.M.; Schallau, K.; Junker, B.H. Comparison of different algorithms for simultaneous estimation of multiple parameters in kinetic metabolic models. J. Integr. Bioinform. 2010, 7, 254–262. [Google Scholar] [CrossRef]
  24. Chen, Z.; Liu, H.; Tian, Y.; Wang, R.; Xiong, P.; Wu, G. A particle swarm optimization algorithm based on time-space weight for helicopter maritime search and rescue decision-making. IEEE Access 2020, 8, 81526–81541. [Google Scholar] [CrossRef]
  25. Ghorpade, S.N.; Zennaro, M.; Chaudhari, B.S.; Saeed, R.A.; Alhumyani, H.; Abdel-Khalek, S. Enhanced differential crossover and quantum particle swarm optimization for IoT applications. IEEE Access 2021, 9, 93831–93846. [Google Scholar] [CrossRef]
  26. Jamian, J.J.; Abdullah, M.N.; Mokhlis, H.; Mustafa, M.W.; Bakar, A.H.A. Global particle swarm optimization for high dimension numerical functions analysis. J. Appl. Math. 2014, 2014, 329193. [Google Scholar] [CrossRef]
  27. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In MHS’95, Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; IEEE: Piscataway, NJ, USA, 1995; pp. 39–43. [Google Scholar]
  28. Jaber, A.S.; Ahmad, A.Z.; Abdalla, A.N. A new parameters identification of single area power system based LFC using Segmentation Particle Swarm Optimization (SePSO) algorithm. In Proceedings of the 2013 IEEE PES Asia-Pacific Power and Energy Engineering Conference (APPEEC), Hong Kong, China, 8–11 December 2013; pp. 1–6. [Google Scholar]
  29. Azrag, M.A.; Kadir, T.A. Empirical Study of Segment Particle Swarm Optimization and Particle Swarm Optimization Algorithms. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 480–485. [Google Scholar] [CrossRef] [Green Version]
  30. Jahan, N.; Maeda, K.; Matsuoka, Y.; Sugimoto, Y.; Kurata, H. Development of an accurate kinetic model for the central carbon metabolism of Escherichia coli. Microb. Cell Factories 2016, 15, 1–19. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Remli, M.A.; Deris, S.; Mohamad, M.S.; Omatu, S.; Corchado, J.M. An enhanced scatter search with combined opposition-based learning for parameter estimation in large-scale kinetic models of biochemical systems. Eng. Appl. Artif. Intell. 2017, 62, 164–180. [Google Scholar] [CrossRef] [Green Version]
  32. Rodriguez-Fernandez, M.; A Egea, J.; Banga, J.R. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems. BMC Bioinform. 2006, 7, 1–8. [Google Scholar] [CrossRef] [Green Version]
  33. Villaverde, A.F.; Egea, J.A.; Banga, J.R. A cooperative strategy for parameter estimation in large scale systems biology models. BMC Syst. Biol. 2012, 6, 1–7. [Google Scholar] [CrossRef] [Green Version]
  34. Azrag, M.A.; Kadir, T.A.; Kabir, M.N.; Jaber, A.S. Large-Scale Kinetic Parameters Estimation of Metabolic Model of Escherichia coli. Int. J. Mach. Learn. Comput. 2019, 9, 160–167. [Google Scholar] [CrossRef] [Green Version]
  35. Barricelli, N.A. Numerical testing of evolution theories. Acta Biotheor. 1962, 16, 69–98. [Google Scholar] [CrossRef]
  36. Keshanchi, B.; Souri, A.; Navimipour, N.J. An improved genetic algorithm for task scheduling in the cloud environments using the priority queues: Formal verification, simulation, and statistical testing. J. Syst. Softw. 2017, 124, 1–21. [Google Scholar] [CrossRef]
  37. Baluja, S.; Caruana, R. Removing the genetics from the standard genetic algorithm. In Machine Learning Proceedings; Morgan Kaufmann: Tahoe City, CA, USA, 1995; pp. 38–46. [Google Scholar]
  38. Storn, R. System design by constraint adaptation and differential evolution. IEEE Trans. Evol. Comput. 1999, 3, 22–34. [Google Scholar] [CrossRef] [Green Version]
39. Storn, R.; Kenneth, P. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
40. Kennedy, J.; Russell, E. Particle swarm optimization. In Proceedings of the ICNN'95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
41. Saini, D.K.; Prasad, R. Order reduction of linear interval systems using genetic algorithm. Int. J. Eng. Technol. 2010, 2, 316–319.
42. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408.
43. Shi, Y.; Eberhart, R.C. Empirical study of particle swarm optimization. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; Volume 3, pp. 1945–1950.
44. Jiao, B.; Lian, Z.; Gu, X. A dynamic inertia weight particle swarm optimization algorithm. Chaos Solitons Fractals 2008, 37, 698–705.
45. Cekus, D.; Skrobek, D. The influence of inertia weight on the Particle Swarm Optimization algorithm. J. Appl. Math. Comput. Mech. 2018, 17, 5–11.
46. Mashayekhi, M.; Harati, M.; Estekanchi, H.E. Development of an alternative PSO-based algorithm for simulation of endurance time excitation functions. Eng. Rep. 2019, 1, e12048.
47. Bansal, J.C.; Singh, P.K.; Saraswat, M.; Verma, A.; Jadon, S.S.; Abraham, A. Inertia weight strategies in particle swarm optimization. In Proceedings of the 2011 Third World Congress on Nature and Biologically Inspired Computing, Salamanca, Spain, 19–21 October 2011; pp. 633–640.
48. Chatterjee, A.; Siarry, P. Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization. Comput. Oper. Res. 2006, 33, 859–871.
49. Xin, J.; Chen, G.; Hai, Y. A particle swarm optimizer with multi-stage linearly-decreasing inertia weight. In Proceedings of the 2009 International Joint Conference on Computational Sciences and Optimization, Sanya, China, 24–26 April 2009; Volume 1, pp. 505–508.
50. Feng, Y.; Teng, G.F.; Wang, A.X.; Yao, Y.M. Chaotic inertia weight in particle swarm optimization. In Proceedings of the Second International Conference on Innovative Computing, Information and Control (ICICIC 2007), Kumamoto, Japan, 5–7 September 2007; p. 475.
51. Yazici, B.; Cavus, M. A comparative study of computation approaches of the generalized F-test. J. Appl. Stat. 2021, 48, 2906–2919.
Figure 1. The main metabolic model of E. coli.
Figure 2. The Se-PSO scenario.
Figure 3. The inertia weight (ω) scenario.
Figure 4. The model simulation using the ESe-PSO algorithm.
Table 1. The mass balance equations.

Metabolites | Mass Balance Description
Cell (X) | d[X]/dt = μ[X]
Extracellular glucose (GLCex) | d[GLCex]/dt = −v_PTS[X]
Glucose 6-phosphate (G6P) | d[G6P]/dt = v_PTS − v_PGI − v_G6PDH − μ[G6P]
Fructose 6-phosphate (F6P) | d[F6P]/dt = v_PGI − v_PFK + v_TKTB + v_TAL − μ[F6P]
Fructose 1,6-bisphosphate (FDP) | d[FDP]/dt = v_PFK − v_ALDO − μ[FDP]
Glyceraldehyde 3-phosphate (GAP) | d[GAP]/dt = 2v_ALDO − v_GAPDH + v_TKTA + v_TKTB − v_TAL − μ[GAP]
Phosphoenolpyruvate (PEP) | d[PEP]/dt = v_GAPDH + v_PCK − v_PTS − v_PYK − v_PPC − μ[PEP]
Pyruvate (PYR) | d[PYR]/dt = v_PYK + v_PTS + v_MEZ − v_PDH − μ[PYR]
Acetyl-CoA (AcCoA) | d[AcCoA]/dt = v_PDH + v_ACS − v_CS − v_PTA − μ[AcCoA]
Isocitrate (ICIT) | d[ICIT]/dt = v_CS − v_ICDH − v_ICL − μ[ICIT]
2-Ketoglutarate (2KG) | d[2KG]/dt = v_ICDH − v_2KGDH − μ[2KG]
Succinate (SUC) | d[SUC]/dt = v_2KGDH + v_ICL − v_SDH − μ[SUC]
Fumarate (FUM) | d[FUM]/dt = v_SDH − v_FUM − μ[FUM]
Malate (MAL) | d[MAL]/dt = v_FUM + v_MS − v_MDH − v_MEZ − μ[MAL]
Oxaloacetate (OAA) | d[OAA]/dt = v_MDH + v_PPC − v_CS − v_PCK − μ[OAA]
Glyoxylate (GOX) | d[GOX]/dt = v_ICL − v_MS − μ[GOX]
Acetyl phosphate (ACP) | d[ACP]/dt = v_PTA − v_ACK − μ[ACP]
Acetate (ACEex) | d[ACEex]/dt = (v_ACK − v_ACS)[X]
6-Phosphogluconolactone (6PG) | d[6PG]/dt = v_G6PDH − v_6PGDH − μ[6PG]
Ribulose 5-phosphate (Ru5P) | d[Ru5P]/dt = v_6PGDH − v_RPE − v_RPI − μ[Ru5P]
Ribose 5-phosphate (R5P) | d[R5P]/dt = v_RPI − v_TKTA − μ[R5P]
Xylulose 5-phosphate (Xu5P) | d[Xu5P]/dt = v_RPE − v_TKTA − v_TKTB − μ[Xu5P]
Sedoheptulose 7-phosphate (S7P) | d[S7P]/dt = v_TKTA − v_TAL − μ[S7P]
Erythrose 4-phosphate (E4P) | d[E4P]/dt = v_TAL − v_TKTB − μ[E4P]
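The mass balances in Table 1 all share the form d[C]/dt = (production rates) − (consumption rates) − μ[C]. As an illustrative sketch, not the paper's full 24-state DAE system, the two simplest balances (biomass and extracellular glucose) can be integrated with a forward-Euler step; the Monod growth law, the uptake law, and every constant below are hypothetical placeholders rather than the paper's fitted kinetics.

```python
# Forward-Euler integration of the two simplest balances in Table 1:
# d[X]/dt = mu*[X]  and  d[GLCex]/dt = -v_PTS*[X].
# All rate laws and constants here are assumed placeholders.
MU_MAX, KS = 0.6, 0.05        # assumed max growth rate (1/h) and Monod constant (mM)
V_PTS_MAX, K_PTS = 2.0, 0.1   # assumed max PTS uptake and half-saturation (mM)

def rhs(y):
    X, glc = y
    mu = MU_MAX * glc / (KS + glc)            # specific growth rate (Monod form)
    v_pts = V_PTS_MAX * glc / (K_PTS + glc)   # glucose uptake per unit biomass
    return [mu * X, -v_pts * X]

def euler(y0, dt=1e-3, t_end=5.0):
    y = list(y0)
    for _ in range(int(t_end / dt)):
        d = rhs(y)
        y = [yi + dt * di for yi, di in zip(y, d)]
        y[1] = max(y[1], 0.0)                 # a concentration cannot go negative
    return y

X_end, glc_end = euler([0.01, 10.0])          # biomass grows while glucose is consumed
```

The same pattern extends to the full network by adding one state per metabolite and one term per reaction in its balance.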
Table 2. The kinetic rate equations.

Reactions | Kinetic Equation
Cell growth (X) | μ = μ_m(1 − [X]/X_m)·([GLCex]/(K_s + [GLCex])) − k_ATP·v_ATP(·), for [GLCex] > 0; μ = μ_mA·[ACEex]/(K_sA + [ACEex]) − k_ATP·v_ATP(·), for [GLCex] ≤ 1 and [ACEex] > 0
Phosphotransferase system (PTS) | v_PTSmax·[GLCex]·([PEP]/[PYR]) / {(K_a1 + K_a2·[PEP]/[PYR] + K_a3·[GLCex] + [GLCex]·[PEP]/[PYR])·(1 + [G6P]^n_G6P/K_G6P)}
Phosphoglucose isomerase (PGI) | v_PGImax·([G6P] − [F6P]/K_eq) / {K_G6P·(1 + [F6P]/(K_F6P·(1 + [F6P]/K_6pg,inh,F6P)) + [6PG]/K_6pg,inh,G6P) + [G6P]}
Phosphofructokinase (PFK) | v_PFKmax·[ATP]·[F6P] / {K_(ATP,ADP)·([F6P] + K_sF6P·(K_b(ADP,AMP) + [PEP]/K_PEP)/K_a(ADP,AMP))·(1 + L_PFK/(1 + [F6P]·K_a(ADP,AMP)/(K_sF6P·(K_b(ADP,AMP) + [PEP]/K_PEP)))^n_PFK)}
Aldolase (ALDO) | v_ALDOmax·([FDP] − [DHAP][GAP]/K_eq) / {K_FDP + [FDP] + K_GAP[DHAP]/(K_eq·V_bif) + K_DHAP[GAP]/(K_eq·V_bif) + [FDP][GAP]/K_inh + [DHAP][GAP]/(K_eq·V_bif)}
Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) | v_GAPDHmax·([GAP] − [PGP][NADH]/(K_eq[NAD])) / {(K_GAP·(1 + [PGP]/K_PGP) + [GAP])·((K_NAD/[NAD])·(1 + [NADH]/K_NADH) + 1)}
Pyruvate kinase (PYK) | v_PYKmax·[PEP]·([PEP]/K_PEP + 1)^(n_PYK − 1)·[ADP] / {K_PEP·(L_PYK·((1 + [ATP]/K_ATP)/([FDP]/K_FDP + [AMP]/K_AMP + 1))^n_PYK + ([PEP]/K_PEP + 1)^n_PYK)·([ADP] + K_ADP)}
Phosphoenolpyruvate carboxylase (Ppc) | ((K1 + K2[AcCoA] + K3[FDP] + K4[AcCoA][FDP]) / (1 + K5[AcCoA] + K6[FDP]))·([PEP]/(K_m + [PEP]))
Glucose-6-phosphate dehydrogenase (G6PDH) | v_G6PDHmax·[G6P]·[NADP] / {([G6P] + K_g6p)·(1 + [NADPH]/K_ndph)·(K_nadp·(1 + [NADPH]/K_nadph) + [NADP])}
6-Phosphogluconate dehydrogenase (PGDH) | v_PGDHmax·[6PG]·[NADP] / {([6PG] + K_6pg)·([NADP] + K_nadp·(1 + [NADPH]/K_nadph)·(1 + [ATP]/K_atp))}
Ribulose-phosphate 3-epimerase (Rpe) | v_Rpemax·([Ru5P] − [Xu5P]/K_eq,Rpe)
Ribose-5-phosphate isomerase (Rpi) | v_Rpimax·([Ru5P] − [R5P]/K_eq,Rpi)
Transketolase 1 (TktA) | v_TktAmax·([R5P][Xu5P] − [S7P][GAP]/K_eq,TktA)
Transketolase 2 (TktB) | v_TktBmax·([Xu5P][E4P] − [F6P][GAP]/K_eq,TktB)
Transaldolase (Tal) | v_Talmax·([GAP][S7P] − [E4P][F6P]/K_eq,Tal)
Phosphoenolpyruvate carboxykinase (PcK) | v_PcKmax·[OAA]·([ATP]/[ADP]) / {K_mOAA·[ATP]/[ADP] + [OAA]·[ATP]/[ADP] + K_iATP·K_mOAA/K_iADP + K_iATP·K_mOAA·K_mPEP/(K_iADP·[PEP]) + (K_iATP·K_mOAA/(K_iPEP·K_lATP))·[ATP][PEP]/[ADP] + (K_iATP·K_mOAA/(K_iADP·K_lOAA))·[OAA]}
Pyruvate dehydrogenase (PDH) | v_PDHmax·(1/(1 + K_i[NADH]/[NAD]))·([PYR]/K_mPYR)·([NAD]/K_mNAD)·([CoA]/K_mCoA) / {(1 + [PYR]/K_mPYR)·(1/[NAD] + 1/K_mNAD + [NADH]/(K_mNADH·[NAD]))·(1 + [CoA]/K_mCoA + [AcCoA]/K_mAcCoA)}
Phosphate acetyltransferase (Pta) | v_Ptamax·(1/(K_iAcCoA·K_mP))·([AcCoA][P] − [AcP][CoA]/K_eq) / {1 + [AcCoA]/K_iAcCoA + [P]/K_iP + [AcP]/K_iAcP + [CoA]/K_iCoA + [AcCoA][P]/(K_iAcCoA·K_mP) + [AcP][CoA]/(K_mAcP·K_iCoA)}
Acetate kinase (Ack) | v_Ackmax·(1/(K_mADP·K_mAcP))·([AcP][ADP] − [ACE][ATP]/K_eq) / {(1 + [AcP]/K_mAcP + [ACE]/K_mACE)·(1 + [ADP]/K_mADP + [ATP]/K_mATP)}
Acetyl-CoA synthetase (Acs) | v_Acsmax·[ACE]·[NADP] / {(K_m + [ACE])·(K_eq + [NADP])}
Citrate synthase (Cs) | v_CSmax·[AcCoA][OAA] / {K_dAcCoA·K_mOAA + K_mAcCoA·[OAA] + [AcCoA]·K_mOAA·(1 + [NADH]/K_i1,NADH) + [AcCoA][OAA]·(1 + [NADH]/K_i2,NADH)}
Isocitrate dehydrogenase (ICDH) | (equation shown as an image in the original)
Isocitrate lyase (IcL) | v_Iclmax·([ICIT]/K_mICIT) / (1 + [ICIT]/K_mICIT + [SUC]/K_mSUC + [PEP]/K_mPEP + [2KG]/K_m2KG + 1/K_l)
Malate synthase (MS) | (v_MSmax·([GOX]/K_mGOX)·([AcCoA]/K_mAcCoA) − v_MSmax·[MAL]/K_mMAL) / {(1 + [GOX]/K_mGOX + [MAL]/K_mMAL)·(1 + [AcCoA]/K_mAcCoA)}
2-Ketoglutarate dehydrogenase (2KGDH) | (equation shown as an image in the original)
Succinate dehydrogenase (SDH) | v_SDH1·v_SDH2·([SUC] − [FUM]/K_eq) / (K_mSUC·v_SDH2 + v_SDH2·[SUC] + v_SDH1·[FUM]/K_eq)
Fumarate hydratase (Fum) | v_Fum1·v_Fum2·([FUM] − [MAL]/K_eq,Fum) / (K_mFum·v_Fum1 + v_Fum2·[FUM] + v_Fum1·[MAL]/K_eq)
Malic enzyme (Mez) | v_Mezmax·[MAL]·[NADP] / {(K_MAL + [MAL])·(K_eq + [NADP])}
Malate dehydrogenase (MDH) | (equation shown as an image in the original)
Table 3. The inertia weight test.

Number | References | Name of the Inertia Weight | Formula of the Inertia Weight | Best Objective Function | Worst Objective Function
1 | [26] | Random inertia weight | ω = 0.5 + rand()/2, where rand is a random number between 0 and 1 | 0.051 | 0.9
2 | [38] | Linear decreasing inertia weight | ω = ((ω_max − ω_min)(iter_max − iter)/iter_max) + ω_min, where ω_min = 0.4 is the minimum inertia weight, ω_max = 0.9 is the maximum inertia weight, and iter_max, iter are the maximum and current iterations | 0.023 | 0.07
3 | [27] | Constant inertia weight | ω = 0.9 | 0.04 | 0.8
4 | [39] | Chaotic inertia weight | ω = (ω1 − ω2)·(iter_max − iter)/iter_max + ω2·z, with z = 4z(1 − z), where z is a number in the interval (0, 1) and ω1, ω2 are the maximum and minimum inertia weights | 0.031 | 0.09
5 | ESe-PSO | Damping process | damp = ((ω_max − ω_min)(iter_max − iter)/iter_max + ω_min)·ω_damp, where ω_max = 1 is the maximum inertia weight, ω_min = 0.01 is the minimum inertia weight, iter_max, iter are the maximum and current iterations, and ω_damp = 0.99 is the damping value | 0.021 | 0.06
Note: The shaded cells represent the best (lowest) objective function values.
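The schedules compared in Table 3 can be written down directly. The first function below follows the standard linear-decreasing rule; the second is a multiplicative-decay reading of the ESe-PSO "damping process" row (the printed formula is ambiguous, so this interpretation is an assumption), using ω_max = 1, ω_min = 0.01, and ω_damp = 0.99 from the table.

```python
def linear_decreasing(it, it_max, w_min=0.4, w_max=0.9):
    # omega = ((w_max - w_min) * (it_max - it) / it_max) + w_min
    return (w_max - w_min) * (it_max - it) / it_max + w_min

def damped(it, w_min=0.01, w_max=1.0, w_damp=0.99):
    # Assumed multiplicative-decay reading of the ESe-PSO damping row:
    # start at w_max and shrink by w_damp each iteration, floored at w_min.
    return max(w_min, w_max * w_damp ** it)
```

Both schedules start high (favoring exploration) and decay toward their floor (favoring exploitation), which is the behavior the paper attributes to the dynamic inertia weight.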
Table 4. The kinetic parameter segments.

Kinetics | Number of Segments | Sensitivity Effect
v_maxpyk | 2 | 39 (metabolites and enzymes), 73.58%
n_pk | 3 | 44 (metabolites and enzymes), 83.09%
icdh | 2 | 41 (metabolites and enzymes), 77.35%
k_icdhf | 2 | 42 (metabolites and enzymes), 79.13%
k_icdhnadpd | 3 | 43 (metabolites and enzymes), 80.24%
k_icdhnadpm | 2 | 42 (metabolites and enzymes), 79.24%
v_maxicl | 3 | 46 (metabolites and enzymes), 86.79%
Table 5. The kinetic parameter boundaries.

Kinetics | Original | Lower | Upper | Kinetic Estimation
v_maxpyk | 1.08500 | 0.82000 | 1.5000 | 0.820000
n_pk | 3.00000 | 2.30000 | 3.5000 | 3.402000
icdh | 24.4210 | 23.5000 | 24.900 | 23.82500
k_icdhf | 289,800 | 289,799 | 289,800 | 289,799.9
k_icdhnadpd | 0.0060 | 0.0030 | 0.0500 | 0.027000
k_icdhnadpm | 0.0170 | 0.0060 | 0.0600 | 0.007350
v_maxicl | 3.8315 | 3.3000 | 4.3000 | 3.956000
Table 6. The simulation results.

Metabolites | Experimental Data (mM) | Model Data (mM) | PSO (mM) | Se-PSO (mM) | ESe-PSO (mM) | DE (mM) | GA (mM)
Glc | 0.0617 | 0.12276 | 0.16717 | 0.20195 | 0.0395 | 0.1523 | 0.192
G6P | 1.76 | 0.20364 | 0.18432 | 0.17731 | 0.98621 | 0.1602 | 0.2013
F6P | 0.42 | 0.02132 | 0.01967 | 0.01906 | 0.0942 | 0.02013 | 0.019
DHAP/GAP | 0.231 | 0.31106 | 0.02496 | 0.23518 | 0.23518 | 0.26023 | 0.2812
FDP | 0.67 | 1.4645 | 0.84333 | 0.71747 | 0.65053 | 0.9523 | 1.214
PEP | 1.04 | 1.4917 | 1.2159 | 1.1587 | 1.087 | 1.21 | 1.325
PYR | 1.71 | 2.8101 | 3.3025 | 3.832 | 2.3502 | 3.025 | 2.894
6PG | 0.96 | 0.01785 | 0.01828 | 0.01881 | 0.0932 | 0.01925 | 0.0181
Ru5P | 0.088 | 0.02135 | 0.02093 | 0.02101 | 0.0352 | 0.01933 | 0.0183
R5P | 0.243 | 0.0762 | 0.07443 | 0.07457 | 0.0835 | 0.07023 | 0.0752
E4P | 0.112 | 0.02744 | 0.02162 | 0.02003 | 0.0625 | 0.02625 | 0.0252
AcCoA | 0.145 | 1.0015 | 1.117 | 1.2679 | 0.9325 | 1.1025 | 1.081
OAA | 0.241 | 0.02962 | 0.02268 | 0.01592 | 0.0503 | 0.02525 | 0.0282
ICIT | 0.21 | 0.2103 | 0.01241 | 0.00206 | 0.1902 | 0.01925 | 0.038
2KG | 0.134 | 5.3656 | 4.2219 | 2.8222 | 2.6253 | 4.335 | 4.4241
Ace | 0.36 | 0.00347 | 0.00499 | 0.00626 | 0.0825 | 0.00225 | 0.00342
Distance | 0 | 57.16% | 37.09% | 26.29% | 16.18% | 33.9% | 35.34%
Note: The shaded cells represent the best simulation result of each algorithm.
Table 7. Comparative objective function results over 20 runs [19].

Methods | Mean | STD | Best | Worst
PSO | 0.00549 | 0.018391 | 0.00057 | 0.030252
Se-PSO | 0.000379 | 0.003526 | 0.000021 | 0.00758
ESe-PSO | 7.41 × 10⁻⁵ | 0.000338 | 3.7 × 10⁻⁶ | 0.00052
DE | 0.023385 | 0.280478 | 0.00024 | 0.50257
GA | 0.114985 | 0.269007 | 0.0081 | 0.1472
Note: The shaded cells represent the best mean, STD, best, and worst objective function values.
Table 8. The functions' boundaries.

Functions | Lower and Upper Values
f | [−5, 5]
f1 | [−5, 10]
f2 | [−5.12, 5.12]
f3 | [−600, 600]
f4 | [−10, 10] for i = 1, 2, …, and [−5.12, 5.12] for i = 1, 2, …
f5 | [−10, 10] for all i = 1, 2, …
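The functions bounded in Table 8 are standard optimization benchmarks. Their conventional definitions, which the paper may instantiate with slightly different constants, are sketched below; Sphere, Rastrigin, and Griewank have their global minimum of 0 at the origin, Rosenbrock at (1, …, 1), and Booth at (1, 3).

```python
import math

# Standard benchmark function definitions (assumed conventional forms).
def sphere(x):
    return sum(xi * xi for xi in x)

def rosenbrock(x):
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    return 10.0 * len(x) + sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi)
                               for xi in x)

def griewank(x):
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return 1.0 + s - p

def booth(x, y):
    return (x + 2 * y - 7) ** 2 + (2 * x + y - 5) ** 2
```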
Table 9. ESe-PSO consumption for Sphere function.

Function | Calls | Total Time | Self-Time
ESe-PSO | 1 | 0.194 s | 0.003 s
PSO | 3 | 0.191 s | 0.040 s
Sphere | 4920 | 0.161 s | 0.151 s
Table 10. Se-PSO consumption for Sphere function.

Function | Calls | Total Time | Self-Time
Se-PSO | 1 | 0.213 s | 0.004 s
PSO | 3 | 0.209 s | 0.030 s
Sphere | 4920 | 0.179 s | 0.179 s
Table 11. ESe-PSO and Se-PSO global best positions for Sphere function.

Bird Steps | Dimension | Iteration | ESe-PSO | Se-PSO
5 | 10 | 15 | 1.21607 × 10⁻⁷ | 2.12225 × 10⁻⁶
15 | 10 | 15 | 3.68313 × 10⁻⁹ | 4.25861 × 10⁻⁸
25 | 10 | 15 | 2.83561 × 10⁻¹⁰ | 2.03589 × 10⁻⁹
Table 12. ESe-PSO consumption for Rastrigin function.

Function | Calls | Total Time | Self-Time
ESe-PSO | 1 | 0.240 s | 0.0009 s
PSO | 3 | 0.239 s | 0.0291 s
Rastrigin | 4920 | 0.210 s | 0.210 s
Table 13. Se-PSO consumption for Rastrigin function.

Function | Calls | Total Time | Self-Time
Se-PSO | 1 | 0.363 s | 0.001 s
PSO | 3 | 0.362 s | 0.028 s
Rastrigin | 4920 | 0.334 s | 0.334 s
Table 14. ESe-PSO and Se-PSO global best position for Rastrigin function.

Bird Steps | Dimension | Iteration | ESe-PSO | Se-PSO
5 | 10 | 15 | 3.4572 × 10⁻⁷ | 8.3919 × 10⁻⁶
15 | 10 | 15 | 5.7394 × 10⁻⁷ | 2.2586 × 10⁻⁷
25 | 10 | 15 | 6.8241 × 10⁻⁹ | 2.03589 × 10⁻⁸
Table 15. ESe-PSO consumption for Rosenbrock function.

Function | Calls | Total Time | Self-Time
ESe-PSO | 1 | 0.135 s | 0.001 s
PSO | 3 | 0.134 s | 0.021 s
Rosenbrock | 4920 | 0.113 s | 0.113 s
Table 16. Se-PSO consumption for Rosenbrock function.

Function | Calls | Total Time | Self-Time
Se-PSO | 1 | 0.158 s | 0.004 s
PSO | 3 | 0.154 s | 0.026 s
Rosenbrock | 4920 | 0.128 s | 0.128 s
Table 17. ESe-PSO and Se-PSO global best position for Rosenbrock function.

Bird Steps | Dimension | Iteration | ESe-PSO | Se-PSO
5 | 10 | 15 | 0 | 0
15 | 10 | 15 | 0 | 0
25 | 10 | 15 | 0 | 0
Table 18. ESe-PSO consumption for Griewank function.

Function | Calls | Total Time | Self-Time
ESe-PSO | 1 | 0.023 s | 0.001 s
PSO | 3 | 0.022 s | 0.014 s
Griewank | 4920 | 0.008 s | 0.008 s
Table 19. Se-PSO consumption for Griewank function.

Function | Calls | Total Time | Self-Time
Se-PSO | 1 | 0.037 s | 0.002 s
PSO | 3 | 0.035 s | 0.022 s
Griewank | 4920 | 0.013 s | 0.013 s
Table 20. ESe-PSO and Se-PSO global best position for Griewank function.

Bird Steps | Dimension | Iteration | ESe-PSO | Se-PSO
5 | 10 | 15 | 9.8935 × 10⁻⁸ | 8.1222 × 10⁻⁷
15 | 10 | 15 | 5.4572 × 10⁻¹² | 8.4574 × 10⁻¹⁰
25 | 10 | 15 | 6.1742 × 10⁻¹⁵ | 3.5258 × 10⁻¹³
Table 21. ESe-PSO consumption for Shubert function.

Function | Calls | Total Time | Self-Time
ESe-PSO | 1 | 0.025 s | 0.002 s
PSO | 3 | 0.024 s | 0.015 s
Shubert | 4920 | 0.006 s | 0.006 s
Table 22. Se-PSO consumption for Shubert function.

Function | Calls | Total Time | Self-Time
Se-PSO | 1 | 0.032 s | 0.003 s
PSO | 3 | 0.029 s | 0.020 s
Shubert | 4920 | 0.009 s | 0.009 s
Table 23. ESe-PSO and Se-PSO global best position for Shubert function.

Bird Steps | Dimension | Iteration | ESe-PSO | Se-PSO
5 | 10 | 15 | −186.7278 | −186.7234
15 | 10 | 15 | −186.7300 | −186.7245
25 | 10 | 15 | −186.7309 | −186.7289
Table 24. ESe-PSO consumption for Booth function.

Function | Calls | Total Time | Self-Time
ESe-PSO | 1 | 0.015 s | 0.001 s
PSO | 3 | 0.014 s | 0.010 s
Booth | 4920 | 0.004 s | 0.004 s
Table 25. Se-PSO consumption for Booth function.

Function | Calls | Total Time | Self-Time
Se-PSO | 1 | 0.027 s | 0.004 s
PSO | 3 | 0.023 s | 0.014 s
Booth | 4920 | 0.009 s | 0.009 s
Table 26. ESe-PSO and Se-PSO global best position for Booth function.

Bird Steps | Dimension | Iteration | ESe-PSO | Se-PSO
5 | 10 | 15 | 0.0000 | 0.0023
15 | 10 | 15 | 0.0000 | 0.0012
25 | 10 | 15 | 0.0000 | 0.0001
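The Booth results above (global minimum 0 at (1, 3)) are reproducible even with a plain PSO. The sketch below is not the segment-based ESe-PSO update; it uses a constant inertia weight and assumed coefficients (w = 0.7, c1 = c2 = 1.5, 40 particles, 300 iterations) purely to illustrate the velocity/position update that all the compared variants share.

```python
import random

def booth(p):
    # Booth function: global minimum 0 at (1, 3)
    x, y = p
    return (x + 2 * y - 7) ** 2 + (2 * x + y - 5) ** 2

def pso(f, lo, hi, n=40, iters=300, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal 2-D PSO with constant inertia weight (assumed parameters)."""
    rng = random.Random(seed)
    dim = 2
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pval = [f(p) for p in pos]                  # personal best values
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]          # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

best, val = pso(booth, -10.0, 10.0)             # converges near (1, 3)
```

ESe-PSO's contribution, per the paper, is to organize the particles into segments and replace the constant w with the dynamic damped inertia weight, which this sketch deliberately omits.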
Share and Cite

Azrag, M.A.K.; Zain, J.M.; Kadir, T.A.A.; Yusoff, M.; Jaber, A.S.; Abdlrhman, H.S.M.; Ahmed, Y.H.Z.; Husain, M.S.B. Estimation of Small-Scale Kinetic Parameters of Escherichia coli (E. coli) Model by Enhanced Segment Particle Swarm Optimization Algorithm ESe-PSO. Processes 2023, 11, 126. https://doi.org/10.3390/pr11010126