Article

An Extended Membrane System Based on Cell-like P Systems and Improved Particle Swarm Optimization for Image Segmentation

1 School of Management Engineering, Shandong JianZhu University, Jinan 250101, China
2 School of Business, Shandong Normal University, Jinan 250014, China
3 School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan 250014, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(22), 4169; https://doi.org/10.3390/math10224169
Submission received: 8 October 2022 / Revised: 4 November 2022 / Accepted: 4 November 2022 / Published: 8 November 2022
(This article belongs to the Special Issue Membrane Computing: Theory, Methods and Applications)

Abstract

An extended membrane system with a dynamic nested membrane structure, named DSPSO-ECP, is designed and developed in this paper in an attempt to relax the application restrictions of P systems. It integrates the evolution-communication mechanism of a cell-like P system with evolutional symport/antiport rules and active membranes (ECP) with the evolutionary mechanisms of particle swarm optimization (PSO) and an improved PSO inspired by starling flock behavior (SPSO). The purpose of DSPSO-ECP is to enhance the performance of the extended membrane system in solving optimization problems. In the proposed DSPSO-ECP, the velocity and position update model of standard PSO is adopted as the basic evolution rules to evolve objects in elementary membranes. The modified velocity update model of the improved SPSO is used as local evolution rules to evolve objects in sub-membranes. A group of sub-membranes of each elementary membrane is specially designed to avoid prematurity through membrane creation and dissolution rules with promoters/inhibitors. The exchange and sharing of information between different membranes are achieved by communication rules for objects based on the evolutional symport rules of ECP. Finally, computational results on numerical benchmark functions and classic test images are discussed and analyzed to validate the efficiency of the proposed DSPSO-ECP.

1. Introduction

Membrane computing (MC), an important branch of bio-inspired computing initiated by Păun [1], is inspired by the structure, functioning, interaction, and cooperation of living cells in tissues, organs, and organisms; its computing models are also called membrane systems or P systems. P systems are a class of distributed parallel computing models that consist of a membrane structure, objects (or data), and rules. Three classic computing models of P systems based on different membrane structures or cell arrangements are generally reported in previous studies, namely cell-like P systems, tissue-like P systems, and neural-like P systems [2]. Research shows that the three classic P systems and their variants have the same computing power as Turing machines or are even more efficient [3].
Cell-like P systems, a classic computing model of MC, are inspired by the hierarchical structure and communication mechanism of living cells. The underlying membrane structure of cell-like P systems can be abstracted to arbitrary graphs in mathematics [4]. Thanks to their distributed parallel working manner, cell-like P systems are highly efficient in solving optimization problems with linear or polynomial complexity, and they have been applied to many fields owing to the simplicity and parallelism of the computing models of P systems [5]. However, the application of cell-like P systems has been limited by the incompleteness of fundamental computation and the complexity of implementation. Because of this limitation, evolutionary membrane computing (EMC) [6], which integrates cell-like P systems with evolutionary computation (EC), including evolutionary algorithms (EAs) and swarm intelligence (SI), has become an important research direction for the application of cell-like P systems. Particle swarm optimization (PSO), a classic SI algorithm, is simple and easy to implement, and its evolutionary mechanism also provides a new way for EAs based on cell-like P systems to solve complex problems.
This work focuses on the development of an extended membrane system, simply named DSPSO-ECP, for membrane-inspired evolutionary algorithms (MIEAs) with a dynamic nested membrane structure (NMS), which uses cell-like P systems and improved PSO to solve optimization problems. DSPSO-ECP is based on the evolution-communication mechanism of a cell-like P system with evolutional symport/antiport rules and active membranes (ECP), and on the evolutionary mechanisms of PSO and of an improved PSO with starling flock behavior (SPSO), the Fitness-Euclidean distance ratio (FER), and global information. Two kinds of evolution rules for objects, based on the update models of PSO and improved SPSO, are introduced to evolve objects in different membranes: basic evolution rules in elementary membranes and local evolution rules in sub-membranes. A group of sub-membranes of each elementary membrane is designed to change the membrane structure dynamically through membrane creation and dissolution rules with promoters/inhibitors. The exchange and sharing of information between different membranes are achieved by the communication rules of ECP for objects. Finally, the proposed DSPSO-ECP is evaluated on eight classic numerical benchmark functions and compared with four PSO-based optimization approaches to assess its effectiveness. Furthermore, computational experiments on eight classic test images, compared with other segmentation approaches, are conducted to verify the efficiency of the proposed DSPSO-ECP. The proposed DSPSO-ECP, based on cell-like P systems and improved PSO, attempts to break these restraints of cell-like P systems and to widen their application, and the modified update model of the improved PSO provides a new evolution mechanism for P systems to evolve objects and membranes. In addition, the PSO algorithm still has some limitations, such as being easily trapped in local optima and having high complexity on large datasets [7]; the evolution-communication mechanism and the distributed parallel computing framework of cell-like P systems are therefore needed to balance exploration and exploitation and to improve the performance of the PSO algorithm in solving complex problems, including image segmentation problems.
The rest of this paper is organized as follows. Related work on cell-like P systems and PSO algorithms is summarized and discussed in Section 2. The basic framework of ECP, and the improved SPSO based on FER and global information, are described in Section 3. The extended membrane system based on the evolution-communication mechanism of ECP and the evolutionary mechanisms of PSO and improved SPSO is proposed in Section 4; its rules, including evolution rules for objects, evolution rules with promoters/inhibitors for membranes, and communication rules for objects, are described in detail in that section. Experimental results and analysis on eight classic numerical benchmark functions, compared with four PSO-based optimization approaches, are reported in Section 5. Section 6 gives experimental results and a discussion of the image segmentation problem on eight classic test images to evaluate the efficiency of the proposed DSPSO-ECP. Section 7 provides some conclusions and outlines future research directions.

2. Related Works

Studies of cell-like P systems and their variants have mainly focused on the theoretical analysis and application of the computing models [8]. In the theoretical studies, a variety of cell-like P systems based on biological facts [9], the biochemical reactions of cells [10], mathematical biology [11], and theoretical computer science have been designed and developed for solving problems, recruiting various ingredients such as energy, catalysts, and mitosis [12]; examples include cell-like P systems with active membranes inspired by the mitosis process [13]; polarizationless P systems with active membranes [14], which avoid polarizations; and cell-like P systems with evolutional symport/antiport rules inspired by the conservation law [15], together with their variants [16]. The analysis of the computing power and computational efficiency of cell-like P systems and their variants is also an important part of the theoretical studies [17].
Furthermore, EMC [6], an important application-oriented research direction for cell-like P systems that integrates them with EC, including EAs and SI as mentioned above, has been designed and developed to break the limitations of the original computing models of P systems and to solve more complicated problems [18]. Membrane-inspired evolutionary algorithms (MIEAs) of EMC based on extended cell-like P systems [19], which utilize various EC and SI algorithms, such as the genetic algorithm (GA) [20], differential evolution (DE) [21], particle swarm optimization (PSO) [22,23,24] and its variants [25], ant colony optimization (ACO) [26], the artificial bee colony algorithm (ABC) [27], and biogeography-based optimization (BBO) [28], as well as clustering algorithms, such as hierarchical clustering [29], consensus clustering [30], and feature selection [31], have been widely used for solving more complex real-world problems.
PSO, initiated by Kennedy and Eberhart in 1995 [32], is a classic SI optimization algorithm. Its basic principle is based on the group behavior of flocking birds and on individual trajectory analysis. The individual experience and the global experience of the population are adopted in the update model of PSO to balance exploration and exploitation [33]. Therefore, it has been successfully applied to many difficult problems [34]. However, like most SI algorithms, PSO is easily trapped in local optima and suffers from prematurity under the guidance of local and global experience [7]. A great deal of research has been carried out to improve the performance of standard PSO, which can be divided into mechanism analysis [35], improved PSO [33], and applications of PSO [36].
Studies of improved PSO are mainly related to modified PSO algorithms and hybrid PSO algorithms based on meta-heuristic approaches [37]. Modified PSO algorithms build on the update model of PSO and adopt additional strategies and methods in the search process of particles, such as flight mechanisms for particles, including Lévy flight [38,39,40]; learning strategies for particles, including crisscross learning [41], cognitive learning [42], and comprehensive learning [40]; population topologies, including stochastic topology [7] and dynamic topology [43]; optimization strategies, including the random walk strategy [44], chaos strategy [45], and synergistic strategy [46]; and search strategies, including local search [47,48] and charged system search [49,50]. Hybrid PSO algorithms are combined with traditional and evolutionary optimization methods in order to utilize the advantages of both and improve the global search ability of PSO, such as simulated annealing (SA) [51], tabu search (TS) [52], BBO [53], the artificial bee colony algorithm (ABC) [54], the genetic algorithm (GA) [55], and differential evolution (DE) [56]. In particular, multi-swarm PSO has become an important field of improved PSO in recent years [57,58,59,60]. PSO has been widely used to solve practical engineering problems owing to its easy implementation and robust performance [61], including clustering problems [62], signalized traffic problems [63], image segmentation [64], feature selection [65], antenna synthesis [66], and fuzzy controlled systems [67].
As the related work on cell-like P systems and PSO shows, EMC based on cell-like P systems and swarm intelligence, including the PSO algorithm, has mainly focused on the traditional or classic evolutionary mechanisms of evolutionary algorithms with a static membrane structure; improved optimization algorithms are required that not only improve algorithmic performance but also combine more naturally with the evolution-communication mechanism of cell-like P systems. A dynamic membrane structure needs to be specially designed to enhance the global search ability and avoid prematurity. Although improved PSO algorithms based on other traditional optimization strategies and approaches have great potential for their respective problems, the balance between exploration and exploitation remains an important issue in improved PSO. Integrating improved PSO with the framework and computing features of cell-like P systems, including dynamics, non-determinism, and flat maximal parallelism, also provides a new way to improve the global search ability of PSO and to solve more complex problems.

3. Proposed ECP and Improved SPSO

3.1. Cell-like P System with Evolutional Symport/Antiport Rules and Active Membranes

As a classic computing model of cell-like P systems, cell-like P systems with evolutional symport/antiport rules were proposed to modify the information carried by objects during the process of communication, based on real biochemical reactions of living cells [16]. However, the membrane structure of such a P system does not change during computation. Therefore, active membranes are introduced to dynamically change the membrane structure of a cell-like P system with evolutional symport/antiport rules; this P system is simply named ECP. Formally, an ECP is defined as the following tuple:
Π = (Γ, ε, μ, ω_1, …, ω_m, R, σ_out),
where
(1) Γ is a nonempty finite alphabet of objects;
(2) ε is a set of initial objects located in the environment;
(3) μ is the initial membrane structure of ECP, which contains m membranes;
(4) ω_1, …, ω_m are finite multisets of initial objects over Γ;
(5) R is a finite set of evolution rules, which consists of two kinds of rules, namely evolutional symport/antiport rules for objects and evolution rules for membranes, defined as follows.
Evolutional symport rules are of the form [u]_i → [u′]_j, where 1 ≤ i ≠ j ≤ m, u ∈ Γ⁺, u′ ∈ Γ*, meaning that the objects u in membrane i evolve into new objects u′ and are sent to membrane j. Evolutional antiport rules are of the form [u]_i [v]_j → [v′]_i [u′]_j, where 0 < i ≠ j ≤ m, u, v ∈ Γ⁺, u′, v′ ∈ Γ*, meaning that the objects u in membrane i evolve into new objects u′ and are sent to membrane j, while at the same time the objects v in membrane j evolve into new objects v′ and are sent to membrane i. Note that the evolutional symport/antiport rules of ECP are only executed between membranes in a nesting (parent-child) relationship.
Evolution rules for membranes contain membrane creation rules and membrane dissolution rules. Membrane creation rules are of the form [u]_i → [u]_{i1} ⋯ [u]_{in}, for 0 < i ≤ m, u ∈ Γ, meaning that a group of n sub-membranes of membrane i is created. Membrane dissolution rules are of the form [u]_i → λ, for 0 < i ≤ m, u ∈ Γ, meaning that membrane i is dissolved in a configuration, where λ is the empty symbol [17];
(6) σ_out is the output membrane or region of ECP, where σ_out ∈ {σ_1, σ_2, …, σ_m}.
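Purely as an illustration of the definition above (and not part of the formal model), a configuration of such a system can be represented in software as a labelled tree of multisets. The following Python sketch uses illustrative class and method names of our own choosing; the sub-membrane labelling scheme and the copying of objects on creation are assumptions made for the example.

from collections import Counter
from dataclasses import dataclass, field
from typing import List

@dataclass
class Membrane:
    # Illustrative representation of one region of an ECP configuration.
    label: int                                         # membrane label (1..m); 0 is reserved for the environment
    objects: Counter = field(default_factory=Counter)  # multiset of objects over the alphabet Γ
    children: List["Membrane"] = field(default_factory=list)

    def create_submembranes(self, n: int) -> None:
        # Membrane creation rule [u]_i → [u]_{i1} ⋯ [u]_{in}: n sub-membranes are created
        # inside membrane i (here every object is copied into each sub-membrane).
        self.children = [Membrane(label=self.label * 10 + h, objects=self.objects.copy())
                         for h in range(1, n + 1)]

    def dissolve_child(self, child: "Membrane") -> None:
        # Membrane dissolution rule [u]_j → λ: the child membrane and its contents are removed.
        self.children.remove(child)

# Example: a skin membrane containing two elementary membranes with initial multisets.
skin = Membrane(label=1)
skin.children = [Membrane(label=2, objects=Counter({"a": 3})),
                 Membrane(label=3, objects=Counter({"b": 1}))]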

3.2. Improved Particle Swarm Optimization Inspired by Starling Flock Behavior

A novel PSO approach inspired by the collective responses of starling birds, simply named SPSO, was proposed to enhance the local search ability of particles [68]. In SPSO, the updates of the velocity and position of a particle are based on the collective information of seven neighbors, and the neighbors of each particle are selected according to Euclidean distance. Thus, the impact of fitness quality and global information can easily be overlooked and underestimated. Therefore, an improved SPSO approach based on the Fitness-Euclidean distance ratio (FER) [69] and the global best information is introduced to avoid the restrictions mentioned above. For particle i, the value of FER with respect to particle i′ (i′ ≠ i) at iteration t is determined by (1):
FER_{i,i′} = α · (f(p_i(t)) − f(p_{i′}(t))) / ‖p_i(t) − p_{i′}(t)‖, for i, i′ = 1, 2, …, N, (1)
where FER_{i,i′} is the value of FER between particle i and particle i′, f is the fitness function, and p_i(t) is the local best position of particle i at iteration t. ‖p_i(t) − p_{i′}(t)‖ is the Euclidean distance between the local best positions p_i(t) and p_{i′}(t) at iteration t. α is an adjusting parameter, where α = S / |f(p_w(t)) − f(p_g(t))|, and p_w(t) and p_g(t) are the worst and best local best positions of the particles at iteration t. S is the size of the search space, where S = sqrt(Σ_{j=1}^{D} (s_j^u − s_j^l)²), s_j^u and s_j^l are the maximum and minimum of the j-th dimension of the search space, and D is the dimension of the search space. All particles are listed in decreasing order of FER, and the seven top-ranked particles are taken as the neighbors of particle i. The new velocity v_i(t + 1) of the i-th particle at iteration t + 1 is determined by (2):
v_i(t + 1) = χ [ v_i(t) + r_3 (p_n#(t) − x_i(t)) + r_4 (p_g(t) − x_i(t)) ], for i = 1, 2, …, N, (2)
where χ is the constriction coefficient, v_i(t) is the velocity of the i-th particle at iteration t, and r_3 and r_4 are two random numbers. p_n#(t) is the collective local best position of the seven neighbors of particle i at iteration t, where p_n#(t) = (1/7) Σ_{j=1}^{7} p_j(t). x_i(t) is the position of the i-th particle at iteration t, and p_g(t) is the global best position of the particles at iteration t.
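As a concrete illustration of Equations (1) and (2), the following Python sketch computes the FER values of a particle with respect to the rest of the swarm, selects the seven highest-ranked particles as neighbors, and applies the constriction-based velocity update. The function names, the small constants added to avoid division by zero, and the assumption of a minimization objective are ours, not part of the original SPSO description.

import numpy as np

def fer_neighbors_mean(i, pbest_pos, pbest_fit, bounds, k=7):
    # Equation (1): FER between particle i and every other particle, then the mean
    # local best position of the k highest-FER neighbors (minimization assumed).
    lo, hi = bounds                                    # per-dimension lower/upper limits of the search space
    s = np.sqrt(np.sum((hi - lo) ** 2))                # size of the search space, S
    alpha = s / (abs(pbest_fit.max() - pbest_fit.min()) + 1e-12)   # adjusting parameter α
    diff = pbest_fit[i] - pbest_fit                    # positive when the other particle is better
    dist = np.linalg.norm(pbest_pos[i] - pbest_pos, axis=1) + 1e-12
    fer = alpha * diff / dist
    fer[i] = -np.inf                                   # exclude the particle itself
    neighbors = np.argsort(fer)[::-1][:k]              # k particles with the largest FER
    return pbest_pos[neighbors].mean(axis=0)           # p_n#(t)

def spso_velocity(v, x, pn_mean, gbest, phi_max=4.1, rng=np.random.default_rng()):
    # Equation (2): constriction-coefficient update driven by the neighbors' mean
    # local best p_n#(t) and the global best p_g(t).
    chi = 2.0 / abs(2.0 - phi_max - np.sqrt(phi_max ** 2 - 4.0 * phi_max))
    r3, r4 = rng.uniform(0.0, phi_max / 2.0, size=2)
    return chi * (v + r3 * (pn_mean - x) + r4 * (gbest - x))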

4. The Proposed DSPSO-ECP

In this section, an extended membrane system based on the dynamic membrane structure of ECP and the evolutionary mechanisms of PSO and improved SPSO is designed and proposed, simply named DSPSO-ECP. The update models of PSO and improved SPSO, as evolution rules for objects, are adopted to evolve objects in membranes. The evolutional symport rules of ECP, as communication rules for objects, are introduced to exchange and share information between different membranes. The membrane creation/dissolution rules of ECP, as membrane evolution rules, are used to generate and dissolve sub-membranes in the P system. Therefore, DSPSO-ECP has two evolutionary mechanisms, applied to objects and membranes respectively, and its communication mechanism is achieved by executing evolutional symport rules for objects in different membranes. More details about DSPSO-ECP are given in the following.

4.1. The General Framework of DSPSO-ECP

The general framework of DSPSO-ECP can be described as the following tuple:
Π = (Γ, ε, μ, ω_1, …, ω_m, R, R′, σ_out),
where
(1) Γ is a non-empty finite alphabet of objects;
(2) ε is a finite multiset of initial objects located in the environment;
(3) μ is the membrane structure of DSPSO-ECP, which contains m membranes;
(4) ω_1, …, ω_m are finite multisets of initial objects in the membranes, where each ω_i (1 ≤ i ≤ m) is a multiset over Γ;
(5) R represents the finite sets of evolution rules in DSPSO-ECP, which consist of two kinds of rules, namely evolution rules for objects, {R_2, R_3, …, R_m}, and evolution rules for membranes, R# = {R#_2, R#_3, …, R#_m}. R_i (2 ≤ i ≤ m) is a finite set of evolution rules for objects associated with membrane i; such rules are of the form u → v, for u, v ∈ Γ⁺, and when such a rule is applied in membrane i, the objects u evolve into the objects v in the same membrane. R#_i (2 ≤ i ≤ m) is a finite set of evolution rules for membranes associated with membrane i, including membrane creation rules and membrane dissolution rules;
(6) R′ represents the finite sets of communication rules based on the evolutional symport rules of ECP in DSPSO-ECP, where R′ = {R′_1, R′_2, …, R′_m}. R′_i (1 ≤ i ≤ m) is a finite set of communication rules for objects associated with membrane i. In particular, R′_{i,j} (1 ≤ j ≠ i ≤ m) represents the communication rules for objects associated with membrane i and membrane j; note that membrane j must be a child or the parent membrane of membrane i;
(7) σ_out is the output membrane of DSPSO-ECP, where σ_out ∈ {σ_1, …, σ_m}. When the computation of DSPSO-ECP has been completed, the objects in the output membrane σ_out are transferred to the environment and can be viewed as the computational results of DSPSO-ECP. The initial membrane structure of the proposed DSPSO-ECP is graphically depicted in Figure 1.
In DSPSO-ECP, all membranes are labeled from 1 to m, and the environment is labeled 0, denoted by σ_0, σ_1, …, σ_m. Membrane 1 is the skin membrane, which is not contained in any other membrane. Each membrane i (2 ≤ i ≤ m) is an elementary membrane that does not contain any other membranes, so the number of elementary membranes in DSPSO-ECP is m − 1, as shown in Figure 1. In particular, membrane 1 is the parent membrane of each elementary membrane i, and the elementary membranes i are child membranes of membrane 1. Note that the communication relationship only exists between a parent membrane and its corresponding child membranes.

4.2. Evolution Rules for Objects

Two types of evolution rules, based on the evolutionary mechanisms of classic PSO and improved SPSO, are adopted to achieve the evolution of objects in membranes: basic evolution rules in elementary membranes and local evolution rules in sub-membranes. In DSPSO-ECP, each object u_i consists of two attribute vectors, velocity and position, i.e., u_i = (V_i, X_i), where V_i is the velocity of object u_i and X_i is its position.

4.2.1. Basic Evolution Rules for Objects

As the basic evolution rules for objects, the evolutionary model of classic PSO is introduced to update the velocity and position of objects; these rules are only executed in the elementary membranes σ_o (2 ≤ o ≤ m). Specifically, the velocity V_i(t + 1) of the i-th object u_i in elementary membrane σ_o at iteration t + 1 is determined by (3):
V_i(t + 1) = ω V_i(t) + c_1 r_1 (X_i^{lbest}(t) − X_i(t)) + c_2 r_2 (X_o^{gbest}(t) − X_i(t)), for i = 1, 2, …, n_o, (3)
where ω is the inertia weight; c_1 and c_2 are the local and global learning factors, which control the influence of the local best and global best information; and r_1 and r_2 are two random numbers uniformly distributed between 0 and 1. X_i^{lbest}(t) is the local best position of object u_i at iteration t, also called the local best of object u_i at iteration t and denoted by u_i^{lbest}(t). X_o^{gbest}(t) is the global best position of all objects in elementary membrane σ_o at iteration t, called the global best of elementary membrane σ_o at iteration t and denoted by u_o^{gbest}(t). n_o is the number of objects in elementary membrane σ_o. The position X_i(t + 1) of the i-th object u_i at iteration t + 1 is determined by (4):
X_i(t + 1) = X_i(t) + V_i(t + 1), for i = 1, 2, …, n_o, (4)
The local best position X_i^{lbest}(t + 1) of the i-th object u_i at iteration t + 1 is updated according to (5):
X_i^{lbest}(t + 1) = { X_i(t + 1), if f(X_i(t + 1)) < f(X_i^{lbest}(t)); X_i^{lbest}(t), otherwise }, for i = 1, 2, …, n_o, (5)
The global best position X_o^{gbest}(t + 1) in elementary membrane σ_o at iteration t + 1 is updated according to (6):
X_o^{gbest}(t + 1) = { X_i^{lbest}(t + 1), if f(X_i^{lbest}(t + 1)) < f(X_o^{gbest}(t)); X_o^{gbest}(t), otherwise }, for i = 1, 2, …, n_o, (6)
A linearly increasing strategy is adopted to dynamically update the value of the inertia weight ω, which is given by (7):
ω(t) = ω_min + (ω_max − ω_min) · t / t_max, (7)
where ω_min and ω_max are the minimum and maximum of the inertia weight, and t_max is the maximum number of iterations.
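A minimal Python sketch of the basic evolution rule applied to all objects of one elementary membrane, following Equations (3)-(7), is given below; the array layout, the default parameter values, and the function name are illustrative assumptions rather than the settings used in the experiments.

import numpy as np

def basic_evolution_step(X, V, X_lbest, f_lbest, x_gbest, f_gbest, fitness, t, t_max,
                         c1=2.0, c2=2.0, w_min=0.4, w_max=0.9, rng=np.random.default_rng()):
    # X, V, X_lbest: (n_o, D) arrays of positions, velocities and local bests in one elementary membrane;
    # f_lbest: (n_o,) local best fitness values; x_gbest, f_gbest: the membrane's global best and its fitness.
    n_o, D = X.shape
    w = w_min + (w_max - w_min) * t / t_max                         # Equation (7): inertia weight
    r1, r2 = rng.random((2, n_o, 1))                                # per-object random numbers in [0, 1)
    V = w * V + c1 * r1 * (X_lbest - X) + c2 * r2 * (x_gbest - X)   # Equation (3)
    X = X + V                                                       # Equation (4)
    f_X = np.apply_along_axis(fitness, 1, X)
    improved = f_X < f_lbest                                        # Equation (5): update local bests
    X_lbest[improved], f_lbest[improved] = X[improved], f_X[improved]
    best = np.argmin(f_lbest)                                       # Equation (6): update the membrane's global best
    if f_lbest[best] < f_gbest:
        x_gbest, f_gbest = X_lbest[best].copy(), f_lbest[best]
    return X, V, X_lbest, f_lbest, x_gbest, f_gbest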

4.2.2. Local Evolution Rules for Objects

As the local evolution rules, the evolutionary model of improved SPSO is adopted to update the velocity and position of objects; these rules are only executed in the sub-membranes σ_oh (1 ≤ h ≤ H) of elementary membrane σ_o (2 ≤ o ≤ m), where H is the number of sub-membranes of elementary membrane σ_o. The velocity V_i(t + 1) of the i-th object u_i in sub-membrane σ_oh at iteration t + 1 is determined by (8):
V_i(t + 1) = χ [ V_i(t) + r_3 (X_{n#}^{nbest}(t) − X_i(t)) + r_4 (X_{oh}^{gbest}(t) − X_i(t)) ], for i = 1, 2, …, n_oh, (8)
where χ is the constriction coefficient, given by χ = 2 / |2 − φ_max − sqrt(φ_max² − 4 φ_max)|. r_3 and r_4 are two random numbers uniformly distributed between 0 and φ_max/2. φ_max is a positive constant and is usually set to 4.1, i.e., φ_max = 4.1 [69]. X_{n#}^{nbest}(t) is the mean of the local best positions of the seven neighbors of object u_i at iteration t, i.e., X_{n#}^{nbest}(t) = (1/7) Σ_{n=1}^{7} X_n^{lbest}(t), where X_n^{lbest}(t) is the local best position of neighbor u_n of object u_i at iteration t. The neighbors u_n of object u_i are selected based on FER according to Equation (1). The position X_i(t + 1) of the i-th object u_i in sub-membrane σ_oh at iteration t + 1 is determined by Equation (4). The local best position X_i^{lbest}(t + 1) and the global best position X_{oh}^{gbest}(t + 1) at iteration t + 1 are also determined by Equations (5) and (6).
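As a numerical check, with φ_max = 4.1 the constriction coefficient evaluates to χ = 2 / |2 − 4.1 − sqrt(4.1² − 4 × 4.1)| = 2 / (2.1 + sqrt(0.41)) ≈ 0.7298, the familiar value used in constriction-based PSO.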

4.3. Evolution Rules for Membranes

There are two types of evolution rules with promoter/inhibitor for membranes: membrane creation rules with a promoter p (p ∈ Γ) and membrane dissolution rules with an inhibitor ¬p (¬p ∈ Γ). In the proposed DSPSO-ECP, the evolution rules for membranes with promoter/inhibitor are only applied to the elementary membranes σ_o (2 ≤ o ≤ m) and their sub-membranes σ_oh (1 ≤ h ≤ H). When an evolution rule for a membrane is applied, the skin membrane σ_1, the other membranes not involved in membrane evolution, and the environment keep the same configuration at the current iteration or time.

4.3.1. Membrane Creation Rules with Promoter

Membrane creation rules with a promoter in the proposed DSPSO-ECP are of the form [u p]_o → [u]_{o1} [u]_{o2} ⋯ [u]_{oH}, for 2 ≤ o ≤ m. Such a rule can only be executed at a time or iteration at which an elementary membrane σ_o contains a multiset of promoter objects p and objects u (u ∈ Γ). When a membrane creation rule associated with elementary membrane σ_o is applied, a group of H sub-membranes σ_oh of elementary membrane σ_o is created, and all objects in elementary membrane σ_o are copied into each sub-membrane σ_oh.
Figure 2 gives an example of membrane creation. When elementary membrane 3 contains a multiset of promoter objects p and objects, a group of H sub-membranes of elementary membrane 3 is created. All objects in the sub-membranes σ_oh (1 ≤ h ≤ H), shown in the shaded area of Figure 2, are copied from elementary membrane 3.

4.3.2. Membrane Dissolution Rules with Inhibitor

Membrane dissolution rules with an inhibitor in the proposed DSPSO-ECP are of the form [u ¬p]_j → λ, where 2 ≤ j ≤ m or 1 ≤ j ≤ H. Such a rule can only be executed at a time or iteration at which an elementary membrane σ_o or one of its corresponding sub-membranes σ_oh contains a multiset of inhibitor objects ¬p. When a membrane dissolution rule associated with membrane σ_j is applied, membrane σ_j is dissolved and all objects in membrane σ_j are removed at the current time or iteration. In particular, λ is the empty symbol, where λ ∉ Γ.
Figure 3 gives an example of membrane dissolution. When elementary membrane 3 and its sub-membranes 1 to H contain multisets of inhibitor objects ¬p and objects, elementary membrane 3 and the corresponding sub-membranes are dissolved. In particular, if sub-membrane 2 of elementary membrane 3 does not contain any multiset of inhibitor objects, as shown in the blue area of Figure 3, then sub-membrane 2 turns into the new elementary membrane 3 and continues to perform the subsequent computation, while the old elementary membrane 3 and the other sub-membranes are dissolved through membrane dissolution rules.
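The promoter/inhibitor logic of Sections 4.3.1 and 4.3.2 can be sketched as follows in Python, reusing the illustrative Membrane class given earlier; the stagnation counter, the gbest field, and the helper names are assumptions made for the sketch, and a minimization objective is assumed.

def apply_membrane_rules(elem, H, limit, f):
    # elem is assumed to carry: objects, gbest (best position found so far), stagnation
    # (iterations without improving gbest) and a list of sub-membranes; f is the fitness function.
    # Promoter condition (Section 4.3.1): the global best has stagnated for `limit` iterations.
    if not elem.children and elem.stagnation >= limit:
        elem.create_submembranes(H)                    # all objects of the elementary membrane are copied
    # Inhibitor condition (Section 4.3.2), checked after the sub-membranes have evolved locally:
    # dissolve sub-membranes that did not improve on the elementary membrane's global best,
    # and let a better sub-membrane take over as the new elementary membrane.
    for sub in list(elem.children):
        if f(sub.gbest) > f(elem.gbest):
            elem.dissolve_child(sub)                   # [u ¬p]_j → λ
        else:
            elem.gbest = sub.gbest
            elem.objects = sub.objects
            elem.dissolve_child(sub)                   # the sub-membrane replaces elementary membrane σ_o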

4.4. Communication Rules for Objects

In DSPSO-ECP, communication rules based on the evolutional symport rules of ECP are introduced to exchange and share global information between different membranes. There are two kinds of communication rules for objects, based on the nesting relationship between membranes: communication rules from an elementary membrane σ_o (2 ≤ o ≤ m) to the non-elementary membrane σ_1, and communication rules from the non-elementary membrane σ_1 to an elementary membrane σ_o. More details are given in the following.

4.4.1. Communication Rules from Elementary Membrane to Non-Elementary Membrane

Communication rules from an elementary membrane σ_o to the non-elementary membrane σ_1 are of the form R′_{o,1}: [u_o^{gbest}(t)]_o [ ]_1 → [ ]_o [u_o(t)]_1, for 2 ≤ o ≤ m. Such a rule can only be executed at a time or iteration at which an elementary membrane σ_o in a configuration contains the multiset of the global best u_o^{gbest}. When a communication rule is applied at iteration t, the global best u_o^{gbest}(t) in elementary membrane σ_o evolves into the position of object u_o(t) and is sent to non-elementary membrane σ_1. Thus, non-elementary membrane σ_1 contains only the m − 1 objects coming from elementary membranes σ_2 to σ_m. The object with the best fitness value among these transferred objects is selected in non-elementary membrane σ_1 as the global best at iteration t, denoted by u_1^{gbest}(t).

4.4.2. Communication Rules from Non-Elementary Membrane to Elementary Membrane

Communication rules from the non-elementary membrane σ_1 to an elementary membrane σ_o are of the form R′_{1,o}: [ ]_o [u_1^{gbest}(t)]_1 → [u_o^{gbest}(t)]_o [ ]_1, for 2 ≤ o ≤ m. Such a rule can only be executed at a time or iteration at which the non-elementary membrane σ_1 in a configuration contains the multiset of the global best u_1^{gbest}(t). When a communication rule is applied at iteration t, the global best u_1^{gbest}(t) in non-elementary membrane σ_1 evolves into the global best u_o^{gbest}(t) and is sent to elementary membrane σ_o. At the same time, the global best u_1^{gbest}(t) in non-elementary membrane σ_1 is sent to the environment σ_0 and regarded as the computational result of DSPSO-ECP at iteration t. The communication relationships in the proposed DSPSO-ECP are shown in Figure 4, and the direction of information transmission between different membranes is graphically depicted by the direction of the black arrows.
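A compact sketch of the two communication steps is given below, assuming that each elementary membrane (whose global best already reflects any surviving sub-membrane) exposes its current global best; the function and attribute names are illustrative and a minimization objective is assumed.

def communicate(skin_gbest, skin_fit, membranes, f):
    # Rule R′_{o,1}: each elementary membrane sends its global best upward to the skin
    # membrane σ_1, which keeps only the best of the m − 1 transferred objects.
    for mem in membranes:
        if skin_gbest is None or f(mem.gbest) < skin_fit:
            skin_gbest, skin_fit = mem.gbest, f(mem.gbest)
    # Rule R′_{1,o}: the skin membrane broadcasts its global best back to every elementary
    # membrane and, at the same time, emits it to the environment as this iteration's result.
    for mem in membranes:
        mem.gbest = skin_gbest
    environment_result = skin_gbest
    return skin_gbest, skin_fit, environment_result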

4.5. Computation Procedure of DSPSO-ECP

(1)
Initialization mechanism
<1> Parameters initialization
In the proposed DSPSO-ECP, the total number of initial objects is denoted by N. Each elementary membrane contains the same number of initial objects, as mentioned above, where n_o = n. The number of elementary membranes is denoted by m − 1, and the number of sub-membranes of an elementary membrane is denoted by H; note that each elementary membrane has the same number of sub-membranes. The initial membrane structure of DSPSO-ECP is depicted in Figure 1;
<2> Velocity and position initialization
The velocities and positions of the initial objects in each elementary membrane are randomly generated from the search space. The objective of the optimization problem is to minimize the fitness function;
<3> Update local best and global best
Update the local best u_i^{lbest} (1 ≤ i ≤ n_o) and the global best u_o^{gbest} of all initial objects in each elementary membrane according to Equations (5) and (6). Note that the evolution rules with promoter/inhibitor for membranes are only executed on a configuration at a moment when the promoter or inhibitor condition is satisfied;
(2)
Evolution mechanism
<1> Basic evolution mechanism for elementary membranes
Step 1: Update velocity and position
Basic evolution rules for objects are adopted to update the velocity and position of objects in each elementary membrane according to Equations (3), (4) and (7);
Step 2: Update local best and global best
Update the local best u_i^{lbest} and the global best u_o^{gbest} of all objects in each elementary membrane according to Equations (5) and (6);
<2> Local evolution mechanism for sub-membranes
Step 1: Create sub-membranes
In particular, the promoter p is regarded as a judgment condition, which can be described as follows. When the global best in elementary membrane σ_o cannot be further improved for limit iterations, where p = (limit > 2), a group of sub-membranes σ_oh (1 ≤ h ≤ H) of elementary membrane σ_o is created through membrane creation rules;
Step 2: Update velocity and position
Local evolution rules for objects are adopted to update velocity and position of objects in each sub-membrane according to Equations (1), (4) and (8);
Step 3: Update global best
Update the global best u_oh^{gbest} in each sub-membrane according to Equation (6);
Step 4: Dissolve membranes
The inhibitor ¬p is regarded as a comparison condition, which is used to compare the fitness values of the global best in elementary membrane σ_o and in its corresponding sub-membranes σ_oh. Where ¬p = (f(u_oh^{gbest}) > f(u_o^{gbest})), the corresponding sub-membrane is dissolved through membrane dissolution rules. Otherwise, sub-membrane σ_oh replaces its corresponding elementary membrane σ_o and continues to perform the next evolution;
(3)
Communication mechanism
<1> Communication rules from elementary membranes to non-elementary membrane
For each elementary membrane, the first kind of communication rule based on the evolutional symport rules of ECP is adopted to send the global best u_o^{gbest} of elementary membrane σ_o to non-elementary membrane σ_1, where it evolves into the position of object u_o. In particular, if a group of sub-membranes has been created, the global best u_oh^{gbest} of each sub-membrane σ_oh is also sent to non-elementary membrane σ_1 at the same time. The best object among these transferred global bests is stored as the global best u_1^{gbest} in non-elementary membrane σ_1;
<2> Communication rules from non-elementary membrane to elementary membranes
For each elementary membrane, the second kind of communication rule based on the evolutional symport rules of ECP is used to send the global best u_1^{gbest} of non-elementary membrane σ_1 to elementary membrane σ_o, where it evolves into the global best u_o^{gbest} of elementary membrane σ_o. At the same time, the global best u_1^{gbest} of non-elementary membrane σ_1 is also sent to the environment σ_0 as the computational result of DSPSO-ECP for each iteration;
(4)
Halting and output
The evolution and communication mechanisms of the proposed DSPSO-ECP are repeatedly executed in an iterative form until the termination criterion is satisfied. The termination criterion of DSPSO-ECP is whether the maximum number of iterations has been reached. When the system halts, the last global best, which is stored in the environment, is viewed as the final computational result of the proposed DSPSO-ECP. The pseudo-code of the computation of the proposed DSPSO-ECP is given in the following.
Input: N, m, t_max, c_1, c_2, r_1, r_2, φ_max, N_e, ω_min, ω_max, H, limit;
(1) Initialization mechanism
  <1> Velocity and position initialization
    for i = 1 to N
      Velocity of object u_i: V_i = rand(s^l, s^u);
      Position of object u_i: X_i = rand(s^l, s^u);
    end
  <2> Update the local and global best;
    for o = 2 to m
      for i = 1 to n_o (n_o = N/(m − 1))
        if f(X_i) < f(X_i^{lbest})
          Local best of u_i^{lbest}: X_i^{lbest} = X_i;
          if f(X_i^{lbest}) < f(X_o^{gbest})
            Global best of u_o^{gbest}: X_o^{gbest} = X_i^{lbest};
          end if
        end if
      end
    end
for t = 1 to t_max
(2) Evolution mechanism
  <1> Basic evolution mechanism for elementary membranes
    for o = 2 to m
      for i = 1 to n_o
        Step 1: Update velocity and position;
          ω(t) = ω_min + (ω_max − ω_min) · t / t_max;
          V_i(t + 1) = ω V_i(t) + c_1 r_1 (X_i^{lbest}(t) − X_i(t)) + c_2 r_2 (X_o^{gbest}(t) − X_i(t));
          X_i(t + 1) = X_i(t) + V_i(t + 1);
        Step 2: Update local best and global best;
  <2> Local evolution mechanism for sub-membranes
        if p = (limit > 2)
          Step 1: Membrane creation, σ_{o1} to σ_{oH} of σ_o are created;
          Step 2: Update velocity and position;
            V_i(t + 1) = χ [ V_i(t) + r_3 (X_{n#}^{nbest}(t) − X_i(t)) + r_4 (X_{oh}^{gbest}(t) − X_i(t)) ];
            X_i(t + 1) = X_i(t) + V_i(t + 1);
          Step 3: Update global best;
          Step 4: Membrane dissolution, σ_{o1} to σ_{oH} are dissolved;
            for j = 1 to H
              if ¬p = (f(u_{oj}^{gbest}) > f(u_o^{gbest}))
                Sub-membrane σ_{oj} is dissolved;
              else
                Global best: u_o^{gbest} = u_{oj}^{gbest};
                Sub-membrane σ_{oj} replaces elementary membrane σ_o;
              end
            end
        end
      end
    end
(3) Communication mechanism
  <1> Communication rules from σ_o to σ_1
    for o = 2 to m
      Position of object u_o: X_o = u_o^{gbest};
      if f(X_o) < f(u_1^{gbest})
        Global best of u_1^{gbest}: u_1^{gbest} = X_o;
      end if
    end
  <2> Communication rules from σ_1 to σ_o and σ_0
    for o = 2 to m
      Global best of u_o^{gbest}: u_o^{gbest} = u_1^{gbest};
    end
    Global best of u_0^{gbest}: u_0^{gbest} = u_1^{gbest};
end
(4) Halting and output
  if t ≥ t_max
    Best position of P system: u_0^{gbest};
    Best fitness of P system: f(u_0^{gbest});
  end
Output: Best position of P system, best fitness of P system;

4.6. Complexity Analysis

In this subsection, the complexity of the proposed DSPSO-ECP is discussed and analyzed. N is the total number of initial objects located in the elementary membranes. m − 1 is the number of elementary membranes. n_o (2 ≤ o ≤ m) is the number of initial objects in elementary membrane σ_o, where n_o = n and N = Σ_{o=2}^{m} n_o = n(m − 1). t_max is the maximum number of iterations, and D is the dimension of the search space.
The computation of the proposed DSPSO-ECP consists of three steps. In the first step, initialization, the computation time for all objects in one elementary membrane is nD, so the complexity of the initialization mechanism in DSPSO-ECP is O(nD) owing to the distributed parallel computing model of P systems. In the second step, evolution, the evolution rules for objects in the elementary membranes and their sub-membranes are executed in parallel. The computation time needed to execute a basic evolution rule for all objects in an elementary membrane is nD, and the computation time needed to execute a local evolution rule for all objects in a sub-membrane is n(n + D). Thus, the time needed for one evolution of all objects in the system is n(n + D), and the complexity of the evolution mechanism in DSPSO-ECP is O(n(n + D)). In the third step, communication, the computation time needed to execute a communication rule between an elementary membrane and the non-elementary membrane is set to 2, so the time needed for one communication in the system is 2(m − 1), and the complexity of the communication mechanism in DSPSO-ECP is O(2(m − 1)). Therefore, the computation time of the whole P system for one iteration is n(n + D) + 2(m − 1), and the total computation time of DSPSO-ECP is nD + t_max (n(n + D) + 2(m − 1)). Since m ≪ n, the complexity of the proposed DSPSO-ECP can be simplified to O(n t_max (n + D)).

5. Experimental Results and Analysis

In this section, computational experiments on eight classic numerical benchmark functions are conducted to evaluate the efficiency of the proposed DSPSO-ECP. More details about these classic benchmark functions are introduced first. The effectiveness of DSPSO-ECP is compared with four existing PSO approaches, including classic PSO, PSO with the Fitness-Euclidean distance ratio (FER-PSO) [69], particle swarm optimization with multiple subpopulations (MPSO) [70], and SPSO [68]. All comparative approaches, including DSPSO-ECP, are implemented in MATLAB (2016b), and all experiments are conducted on a Dell desktop computer (Dell Technologies, Xiamen, China) with an Intel i7-8550U 1.80 GHz processor and 16 GB of RAM in a Windows 10 environment.

5.1. Benchmark Functions

In this subsection, eight classic numerical benchmark functions reported in previous studies [71], including unimodal and multimodal functions, are adopted to evaluate the exploitation and exploration efficiency of the competitive approaches. The objective of these benchmark functions is to be minimized, and their domains are shown in Table 1.
The domains and minima of the eight classic numerical benchmark functions, namely Griewank, Ackley, Tablet, Schwefel 2.22, Zakharov, Sphere, Schwefel 2.21, and Dixon−Price, are listed in Table 1. The dimension is set to 2 and 10, i.e., D = 2 and D = 10. The shapes and ranges of these benchmark functions with D = 2 are depicted in Figure 5.
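For reference, three of the eight functions are written out below using their standard definitions in a NumPy sketch; the domain settings are those listed in Table 1 and are not repeated here, and minor formulation details can differ between sources.

import numpy as np

def sphere(x):
    # Unimodal Sphere function, global minimum 0 at the origin.
    return np.sum(x ** 2)

def ackley(x):
    # Multimodal Ackley function, global minimum 0 at the origin.
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)

def griewank(x):
    # Multimodal Griewank function, global minimum 0 at the origin.
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))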

5.2. Comparison with Other Existing Approaches

The classic PSO and three improved PSO approaches, namely FER-PSO, MPSO, and SPSO, are used as comparison approaches, as mentioned above; they have been reported in the previous literature. In FER-PSO, the neighbors' information based on FER is adopted to guide the trajectories of particles. In MPSO, the population of particles is divided into multiple subpopulations, and a migration strategy between subpopulations is adopted to enhance the global search ability. The values of the adjusting parameters of these comparative approaches, including the proposed DSPSO-ECP, are the best ones found and are listed in Table 2.
DSPSO-ECP and other comparative approaches are run 50 independent times in order to eliminate the effects of random factors. Simple statistics, including worst values (Worst), best values (Best), mean values (Mean), and standard deviations (S.D.) of the fitness function, are introduced to measure the effectiveness of comparative approaches. Convergence results obtained by these approaches on eight benchmark functions with D = 2 are depicted in Figure 6.
Compared with the classic PSO, FER-PSO, MPSO, and SPSO approaches, the convergence curves of the proposed DSPSO-ECP decline quickly over the iterations, as shown in Figure 6. The computational results obtained by these comparative approaches on the eight benchmark functions are reported in Table 3.
As indicated by Table 3, the statistical results obtained by DSPSO-ECP, such as Worst, Best, Mean, and S.D., are the smallest compared with the other approaches. The mean computation times obtained by these comparative approaches on the eight benchmark functions are listed in Table 4, and the computation time of the proposed DSPSO-ECP is acceptable.
Furthermore, the convergence results obtained by these comparative approaches on the eight benchmark functions with D = 10 are depicted in Figure 7, and the statistical results of the fitness function are reported in Table 5. These computational results validate the efficiency of the proposed DSPSO-ECP in solving unimodal and multimodal function optimization problems with D = 2 and D = 10.

5.3. Friedman Test Statistics

In this subsection, the Friedman test is introduced to further investigate the performance of the proposed DSPSO-ECP. The mean fitness values obtained by the comparative approaches, as listed in Table 3 and Table 5, are used in the Friedman test. The null hypothesis is that all comparative approaches, including the proposed DSPSO-ECP, have the same performance on any benchmark function. Mathematically, the Friedman test works as follows [73].
In the Friedman test, each benchmark function is treated as a random sample, denoted by i, for 1 ≤ i ≤ 8, and each comparative approach is viewed as a treatment, denoted by j, for 1 ≤ j ≤ 5. The mean fitness values obtained by the comparative approaches on each benchmark function are ranked from largest to smallest [74]. The rank of comparative approach j on benchmark function i is denoted by r_{ij}, and the mean rank is (p + 1)/2, in this case three, where p is the number of comparative approaches, in this case five. The Friedman test statistic χ_r² is given by (9):
χ_r² = [12 / (n p (p + 1))] Σ_{j=1}^{p} ( Σ_{i=1}^{n} r_{ij} )² − 3 n (p + 1), (9)
where n is the number of benchmark functions, in this case eight. The Friedman test statistic follows a Chi-squared distribution with p − 1 degrees of freedom. The ranks of the comparative approaches on the benchmark functions with D = 2 and D = 10 are presented in Table 6 and Table 7. With p − 1 = 4 degrees of freedom, the critical value of the statistic, denoted by χ², at the significance level α = 5% is 9.488, i.e., χ² = 9.488. The Friedman test statistics for the different search dimensions are computed using the ranks in Table 6 and Table 7, and the results are χ_r² = 32 > 9.488 with D = 2 and χ_r² = 30.6 > 9.488 with D = 10. Thus, the conclusion of the Friedman test is to reject the null hypothesis, i.e., the comparative approaches are significantly different. That is, in this comparison experiment, the different optimization approaches have significantly different performance on the eight benchmark functions with D = 2 and D = 10.
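For reference, the statistic of Equation (9) can be computed as in the following sketch, which follows the convention above of assigning rank 1 to the largest mean; ties are not handled, and the function name is an illustrative assumption. SciPy's scipy.stats.friedmanchisquare provides an equivalent, tie-aware implementation.

import numpy as np

def friedman_statistic(means):
    # means: (n, p) array of mean fitness values (n benchmark functions × p approaches).
    n, p = means.shape
    ranks = np.argsort(np.argsort(-means, axis=1), axis=1) + 1   # rank 1 = largest mean in each row
    col_sums = ranks.sum(axis=0)                                 # Σ_i r_ij for each approach j
    return 12.0 / (n * p * (p + 1)) * np.sum(col_sums ** 2) - 3.0 * n * (p + 1)   # Equation (9)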

6. Image Segmentation

In this section, the experimental results obtained by existing segmentation approaches, including DSPSO-ECP, are reported and discussed in order to further evaluate the efficiency of the proposed DSPSO-ECP. All segmentation approaches are implemented in MATLAB (2016b), and all experiments are conducted on a Dell desktop computer with an Intel i7-8550U 1.80 GHz processor and 16 GB of RAM in a Windows 11 environment.

6.1. Benchmark Images

Eight benchmark test images, namely Stones, Island, Tiger, Worker, Surfer I, Teste4, Church, and Cow, provided by the Berkeley Segmentation Dataset and Benchmark (BSDS300) [75], are adopted in the comparison experiments, and the size of the test images is 481 × 321. The original images are depicted in Figure 8.

6.2. Objective Function

Otsu's threshold segmentation method, initiated by Otsu [76], is a classical approach for solving grey-level image segmentation problems. The discriminant criterion of Otsu's method, which maximizes the between-class variance, is introduced to evaluate the performance of the experimental approaches, and it is described as follows. A grayscale image I containing N pixels is divided into K + 1 classes; the minimum and maximum grey levels of image I are 0 and L, usually with L = 255. A group of K thresholds {t_1, t_2, …, t_K} is then adopted to segment the original image I into K + 1 distinct classes C_0, C_1, …, C_K, which can be described by (10):
C_0 = {p(x, y) ∈ I | 0 ≤ p(x, y) ≤ t_1}, C_1 = {p(x, y) ∈ I | t_1 + 1 ≤ p(x, y) ≤ t_2}, …, C_K = {p(x, y) ∈ I | t_K + 1 ≤ p(x, y) ≤ L}, (10)
where C_K is the K-th class of image I, t_K is the K-th of the dividing thresholds {t_1, t_2, …, t_K}, and p(x, y) is the grey level of the pixel located at (x, y) in image I. The number of pixels with the i-th (0 ≤ i ≤ L) grey level, i.e., its frequency, is denoted by n_i, and the probability of the i-th grey level in image I is denoted by P_i, where P_i = n_i / N and N = Σ_{i=0}^{L} n_i. The optimal thresholds {t_1, t_2, …, t_K} are determined by (11)–(15):
{t_1, t_2, …, t_K} = arg max { σ_B²(t_1, t_2, …, t_K) }, (11)
ω_0 = Σ_{i=0}^{t_1} P_i, ω_1 = Σ_{i=t_1+1}^{t_2} P_i, …, ω_k = Σ_{i=t_k+1}^{t_{k+1}} P_i, …, ω_K = Σ_{i=t_K+1}^{L} P_i, (12)
μ_0 = Σ_{i=0}^{t_1} i P_i / ω_0, μ_1 = Σ_{i=t_1+1}^{t_2} i P_i / ω_1, …, μ_k = Σ_{i=t_k+1}^{t_{k+1}} i P_i / ω_k, …, μ_K = Σ_{i=t_K+1}^{L} i P_i / ω_K, (13)
σ_k² = ω_k (μ_k − μ_I)², μ_I = Σ_{i=0}^{L} i P_i, for k = 0, 1, …, K, (14)
σ_B² = Σ_{k=0}^{K} σ_k² = σ_0² + ⋯ + σ_k² + ⋯ + σ_K², (15)
where σ_B² is the between-class variance, which is also used as the objective function of the experimental approaches to measure the performance of the segmentation approaches.
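A compact NumPy sketch of Equations (12)-(15) for a given grey-level histogram and a candidate threshold vector is shown below; the function name and the histogram-based interface are illustrative assumptions. In the comparison experiments, this between-class variance is the fitness that the optimization approaches maximize over candidate threshold vectors.

import numpy as np

def between_class_variance(hist, thresholds, L=255):
    # hist: grey-level histogram n_i of the image (length L + 1); thresholds: sorted t_1 < … < t_K.
    P = hist / hist.sum()                                  # probability of each grey level, P_i
    mu_I = np.sum(np.arange(L + 1) * P)                    # overall mean grey level
    edges = [0] + [t + 1 for t in thresholds] + [L + 1]    # class C_k covers grey levels edges[k] .. edges[k+1] − 1
    sigma_B2 = 0.0
    for k in range(len(edges) - 1):
        idx = np.arange(edges[k], edges[k + 1])
        omega_k = P[idx].sum()                             # Equation (12)
        if omega_k > 0:
            mu_k = np.sum(idx * P[idx]) / omega_k          # Equation (13)
            sigma_B2 += omega_k * (mu_k - mu_I) ** 2       # Equations (14) and (15)
    return sigma_B2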

6.3. Comparison with Other Segmentation Approaches

In order to verify the efficiency of the proposed DSPSO-ECP for solving high-dimensional multilevel thresholding image segmentation problems, experiments are conducted on the eight benchmark test images with different numbers of thresholds, i.e., K = 5, 8, 10 [77]. Some nature-inspired approaches, namely PSO [32], DE [78], adaptive particle swarm optimization (APSO) [79], and a complex chained P system based on an evolutionary mechanism (CCP) [80], are used as comparison approaches to validate the effectiveness of DSPSO-ECP. The main objective of these comparative approaches, including DSPSO-ECP, is to find a group of thresholds {t_1, t_2, …, t_K} that maximizes the between-class variance σ_B² according to Equations (11)–(15). Figure 9, Figure 10, Figure 11 and Figure 12 depict the segmented images of Stones, Island, Tiger, and Worker obtained by these comparative approaches.
It can easily be observed that the segmentation quality of the test images gradually improves as the number of thresholds increases. The grey-level histogram is also introduced to exhibit the threshold values obtained by the comparative approaches more clearly. The grey-level histograms of the Worker image at the 10-threshold level are depicted in Figure 13.
The grey-level histograms of the Worker image with the optimal threshold values obtained by all comparative approaches are shown in Figure 13, and the optimal thresholds obtained by the comparative approaches differ. Simple statistics, including the Mean and S.D. of the between-class variance obtained by the comparative approaches for the different numbers of thresholds over 50 independent runs, are presented in Table 8.
Compared to the other segmentation approaches, the computational results in Table 8 demonstrate the efficiency of the proposed DSPSO-ECP on the eight benchmark test images with different numbers of thresholds. Furthermore, some existing segmentation approaches, including the whale optimization algorithm (WOA) [81], the grey wolf optimizer (GWO) [82], the whale optimization algorithm based on a thresholding heuristic (WOA-TH) [77], and the grey wolf optimizer based on a thresholding heuristic (GWO-TH) [77], are adopted as comparison approaches to further evaluate the segmentation effectiveness of the proposed DSPSO-ECP. The maximum number of iterations is set to 3000, 3600, and 4000. Simple statistics, including the Mean and S.D. of the between-class variance obtained by these segmentation approaches for the different numbers of thresholds over 100 independent runs, are presented in Table 9.
It can be seen from Table 9 that the results obtained by the proposed DSPSO-ECP are the best on most of the benchmark test images compared to WOA, GWO, WOA-TH, and GWO-TH. Table 10 reports the best fitness values achieved by all comparative approaches over 100 independent runs.

6.4. The Wilcoxon Signed Ranks Test

As a non-parametric statistical hypothesis test, the Wilcoxon signed ranks test, which has been reported in previous research [83], is introduced to validate the difference between the proposed DSPSO-ECP and the other segmentation approaches. The mean fitness values obtained by WOA, GWO, WOA-TH, GWO-TH, and DSPSO-ECP are used as the comparison criterion in this Wilcoxon signed ranks test. The null hypothesis of the Wilcoxon signed ranks test is that there is no difference between the proposed DSPSO-ECP and the other existing segmentation approaches.
Wilcoxon's test is defined as follows. Let d_i be the difference between the performance scores of the two compared approaches on the i-th of n problems, where n = 24. W_{R+} is the sum of ranks for the problems on which DSPSO-ECP outperformed the compared approach, and W_{R−} is the sum of ranks for the problems on which the compared approach outperformed DSPSO-ECP; they are determined by (16) and (17):
W_{R+} = Σ_{d_i > 0} rank(d_i) + (1/2) Σ_{d_i = 0} rank(d_i), (16)
W_{R−} = Σ_{d_i < 0} rank(d_i) + (1/2) Σ_{d_i = 0} rank(d_i), (17)
where T_R is the smaller of the sums W_{R+} and W_{R−}, i.e., T_R = min(W_{R+}, W_{R−}). The results obtained by all pairwise comparisons with DSPSO-ECP at the significance level α = 5% are shown in Table 11. The critical value of the Wilcoxon T distribution for n = 24 is 81 [84]. The values of the Wilcoxon test statistic from Table 11 are 11, 0, 20, and 10, i.e., T_R = 11, 0, 20, 10, all smaller than the critical value T = 81. Therefore, the null hypothesis of the Wilcoxon signed ranks test is rejected, and the proposed DSPSO-ECP outperforms the other existing segmentation approaches, with the associated p-values computed using MATLAB.
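For reproducibility, the same paired comparison can be carried out with SciPy's Wilcoxon signed-rank implementation, as in the sketch below with illustrative names; note that SciPy's default zero_method discards zero differences rather than splitting their ranks between W_{R+} and W_{R−} as in Equations (16) and (17), and that for the two-sided alternative the returned statistic is min(W_{R+}, W_{R−}), i.e., T_R.

import numpy as np
from scipy.stats import wilcoxon

def compare_with_dspso_ecp(dspso_scores, other_scores):
    # Paired comparison over the n = 24 image/threshold combinations; larger
    # between-class variance is better, so positive differences favour DSPSO-ECP.
    d = np.asarray(dspso_scores) - np.asarray(other_scores)
    statistic, p_value = wilcoxon(d)        # two-sided test; statistic corresponds to T_R
    return statistic, p_value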

7. Conclusions

An extended membrane system, called DSPSO-ECP, which combines a cell-like P system with evolutional symport/antiport rules and active membranes with an improved PSO mechanism, is proposed for solving optimization problems. This extended membrane system, under the framework of membrane computing, uses a cell-like P system with a dynamic membrane structure and integrates evolution rules for objects and for membranes. There are two kinds of evolution rules for objects in the proposed DSPSO-ECP: basic evolution rules based on PSO in elementary membranes and local evolution rules based on improved SPSO in sub-membranes. In the basic evolution rules, the velocity and position update model of PSO is adopted to evolve objects in elementary membranes; the modified velocity update model of improved SPSO is used in the local evolution rules to evolve objects in sub-membranes. A group of sub-membranes of each elementary membrane is specially designed to avoid prematurity through membrane creation and dissolution rules with promoters/inhibitors. The exchange and sharing of information between different membranes are achieved by communication rules for objects, which are based on the evolutional symport rules of ECP. The proposed DSPSO-ECP is evaluated on eight numerical benchmark functions and compared with classic PSO and three improved PSO approaches; the computational results clearly validate the effectiveness of this extended membrane system. Furthermore, comparison experiments with segmentation approaches on eight test images are conducted to verify the performance, and the results show the validity of the proposed DSPSO-ECP.
P systems, under the framework of the distributed parallel computing model, are highly efficient in solving optimization problems with linear or polynomial complexity. The evolution-communication mechanism of P systems processes data in parallel and also provides a new way to solve complex problems. However, the application of P systems has been limited by the incompleteness of fundamental computation and the complexity of implementation. Therefore, extended membrane systems integrating classic P systems and evolutionary computation are proposed to try to solve these problems, as mentioned above. In the proposed DSPSO-ECP, the unilateral parent-child connection between membranes based on cell-like P systems is rather simple, and more multilateral connections based on more complicated P systems or biochemical reactions may be introduced in future studies. The comparison experiments are only made on low-dimensional benchmark functions, and DSPSO-ECP may have some limitations on high-dimensional functions or large datasets. More work is needed to apply this extended membrane system to more complex optimization problems.

Author Contributions

Conceptualization, X.L. and J.Q.; methodology, L.W.; validation, L.W. and Z.J.; formal analysis, Y.Z.; writing—original draft preparation, L.W. and N.W.; writing—review and editing, Lin Wang. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The eight test images are from the Berkeley computer vision group, Berkeley Segmentation Dataset and Benchmark (BSDS300), available at: https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/ (accessed on 12 October 2021).

Acknowledgments

This research project was partially supported by the Doctor Foundation of Shandong Jianzhu University (X21008Z), National Natural Science Foundation of China (61876101, 61802234, 61806114), National Natural Science Foundation of Shandong Province, China (ZR2019QF007), the Ministry of Education Humanities and Social Science Research Youth Foundation, China (19YJCZH244), Social Science Fund Project of Shandong Province, China (16BGLJ06, 11CGLJ22), Special Postdoctoral Project of China (2019T120607), Postdoctoral Project of China (2017M612339, 2018M642695).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Păun, G. Membrane computing: An introduction. Theor. Comput. Sci. 2002, 287, 73–100. [Google Scholar] [CrossRef] [Green Version]
  2. Pan, L.; Zeng, L.; Song, T. Membrane Computing: An Introduction, 1st ed.; Huazhong University of Science and Technology Press: Wuhan, China, 2012; pp. 1–10. [Google Scholar]
  3. Păun, G. Membrane computing and economics: A General View. Int. J. Comput. Commun. Control 2016, 11, 105–112. [Google Scholar] [CrossRef]
  4. Wang, L.; Liu, X.; Sun, M.; Qu, J. An extended clustering membrane system based on particle swarm optimization and cell-like P system with active membranes. Math. Probl. Eng. 2020, 2020, 5097589. [Google Scholar] [CrossRef] [Green Version]
  5. Wu, F.; Zhang, Q.; Păun, G.; Pan, L. Cell-like spiking neural P systems. Theor. Comput. Sci. 2016, 623, 180–189. [Google Scholar] [CrossRef]
  6. Zhang, G.; Gheorghe, M.; Pan, L.; Pérez-Jiménez, M.J. Evolutionary membrane computing: A comprehensive survey and new results. Inf. Sci. 2014, 279, 528–551. [Google Scholar] [CrossRef]
  7. Peng, J.; Li, Y.; Kang, H.; Shen, Y.; Sun, X.; Chen, Q. Impact of population topology on particle swarm optimization and its variants: An information propagation perspective. Swarm Evol. Comput. 2022, 69, 325–334. [Google Scholar] [CrossRef]
  8. Song, B.; Li, K.; Martín, D.; Pérez-Jiménez, M.J.; Pérez-Hurtado, I. A survey of nature-inspired computing: Membrane computing. ACM Comput. Surv. 2021, 54, 2201–2231. [Google Scholar] [CrossRef]
  9. David, M.; Miguel, A.; Luis, C.; Song, B.; Pan, L.; Pérez-Jiménez, M.J. P systems with symport/antiport rules: When do the surroundings matter? Theor. Comput. Sci. 2020, 805, 206–217. [Google Scholar]
  10. Zhao, Y.; Liu, X.; Sun, M.; Qu, J.; Zhang, Y. Time-free cell-like P systems with multiple promoters/inhibitors. Theor. Comput. Sci. 2020, 843, 73–83. [Google Scholar] [CrossRef]
  11. Pan, T.; Xu, J.; Jiang, S.; Xu, F. Cell-like spiking neural P systems with evolution rules. Soft Comput. 2019, 23, 5401–5409. [Google Scholar] [CrossRef]
  12. Păun, G. A dozen of research topics in membrane computing. Theor. Comput. Sci. 2018, 736, 76–78. [Google Scholar] [CrossRef]
  13. Jin, Y.; Song, B.; Li, Y.; Jin, Y. Time-free solution to independent set problem using P systems with active membranes. Fundam. Inform. 2021, 182, 243–255. [Google Scholar] [CrossRef]
  14. Pan, L.; Martín, D.; Song, B.; Pérez-Jiménez, M.J. Cell-like P systems with polarizations and minimal rules. Theor. Comput. Sci. 2020, 816, 1–18. [Google Scholar] [CrossRef]
  15. Martín, D.; Cabrera, L.; Jiménez, M. P systems with evolutional symport and membrane creation rules solving QSAT. Theor. Comput. Sci. 2022, 908, 56–63. [Google Scholar] [CrossRef]
  16. Song, B.; Li, K.; Martín, D.; Valencia-Cabrera, L.; Pérez-Jiménez, M.J. Cell-like P systems with evolutional symport/antiport rules and membrane creation. Inf. Comput. 2020, 275, 104542. [Google Scholar] [CrossRef]
  17. Jiang, S.; Wang, Y.; Xu, J.; Xu, F. The computational power of cell-like P systems with symport/antiport rules and promoters. Fundam. Inform. 2019, 164, 207–225. [Google Scholar] [CrossRef]
  18. Zhang, G.; Pérez-Jiménez, M.J.; Gheorghe, M. Real-Life Applications with Membrane Computing, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2017; pp. 11–12. [Google Scholar]
  19. Guo, P.; Hou, L.; Ye, L. MEATSP: A Membrane Evolutionary Algorithm for Solving TSP. IEEE Access 2020, 8, 19901–1990960. [Google Scholar] [CrossRef]
  20. Wang, Y.; Liu, X.; Xiang, L. GA–based membrane evolutionary algorithm for ensemble clustering. Comput. Intell. Neurosci. 2017, 2017, 4367342. [Google Scholar] [CrossRef] [Green Version]
  21. Wang, J.; Peng, H.; Yang, J.; Huang, X.; Wang, J. DE-MC: A membrane clustering algorithm based on differential evolution mechanism. Rom. J. Inf. Sci. Technol. 2014, 17, 77–89. [Google Scholar]
  22. Singh, G.; Deep, K. Cell-like P-systems based on rules of particle swarm optimization. Appl. Math. Comput. 2014, 246, 546–560. [Google Scholar] [CrossRef]
  23. Peng, H.; Wang, J.; Shi, P.; Riscos-Núñez, A.; Pérez-Jiménez, M.J. An automatic clustering algorithm inspired by membrane computing. Pattern Recognit. Lett. 2015, 68 Pt 1, 34–40. [Google Scholar] [CrossRef]
  24. Singh, G.; Deep, K. A new membrane algorithm using the rules of Particle Swarm Optimization incorporated within the framework of cell-like P-systems to solve Sudoku. Appl. Soft Comput. 2016, 45, 27–39. [Google Scholar] [CrossRef]
  25. Wang, L.; Liu, X.; Qu, J.; Zhao, Y.; Jiang, Z.; Wang, N. An extended tissue-like P system based on membrane systems and quantum-behaved particle swarm optimization for image segmentation. Processes 2022, 10, 287. [Google Scholar] [CrossRef]
  26. Xu, Z.; Chen, S. Research of fusion algorithm based on membrane computing and ant colony algorithm in cloud computing resource scheduling. Comput. Meas. Control 2017, 2017, 120–127. [Google Scholar]
  27. Peng, H.; Wang, J. A hybrid approach based on tissue P systems and artificial bee colony for IIR system identification. Neural Comput. Appl. 2017, 28, 2675–2685. [Google Scholar] [CrossRef]
  28. Sang, X.; Liu, X.; Zhang, Z.; Wang, L. Improved biogeography-based optimization algorithm by hierarchical tissue-like P system with triggering ablation rules. Math. Probl. Eng. 2021, 2021, 6655614. [Google Scholar] [CrossRef]
  29. Guo, P.; Jiang, J.; Liu, Y. P system for hierarchical clustering. Int. J. Mod. Phys. C 2019, 30, 1950062. [Google Scholar] [CrossRef]
  30. Zhao, Y.; Zhang, W.; Sun, M.; Liu, X. An improved consensus clustering algorithm based on Cell-like P systems with multi-catalysts. IEEE Access 2020, 8, 154502–154517. [Google Scholar] [CrossRef]
  31. Song, P.; Huang, E.; Song, Q.; Han, T.; Xu, S. Feature selection algorithm based on P systems. Nat. Comput. 2022, 1–11. [Google Scholar] [CrossRef]
  32. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November—1 December 1995; pp. 69–73. [Google Scholar]
  33. Kassoul, K.; Zufferey, N.; Cheikhrouhou, N.; Belhaouari, S.B. Exponential particle swarm optimization for global optimization. IEEE Access 2022, 10, 78320–78344. [Google Scholar] [CrossRef]
  34. Harrison, K.; Engelbrecht, A.; Berman, B. Self-adaptive particle swarm optimization: A review and analysis of convergence. Swarm Intell. 2018, 12, 187–226. [Google Scholar] [CrossRef] [Green Version]
  35. Li, M.; Chen, H.; Wang, X.; Zhong, N.; Lu, S. An improved particle swarm optimization algorithm with adaptive inertia weights. Int. J. Inf. Technol. Decis. Mak. 2019, 18, 833–866. [Google Scholar] [CrossRef]
  36. Zhang, Y.; Wang, S.; Ji, G. A comprehensive survey on particle swarm optimization algorithm and Its applications. Math. Probl. Eng. 2015, 2015, 931256. [Google Scholar] [CrossRef]
  37. Yuen, M.; Ng, S.C.; Leung, M.; Che, H. A metaheuristic-based framework for index tracking with practical constraints. Complex Intell. Syst. 2021, 8, 4571–4586. [Google Scholar] [CrossRef]
  38. Hakli, H.; Uguz, H. A novel particle swarm optimization algorithm with Levy flight. Appl. Soft Comput. 2014, 23, 333–345. [Google Scholar] [CrossRef]
  39. Liu, X.; He, G. QPSO algorithm based on Levy flight and its application in fuzzy portfolio. Appl. Soft Comput. 2021, 99, 106894. [Google Scholar] [CrossRef]
  40. Zhou, X.; Zhou, S.; Han, S.; Zhu, S. Levy flight-based inverse adaptive comprehensive learning particle swarm optimization. Math. Biosci. Eng. 2022, 19, 5241–5268. [Google Scholar] [CrossRef]
  41. Liang, B.; Zhao, Y.; Li, Y. A hybrid particle swarm optimization with crisscross learning strategy. Eng. Appl. Artif. Intell. 2021, 105, 104418. [Google Scholar] [CrossRef]
  42. Yang, Q.; Jing, Y.; Gao, X.; Xu, D.; Lu, Z.; Jeon, S.W.; Zhang, J. Predominant cognitive learning particle swarm optimization for global numerical optimization. Mathematics 2022, 10, 1620. [Google Scholar] [CrossRef]
  43. Yang, Q.; Bian, Y.; Gao, X.; Xu, D.; Lu, D.; Jeon, S.W.; Zhang, J. Stochastic triad topology-based particle swarm optimization for global numerical optimization. Mathematics 2022, 10, 1032. [Google Scholar] [CrossRef]
  44. Xu, X.; Yan, F. Random walk autonomous groups of particles for particle swarm optimization. J. Intell. Fuzzy Syst. 2022, 42, 1519–1545. [Google Scholar] [CrossRef]
  45. Duan, Y.; Chen, N.; Chang, L.; Ni, Y.; Santhosh Kumar, S.V.N.; Zhang, P. CAPSO: Chaos adaptive particle swarm optimization algorithm. IEEE Access 2022, 10, 29393–29405. [Google Scholar] [CrossRef]
  46. Zhao, S.; Wang, D. Elite-ordinary synergistic particle swarm optimization. Inf. Sci. 2022, 609, 1567–1587. [Google Scholar] [CrossRef]
  47. He, Z.; Hu, H.; Zhang, M.; Zhang, Y.; Li, A.D. A decomposition-based multi-objective particle swarm optimization algorithm with a local search strategy for key quality characteristic identification in production processes. Comput. Ind. Eng. 2022, 172, 108617. [Google Scholar] [CrossRef]
  48. Kayas, S.; Gumuscu, A.; Aydilek, I.; Karaçizmeli, I.; Tenekeci, M. Solution for flow shop scheduling problems using chaotic hybrid firefly and particle swarm optimization algorithm with improved local search. Soft Comput. 2021, 25, 7143–7154. [Google Scholar]
  49. Kaveh, A.; Khanzadi, M. Charged system search and magnetic charged system search algorithms for construction site layout planning optimization. Period. Polytech. Civ. Eng. 2018, 62, 841–850. [Google Scholar] [CrossRef] [Green Version]
  50. Ambrosio, A.; Spiller, D.; Curti, F. Improved magnetic charged system search optimization algorithm with application to satellite formation flying. Eng. Appl. Artif. Intell. 2020, 89, 103473. [Google Scholar] [CrossRef]
  51. Pan, X.; Xue, L.; Lu, Y.; Sun, N. Hybrid particle swarm optimization with simulated annealing. Multimed. Tools Appl. 2019, 78, 29921–29936. [Google Scholar] [CrossRef]
  52. Fute, E.; Pangop, D.; Tonye, E. A new hybrid localization approach in wireless sensor networks based on particle swarm optimization and tabu search. Appl. Intell. 2022, 1–16. [Google Scholar] [CrossRef]
  53. Chen, X.; Tianfield, H.; Mei, C.; Du, W.; Liu, G. Biogeography-based learning particle swarm optimization. Soft Comput. 2017, 21, 7519–7541. [Google Scholar] [CrossRef] [Green Version]
  54. Chen, X.; Tianfield, H.; Du, W. Bee-foraging learning particle swarm optimization. Appl. Soft Comput. 2021, 102, 107134. [Google Scholar] [CrossRef]
  55. Lin, A.; Li, S.; Liu, R. Mutual learning differential particle swarm optimization. Egypt. Inform. J. 2022, 23, 469–481. [Google Scholar] [CrossRef]
  56. Duan, B.; Guo, C.; Liu, H. A hybrid genetic-particle swarm optimization algorithm for multi-constraint optimization problems. Soft Comput. 2022, 26, 11695–11711. [Google Scholar] [CrossRef]
  57. Xia, X.; Tang, Y.; Wei, B.; Zhang, Y.; Gui, L.; Li, X. Dynamic multi-swarm global particle swarm optimization. Computing 2020, 102, 1587–1626. [Google Scholar] [CrossRef]
  58. Asghari, K.; Masdari, M.; Gharehchopogh, F.; Saneifard, R. Multi-swarm and chaotic whale-particle swarm optimization algorithm with a selection method based on roulette wheel. Expert Syst. 2021, 38, e12779. [Google Scholar] [CrossRef]
  59. Borowska, B. Learning competitive swarm optimization. Entropy 2022, 24, 283. [Google Scholar] [CrossRef]
  60. Jocko, P.; Berman, O.; Engelbrecht, A. Multi-guide particle swarm optimisation archive management strategies for dynamic optimisation problems. Swarm Intell. 2022, 16, 143–168. [Google Scholar] [CrossRef]
  61. Bao, Z.; Watanabe, T. Mixed constrained image filter design using particle swarm optimization. Artif. Life Robot. 2010, 15, 363–368. [Google Scholar] [CrossRef]
  62. Wang, L.; Liu, X.; Qu, J.; Wei, Y. A new chaotic starling particle swarm optimization algorithm for clustering problems. Math. Probl. Eng. 2018, 2018, 8250480. [Google Scholar] [CrossRef]
  63. Yuen, M.; Ng, S.; Leung, M. A competitive mechanism multi-objective particle swarm optimization algorithm and its application to signalized traffic problem. Cybern. Syst. 2020, 52, 73–104. [Google Scholar] [CrossRef]
  64. Farshi, T.; Drake, J.; Ozcan, E. A multimodal particle swarm optimization-based approach for image segmentation. Expert Syst. Appl. 2020, 149, 113233. [Google Scholar] [CrossRef]
  65. Too, J.; Sadiq, A.; Mirjalili, S. A conditional opposition-based particle swarm optimisation for feature selection. Connect. Sci. 2021, 34, 339–361. [Google Scholar] [CrossRef]
  66. Fu, K.; Cai, X.; Yuan, B.; Yang, Y.; Yao, X. An efficient surrogate assisted particle swarm optimization for antenna synthesis. IEEE Trans. Antennas Propag. 2022, 70, 4977–4984. [Google Scholar] [CrossRef]
  67. Pozna, C.; Precup, R.; Horvath, E. Hybrid particle filter-particle swarm optimization algorithm and application to fuzzy controlled servo systems. IEEE Trans. Fuzzy Syst. 2022, 30, 4286–4297. [Google Scholar] [CrossRef]
  68. Netjinda, N.; Achalakul, T.; Sirinaovakul, B. Particle swarm optimization inspired by starling flock behavior. Appl. Soft Comput. 2015, 35, 411–422. [Google Scholar]
  69. Li, X. A multimodal particle swarm optimizer based on fitness Euclidean-distance ratio. In Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation (GECCO '07), New York, NY, USA, 7–11 July 2007; pp. 78–85. [Google Scholar]
  70. Chang, W. A modified particle swarm optimization with multiple subpopulations for multimodal function optimization problems. Appl. Soft Comput. 2015, 33, 170–182. [Google Scholar] [CrossRef]
  71. Zhang, X.; Wang, D.; Chen, H. Improved biogeography-based optimization algorithm and its application to clustering optimization and medical image segmentation. IEEE Access 2019, 7, 28810–28825. [Google Scholar] [CrossRef]
  72. Peng, H.; Wang, J.; Shi, P.; Pérez-Jiménez, M.J.; Riscos-Núñez, A. An extended membrane system with active membranes to solve automatic fuzzy clustering problems. Int. J. Neural Syst. 2016, 26, 1650004. [Google Scholar] [CrossRef]
  73. Kumari, B.; Kumar, S. Chaotic gradient artificial bee colony for text clustering. Soft Comput. 2016, 20, 1113–1126. [Google Scholar]
  74. Friedman, M. The use of Ranks to avoid the assumption of normality implicit in the analysis of variance. Publ. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
  75. The Berkeley Segmentation Dataset and Benchmark. Available online: https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/ (accessed on 2 October 2021).
  76. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  77. Kumar, B.; Arya, K. A new heuristic for multilevel thresholding of images. Expert Syst. Appl. 2019, 117, 176–203. [Google Scholar]
  78. Storn, R.; Kenneth, P. Differential evolution–A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  79. Han, H.; Lu, W.; Hou, Y.; Qiao, J.F. An adaptive-PSO-based self-organizing RBF neural network. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 104–117. [Google Scholar] [CrossRef] [PubMed]
  80. Liu, X.; Wang, L.; Qu, J.; Wang, J. A complex chained P system based on evolutionary mechanism for image segmentation. Comput. Intell. Neurosci. 2020, 2020, 6524919. [Google Scholar] [CrossRef]
  81. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  82. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  83. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar]
  84. Ransom, J. Biostatistical analysis. Am. Biol. Teach. 1974, 36, 316. [Google Scholar] [CrossRef]
Figure 1. The initial membrane structure of DSPSO-ECP.
Figure 2. An example of membrane creation.
Figure 3. An example of membrane dissolution.
Figure 4. The communication relationship in DSPSO-ECP.
Figure 5. Eight classic numerical benchmark functions with D = 2: (a) Griewank; (b) Ackley; (c) Tablet; (d) Schwefel 2.22; (e) Zakharov; (f) Sphere; (g) Schwefel 2.21; (h) Dixon-Price.
Figure 6. Convergence of the comparative approaches on the eight benchmark functions (D = 2): (a) Griewank; (b) Ackley; (c) Tablet; (d) Schwefel 2.22; (e) Zakharov; (f) Sphere; (g) Schwefel 2.21; (h) Dixon-Price.
Figure 7. Convergence of the comparative approaches on the eight benchmark functions (D = 10): (a) Griewank; (b) Ackley; (c) Tablet; (d) Schwefel 2.22; (e) Zakharov; (f) Sphere; (g) Schwefel 2.21; (h) Dixon-Price.
Figure 8. Benchmark test images: (a) Stones; (b) Island; (c) Tiger; (d) Worker; (e) Surfer I; (f) Teste 4; (g) Church; (h) Cow.
Figure 9. Segmentation results for the Stones image obtained by all comparative approaches with 5, 8, and 10 threshold values: (a) PSO; (b) DE; (c) APSO; (d) CCP; (e) DSPSO-ECP.
Figure 10. Segmentation results for the Island image obtained by all comparative approaches with 5, 8, and 10 threshold values: (a) PSO; (b) DE; (c) APSO; (d) CCP; (e) DSPSO-ECP.
Figure 11. Segmentation results for the Tiger image obtained by all comparative approaches with 5, 8, and 10 threshold values: (a) PSO; (b) DE; (c) APSO; (d) CCP; (e) DSPSO-ECP.
Figure 12. Segmentation results for the Worker image obtained by all comparative approaches with 5, 8, and 10 threshold values: (a) PSO; (b) DE; (c) APSO; (d) CCP; (e) DSPSO-ECP.
Figure 13. Grey-level histograms of the segmented Worker image at the 10-threshold level obtained by all comparative approaches: (a) Worker; (b) PSO; (c) DE; (d) APSO; (e) CCP; (f) DSPSO-ECP.
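As a rough illustration of what Figures 9-13 visualize, the sketch below shows how a vector of K threshold values partitions a grayscale image into K + 1 segments, each rendered at its mean gray level. The file name and the threshold values are placeholders, not the thresholds actually found by the compared approaches.

```python
import numpy as np
from PIL import Image

def apply_thresholds(gray, thresholds):
    """Map each pixel to the mean gray level of its class, producing the segmented image."""
    t = np.sort(np.asarray(thresholds))
    labels = np.digitize(gray, t)               # class index per pixel, 0..K
    out = np.zeros_like(gray, dtype=np.uint8)
    for c in range(len(t) + 1):
        mask = labels == c
        if mask.any():
            out[mask] = int(gray[mask].mean())  # represent each class by its mean intensity
    return out

gray = np.array(Image.open("worker.jpg").convert("L"))   # hypothetical file name
segmented = apply_thresholds(gray, [25, 60, 95, 130, 165, 190, 210, 225, 240, 250])
Image.fromarray(segmented).save("worker_10_thresholds.png")
```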
Table 1. Benchmark Functions.
Benchmark Functions | Function Expression | Domain | $F_{\min}$
Griewank | $f_1(x) = \frac{1}{1000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | $x_i \in [-600, 600]^D$ | 0
Ackley | $f_2(x) = 20 + e - 20\exp\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}\right) - \exp\left(\frac{1}{D}\sum_{i=1}^{D} \cos(2\pi x_i)\right)$ | $x_i \in [-32.768, 32.768]^D$ | 0
Tablet | $f_3(x) = 10^6 x_1^2 + \sum_{i=2}^{D} x_i^2$ | $x_i \in [-100, 100]^D$ | 0
Schwefel 2.22 | $f_4(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$ | $x_i \in [-10, 10]^D$ | 0
Zakharov | $f_5(x) = \sum_{i=1}^{D} x_i^2 + \left(\sum_{i=1}^{D} 0.5\,i\,x_i\right)^2 + \left(\sum_{i=1}^{D} 0.5\,i\,x_i\right)^4$ | $x_i \in [-5, 10]^D$ | 0
Sphere | $f_6(x) = \sum_{i=1}^{D} x_i^2$ | $x_i \in [-5.12, 5.12]^D$ | 0
Schwefel 2.21 | $f_7(x) = \max_{i} |x_i|$ | $x_i \in [-100, 100]^D$ | 0
Dixon-Price | $f_8(x) = (x_1 - 1)^2 + \sum_{i=2}^{D} i\,(2 x_i^2 - x_{i-1})^2$ | $x_i \in [-10, 10]^D$ | 0
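As a quick check of the expressions in Table 1, the following minimal sketch implements two of them in vectorized form: Griewank, with the 1/1000 scaling used in this table, and Dixon-Price. The function names and the example inputs are illustrative assumptions.

```python
import numpy as np

def griewank(X):
    # f1(x) = (1/1000) * sum(x_i^2) - prod(cos(x_i / sqrt(i))) + 1, as in Table 1
    i = np.arange(1, X.shape[1] + 1)
    return np.sum(X ** 2, axis=1) / 1000.0 - np.prod(np.cos(X / np.sqrt(i)), axis=1) + 1

def dixon_price(X):
    # f8(x) = (x_1 - 1)^2 + sum_{i=2}^{D} i * (2 * x_i^2 - x_{i-1})^2
    i = np.arange(2, X.shape[1] + 1)
    return (X[:, 0] - 1) ** 2 + np.sum(i * (2 * X[:, 1:] ** 2 - X[:, :-1]) ** 2, axis=1)

print(griewank(np.zeros((1, 2))))                        # global minimum at the origin -> [0.]
print(dixon_price(np.array([[1.0, 1 / np.sqrt(2)]])))    # minimum for D = 2 -> [~0.]
```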
Table 2. Parameter settings in the experiment.
Parameters | PSO | FER-PSO | MPSO | SPSO | DSPSO-ECP
t max 100100100100100
N 100100100100100
c 1 , c 2 (2, 2)(0.68, 0.68)(0.5, 2.5)(2, 2)(2, 2)
r 1 , r 2 (0, 1)(0.2, 0.5)(0.2, 0.5)(0, 1)(0, 1)
φ max 4.14.1
χ 0.7080.708
r 0.4
m i 51
ω min , ω max (0.4, 1.2)(0.4, 0.9)(0.2, 0.4)(0.4, 1.2)
N e 77
m 45 [72]
H 1414
l i m i t 22
Table 3. Computational results of comparative approaches on eight benchmark functions (D = 2).
Functions | Statistics | PSO | FER-PSO | MPSO | SPSO | DSPSO-ECP
GriewankWorst4.70 × 10−21.33 × 10−21.02 × 10−27.40 × 10−38.88 × 10−16
Best3.18 × 10−55.49 × 10−78.30 × 10−93.80 × 10−90.00 × 10−0
Mean1.11 × 10−25.00 × 10−32.89 × 10−31.83 × 10−31.78 × 10−17
S.D.9.02 × 10−34.23 × 10−33.53 × 10−33.17 × 10−31.26 × 10−16
AckleyWorst5.09 × 10−35.30 × 10−41.63 × 10−42.89 × 10−50.00 × 10−0
Best1.53 × 10−43.55 × 10−51.64 × 10−65.56 × 10−70.00 × 10−0
Mean1.22 × 10−32.00 × 10−43.71 × 10−58.06 × 10−60.00 × 10−0
S.D.1.04 × 10−31.37 × 10−43.26 × 10−55.78 × 10−60.00 × 10−0
TabletWorst3.05 × 10−11.13 × 10−21.36 × 10−34.73 × 10−54.22 × 10−25
Best1.72 × 10−43.40 × 10−65.94 × 10−78.07 × 10−105.41 × 10−30
Mean3.02 × 10−21.36 × 10−39.43 × 10−55.84 × 10−61.98 × 10−26
S.D.4.15 × 10−22.06 × 10−32.00 × 10−41.01 × 10−56.54 × 10−26
Schwefel 2.22Worst1.19 × 10−35.70 × 10−51.84 × 10−54.85 × 10−61.16 × 10−16
Best2.07 × 10−55.23 × 10−72.76 × 10−75.50 × 10−82.77 × 10−19
Mean1.73 × 10−42.53 × 10−53.85 × 10−61.04 × 10−61.74 × 10−17
S.D.1.79 × 10−41.65 × 10−53.57 × 10−61.05 × 10−62.55 × 10−17
ZakharovWorst8.95 × 10−82.95 × 10−91.04 × 10−101.18 × 10−111.78 × 10−32
Best6.64 × 10−114.95 × 10−125.76 × 10−143.87 × 10−162.65 × 10−37
Mean1.09 × 10−84.74 × 10−101.60 × 10−111.57 × 10−127.31 × 10−34
S.D.1.59 × 10−86.87 × 10−102.29 × 10−112.55 × 10−122.67 × 10−33
SphereWorst6.18 × 10−81.06 × 10−97.60 × 10−114.56 × 10−122.04 × 10−32
Best1.02 × 10−111.48 × 10−125.30 × 10−158.80 × 10−161.00 × 10−38
Mean8.81 × 10−91.57 × 10−105.56 × 10−123.67 × 10−135.45 × 10−34
S.D.1.20 × 10−82.56 × 10−101.29 × 10−117.05 × 10−132.88 × 10−33
Schwefel 2.21Worst1.19 × 10−35.70 × 10−51.84 × 10−54.85 × 10−61.16 × 10−16
Best2.07 × 10−55.23 × 10−72.76 × 10−75.50 × 10−82.77 × 10−19
Mean1.73 × 10−42.53 × 10−53.85 × 10−61.04 × 10−61.74 × 10−17
S.D.1.79 × 10−41.65 × 10−53.57 × 10−61.05 × 10−62.55 × 10−17
Dixon−PriceWorst1.66 × 10−43.00 × 10−73.62 × 10−81.98 × 10−91.97 × 10−31
Best1.50 × 10−95.60 × 10−111.19 × 10−121.88 × 10−143.70 × 10−32
Mean5.65 × 10−62.79 × 10−82.20 × 10−98.75 × 10−115.03 × 10−32
S.D.2.49 × 10−54.80 × 10−85.22 × 10−92.97 × 10−103.32 × 10−32
Table 4. Mean time obtained by comparative approaches on eight benchmark functions (units: s).
Functions | PSO | FER-PSO | MPSO | SPSO | DSPSO-ECP
Griewank | 0.1480 | 0.1422 | 0.4000 | 11.9162 | 9.1202
Ackley | 0.1315 | 0.1450 | 0.3872 | 7.8330 | 7.7486
Tablet | 0.1368 | 0.1445 | 0.3844 | 8.7875 | 6.8740
Schwefel 2.22 | 0.1400 | 0.1546 | 0.4113 | 8.8056 | 6.9248
Zakharov | 0.1770 | 0.1462 | 0.3910 | 8.0578 | 5.6925
Sphere | 0.1419 | 0.1372 | 0.3718 | 8.1060 | 5.0243
Schwefel 2.21 | 0.1238 | 0.1299 | 0.2182 | 7.9111 | 5.8182
Dixon-Price | 0.1245 | 0.1271 | 0.2145 | 7.7758 | 6.2489
Table 5. Computational results of comparative approaches on eight benchmark functions (D = 10).
Functions | Statistics | PSO | FER-PSO | MPSO | SPSO | DSPSO-ECP
GriewankWorst2.51 × 10−11.65 × 10−11.92 × 10−11.40 × 10−10.00 × 10−0
Best2.22 × 10−21.08 × 10−133.20 × 10−21.23 × 10−20.00 × 10−0
Mean9.55 × 10−26.82 × 10−29.12 × 10−26.20 × 10−20.00 × 10−0
S.D.5.37 × 10−23.47 × 10−23.91 × 10−22.90 × 10−20.00 × 10−0
AckleyWorst9.66 × 10−96.93 × 10−91.16 × 10−06.22 × 10−150.00 × 10−0
Best1.07 × 10−101.36 × 10−106.22 × 10−152.66 × 10−150.00 × 10−0
Mean1.80 × 10−99.55 × 10−103.24 × 10−112.74 × 10−150.00 × 10−0
S.D.1.89 × 10−91.12 × 10−91.70 × 10−105.02 × 10−160.00 × 10−0
TabletWorst2.03 × 10−154.55 × 10−161.07 × 10−211.26 × 10−517.71 × 10−18
Best1.20 × 10−182.88 × 10−191.15 × 10−251.95 × 10−546.00 × 10−100
Mean1.81 × 10−175.97 × 10−175.16 × 10−231.64 × 10−521.54 × 10−39
S.D.1.16 × 10−179.85 × 10−171.74 × 10−224.28 × 10−531.09 × 10−23
Schwefel 2.22Worst3.04 × 10−104.48 × 10−101.84 × 10−101.35 × 10−297.62 × 10−69
Best5.19 × 10−126.81 × 10−123.06 × 10−144.68 × 10−311.72 × 10−87
Mean7.18 × 10−115.95 × 10−114.20 × 10−125.16 × 10−302.53 × 10−70
S.D.6.92 × 10−118.30 × 10−112.60 × 10−113.95 × 10−301.21 × 10−62
ZakharovWorst1.03 × 10−165.51 × 10−184.51 × 10−252.84 × 10−525.03 × 10−14
Best1.27 × 10−211.85 × 10−211.45 × 10−298.24 × 10−583.85 × 10−87
Mean5.79 × 10−194.36 × 10−194.26 × 10−262.49 × 10−541.01 × 10−57
S.D.8.14 × 10−198.63 × 10−198.76 × 10−265.57 × 10−557.12 × 10−23
SphereWorst1.56 × 10−181.03 × 10−198.44 × 10−284.16 × 10−568.17 × 10−24
Best1.85 × 10−226.99 × 10−224.71 × 10−314.02 × 10−601.12 × 10−93
Mean5.92 × 10−201.82 × 10−201.32 × 10−284.72 × 10−571.63 × 10−62
S.D.2.31 × 10−192.28 × 10−202.18 × 10−288.95 × 10−571.16 × 10−46
Schwefel 2.21Worst5.75 × 10−054.94 × 10−56.01 × 10−74.03 × 10−208.11 × 10−67
Best7.83 × 10−77.39 × 10−71.23 × 10−98.61 × 10−231.79 × 10−94
Mean1.40 × 10−51.10 × 10−55.71 × 10−82.98 × 10−211.62 × 10−68
S.D.1.46 × 10−58.30 × 10−61.08 × 10−76.35 × 10−211.15 × 10−67
Dixon−PriceWorst3.21 × 1023.21 × 1026.67 × 10−16.72 × 10−16.67 × 10−1
Best6.67 × 10−11.97 × 10−311.43 × 10−115.53 × 10−99.25 × 10−23
Mean4.74 × 1017.03 × 10−06.13 × 10−16.40 × 10−15.75 × 10−1
S.D.8.89 × 1014.53 × 1011.83 × 10−11.32 × 10−12.19 × 10−1
Table 6. Ranks of the average (Mean) values for the five comparative approaches and computation of the Friedman test statistic (D = 2).
Functions | PSO | FER-PSO | MPSO | SPSO | DSPSO-ECP
Griewank | 5 | 4 | 3 | 2 | 1
Ackley | 5 | 4 | 3 | 2 | 1
Tablet | 5 | 4 | 3 | 2 | 1
Schwefel 2.22 | 5 | 4 | 3 | 2 | 1
Zakharov | 5 | 4 | 3 | 2 | 1
Sphere | 5 | 4 | 3 | 2 | 1
Schwefel 2.21 | 5 | 4 | 3 | 2 | 1
Dixon-Price | 5 | 4 | 3 | 2 | 1
Total Rank | 40 | 32 | 24 | 16 | 8
Average Rank | 5 | 4 | 3 | 2 | 1
Deviation | 2 | 1 | 0 | −1 | −2
Table 7. Ranks of the average (Mean) values for the five comparative approaches and computation of the Friedman test statistic (D = 10).
Functions | PSO | FER-PSO | MPSO | SPSO | DSPSO-ECP
Griewank | 5 | 3 | 4 | 2 | 1
Ackley | 5 | 4 | 3 | 2 | 1
Tablet | 5 | 4 | 3 | 1 | 2
Schwefel 2.22 | 5 | 4 | 3 | 2 | 1
Zakharov | 5 | 4 | 3 | 2 | 1
Sphere | 5 | 4 | 3 | 2 | 1
Schwefel 2.21 | 5 | 4 | 3 | 2 | 1
Dixon-Price | 5 | 4 | 3 | 2 | 1
Total Rank | 40 | 31 | 25 | 15 | 9
Average Rank | 5 | 3.875 | 3.125 | 1.875 | 1.125
Deviation | 2 | 0.875 | 0.125 | −1.125 | −1.875
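For reference, the Friedman statistic implied by the per-function ranks in Table 7 can be recomputed as sketched below. The standard chi-square form of the statistic is assumed here, which may differ from the exact variant reported by the authors.

```python
import numpy as np

# Per-function ranks of the five comparative approaches from Table 7 (D = 10).
ranks = np.array([
    [5, 3, 4, 2, 1],   # Griewank
    [5, 4, 3, 2, 1],   # Ackley
    [5, 4, 3, 1, 2],   # Tablet
    [5, 4, 3, 2, 1],   # Schwefel 2.22
    [5, 4, 3, 2, 1],   # Zakharov
    [5, 4, 3, 2, 1],   # Sphere
    [5, 4, 3, 2, 1],   # Schwefel 2.21
    [5, 4, 3, 2, 1],   # Dixon-Price
])
N, k = ranks.shape                      # N = 8 functions, k = 5 approaches
avg_rank = ranks.mean(axis=0)           # [5, 3.875, 3.125, 1.875, 1.125], as in Table 7
chi2_f = 12 * N / (k * (k + 1)) * np.sum((avg_rank - (k + 1) / 2) ** 2)
print(avg_rank, round(chi2_f, 2))       # chi-square Friedman statistic -> 30.6
```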
Table 8. Computational results of comparative approaches with 5, 8, and 10 threshold values.
Images | K | Statistics | PSO | DE | APSO | CCP | DSPSO-ECP
Stones5Mean5339.85255340.17525339.33305340.17525340.7360
S.D.3.96 × 10−00.42 × 10−01.07 × 10−00.42 × 10−05.95 × 10−3
8Mean5376.06785378.71845380.49485380.53125380.5419
S.D.2.73 × 10−00.61 × 10−00.02 × 10−00.01 × 10−02.35 × 10−2
10Mean5385.97065387.50315388.83015388.97195389.2911
S.D.1.59 × 10−00.61 × 10−00.81 × 10−00.20 × 10−01.11 × 10−1
Island5Mean1664.09411664.11781664.59951664.60221664.6061
S.D.4.65 × 10−11.95 × 10−11.41 × 10−29.37 × 10−32.33 × 10−13
8Mean1706.12171705.77111707.28241707.31471707.3275
S.D.7.86 × 10−14.82 × 10−16.13 × 10−25.65 × 10−22.07 × 10−2
10Mean1716.04411717.06031719.07591719.25481719.3554
S.D.1.91 × 10−04.95 × 10−16.01 × 10−14.15 × 10−22.14 × 10−2
Tiger5Mean1558.03291557.93181558.45061558.45011558.4540
S.D.5.33 × 10−12.22 × 10−19.97 × 10−37.16 × 10−32.33 × 10−13
8Mean1599.88311600.91861602.45821602.51541602.5726
S.D.1.49 × 10−04.69 × 10−13.31 × 10−14.20 × 10−21.03 × 10−3
10Mean1611.47521612.06361613.94681614.05301614.1907
S.D.1.48 × 10−05.17 × 10−13.63 × 10−16.94 × 10−21.22 × 10−2
Worker5Mean3615.38593616.02413616.87273616.87833616.8847
S.D.1.36 × 10−04.07 × 10−11.35 × 10−21.06 × 10−24.67 × 10−13
8Mean3673.96433674.97573677.29243677.38203677.4455
S.D.2.58 × 10−06.99 × 10−11.08 × 10−16.38 × 10−29.84 × 10−3
10Mean3687.29893689.98563692.52103692.65573692.7237
S.D.2.14 × 10−07.65 × 10−11.16 × 10−16.34 × 10−26.67 × 10−3
Surfer I5Mean1712.50201712.79461713.30501713.30861713.3169
S.D.8.28 × 10−13.18 × 10−12.72 × 10−28.85 × 10−32.33 × 10−13
8Mean1760.41051762.01231764.00411764.15811764.2167
S.D.2.16 × 10−06.11 × 10−11.79 × 10−14.76 × 10−28.97 × 10−2
10Mean1774.92831776.30851779.05531779.15771779.3295
S.D.1.93 × 10−08.59 × 10−18.96 × 10−17.31 × 10−21.40 × 10−2
Teste45Mean5427.40745427.84185428.36455428.36955428.3717
S.D.1.20 × 10−03.73 × 10−11.74 × 10−29.78 × 10−31.87 × 10−12
8Mean5471.51515471.99365473.84745473.88615473.9076
S.D.2.45 × 10−05.14 × 10−12.85 × 10−25.32 × 10−21.06 × 10−3
10Mean5483.54305483.61525485.69935485.76775485.8291
S.D.1.33 × 10−07.08 × 10−16.33 × 10−29.38 × 10−27.46 × 10−3
Church5Mean3381.91073381.49233381.92363381.92463381.9246
S.D.1.71 × 10−22.25 × 10−14.62 × 10−39.25 × 10−139.33 × 10−13
8Mean3422.56213420.95783420.78343423.20063423.4542
S.D.2.66 × 10−09.18 × 10−12.87 × 10−01.23 × 10−08.34 × 10−2
10Mean3429.55703432.83333434.32113435.25093435.3928
S.D.2.40 × 10−05.23 × 10−11.80 × 10−01.36 × 10−04.55 × 10−1
Cow5Mean3965.28063964.37223965.29253965.29323965.2932
S.D.1.96 × 10−24.53 × 10−13.22 × 10−34.67 × 10−139.33 × 10−13
8Mean4028.84474026.04904028.92994028.94504028.9562
S.D.7.58 × 10−28.64 × 10−15.48 × 10−21.07 × 10−21.39 × 10−2
10Mean4038.37614040.51364042.99714043.30444043.7911
S.D.2.42 × 10−06.22 × 10−19.34 × 10−18.28 × 10−13.77 × 10−1
Table 9. Computational results of comparative approaches (WOA, GWO, WOA-TH, GWO-TH, and DSPSO-ECP) with 5, 8, and 10 threshold values.
Images | K | Statistics | WOA | GWO | WOA-TH | GWO-TH | DSPSO-ECP
Stones5Mean5340.72085339.94975340.73855340.73855340.7385
S.D.2.45 × 10−24.96 × 10−01.01 × 10−111.01 × 10−111.87 × 10−12
8Mean5379.57065378.41695380.53605379.73905380.5429
S.D.3.04 × 10−01.49 × 10−07.15 × 10−32.87 × 10−01.58 × 10−3
10Mean5388.12295387.11235389.30975389.15515389.3174
S.D.2.08 × 10−02.19 × 10−07.43 × 10−33.04 × 10−14.52 × 10−3
Island5Mean1664.59521664.49861664.60611664.60521664.6061
S.D.1.56 × 10−29.76 × 10−21.60 × 10−125.31 × 10−32.33 × 10−13
8Mean1707.33061706.94311707.36531707.22321707.3565
S.D.3.79 × 10−26.86 × 10−14.81 × 10−22.79 × 10−14.56 × 10−2
10Mean1719.33821718.43191719.36691719.11691719.3623
S.D.3.10 × 10−26.47 × 10−12.32 × 10−34.83 × 10−19.82 × 10−3
Tiger5Mean1554.27341554.16121554.27361554.24101558.4539
S.D.2.49 × 10−38.37 × 10−22.97 × 10−123.45 × 10−22.33 × 10−13
8Mean1597.69551597.28031598.24581598.19091602.5731
S.D.2.19 × 10−01.30 × 10−01.29 × 10−35.88 × 10−27.20 × 10−4
10Mean1609.46821608.10281609.82901609.69961614.1956
S.D.1.31 × 10−02.20 × 10−09.56 × 10−31.50 × 10−17.00 × 10−13
Worker5Mean3616.39463616.73483616.87853616.86413616.8847
S.D.4.81 × 10−01.27 × 10−19.73 × 10−33.41 × 10−24.67 × 10−13
8Mean3676.30503673.81703677.44763677.32913677.4483
S.D.3.62 × 10−02.38 × 10−06.66 × 10−38.68 × 10−29.33 × 10−13
10Mean3691.57183689.25683692.65883692.60933692.7201
S.D.2.45 × 10−04.09 × 10−06.34 × 10−16.48 × 10−22.75 × 10−2
Surfer I5Mean1713.30891713.20911713.31691713.29781713.3169
S.D.2.08 × 10−27.68 × 10−24.57 × 10−133.39 × 10−22.33 × 10−13
8Mean1764.08381763.57361764.12741763.98121764.1835
S.D.2.12 × 10−11.04 × 10−01.83 × 10−11.71 × 10−11.45 × 10−1
10Mean1779.18291778.17281779.33121778.73971779.3396
S.D.9.28 × 10−11.36 × 10−07.61 × 10−31.18 × 10−04.62 × 10−4
Teste45Mean5355.85205355.77335355.85555355.85555428.3717
S.D.1.21 × 10−28.47 × 10−25.48 × 10−25.48 × 10−121.87 × 10−12
8Mean5399.65645399.06985399.65655399.56385473.9078
S.D.2.91 × 10−39.46 × 10−14.65 × 10−31.27 × 10−11.81 × 10−12
10Mean5411.09845410.35415411.10165410.85125485.8309
S.D.7.79 × 10−37.74 × 10−15.41 × 10−31.86 × 10−15.03 × 10−3
Church5Mean3375.87213374.21973375.91183371.98133381.9246
S.D.2.65 × 10−06.74 × 10−02.62 × 10−09.65 × 10−09.33 × 10−13
8Mean3416.68153415.44153417.08763415.65583421.3577
S.D.2.40 × 10−01.74 × 10−01.78 × 10−03.51 × 10−02.83 × 10−0
10Mean3429.49013426.60183429.72393428.19373434.8720
S.D.1.42 × 10−02.53 × 10−01.11 × 10−02.06 × 10−01.80 × 10−0
Cow5Mean3965.27953965.17393965.29323964.60963965.2932
S.D.2.01 × 10−29.72 × 10−26.40 × 10−122.93 × 10−04.67 × 10−13
8Mean4028.94104027.78924028.84744027.72104028.9619
S.D.1.83 × 10−29.47 × 10−16.11 × 10−11.69 × 10−01.18 × 10−2
10Mean4043.82134042.93094043.74994043.59494043.3274
S.D.6.24 × 10−11.17 × 10−07.05 × 10−17.29 × 10−18.42 × 10−1
Table 10. Best fitness values of comparative approaches with 5, 8, and 10 threshold values.
Images | K | Statistics | WOA | GWO | WOA-TH | GWO-TH | DSPSO-ECP
Stones5Best5340.73855340.73855340.73855340.73855340.7385
8Best5380.54335380.50375380.54335380.54335380.5433
10Best5389.32005389.28405389.32005389.32005389.3200
Island5Best1664.60611664.60611664.60611664.60611664.6061
8Best1707.41681707.32591707.41681707.41681707.4168
10Best1719.36731719.26671719.36731719.36731719.3673
Tiger5Best1554.27361554.27361554.27361554.27361558.4539
8Best1598.24711598.19501598.24711598.24711602.5735
10Best1609.83911609.78461609.83911609.83911614.1956
Worker5Best3616.88473616.87123616.88473616.88473616.8847
8Best3677.44833677.41433677.44833677.44833677.4483
10Best3692.73073692.67523692.73073692.73073692.7307
Surfer I5Best1713.31691713.31691713.31691713.31691713.3169
8Best1764.24311764.23861764.24311764.24311764.2431
10Best1779.33981779.31061779.33981779.33981779.3398
Teste45Best5355.85555355.85555355.85555355.85555428.3717
8Best5399.65825399.63725399.65825399.65825473.9078
10Best5411.10635411.07145411.10635411.10635485.8333
Church5Best3376.17383376.17383376.17383376.17383381.9246
8Best3417.74173417.66033417.74173417.74173423.6059
10Best3430.25963430.07973430.27873430.27873435.9076
Cow5Best3965.29323965.29323965.29323965.29323965.2932
8Best4028.96684028.81554028.96684028.96684028.9668
10Best4044.14984043.91504044.14984044.14984044.1498
Table 11. Wilcoxon signed-rank test results at a significance level of α = 5%.
Images | K | WOA | GWO | WOA-TH | GWO-TH
Stones50.0177 (4)0.7888 (8)0.0000 (0)0.0000 (0)
80.9723 (12)2.1260 (12)0.0069 (4)0.8039 (13)
101.1945 (15)2.2051 (13)0.0077 (5)0.1623 (7)
Island50.0109 (2)0.1075 (1)0.0000 (0)0.0009 (1)
80.0259 (7)0.4134 (6)−0.0088 (7)0.1333 (6)
100.0241 (6)0.9304 (9)−0.0046 (2)0.2454 (9)
Tiger54.1805 (16)4.2927 (16)4.1803 (12)4.2129 (15)
84.8776 (19)5.2928 (17)4.3273 (14)4.3822 (16)
104.7274 (18)6.0928 (19)4.3666 (15)4.4960 (17)
Worker50.4901(10)0.1499 (4)0.0062 (3)0.0206 (3)
81.1433 (13)3.6313 (15)0.0007 (1)0.1192 (5)
101.1483 (14)3.4633 (14)0.0613 (9)0.1108 (4)
Surfer I50.0080 (1)0.1078 (2)0.0000 (0)0.0191 (2)
80.0997 (8)0.6099 (7)0.0561 (8)0.2023 (8)
100.1567 (9)1.1668 (10)0.0084 (6)0.5999 (11)
4Teste4572.5197 (22)72.5984 (22)72.5162 (18)72.5162 (21)
874.2514 (23)74.8380 (23)74.2513 (19)74.3440 (22)
1074.7325 (24)75.4768 (24)74.7293 (20)74.9797 (23)
Church56.0525 (21)7.7049 (20)6.0128 (17)9.9433 (20)
84.6762 (17)5.9162 (18)4.2701 (13)5.7019 (18)
105.3819 (20)8.2702 (21)5.1481 (16)6.6783 (19)
Cow50.0137 (3)0.1193 (3)0.0000 (0)0.6836 (12)
80.0209 (5)1.1727 (11)0.1145 (10)1.2409 (14)
10−0.4939 (11)0.3965 (5)−0.4225 (11)−0.2675 (10)
W(R+) | 289 | 300 | 190 | 266
W(R−) | 11 | 0 | 20 | 10
T(R) | 11 | 0 | 20 | 10
H | 1 | 1 | 1 | 1
p-value | 7.14 × 10−5 | 1.82 × 10−5 | 1.51 × 10−3 | 9.90 × 10−5
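As an illustration of the test summarized in Table 11, a Wilcoxon signed-rank comparison between DSPSO-ECP and one competing approach can be run with SciPy as sketched below. The paired samples are synthetic placeholders, not the experimental fitness values; H = 1 in Table 11 is read here as rejection of the null hypothesis at the 5% level.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
# Placeholder paired fitness values for 24 image/threshold combinations.
dspso_ecp = rng.normal(loc=5340.74, scale=0.01, size=24)
woa = dspso_ecp - np.abs(rng.normal(loc=0.5, scale=0.2, size=24))

stat, p_value = wilcoxon(dspso_ecp, woa)   # two-sided test by default
reject_h0 = p_value < 0.05                 # compare with the significance level used in Table 11
print(stat, p_value, reject_h0)
```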
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
