Article

Optimizing Multiple Entropy Thresholding by the Chaotic Combination Strategy Sparrow Search Algorithm for Aggregate Image Segmentation

1 School of Information, Chang’an University, Xi’an 710064, China
2 School of Electrical and Electronic Engineering, Wenzhou University, Wenzhou 325035, China
3 School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
* Authors to whom correspondence should be addressed.
Entropy 2022, 24(12), 1788; https://doi.org/10.3390/e24121788
Submission received: 12 October 2022 / Revised: 26 November 2022 / Accepted: 3 December 2022 / Published: 6 December 2022

Abstract

Aggregate measurement and analysis are critical for civil engineering. Multiple entropy thresholding (MET) is inefficient, and the accuracy of related optimization strategies is unsatisfactory, so segmented aggregate images lose many surface roughness and aggregate edge features. This research therefore proposes an autonomous segmentation model (PERSSA-MET) that optimizes MET with a chaotic combination strategy sparrow search algorithm (SSA). First, to handle the many extreme values characteristic of aggregate images, a novel expansion parameter and a range-control elite mutation strategy were studied and combined with piecewise mapping; the resulting algorithm is named PERSSA. It was compared with seven optimization algorithms using benchmark function experiments and a Wilcoxon rank-sum test, which confirmed PERSSA's superiority. Then, PERSSA was utilized to swiftly determine MET thresholds, where the METs were Renyi entropy, symmetric cross entropy, and Kapur entropy. In segmentation experiments on aggregate images, PERSSA-MET effectively segmented more details: compared with SSA-MET, it achieved 28.90%, 12.55%, and 6.00% improvements in the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM), respectively. Finally, a new parameter, the overall merit weight proportion (OMWP), is suggested to quantify this segmentation method's superiority over the other algorithms. The results show that PERSSA-Renyi entropy performs best: it effectively retains the aggregate surface texture features and attains a balance between accuracy and speed.

1. Introduction

Aggregate particles are widely applied in civil engineering, such as in road traffic, railway, and housing construction. Natural aggregates, such as rough gravels and smooth pebbles, are irregular in shape. The geometric characteristics of aggregates, such as size, shape, and roughness, are related to aggregate quality evaluation [1,2], and it is crucial to detect them accurately and efficiently. Image processing techniques are useful for aiding aggregate detection. Since aggregate particles are mostly obtained from a muck pile, they often touch and overlap each other, and their surfaces are very rough. Hence, aggregate images are usually very noisy, and image processing for aggregates is harder than for other particles or grains.
The multi-class segmentation algorithm is a popular image processing algorithm that automatically divides an image into multiple homogeneous regions based on features such as discontinuity, similarity in color or gray scale, and texture [3]. For aggregate image segmentation, thresholding, region growing, clustering, and semantic segmentation are the most commonly utilized algorithms. Among them, the watershed [4], region growing [5], and clustering algorithms [6] are typically effective for segmenting aggregate images with clear edges, but the gray-scale transition from aggregate surfaces to edges is gentle. When the aggregate particles overlap or touch each other, these algorithms suffer from severe under-segmentation, resulting in large particle sizes and gray-scale fusion, which loses surface roughness features. In recent years, semantic segmentation [7] has achieved impressive success in aggregate particle size detection, but parallel segmentation of images with surface textures remains challenging. Thresholding is a straightforward algorithm whose basic features are the peaks and valleys of an image histogram. The peaks are the most informative gray-scale ranges in the image, and the valleys are the least informative. The number of peaks and valleys in a general image is not fixed, and the valleys can be used as the optimal segmentation thresholds [8]. Thresholding segmentation is simple and robust. It divides pixels into a limited number of classes based on intensity values and a set of thresholds, and it is suitable for a variety of noisy aggregate images.
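As a minimal, hypothetical illustration of how a fixed set of thresholds divides pixels into classes (the image values and thresholds below are invented for the example; the rest of the paper is about choosing such thresholds well):

```python
import numpy as np

# Hypothetical 4x4 gray-scale image (values 0-255).
img = np.array([[10, 40,  90, 200],
                [15, 60, 120, 210],
                [20, 70, 130, 220],
                [25, 80, 140, 250]])

# Two thresholds split the intensity range into three classes.
thresholds = [50, 150]
labels = np.digitize(img, thresholds)        # per-pixel class index 0..2

# Replace each class with a representative gray level (its class mean).
segmented = np.zeros_like(img)
for c in range(len(thresholds) + 1):
    mask = labels == c
    segmented[mask] = int(img[mask].mean())
```

With K thresholds, the same loop yields K + 1 gray levels, which is exactly the quantization that MT/MET performs once the thresholds are known.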
Dual thresholds [9] can be employed to separate rough or smooth edges and surface textures in aggregate images. Furthermore, compared to global thresholding, multiple thresholding (MT) can easily separate touching aggregates while maintaining the particle edges, surface roughness, and other details [10]. MT is unaffected by neighboring pixels with similar gray scales. Since the gray scales of aggregate images mostly change gently, MT has better segmentation accuracy and robustness. The algorithm's success depends on determining proper thresholds. Adaptive MT, such as Otsu [11] and multiple entropy thresholding (MET) [12,13], can determine thresholds automatically. MET measures the amount of image information using entropy and searches for the multiple thresholds that drive this information measure to its extremum, dividing the histogram and thereby segmenting the image. Common entropies include Renyi entropy [14], symmetric cross entropy [15], Kapur entropy [16], Tsallis entropy, exponential entropy, etc. Kapur entropy is also called maximum entropy. Renyi entropy and Tsallis entropy are extensions of Shannon entropy. They are widely utilized in object recognition and image segmentation. However, they all share the same issue: the operation time grows exponentially as the number of thresholds increases.
An optimization algorithm can assist the above METs to swiftly determine the thresholds and lower the computation time significantly [17,18,19]. Existing optimization algorithms include particle swarm optimization (PSO) [20], cuckoo search (CS) [21], the bat algorithm (BAT) [22], gray wolf optimization (GWO) [23], the whale optimization algorithm (WOA) [24], the mayfly algorithm (MA) [25], the sparrow search algorithm (SSA) [26], etc. Their accuracy, speed, and stability are all affected by the population distribution and search paths [19]. For example, the search path of WOA is a spiral, causing the whales to move quickly and WOA to be fast. To prevent the sparrows from repeatedly searching the same position, the population locations of SSA are saved in a matrix. SSA divides the population into producers, scroungers, and vigilantes. These three types of sparrows simultaneously search for the optimal solution, which is quite efficient, and each sparrow has two update mechanisms, making SSA quite robust.
However, these optimization algorithms have two drawbacks, an incomplete global search and entrapment in local areas, which reduce their accuracy. Laskar et al. [27] overcame local stagnation by combining WOA and PSO and achieved a breakthrough in accuracy, but the computational complexity increased massively. Kumar et al. [28] adopted a chaotic map to make the global search more comprehensive. Later, Chen et al. [29] added Levy flight on that basis to jump out of local areas; although the accuracy improved, it consumed a lot of time. At present, the algorithm with the best balance of accuracy, speed, and stability is SSA [26], which has proven to be a good parameter selector [30,31,32] in applications such as network configuration [33], route planning [34], and micro-grid clustering [35]. Thus, this paper optimizes MET based on SSA and corrects the two flaws mentioned above.
In this study, three strategies are proposed to enhance the accuracy and stability of SSA. First, piecewise mapping was employed to make the sparrow distribution more uniform. Second, an expansion parameter was studied to increase the search range, which improves the accuracy without adding to the iteration time. Third, a range-control elite mutation strategy was put forward to ensure that a stagnant sparrow leaps out of its local area, giving it a chance to find better values than the original SSA. The new algorithm is called PERSSA. PERSSA was utilized to optimize three METs applied to aggregate images: Renyi entropy, symmetric cross entropy, and Kapur entropy. This model is called PERSSA-MET.
In short, the main contributions of this paper are listed as follows:
1. The PERSSA-MET is proposed for the segmentation of aggregate images, which can effectively preserve the surface roughness and edge features of aggregates and achieve the best balance between accuracy and speed.
2. Aiming at the characteristic that the aggregate image histogram changes smoothly but has many extreme points, a novel expansion parameter and range-control elite mutation strategies were studied, which can effectively jump out of the local optimum. Combining them with piecewise mapping improved the optimization accuracy and stability of SSA. This algorithm is called PERSSA.
3. A comparative experiment was carried out on three popular METs, which were Renyi entropy, symmetric cross entropy, and Kapur entropy, and the results show that PERSSA-Renyi entropy is better for aggregate image segmentation.
4. To comprehensively evaluate all methods, an overall merit weight proportion (OMWP) was created to quantify the dominance of the algorithm in all algorithms, which combined precision, stability, and speed.
The remainder of this paper is structured as follows: Section 2 explains the basics of MET and SSA. Section 3 describes PERSSA-MET in detail, including PERSSA and its process for optimizing MET. Section 4 verifies the performance and effectiveness of PERSSA and PERSSA-MET through benchmark function tests and segmentation experiments on various aggregate images. Finally, Section 5 concludes the study and looks forward to future work.

2. Related Works

In this section, the threshold determination methods of three METs are first introduced, and then SSA and its recent developments are described.

2.1. MET

Multiple entropy thresholding (MET) utilizes a histogram to classify pixels into categories based on gray scale and assigns the nearest gray scale to each category. Common METs are Renyi entropy [14], symmetric cross entropy [15], and Kapur entropy [16]. Each MET determines the information differently.
When the number of thresholds is K, the histogram is divided into K + 1 regions. The information amounts of these regions are H_1, H_2, …, H_{K+1}, and the total information is E = H_1 + H_2 + … + H_{K+1}.
The information amounts for Renyi entropy, symmetric cross entropy, and Kapur entropy in the k-th region can be expressed as Equations (1)–(3):
H_K^{\mathrm{Renyi}} = \frac{1}{1-\alpha}\,\ln\!\left(\sum_{i=l_{K-1}}^{l_K}\left(\frac{P_i}{\omega_K}\right)^{\alpha}\right) \tag{1}

H_K^{\mathrm{Symmetric\ Cross}} = \sum_{i=l_{K-1}}^{l_K} h_i\left(i\cdot\ln\frac{i}{u_K} + u_K\cdot\ln\frac{u_K}{i}\right) \tag{2}

H_K^{\mathrm{Kapur}} = -\sum_{i=l_{K-1}}^{l_K}\frac{P_i}{\omega_K}\,\ln\frac{P_i}{\omega_K} \tag{3}
where [l_{K-1}, l_K] is the gray-scale range of the K-th region, 0 ≤ l_{K-1} ≤ l_K ≤ L, L is the maximum gray scale of the image, h_i is the frequency of gray scale i, u_K is the mean gray scale of the region, P_i ∈ [0, 1] is the probability that the gray scale is i, ω_K = Σ_{i=l_{K-1}}^{l_K} P_i is the sum of the probabilities within the region, and α ∈ [0, 1) is a tunable parameter; we take α = 0.5.
To calculate the thresholds l_best(1, 2, …, K) that drive E to its extremum, which are the final segmentation thresholds, the functions corresponding to the three METs are presented in Equations (4)–(6):

l_{best}^{\mathrm{Renyi}}(1, 2, \ldots, K) = \arg\max\{E\} \tag{4}

l_{best}^{\mathrm{Symmetric\ Cross}}(1, 2, \ldots, K) = \arg\min\{E\} \tag{5}

l_{best}^{\mathrm{Kapur}}(1, 2, \ldots, K) = \arg\max\{E\} \tag{6}
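To make the search problem concrete, a minimal sketch of Kapur's objective (Eq. (3)) and the exhaustive search for l_best (Eq. (6)) might look as follows; the function names and the toy histogram are our own, and the brute-force search is exactly the exponential-cost step that the optimization algorithms in Section 3 replace:

```python
import itertools
import numpy as np

def kapur_entropy(hist, thresholds, L=256):
    """Total information E: sum of per-region Kapur entropies (Eq. (3))."""
    p = hist / hist.sum()                    # P_i, gray-level probabilities
    edges = [0] + sorted(thresholds) + [L]
    E = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                   # omega_K, region probability mass
        if w <= 0:
            continue                         # empty region contributes nothing
        q = p[lo:hi] / w
        q = q[q > 0]
        E += -(q * np.log(q)).sum()          # region entropy H_K
    return E

def best_thresholds(hist, K, L=256):
    """l_best = argmax E by brute force (Eq. (6)); feasible only for small K."""
    return max(itertools.combinations(range(1, L), K),
               key=lambda t: kapur_entropy(hist, list(t), L))
```

For example, on a uniform 4-bin histogram with K = 1, the search returns the middle threshold 2, which splits the probability mass evenly.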
These three METs have varied segmentation effects on the objects or histograms of distinct features [3,36], and Table 1 illustrates their differences in aggregate image segmentation. Since there are too many particles for an intuitive review, only a part of the aggregate image, its histogram, and its segmentation results at K = 2 are shown.
Renyi entropy and Kapur entropy detected more bright details for black aggregates, whereas symmetric cross entropy recognized more shadow details; for white aggregates, the emphasis was reversed. The main reason for this difference is the changes in the histogram caused by the color, texture, and target ratio of the aggregate image.
The histogram of an aggregate image normally fluctuates smoothly at the peaks and valleys, but there are several extreme points, which correspond to aggregate characteristics. These points lie close together, yet if two thresholds differ by only one gray-scale value, the segmentation results may be quite dissimilar. Therefore, the accuracy and stability of the segmented images are directly influenced by the performance of the optimization algorithm.

2.2. SSA

The sparrow search algorithm (SSA) [26] is an optimization algorithm that mimics sparrow foraging and anti-predation behavior. The sparrow position x_{i,j} determines its fitness value f(x_{i,j}), where i ∈ [1, n] indexes the n sparrows and j ∈ [1, d] indexes the d-dimensional search space.
During foraging, the sparrows are separated into producers and scroungers. The producers have better fitness values, and they provide the foraging area and direction. The producers update their positions according to Equation (7):
x_{i,j}^{t+1} = \begin{cases} x_{i,j}^{t}\cdot\exp\left(\dfrac{-i}{\alpha\cdot T_{max}}\right), & R_2 < ST \\ x_{i,j}^{t} + Q\cdot L, & R_2 \ge ST \end{cases} \tag{7}
where t is the current iteration number, T_max is the maximum number of iterations, α ∈ (0, 1] is a random number, R_2 ∈ (0, 1] is the alarm value, ST ∈ [0.5, 1] is the safe value, Q is a random number that obeys a normal distribution, and L is a 1 × d all-ones matrix.
The scroungers follow the producers to obtain food, but some extremely hungry scroungers will convert their foraging paths. The scroungers update their positions according to Equation (8):
x_{i,j}^{t+1} = \begin{cases} x_{best}^{t+1} + \left|x_{i,j}^{t} - x_{best}^{t+1}\right|\cdot A^{+}\cdot L, & i \le n/2 \\ Q\cdot\exp\left(\dfrac{x_{worst}^{t} - x_{i,j}^{t}}{i^{2}}\right), & i > n/2 \end{cases} \tag{8}
where x_best is the optimal position, x_worst is the worst position, A is a 1 × d matrix whose elements are randomly assigned ±1, and A^+ = A^T(AA^T)^{-1}.
During anti-predation, the vigilantes are randomly generated. The sparrows on the edge quickly move to the safe location, and the remaining sparrows approach each other. The vigilantes update their positions according to Equation (9):
x_{i,j}^{t+1} = \begin{cases} x_{best}^{t} + \beta\cdot\left|x_{i,j}^{t} - x_{best}^{t}\right|, & f_i > f_{best} \\ x_{i,j}^{t} + K\cdot\dfrac{\left|x_{i,j}^{t} - x_{worst}^{t}\right|}{(f_i - f_{worst}) + \varepsilon}, & f_i = f_{best} \end{cases} \tag{9}
where β is the step size control parameter, which obeys the standard normal distribution; K ∈ [−1, 1] controls the sparrows' movement direction; ε is a very small constant that avoids division by zero; f_best is the best fitness value; and f_worst is the worst fitness value.
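As a minimal sketch of the producer rule (Eq. (7)); the random quantities R2, alpha, and Q are passed in explicitly here for clarity, and the scrounger and vigilante rules (Eqs. (8) and (9)) would be implemented analogously with their own branch conditions:

```python
import numpy as np

def producer_update(X, T_max, R2, ST=0.8, alpha=0.5, Q=0.0):
    """Producer position update, Eq. (7). X is the n x d matrix of producer
    positions; R2 is the alarm value, ST the safe value. The random draws
    alpha (uniform on (0, 1]) and Q (standard normal) are passed in
    explicitly so the update is easy to test."""
    n, d = X.shape
    if R2 < ST:
        # Safe: shrink each position, faster for later-ranked sparrows.
        i = np.arange(1, n + 1)[:, None]
        return X * np.exp(-i / (alpha * T_max))
    # Alarm: jump by a normally distributed step (L is the 1 x d all-ones row).
    return X + Q * np.ones((1, d))
```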
Figure 1 shows the optimization principle of SSA. The optimization accuracy and speed of SSA are directly related to the sparrows’ position distribution, search path, and local optimal solution. Some scholars have proposed evolutionary strategies for these three points. Lv et al. [37] introduced the bird swarm algorithm in the SSA’s producers, and the precision increased, but the speed decreased. Chen et al. [29] combined a chaotic map, dynamic weight, and Levy flight with SSA (CDLSSA) to achieve better optimization accuracy and stability, but the segmentation time was lengthened. To ensure speed, Liu et al. [34] proposed to utilize a chaotic map and the adaptive inertia weight optimization SSA (CASSA), but the accuracy improvement was minor. Currently, there is no strategy that can achieve the best balance of segmentation accuracy and convergence speed for the SSA.

3. Proposed Method

In this section, the proposed PERSSA is first introduced, and the specific steps of PERSSA to optimize MET for segmented aggregate images are described.

3.1. PERSSA

PERSSA combines piecewise mapping with the expansion parameter and range-control elite mutation proposed in this paper for the first time, which can effectively jump out of the local optimum and improve the accuracy and stability of the SSA without reducing the convergence speed.

3.1.1. Piecewise Mapping

In the SSA, the sparrows' initial positions are random, and clustered sparrows hinder the global search. Therefore, it is recommended to introduce a chaotic map into the population initialization process to increase the randomness and uniformity of the initial positions. Varol et al. [38] compared ten chaotic maps, such as the circle, logistic, piecewise, singer, and tent mappings. Piecewise mapping is the most accurate and stable among them, and it can quickly perturb the population without increasing the optimization time, so it is used here to perturb the sparrows' initial positions. Piecewise mapping can be described by Equation (10):
x(k+1) = \begin{cases} \dfrac{x(k)}{P}, & 0 \le x(k) < P \\ \dfrac{x(k) - P}{0.5 - P}, & P \le x(k) < 0.5 \\ \dfrac{1 - P - x(k)}{0.5 - P}, & 0.5 \le x(k) < 1 - P \\ \dfrac{1 - x(k)}{P}, & 1 - P \le x(k) < 1 \end{cases} \tag{10}
where P ∈ (0, 1) is the control parameter and P ≠ 0.5. Its value affects the randomness and uniformity of the sequence. Figure 2 shows the chaotic sequences for P = 0.4, 0.6, 0.8 when x(1) = 0.1.
It can be seen that piecewise mapping has strong randomness. The closer P is to 1, the less uniform the sequence. The chaotic sequence is most uniform when P = 0.4.
The steps for piecewise mapping to perturb the sparrows’ initial positions are as follows:
(1) Randomly generate values x_{i,j}(0), (i = 1, 2, …, n; j = 1, 2, …, d) in [0, 1].
(2) Set the P value and generate the chaotic sequence through Equation (10). We take P = 0.4.
(3) Convert the x_{i,j} range from [0, 1] to [lb, ub] via x_{i,j} = lb + x_{i,j} · (ub − lb), where ub and lb are the upper and lower bounds of the search space, respectively. Take x_{i,j} as the sparrows' initial positions.
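The three steps above can be sketched as follows (a minimal illustration; the function names are our own):

```python
import numpy as np

def piecewise_map(x, P=0.4):
    """One step of the piecewise chaotic map, Eq. (10), for P in (0, 0.5)."""
    if x < P:
        return x / P
    if x < 0.5:
        return (x - P) / (0.5 - P)
    if x < 1 - P:
        return (1 - P - x) / (0.5 - P)
    return (1 - x) / P

def init_positions(n, d, lb, ub, P=0.4, seed=0):
    """Chaotic initialization: random seeds in [0, 1] are perturbed by the
    piecewise map, then scaled to the search space [lb, ub]."""
    rng = np.random.default_rng(seed)
    x = rng.random((n, d))                    # step (1): random values
    x = np.vectorize(piecewise_map)(x, P)     # step (2): chaotic perturbation
    return lb + x * (ub - lb)                 # step (3): scale to [lb, ub]
```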

3.1.2. Expansion Parameter

The producers are governed by the function y = e^{−x}: as the number of iterations increases, the search scope narrows rapidly. This reduces the global search capability, making it easy to fall into a local region and lowering the optimization precision. Thus, an expansion parameter, σ, is proposed to widen the search range. At an early stage of the iteration (t < T_max/2), σ is larger and spreads the sparrows as far as possible. At a late stage (t ≥ T_max/2), the sparrows are concentrated, so σ is smaller. The expansion parameter can be expressed by Equation (11):
\sigma = \begin{cases} \sigma_{max}\cdot\dfrac{1 + \cos(t\pi/T_{max})}{2}, & t < T_{max}/2 \\ \sigma_{min} + \dfrac{1 - \cos(t\pi/T_{max})}{2}, & t \ge T_{max}/2 \end{cases} \tag{11}
where σ_max is the parameter at the beginning of the iteration and σ_min is the parameter at the end. Experiments showed that the expansion effect is best when σ_max is close to 1 and σ_min is close to −1; we take σ_max = 0.99 and σ_min = −0.99.
By adding Equation (11) into the producers’ location update, Equation (7) becomes Equation (12).
x_{i,j}^{t+1} = \begin{cases} x_{i,j}^{t}\cdot\exp\left(\dfrac{-i}{\sigma\cdot\alpha\cdot T_{max}}\right), & R_2 < ST \\ x_{i,j}^{t} + Q\cdot L, & R_2 \ge ST \end{cases} \tag{12}
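A small sketch of the σ schedule in Eq. (11) (the function name is ours); σ then enters the denominator of the producer update in Eq. (12):

```python
import numpy as np

def expansion(t, T_max, s_max=0.99, s_min=-0.99):
    """Expansion parameter sigma, Eq. (11): close to s_max early in the run
    (wide search) and small late in the run (concentrated search)."""
    c = np.cos(t * np.pi / T_max)
    if t < T_max / 2:
        return s_max * (1 + c) / 2
    return s_min + (1 - c) / 2
```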

3.1.3. Range-Control Elite Mutation

The sparrows encounter many locally optimal solutions during the search. If an individual cannot escape in time, the population is consumed excessively, and better solutions are missed. Existing strategies, such as Cauchy–Gaussian mutation, Levy flight, and random walk, do not ensure that the sparrow leaves the neighborhood, and some of them take more time.
Hence, a range-control elite mutation strategy is studied, which can be executed quickly while improving the precision of the solution. Each iteration selects the elite sparrow (x_best, the one with the highest fitness value f_best) to jump out of the local region and regulates the distance it moves. When x_best is far from x_worst (the sparrow with the lowest fitness value), the population is still scattered, so x_best is randomly mutated within (lb, ub). Conversely, when x_best and x_worst are close together, the population has assembled, so x_best is controlled to mutate outside the rectangular area between x_best and x_worst but inside (lb, ub). This ensures that the elite sparrow continues to optimize globally while avoiding the local optimum. The elite sparrow updates its position according to Equation (13).
x_{best} = \begin{cases} (ub - lb)\cdot randn + lb, & \left|x_{best} - x_{worst}\right| > \dfrac{ub - lb}{2} \\ 2x_{best} - x_{worst} + \left((ub - lb)\cdot randn + lb\right)\cdot randn, & \left|x_{best} - x_{worst}\right| \le \dfrac{ub - lb}{2} \end{cases} \tag{13}
This strategy can greatly increase the probability of the sparrow jumping out of the local optimal solution, and it does not consume iteration time compared with the traditional strategies.
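A one-dimensional sketch of Eq. (13); the two standard-normal draws, r1 and r2, are passed in explicitly so the branches are easy to test (in PERSSA they would be fresh randn samples):

```python
def elite_mutation(x_best, x_worst, lb, ub, r1, r2):
    """Range-control elite mutation, Eq. (13), in one dimension.
    r1 and r2 play the role of the randn draws in the equation."""
    if abs(x_best - x_worst) > (ub - lb) / 2:
        # Population still scattered: mutate freely over the search space.
        return (ub - lb) * r1 + lb
    # Population clustered: push the elite outside the x_best-x_worst area.
    return 2 * x_best - x_worst + ((ub - lb) * r2 + lb) * r1
```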

3.2. PERSSA-MET

PERSSA can effectively address the low efficiency of the exhaustive MET method and the low accuracy and stability of related improvement strategies. In image segmentation, PERSSA employs MET as the objective function and the image histogram as the search space. PERSSA distributes the sparrows globally and converges quickly so that the objective function reaches an extreme value, and the corresponding sparrow positions are the segmentation thresholds. This model is called PERSSA-MET. The related processes for segmenting aggregate images with PERSSA-MET are illustrated in Figure 3.
The red words in Figure 3 represent the innovation points of this research. The detailed PERSSA process is shown in the dashed box, which is divided into three stages: (1) parameter initialization, including the parameters of PERSSA, the parameters extended from MET, and the population's initial position, x(0), after the chaotic mapping; (2) position update, in which the sparrows are divided based on their fitness values and three equations are applied to update the positions of the three kinds of sparrows; and (3) iteration and mutation, in which a greedy algorithm keeps the better solutions at each update, and each iteration selects the global optimal sparrow to perform the range-control elite mutation. At the last iteration, the population converges; the final output f(x_{best,d}) is employed to assess the optimization algorithm, and x_{best,d} is used as the thresholds to segment the aggregate image.
In Figure 3, PD is the proportion of producers and SD is the proportion of vigilantes.
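The three stages can be summarized with a heavily simplified, self-contained loop. This is our own sketch, not the paper's full algorithm: a clipped Gaussian perturbation stands in for the producer/scrounger/vigilante rules of Eqs. (12), (8), and (9), and `fitness` would be one of the MET objectives over candidate thresholds:

```python
import numpy as np

def perssa_sketch(fitness, n, d, lb, ub, T_max, seed=0):
    """Simplified PERSSA-style loop: init, perturb, greedy keep, elite
    mutation. `fitness` maps a d-vector to a score to be maximized."""
    rng = np.random.default_rng(seed)
    # Stage (1): initialization (piecewise chaotic map omitted for brevity).
    X = lb + rng.random((n, d)) * (ub - lb)
    f = np.array([fitness(x) for x in X])
    for t in range(T_max):
        # Stage (2): position update (stand-in for Eqs. (12), (8), (9)).
        cand = np.clip(X + rng.normal(scale=0.05 * (ub - lb), size=X.shape),
                       lb, ub)
        fc = np.array([fitness(x) for x in cand])
        better = fc > f                      # Stage (3): greedy selection
        X[better], f[better] = cand[better], fc[better]
        # Elite mutation: re-seed the best sparrow, kept only if it improves.
        mut = lb + rng.random(d) * (ub - lb)
        b = int(f.argmax())
        if fitness(mut) > f[b]:
            X[b], f[b] = mut, fitness(mut)
    b = int(f.argmax())
    return X[b], f[b]
```

In PERSSA-MET proper, the returned position would be the set of segmentation thresholds and the returned fitness the final E value.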

4. Experiments and Analyses

In this section, on the one hand, PERSSA and seven existing optimization algorithms were evaluated through benchmark function tests; on the other hand, aggregate image segmentation experiments were conducted to assess the performance of PERSSA-MET.
For a fair comparison, all experiments were performed on a PC equipped with an Intel (R) Core (TM) i5-10400F @2.90 GHz CPU and 16 GB RAM, and they were implemented using MATLAB-R2018b within Win-10.

4.1. Evaluation of PERSSA

The benchmark function test was utilized to evaluate the performance of the optimization algorithms, and six benchmark functions were selected, as shown in Table 2. A uni-modal function has only one optimal value, which tests the local convergence ability of an optimization algorithm, while a multi-modal function has several local optimal values and one global optimal value, which assesses the global search ability. The dimension (D) of the benchmark functions was uniformly set to 30, and their two-dimensional surfaces are illustrated in Figure 4.
PERSSA was compared with seven algorithms: PSO [20], GWO [23], WOA [24], MA [25], SSA [26], CDLSSA [29], and CASSA [34]. To ensure fairness, the parameters were set to maximize the performance of each optimization algorithm: a population size of 60 and a maximum iteration number of 600. Furthermore, in PSO, C_1 = C_2 = 1.5 and ω = 0.74; in GWO, α decreased linearly from 2 to 0 and r_1, r_2 ∈ [0, 1]; in WOA, α ∈ [0, 2], b = 1, and l ∈ [−1, 1]; in MA, g = 1, g_damp = 1, α_1 = 1, and α_2 = α_3 = 1.5; in SSA, CDLSSA, CASSA, and PERSSA, PD = 20%, SD = 10%, and ST = 0.8; in CDLSSA, levy_beta = 1.5 and K = 2; and in CASSA, S = 1.
The best value (Best), the average value (Avg), the standard deviation (SD), and the time consumption (T) were selected as the evaluation indicators based on the accuracy and speed of the optimization algorithm, and the unit of T is seconds. Since the sparrows’ initial positions are random, the average value of 60 optimization experiments was taken for the Best, Avg, and SD, and T is the total time spent during the experiment. The relevant statistics are provided in Table 3, in which the bold characters are the best values when comparing the seven optimization algorithms horizontally.
It can be seen from Table 3 that PERSSA had the best overall performance. On the uni-modal functions F1–F3, the Best, Avg, and SD of PERSSA were significantly better than those of the other algorithms, and its T was only slightly worse than the optimal T. On the multi-modal functions F4–F6, PERSSA was still superior, and its T was optimal on F6. In this experiment, WOA, SSA, CASSA, and PERSSA were faster, while CDLSSA and PERSSA had the highest accuracy and stability; however, the T of CDLSSA was roughly twice that of PERSSA. Taken together, PERSSA achieved the best balance of speed and precision.
Figure 5 displays the convergence curves of the eight algorithms in a random experiment to observe the convergence process of PERSSA. For the 30-dimensional functions, PERSSA had the highest accuracy with fewer iterations and converged in approximately 200 iterations. The accuracy of CDLSSA and PERSSA was the same on F1, F2, and F5, but a single CDLSSA iteration took too long, making it inefficient. On F3, F4, and F6, the initial values of PERSSA were closer to the optimal value, which is attributed to the piecewise mapping added during population initialization to make the sparrows' positions more uniform. It can be observed from F1 and F2 that PERSSA had a wider search scope: after adding the expansion parameter, the local optimal value was continuously updated and approached the global optimal value. On all benchmark functions, the convergence curves of PERSSA showed abrupt drops. This was due to the range-control elite mutation, which easily jumps out of local areas, even in the F4 and F6 dilemmas. Although CASSA showed similar drops, its escape effect was poor. Overall, when the three proposed strategies work together, PERSSA not only has high precision and fast speed but can also be applied to various functions with high robustness.
The Wilcoxon rank-sum test [39] can compare the correlation between two samples: if the p-value ≤ 0.05, there is a significant difference. Since the sample size in Table 3 was small and the dimension was single, results for 2 and 60 dimensions were added to the inspection to make the experimental results more accurate. The Wilcoxon rank-sum test results of PERSSA and the other optimization algorithms are shown in Table 4, where the bold characters represent significant differences between the two samples.
The results show that the conclusions were consistent with the previous ones. PERSSA’s precision, stability, and robustness differed significantly from those of PSO, GWO, WOA, and MA. PERSSA’s stability differed significantly from that of SSA and CASSA, and its speed differed significantly from that of CDLSSA. From SSA to CASSA to CDLSSA, the precision and robustness increased. PERSSA had the smallest difference in precision compared to the most accurate algorithm (CDLSSA), and it was faster than WOA, SSA, and CASSA. To summarize the result: PERSSA effectively improved the precision while attaining the optimal balance between accuracy and speed.
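The significance criterion used above can be reproduced with the normal approximation to the rank-sum statistic; this small sketch (function name ours, ties ignored) shows how two sets of final fitness values would be compared:

```python
import numpy as np
from math import erf, sqrt

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation
    (no tie correction; adequate for continuous fitness values)."""
    n1, n2 = len(a), len(b)
    ranks = np.argsort(np.argsort(np.concatenate([a, b]))) + 1
    W = ranks[:n1].sum()                        # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2                 # mean of W under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # std of W under H0
    z = (W - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p
```

Two clearly separated fitness samples give p ≤ 0.05 (a significant difference), while interleaved samples do not.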

4.2. Performance of PERSSA-MET

Three METs were combined with PERSSA and SSA to realize PERSSA-MET (i.e., PERSSA-Renyi entropy, PERSSA-symmetric cross entropy, and PERSSA-Kapur entropy) and SSA-MET (i.e., SSA-Renyi entropy, SSA-symmetric cross entropy, and SSA-Kapur entropy). In Section 4.1, PERSSA needed about 200 iterations to converge on the 30-dimensional functions. After several experiments, 100 iterations proved sufficient to ensure accuracy while minimizing the segmentation time, and the other parameter settings were the same as those in Section 4.1.
The images utilized in the experiment were all obtained from the Key Laboratory of Road Construction Technology and Equipment, Ministry of Education, and they were RGB images with 512 × 512 pixels. A total of 100 aggregate images were tested. Table 5 shows five images with different features, and Table 6 shows their histograms. These histograms are usually flat at the peaks or valleys. For example, No. 1 and No. 4 show gentle gray-scale changes in the dark region without obvious valleys; No. 2 and No. 5 show gentle gray-scale changes in the bright area without evident peaks; and No. 3 has multiple closely spaced peaks and valleys. For rough aggregate surfaces, the segmentation results are mostly discrete and granular. Even if the segmentation thresholds are close, many points may be blurred, reducing the original roughness. Thus, the performance of the optimization algorithm directly affects the segmentation accuracy.
Aggregate images contain abundant information. The aggregate particles differ in size, shape, color, surface roughness, etc., and these characteristics are applied to distinguish coarse from fine aggregates or gravels from pebbles, to analyze mineral types, and to decide the grinding degree. Generally, the categories to be segmented are determined according to the application scenario. The number of thresholds (K) is set, and the image is divided into K + 1 categories, which is also the number of fuzzy C-means (FCM) clustering centers. In most of the related literature, K = 2~6, and this range can evaluate the accuracy of the thresholds while meeting basic engineering requirements. Due to space limitations, this paper only presents partial segmentation results for K = {2, 4, 6}, as shown in Table 5. The colored boxes in Table 5 mark areas with large differences in the segmentation of the aggregate surface roughness features: the blue boxes represent better segmentation results, and the red boxes represent segmentation results that were seriously distorted.
Subjectively, FCM performed the best segmentation when K = 2 , followed by Renyi entropy. As K increased to 4, the accuracy of the MET segmentation results was improved, while FCM became more unstable, and the gray scale of the adjacent pixels merged. When K = 6 , Renyi entropy still performed well for the aggregate images with small particle sizes and touching or overlapping particles, followed by Kapur entropy. Meanwhile, the overall gray scale of symmetric cross entropy was higher. Although the edge of FCM was clear, the gray-scale deviation was too large, and the surface roughness features were lost.
Table 6 shows the histograms of the segmentation results of PERSSA-Renyi entropy and FCM. The closer the histogram of the segmented image is to the histogram of the original image, the better. When K = 2, the segmentation result of FCM was closer to the original image, but when K = 4, 6, the segmentation results of MET were closer. For the aggregate image, when the number of thresholds was large, the similarity between the segmentation result and the original image was high. In this case, the histogram can serve as one way to check the segmentation result.
Table 7 shows the thresholds and fitness values calculated by PERSSA-MET and SSA-MET when segmenting the same image with K = 6. The thresholds are the $l_{best}(1, 2, \ldots, K)$ in Section 2.1, which were utilized to segment the images. The thresholds themselves cannot be used to rank the algorithms; they only indicate that the algorithms divide the image differently. Each set of thresholds corresponds to a unique fitness value, the E value in Section 2.1, which is used to evaluate the algorithm. Higher fitness values are better for Renyi entropy and Kapur entropy, while lower fitness values are better for symmetric cross entropy. The better of the two fitness values in Table 7 is highlighted in bold.
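The Renyi, symmetric cross, and Kapur entropies each map a threshold vector to a single fitness value of this kind. As an illustration of that mapping, the sketch below evaluates a Kapur-style fitness for a given threshold set (a common formulation of Kapur entropy; the paper's exact objective may differ in details such as the logarithm base):

```python
import numpy as np

def kapur_fitness(hist, thresholds):
    """Kapur-entropy fitness of a threshold vector: the sum of the Shannon
    entropies of the K + 1 gray-level classes (to be maximized)."""
    p = hist / hist.sum()                    # normalized histogram
    edges = [0] + sorted(thresholds) + [256]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                   # probability mass of the class
        if w > 0:
            q = p[lo:hi] / w                 # within-class distribution
            q = q[q > 0]
            total -= float(np.sum(q * np.log(q)))  # class entropy
    return total

# A uniform 256-bin histogram split at gray level 128: each of the two
# classes is uniform over 128 bins, so the fitness is 2 * ln(128).
uniform = np.ones(256)
fitness = kapur_fitness(uniform, [128])
```

An optimizer such as PERSSA searches the threshold space for the vector that maximizes this value.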
In Table 7, PERSSA obtained the better fitness value in all 15 cases (a ratio of 15:0 over SSA), demonstrating its superior optimization ability. In the experiments, it was found that when K = 2, PERSSA and SSA could reach the same fitness value at different thresholds, with slightly different segmentation results. However, as K increased, the advantage of PERSSA grew, especially at K = 6, where PERSSA-MET outperformed SSA-MET in both fitness values and segmentation quality.
The optimization algorithm’s performance had a significant impact on the accuracy and stability of the segmentation results. Figure 6 and Figure 7, respectively, show the segmentation results of No. 2 and No. 5 on the aggregate image using seven algorithms when K = 6 . The white box is the typical rough texture information of the aggregate surface. It can be seen that the PERSSA-MET segmented image has more detailed features, and the color is closer to the original image, whereas the segmented images of SSA-MET lack many details, especially the aggregate surfaces. This mistake can result in erroneous roughness grading, which can waste material resources by mistakenly grinding or sandblasting.
The quality evaluation of a segmented image needs to consider the category of each pixel. The ground truth of an aggregate image cannot be labeled by eye for every K. Therefore, the segmented image was compared with the original image.
The peak signal-to-noise ratio (PSNR), structural similarity (SSIM), feature similarity (FSIM), and time consumption (T) were used as evaluation indicators. Due to the randomness of PERSSA and SSA during position initialization, the average (Avg) and standard deviation (SD) of 60 runs were used as the final experimental result. In Table 8, Table 9 and Table 10, the bold font marks the better evaluation parameter value between PERSSA-MET and SSA-MET, and equal values are not marked.
PSNR is a metric for measuring image noise, and its unit is dB. The larger the PSNR value, the more information is retained, the less noise is present, and the better the segmentation effect:
$$PSNR = 20 \log_{10} \left( \frac{255}{RMSE} \right)$$
where the root-mean-square error (RMSE) is the pixel-wise distance between the two images:
$$RMSE = \sqrt{\frac{\sum_{i,j} \left( f(i,j) - \hat{f}(i,j) \right)^2}{M \times M}}$$
where $f(i,j)$ is the original image, $\hat{f}(i,j)$ is the segmented image, and $M \times M$ is the image size.
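As a numerical cross-check of the two equations above, a minimal NumPy sketch (the image values are hypothetical):

```python
import numpy as np

def psnr(f, f_hat):
    """PSNR in dB between an original image f and a segmented image f_hat."""
    f = f.astype(np.float64)
    f_hat = f_hat.astype(np.float64)
    rmse = np.sqrt(np.mean((f - f_hat) ** 2))  # root-mean-square error
    if rmse == 0:
        return float("inf")                    # identical images
    return 20 * np.log10(255.0 / rmse)

# Toy 512 x 512 images: half of the pixels are shifted by 10 gray levels.
original = np.full((512, 512), 100, dtype=np.uint8)
segmented = original.copy()
segmented[:, :256] += 10
```

Here the RMSE is $\sqrt{50}$, so the PSNR is $20\log_{10}(255/\sqrt{50}) \approx 31.1$ dB.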
SSIM was used to compare the overall similarity between the segmented image and the original image. $SSIM \in [0, 1]$, and the higher the SSIM value, the smaller the distortion:
$$SSIM(f, \hat{f}) = \frac{(2 \mu_f \mu_{\hat{f}} + C_1)(2 \sigma_{f,\hat{f}} + C_2)}{(\mu_f^2 + \mu_{\hat{f}}^2 + C_1)(\sigma_f^2 + \sigma_{\hat{f}}^2 + C_2)}$$
where $\mu_f$ and $\mu_{\hat{f}}$ are the average gray-scale values of the original image and the segmented image, respectively; $\sigma_f$ and $\sigma_{\hat{f}}$ are their standard deviations; $\sigma_{f,\hat{f}}$ is their covariance; and $C_1$ and $C_2$ are constants used to avoid instability when $\mu_f^2 + \mu_{\hat{f}}^2 \to 0$. We take $C_1 = C_2 = 6.45$.
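The SSIM equation above is stated over whole-image statistics (rather than the sliding-window form used in some libraries); a minimal NumPy sketch with the constants given here:

```python
import numpy as np

def ssim_global(f, f_hat, c1=6.45, c2=6.45):
    """Global SSIM per the equation above (whole-image statistics)."""
    f = f.astype(np.float64).ravel()
    g = f_hat.astype(np.float64).ravel()
    mu_f, mu_g = f.mean(), g.mean()
    var_f, var_g = f.var(), g.var()
    cov = np.mean((f - mu_f) * (g - mu_g))  # covariance sigma_{f, f_hat}
    return ((2 * mu_f * mu_g + c1) * (2 * cov + c2)) / (
        (mu_f**2 + mu_g**2 + c1) * (var_f + var_g + c2)
    )

a = np.array([[0, 255], [128, 64]], dtype=np.uint8)
b = a.copy()
b[0, 0] = 50                      # perturb one pixel
score_same = ssim_global(a, a)    # identical images give 1.0
score_diff = ssim_global(a, b)    # any distortion lowers the score
```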
FSIM compares the feature differences between the images before and after segmentation. Phase congruency (PC) extracts stable features in the local structures, and the gradient magnitude (GM) characterizes contrast information. $FSIM \in [0, 1]$, and the higher the FSIM value, the better the quality of the segmented image:
$$FSIM = \frac{\sum_{x} S_L(x) \, PC_m(x)}{\sum_{x} PC_m(x)}$$
where $S_L$ is the similarity score, $S_L(x) = S_{PC}(x) \, S_G(x)$, with $S_{PC} = \frac{2 PC_1 PC_2 + T_1}{PC_1^2 + PC_2^2 + T_1}$ and $S_G = \frac{2 G_1 G_2 + T_2}{G_1^2 + G_2^2 + T_2}$; $G$ is the image gradient magnitude; $PC_m = \max(PC_1, PC_2)$ is the weighting term; and $T_1$ and $T_2$ are constants, taken as $T_1 = 0.85$ and $T_2 = 160$.
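Given precomputed PC and GM maps (in the original FSIM method the PC maps come from a log-Gabor filter bank, which is assumed to be available elsewhere), the combination step above can be sketched as:

```python
import numpy as np

def fsim_from_maps(pc1, pc2, g1, g2, t1=0.85, t2=160.0):
    """FSIM from precomputed phase-congruency maps (pc1, pc2) and
    gradient-magnitude maps (g1, g2) of the two images."""
    s_pc = (2 * pc1 * pc2 + t1) / (pc1**2 + pc2**2 + t1)  # PC similarity
    s_g = (2 * g1 * g2 + t2) / (g1**2 + g2**2 + t2)       # gradient similarity
    s_l = s_pc * s_g                                       # local score S_L
    pc_m = np.maximum(pc1, pc2)                            # weighting map PC_m
    return float((s_l * pc_m).sum() / pc_m.sum())

# Toy maps standing in for real PC / GM outputs.
rng = np.random.default_rng(0)
pc1, pc2 = rng.random((8, 8)), rng.random((8, 8))
g1, g2 = 200 * rng.random((8, 8)), 200 * rng.random((8, 8))
identical = fsim_from_maps(pc1, pc1, g1, g1)  # 1.0 for identical inputs
different = fsim_from_maps(pc1, pc2, g1, g2)
```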
The numbers of bold values are counted in the penultimate row of each of Table 8, Table 9 and Table 10. It can be seen that PERSSA was better than SSA in the optimization of the three METs, which means that PERSSA-MET can segment more aggregate details than SSA-MET.
Blue numbers represent the optimal values in the horizontal comparison of the seven algorithms, and their counts are displayed in the last row of each table. The blue number ratio of the seven algorithms is 31:16:16:9:8:0:22. In general, PERSSA-Renyi entropy performed relatively best.
In Table 8, SSA-symmetric cross entropy had more blue numbers than PERSSA-symmetric cross entropy, while their bold-number counts were similar. This result is examined further through the standard deviation (SD).
Figure 8 summarizes the SDs of all the algorithms. The smaller the standard deviation, the more stable the algorithm. Symmetric cross entropy therefore had the best stability, followed by Renyi entropy; Kapur entropy was slightly less stable, and FCM was the least stable. In summary, the stability of PERSSA-MET was higher than that of SSA-MET.
The above evaluates segmentation accuracy and stability. The running speed of an algorithm also needs to be considered in applications. Table 11 reports the average time (T), in seconds, for each algorithm to segment an image.
Comparing PERSSA with SSA, the ratio of better T values was 6:3 (Table 11), so the three strategies of PERSSA did not reduce the efficiency of SSA. Conversely, PERSSA was faster than SSA in optimizing symmetric cross entropy. The main reason lies in the particularity of the aggregate image histogram, which is not only gentle at the valleys but also has many extreme points, leading to many locally optimal solutions in the search process. SSA is prone to falling into these local areas, whereas PERSSA can jump out of them in time thanks to the range-control elite mutation. Therefore, PERSSA-MET is more suitable for aggregate image segmentation.
Figure 9 shows line charts of the evaluation values of the seven segmentation algorithms under the four parameters. No single segmentation method was optimal in all four parameters, which complicates the practical evaluation of the algorithms.
Hence, PSNR, SSIM, FSIM, and T were fused into a new parameter, called the overall merit weight proportion (OMWP), which represents the degree of superiority of each algorithm among all algorithms, with $OMWP \in [0, 1]$. This parameter comprehensively evaluates precision, stability, and speed, matching practical application requirements. The OMWP value of algorithm I can be calculated by Equation (18), and the OMWP values of all the segmentation algorithms are summarized in Table 12.
$$OMWP(I) = \frac{1}{n(i) \times n(j)} \sum_{i,j} \frac{\left| \bar{I}_{i,j} - worst_{i,j} \right|}{\left| best_{i,j} - worst_{i,j} \right|}$$
where i = 2, 4, 6 indexes the numbers of thresholds; j ∈ {PSNR(Avg), SSIM(Avg), FSIM(Avg), PSNR(SD), SSIM(SD), FSIM(SD), T} indexes the evaluation parameters; $\bar{I}_{i,j}$ is the value obtained by algorithm I for combination (i, j); $worst_{i,j}$ is the worst value, namely the minimum for PSNR(Avg), SSIM(Avg), and FSIM(Avg) and the maximum for PSNR(SD), SSIM(SD), FSIM(SD), and T; $best_{i,j}$ is the optimal value, defined oppositely to $worst_{i,j}$; and n(x) is the number of values taken by x. In this study, n(i) = 3 and n(j) = 7.
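The normalization in Equation (18) can be sketched directly (a toy example; the numbers below are hypothetical, not the paper's measurements):

```python
import numpy as np

def omwp(scores, best, worst):
    """OMWP of one algorithm: mean normalized distance from the worst value.

    `scores`, `best`, and `worst` are arrays of shape (n_i, n_j): one entry
    per (threshold number i, evaluation parameter j) combination.
    """
    return float(np.mean(np.abs(scores - worst) / np.abs(best - worst)))

# Toy example with n(i) = 3 threshold settings and n(j) = 2 parameters.
best = np.array([[25.0, 1.0], [25.0, 1.0], [25.0, 1.0]])
worst = np.array([[10.0, 0.1], [10.0, 0.1], [10.0, 0.1]])
scores = np.array([[20.0, 0.7], [22.0, 0.8], [25.0, 1.0]])
value = omwp(scores, best, worst)  # in [0, 1]; higher is better
```

An algorithm that attains the best value everywhere scores 1, and one that attains the worst value everywhere scores 0.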
In Table 12, the OMWP values of PERSSA-MET were always higher than those of SSA-MET. Moreover, the PERSSA-Renyi entropy score was the best one, which proves that this method has the greatest practicability in the segmentation of aggregate images.
Figure 10 shows the segmentation results and segmentation threshold lines of PERSSA-Renyi entropy. It can be seen that the selection of the threshold was reasonable and the algorithm had high segmentation accuracy. In practical applications, the number of thresholds can be selected according to the requirements.

5. Conclusions

This paper proposed a segmentation model (i.e., PERSSA-MET) for aggregate images that effectively preserves the rough texture and edge features of the aggregate surface. It consists of the swarm intelligence optimization algorithm PERSSA and multiple entropy thresholding (MET). First, the three evolutionary strategies of PERSSA were specially designed for the flat and multi-extreme-point histograms of aggregate images: piecewise mapping made the location distribution of the sparrows more uniform and random, the expansion parameter enlarged the search range, and the range-control elite mutation strategy could effectively jump out of local areas, which greatly improved the optimization accuracy and stability of SSA. Then, PERSSA was utilized to swiftly calculate the MET thresholds, overcoming the disadvantage of MET's long operation time as the number of thresholds (K) increases. The experiments compared the performance of three METs in aggregate image segmentation. In terms of precision, Renyi entropy was first, Kapur entropy second, and symmetric cross entropy third; in terms of stability, symmetric cross entropy was first, Renyi entropy second, and Kapur entropy third. Finally, in order to comprehensively and fairly evaluate the pros and cons of the algorithms, an evaluation parameter, OMWP, which combines segmentation precision, stability, and operation speed, was proposed. PERSSA-Renyi entropy achieved the highest OMWP values, with an optimal balance between precision, stability, and speed.
In future work, efforts will be made to enhance the image segmentation accuracy and to take these results as the input of deep learning to achieve the parallel classification of various aggregate features. Furthermore, this segmentation model, PERSSA-MET, can be extended to similar fields, such as tissue or cereal images, and has broad application prospects.

Author Contributions

Conceptualization, M.W. and W.W.; methodology, M.W.; software, M.W.; validation, M.W., L.L. and Z.Z.; formal analysis, W.W.; investigation, W.W.; resources, M.W.; writing—original draft preparation, M.W.; writing—review and editing, W.W.; visualization, L.L.; supervision, Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (61170147), the Scientific Research Project of the Zhejiang Provincial Department of Education (Y202146796), the Natural Science Foundation of Zhejiang Province (LTY22F020003), the Wenzhou Major Scientific and Technological Innovation Project of China (ZG2021029), and the Scientific and Technological Projects of Henan Province, China (202102210172).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets analyzed during the current study are available in the CEC-Benchmark-Functions repository (https://github.com/tsingke/CEC-Benchmark-Functions), accessed on 17 July 2020.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Li, Q.J.; Zhan, Y.; Yang, G.; Wang, K.C.P. Pavement skid resistance as a function of pavement surface and aggregate texture properties. Int. J. Pavement Eng. 2018, 21, 1159–1169.
2. Ying, J.; Ye, M.; Tao, L.; Wu, Z.; Zhao, Y.; Wang, H. Quantitative evaluation of morphological characteristics of road coarse aggregates based on image processing technology. In Proceedings of the 2022 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC), Dalian, China, 14–16 April 2022; pp. 272–279.
3. Hinojosa, S.; Dhal, K.G.; Elaziz, M.A.; Oliva, D.; Cuevas, E. Entropy-based imagery segmentation for breast histology using the Stochastic Fractal Search. Neurocomputing 2018, 321, 201–215.
4. Guo, Q.; Wang, Y.; Yang, S.; Xiang, Z. A method of blasted rock image segmentation based on improved watershed algorithm. Sci. Rep. 2022, 12, 7143.
5. Cheng, Z.; Wang, J. Improved region growing method for image segmentation of three-phase materials. Powder Technol. 2020, 368, 80–89.
6. Wang, Y.; Tu, W.; Li, H. Fragmentation calculation method for blast muck piles in open-pit copper mines based on three-dimensional laser point cloud data. Int. J. Appl. Earth Obs. Geoinf. ITC J. 2021, 100, 102338.
7. Zhou, X.; Gong, Q.; Liu, Y.; Yin, L. Automatic segmentation of TBM muck images via a deep-learning approach to estimate the size and shape of rock chips. Autom. Constr. 2021, 126, 103685.
8. Li, M.; Wang, L.; Deng, S.; Zhou, C. Color image segmentation using adaptive hierarchical-histogram thresholding. PLoS ONE 2020, 15, e0226345.
9. Ju, Y.; Sun, H.; Xing, M.; Wang, X.; Zheng, J. Numerical analysis of the failure process of soil–rock mixtures through computed tomography and PFC3D models. Int. J. Coal Sci. Technol. 2018, 5, 126–141.
10. Liu, G.; Shu, C.; Liang, Z.; Peng, B.; Cheng, L. A Modified Sparrow Search Algorithm with Application in 3d Route Planning for UAV. Sensors 2021, 21, 1224.
11. Zhan, Y.; Zhang, G. An Improved OTSU Algorithm Using Histogram Accumulation Moment for Ore Segmentation. Symmetry 2019, 11, 431.
12. Abera, K.A.; Manahiloh, K.N.; Nejad, M.M. The effectiveness of global thresholding techniques in segmenting two-phase porous media. Constr. Build. Mater. 2017, 142, 256–267.
13. Nežerka, V.; Trejbal, J. Assessment of aggregate-bitumen coverage using entropy-based image segmentation. Road Mater. Pavement Des. 2019, 21, 2364–2375.
14. Peng, L.; Zhang, D. An adaptive Lévy flight firefly algorithm for multilevel image thresholding based on Rényi entropy. J. Supercomput. 2022, 78, 6875–6896.
15. Han, B.; Wu, Y. A novel active contour model based on modified symmetric cross entropy for remote sensing river image segmentation. Pattern Recognit. 2017, 67, 396–409.
16. Sharma, A.; Chaturvedi, R.; Kumar, S.; Dwivedi, U.K. Multi-level image thresholding based on Kapur and Tsallis entropy using firefly algorithm. J. Interdiscip. Math. 2020, 23, 563–571.
17. Liu, W.; Shi, H.; He, X.; Pan, S.; Ye, Z.; Wang, Y. An application of optimized Otsu multi-threshold segmentation based on fireworks algorithm in cement SEM image. J. Algorithms Comput. Technol. 2018, 13.
18. Liu, W.; Huang, Y.; Ye, Z.; Cai, W.; Yang, S.; Cheng, X.; Frank, I. Renyi's Entropy Based Multilevel Thresholding Using a Novel Meta-Heuristics Algorithm. Appl. Sci. 2020, 10, 3225.
19. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. A new fusion of whale optimizer algorithm with Kapur's entropy for multi-threshold image segmentation: Analysis and validations. Artif. Intell. Rev. 2022, 55, 6389–6459.
20. Jain, N.K.; Nangia, U.; Jain, J. A Review of Particle Swarm Optimization. J. Inst. Eng. (India) Ser. B 2018, 99, 407–411.
21. García-Gutiérrez, G.; Arcos-Aviles, D.; Carrera, E.V.; Guinjoan, F.; Motoasca, E.; Ayala, P.; Ibarra, A. Fuzzy Logic Controller Parameter Optimization Using Metaheuristic Cuckoo Search Algorithm for a Magnetic Levitation System. Appl. Sci. 2019, 9, 2458.
22. Su, Y.; Liu, L.; Lei, Y. Structural Damage Identification Using a Modified Directional Bat Algorithm. Appl. Sci. 2021, 11, 6507.
23. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
24. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
25. Zervoudakis, K.; Tsafarakis, S. A mayfly optimization algorithm. Comput. Ind. Eng. 2020, 145, 106559.
26. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control. Eng. 2020, 8, 22–34.
27. Laskar, N.M.; Guha, K.; Chatterjee, I.; Chanda, S.; Baishnab, K.L.; Paul, P.K. HWPSO: A new hybrid whale-particle swarm optimization algorithm and its application in electronic design optimization problems. Appl. Intell. 2019, 49, 265–291.
28. Kumar, Y.; Singh, P.K. A chaotic teaching learning based optimization algorithm for clustering problems. Appl. Intell. 2018, 49, 1036–1062.
29. Chen, X.; Huang, X.; Zhu, D.; Qiu, Y. Research on chaotic flying sparrow search algorithm. J. Phys. Conf. Ser. 2021, 1848, 012044.
30. Zhang, H.; Yang, J.; Qin, T.; Fan, Y.; Li, Z.; Wei, W. A Multi-Strategy Improved Sparrow Search Algorithm for Solving the Node Localization Problem in Heterogeneous Wireless Sensor Networks. Appl. Sci. 2022, 12, 5080.
31. Qiao, L.; Jia, Z.; Cui, Y.; Xiao, K.; Su, H. Shear Sonic Prediction Based on DELM Optimized by Improved Sparrow Search Algorithm. Appl. Sci. 2022, 12, 8260.
32. Hou, J.; Wang, X.; Su, Y.; Yang, Y.; Gao, T. Parameter Identification of Lithium Battery Model Based on Chaotic Quantum Sparrow Search Algorithm. Appl. Sci. 2022, 12, 7332.
33. Zhang, C.; Ding, S. A stochastic configuration network based on chaotic sparrow search algorithm. Knowl.-Based Syst. 2021, 220, 106924.
34. Liu, X.; Yan, J.; Zhang, X.; Zhang, L.; Ni, H.; Zhou, W.; Wei, B.; Li, C.; Fu, L.-Y. Numerical upscaling of multi-mineral digital rocks: Electrical conductivities of tight sandstones. J. Pet. Sci. Eng. 2021, 201, 108530.
35. Wang, P.; Zhang, Y.; Yang, H. Research on Economic Optimization of Microgrid Cluster Based on Chaos Sparrow Search Algorithm. Comput. Intell. Neurosci. 2021, 2021, 5556780.
36. Pare, S.; Kumar, A.; Singh, G.K.; Bajaj, V. Image Segmentation Using Multilevel Thresholding: A Research Review. Iran. J. Sci. Technol. Trans. Electr. Eng. 2019, 44, 1–29.
37. Lv, X.; Mu, X.D.; Zhang, J. Multi-threshold image segmentation based on the improved sparrow search algorithm. Syst. Eng. Electron. 2021, 43, 318–327.
38. Altay, E.V.; Alatas, B. Bird swarm algorithms with chaotic mapping. Artif. Intell. Rev. 2019, 53, 1373–1414.
39. McGee, M. Case for omitting tied observations in the two-sample t-test and the Wilcoxon-Mann-Whitney Test. PLoS ONE 2018, 13, e0200837.
Figure 1. SSA optimization schematic.
Figure 2. Chaotic sequences of piecewise mapping: (a) P = 0.4 ; (b) P = 0.6 ; (c) P = 0.8 .
Figure 3. Flowchart for segmenting aggregate images by applying PERSSA-MET.
Figure 4. Two-dimensional versions of the six benchmark functions: (af) correspond to F 1 ( x 1 , x 2 ) F 6 ( x 1 , x 2 ) .
Figure 5. Iteration curves of the optimization algorithms on the benchmark functions: (af) F 1 F 6 . The abscissa is the number of iterations, and the ordinate is the value of the objective function.
Figure 6. The segmentation results of No. 2 by PERSSA-MET and SSA-MET when K = 6 : (a) PERSSA-Renyi entropy; (b) SSA-Renyi entropy; (c) PERSSA-symmetric cross entropy; (d) SSA-symmetric cross entropy; (e) PERSSA-Kapur entropy; (f) SSA-Kapur entropy; (g) FCM; (h) original image. White boxes are areas where roughness differences are noticeable.
Figure 7. The segmentation results of No. 5 by PERSSA-MET and SSA-MET when K = 6 : (a) PERSSA-Renyi entropy; (b) SSA-Renyi entropy; (c) PERSSA-symmetric cross entropy; (d) SSA-symmetric cross entropy; (e) PERSSA-Kapur entropy; (f) SSA-Kapur entropy; (g) FCM; (h) original image. White boxes are areas where roughness differences are noticeable.
Figure 8. Standard deviation profile: (a) PERSSA-Renyi entropy; (b) SSA-Renyi entropy; (c) PERSSA-symmetric cross entropy; (d) SSA-symmetric cross entropy; (e) PERSSA-Kapur entropy; (f) SSA-Kapur entropy; (g) FCM. These three red stars are the optimal SD values on PSNR, SSIM, and FSIM respectively.
Figure 9. Fan charts of seven segmentation algorithms on four evaluation parameters. The red word is the optimal value under each K value and each parameter.
Figure 10. The segmentation result of PERSSA-Renyi entropy for No. 5: (a) original image; (b) color segmentation map with K = 2 ; (c) color segmentation map with K = 4 ; (d) color segmentation map with K = 6 ; (e) segmentation threshold line.
Table 1. The segmentation results of aggregate images by three METs. The blue box is the area where the segmentation results are significantly different, and the red arrow is the extreme value in the histogram.
[Image panels. Columns: Original Image | Histogram | Renyi Entropy | Symmetric Cross Entropy | Kapur Entropy; two example rows.]
Table 2. Six benchmark functions.
Function | Name | Definition | Range | Optimum
Uni-modal benchmark functions:
$F_1$ | Sphere | $F_1(x) = \sum_{i=1}^{D} x_i^2$ | $[-100, 100]^D$ | 0
$F_2$ | MaxMod | $F_2(x) = \max\{|x_i|,\ 1 \le i \le D\}$ | $[-100, 100]^D$ | 0
$F_3$ | Rosenbrock | $F_3(x) = \sum_{i=1}^{D-1} \left[ 100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | $[-30, 30]^D$ | 0
Multi-modal benchmark functions:
$F_4$ | Schwefel-26 | $F_4(x) = -\sum_{i=1}^{D} x_i \sin(\sqrt{|x_i|})$ | $[-500, 500]^D$ | $-418.9829D$
$F_5$ | Ackley | $F_5(x) = -20 \exp\left(-0.2 \sqrt{\tfrac{1}{D} \sum_{i=1}^{D} x_i^2}\right) - \exp\left(\tfrac{1}{D} \sum_{i=1}^{D} \cos(2\pi x_i)\right) + 20 + e$ | $[-32, 32]^D$ | 0
$F_6$ | Generalized Penalized | $F_6(x) = \tfrac{\pi}{D} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{D-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_D - 1)^2 \right\} + \sum_{i=1}^{D} u(x_i, 10, 100, 4)$, where $y_i = 1 + (x_i + 1)/4$ and $u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m & x_i > a \\ 0 & -a \le x_i \le a \\ k(-x_i - a)^m & x_i < -a \end{cases}$ | $[-50, 50]^D$ | 0
Table 3. Parameter evaluations of the optimization algorithms on benchmark functions.
F 1 | P 2 | PSO | GWO | WOA | MA | SSA | CASSA | CDLSSA | PERSSA
Uni-modal benchmark functions
F1 | Best | 1.40E-10 | 4.73E-45 | 6.32E-127 | 1.82E-10 | 4.94E-299 | 0 | 0 | 0
F1 | Avg | 5.15E-09 | 7.10E-43 | 1.25E-109 | 1.12E-08 | 9.82E-40 | 7.39E-59 | 0 | 0
F1 | SD | 1.44E-08 | 1.40E-42 | 6.72E-109 | 3.06E-08 | 7.60E-48 | 4.79E-57 | 0 | 0
F1 | T | 7.00E+01 | 6.32E+01 | 2.29E+01 | 9.53E+01 | 1.98E+01 | 1.82E+01 | 4.06E+01 | 2.06E+01
F2 | Best | 6.20E-01 | 5.56E-12 | 3.58E-04 | 3.32E+01 | 2.19E-14 | 3.84E-27 | 1.51E-259 | 0
F2 | Avg | 1.54E+00 | 8.40E-10 | 1.62E+01 | 5.25E+01 | 6.93E-10 | 4.59E-13 | 2.87E-200 | 0
F2 | SD | 7.85E-01 | 1.74E-10 | 1.65E+01 | 8.05E+00 | 4.11E-09 | 3.43E-12 | 0 | 0
F2 | T | 6.74E+01 | 5.98E+01 | 2.30E+01 | 8.17E+01 | 1.99E+01 | 2.04E+01 | 4.04E+01 | 2.04E+01
F3 | Best | 1.68E+01 | 2.52E+01 | 2.62E+01 | 1.89E+00 | 2.71E-04 | 7.87E-05 | 7.04E-03 | 1.16E-09
F3 | Avg | 2.19E+02 | 2.66E+01 | 2.72E+01 | 4.78E+01 | 5.21E-02 | 3.84E-02 | 1.41E-03 | 4.45E-04
F3 | SD | 6.64E+02 | 7.68E-01 | 7.02E-01 | 4.07E+01 | 1.05E-02 | 8.05E-02 | 4.42E-03 | 8.99E-04
F3 | T | 7.18E+01 | 6.36E+01 | 2.53E+01 | 1.05E+02 | 2.20E+01 | 2.19E+01 | 4.33E+01 | 2.26E+01
Multi-modal benchmark functions
F4 | Best | −9.33E+03 | −7.71E+03 | −1.26E+04 | −1.15E+04 | −1.24E+03 | −1.06E+03 | −1.91E+03 | −1.26E+04
F4 | Avg | −8.22E+03 | −6.21E+03 | −9.01E+03 | −1.08E+04 | −1.04E+03 | −9.18E+03 | −1.88E+03 | −1.23E+04
F4 | SD | 4.84E+02 | 8.06E+02 | 1.25E+03 | 3.66E+02 | 2.02E+03 | 1.68E+03 | 1.01E+02 | 9.48E+01
F4 | T | 7.26E+01 | 6.49E+01 | 2.57E+01 | 1.17E+02 | 2.22E+01 | 2.26E+01 | 4.35E+01 | 2.29E+01
F5 | Best | 2.18E-06 | 1.51E-14 | 8.88E-16 | 4.47E-07 | 8.57E-14 | 8.98E-14 | 8.88E-16 | 8.88E-16
F5 | Avg | 5.59E-01 | 2.53E-14 | 4.09E-15 | 5.84E-02 | 8.47E-16 | 1.18E-15 | 8.88E-16 | 8.88E-16
F5 | SD | 6.74E-01 | 4.39E-15 | 2.75E-15 | 6.79E-02 | 4.59E-16 | 9.90E-16 | 0 | 0
F5 | T | 7.10E+01 | 6.28E+01 | 2.39E+01 | 9.22E+01 | 2.09E+01 | 2.07E+01 | 4.10E+01 | 2.16E+01
F6 | Best | 6.28E-11 | 2.00E-06 | 5.45E-04 | 6.00E-08 | 5.62E-12 | 3.33E-13 | 4.65E-10 | 6.58E-14
F6 | Avg | 1.32E-01 | 2.52E-02 | 2.79E-02 | 5.36E-02 | 4.94E-07 | 4.13E-07 | 2.84E-06 | 3.70E-08
F6 | SD | 2.75E-01 | 1.48E-02 | 6.97E-02 | 9.09E-02 | 2.06E-06 | 1.27E-06 | 1.25E-06 | 1.52E-06
F6 | T | 9.95E+01 | 9.37E+01 | 5.13E+01 | 1.54E+02 | 4.86E+01 | 4.87E+01 | 8.85E+01 | 4.86E+01
1 F is the function. 2 P is the parameter.
Table 4. The Wilcoxon rank-sum test between PERSSA and the other algorithms.
Parameter | PERSSA-PSO | PERSSA-GWO | PERSSA-WOA | PERSSA-MA | PERSSA-SSA | PERSSA-CASSA | PERSSA-CDLSSA
Best | 7.16E-04 | 1.71E-02 | 1.35E-02 | 1.22E-03 | 1.87E-01 | 4.84E-01 | 6.27E-01
Avg | 3.11E-04 | 5.04E-03 | 3.21E-03 | 1.94E-03 | 1.20E-01 | 1.41E-01 | 6.33E-01
SD | 1.58E-05 | 4.62E-03 | 4.11E-04 | 3.10E-04 | 4.72E-02 | 4.73E-02 | 9.12E-01
T | 1.72E-07 | 5.50E-03 | 6.73E-01 | 3.06E-08 | 8.13E-01 | 8.77E-01 | 3.21E-04
Table 5. Aggregate images and their partial segmentation results. The blue box is the area with better segmentation results, and the red box is the area with severe segmentation result distortion.
[Image panels. Columns: No. 1 | No. 2 | No. 3 | No. 4 | No. 5. Rows: Original image; K = 2 | 2 | 4 | 4 | 6; PERSSA-Renyi Entropy; PERSSA-Symmetric Cross Entropy; PERSSA-Kapur Entropy; FCM.]
Table 6. Histograms of the aggregate image segmentation results.
[Histogram panels. Columns: No. 1 | No. 2 | No. 3 | No. 4 | No. 5. Rows: Original image; K = 2 | 2 | 4 | 4 | 6; PERSSA-Renyi Entropy; FCM.]
Table 7. Thresholds and fitness values searched by PERSSA and SSA on MET when K = 6 .
Value | Image | Renyi Entropy (PERSSA) | Renyi Entropy (SSA) | Symmetric Cross Entropy (PERSSA) | Symmetric Cross Entropy (SSA) | Kapur Entropy (PERSSA) | Kapur Entropy (SSA)
Thresholds | No. 1 | 41 78 114 146 173 205 | 40 82 112 141 170 199 | 27 53 87 123 157 189 | 39 68 97 129 148 180 | 38 70 102 134 169 198 | 37 59 96 129 197 162
Thresholds | No. 2 | 44 79 116 154 198 231 | 50 74 99 125 176 226 | 26 48 82 114 152 186 | 30 55 83 113 148 180 | 41 91 127 162 189 233 | 41 90 127 161 186 234
Thresholds | No. 3 | 41 80 109 153 186 229 | 36 62 100 121 189 229 | 32 65 94 122 148 181 | 31 63 100 130 160 184 | 46 86 122 155 188 230 | 56 95 117 143 170 231
Thresholds | No. 4 | 39 76 115 151 188 223 | 43 78 111 145 183 223 | 31 58 94 125 156 182 | 32 58 83 126 154 178 | 49 86 125 156 190 227 | 38 73 99 116 156 189
Thresholds | No. 5 | 36 67 101 136 168 198 | 39 59 87 124 160 195 | 29 56 90 119 152 187 | 30 45 66 89 133 178 | 37 69 98 127 158 191 | 39 63 88 118 159 192
Fitness value | No. 1 | 2.4450E+01 | 2.4436E+01 | 2.2693E+05 | 2.4903E+05 | 2.4245E+01 | 2.4184E+01
Fitness value | No. 2 | 2.4799E+01 | 2.4493E+01 | 2.6417E+05 | 2.6713E+05 | 2.4424E+01 | 2.4409E+01
Fitness value | No. 3 | 2.4708E+01 | 2.4339E+01 | 2.6368E+05 | 2.6778E+05 | 2.4437E+01 | 2.4162E+01
Fitness value | No. 4 | 2.4325E+01 | 2.4316E+01 | 1.8345E+05 | 1.9189E+05 | 2.3906E+01 | 2.3718E+01
Fitness value | No. 5 | 2.4371E+01 | 2.4274E+01 | 2.5295E+05 | 2.9481E+05 | 2.4156E+01 | 2.4079E+01
Table 8. PSNR values.
ImageKParameterRenyi EntropySymmetric Cross EntropyKapur EntropyFCM
PERSSASSAPERSSASSAPERSSASSA
No. 12Avg1.27E+011.27E+011.19E+011.19E+011.13E+011.13E+011.44E+01
SD4.36E-021.59E-017.22E-026.75E-022.42E+003.10E+003.75E-03
4Avg1.71E+011.71E+011.50E+011.49E+011.67E+011.66E+011.17E+01
SD9.13E-011.05E+001.82E-011.23E-012.21E+001.24E+002.26E+00
6Avg2.12E+012.00E+011.70E+011.56E+011.97E+011.85E+011.08E+01
SD1.73E+001.68E+002.10E+002.07E+002.35E+001.94E+001.36E+01
No. 22Avg1.34E+011.34E+011.16E+011.16E+011.27E+011.27E+011.41E+01
SD9.24E-026.21E-025.02E-025.08E-021.57E-012.39E+002.32E-03
4Avg1.80E+011.69E+011.50E+011.50E+011.67E+011.66E+011.38E+01
SD1.10E+001.17E+001.26E+002.04E-011.55E+001.56E+003.21E+00
6Avg2.36E+012.12E+011.60E+011.53E+012.18E+012.17E+019.53E+00
SD1.95E+002.21E+002.23E+001.63E+001.96E+002.16E+003.62E+01
No. 32Avg1.35E+011.35E+011.21E+011.20E+011.33E+011.32E+011.34E+01
SD9.13E-021.67E-015.18E-025.40E-022.21E-012.27E+002.31E-03
4Avg1.78E+011.72E+011.50E+011.44E+011.73E+011.71E+011.21E+01
SD1.30E+001.21E+002.17E-011.96E-011.33E+001.11E+005.23E+00
6Avg2.37E+012.23E+011.62E+011.76E+012.23E+011.99E+011.04E+01
SD1.95E+002.22E+002.06E+001.96E+002.24E+002.59E+004.31E+01
No. 42Avg1.09E+011.09E+011.13E+011.14E+011.06E+011.06E+011.33E+01
SD5.25E-021.31E-015.48E-024.20E-023.57E-023.37E-021.36E-03
4Avg1.80E+011.78E+011.50E+011.43E+011.68E+011.66E+011.05E+01
SD8.56E-018.88E-011.92E-012.87E-017.46E-011.63E+006.33E+00
6Avg2.45E+012.45E+011.67E+011.59E+012.23E+011.73E+019.48E+00
SD1.87E+001.58E+001.43E+002.16E+001.22E+001.60E+005.26E+01
No. 52Avg1.41E+011.41E+011.30E+011.31E+011.38E+011.37E+011.43E+01
SD4.32E-024.33E-021.01E-011.16E-018.31E-029.78E-021.64E-03
4Avg1.83E+011.83E+011.58E+011.58E+011.77E+011.71E+011.37E+01
SD1.69E+001.72E+001.21E-011.96E-013.06E+003.21E+004.36E+00
6Avg2.15E+011.95E+011.80E+011.45E+011.92E+011.88E+019.80E+00
SD1.89E+001.40E+001.95E+001.86E+002.19E+002.61E+003.18E+01
Bold number1751412234-
Optimal values number12624109
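As a point of reference for the PSNR averages above, the standard definition over 8-bit grayscale images can be sketched as follows (an illustrative NumPy snippet, not the authors' code):

```python
import numpy as np

def psnr(reference, segmented, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two same-sized grayscale images."""
    ref = reference.astype(np.float64)
    seg = segmented.astype(np.float64)
    mse = np.mean((ref - seg) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: a uniform error of 1 gray level gives MSE = 1, so PSNR ≈ 48.13 dB.
a = np.zeros((64, 64), dtype=np.uint8)
b = np.ones((64, 64), dtype=np.uint8)
print(round(psnr(a, b), 2))  # 48.13
```

Higher PSNR indicates the segmented image deviates less from the original, which is why the Avg rows favor larger values while the SD rows favor smaller ones.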
Table 9. SSIM values.
| Image | K | Parameter | Renyi Entropy (PERSSA) | Renyi Entropy (SSA) | Symmetric Cross Entropy (PERSSA) | Symmetric Cross Entropy (SSA) | Kapur Entropy (PERSSA) | Kapur Entropy (SSA) | FCM |
|---|---|---|---|---|---|---|---|---|---|
| No. 1 | 2 | Avg | 3.00E-01 | 3.00E-01 | 2.91E-01 | 2.91E-01 | 2.83E-01 | 2.83E-01 | 3.12E-01 |
| | | SD | 2.03E-03 | 2.52E-02 | 4.66E-04 | 5.11E-04 | 3.13E-02 | 3.64E-02 | 3.10E-04 |
| | 4 | Avg | 4.73E-01 | 4.75E-01 | 4.69E-01 | 4.65E-01 | 4.72E-01 | 4.71E-01 | 2.03E-01 |
| | | SD | 1.92E-02 | 2.64E-02 | 5.87E-03 | 8.35E-03 | 2.58E-02 | 2.77E-02 | 6.25E-01 |
| | 6 | Avg | 5.93E-01 | 5.92E-01 | 5.84E-01 | 5.67E-01 | 5.94E-01 | 5.88E-01 | 1.25E-01 |
| | | SD | 1.39E-02 | 1.82E-02 | 1.54E-02 | 1.72E-02 | 2.10E-02 | 2.25E-02 | 4.23E+00 |
| No. 2 | 2 | Avg | 3.55E-01 | 3.55E-01 | 3.13E-01 | 3.13E-01 | 3.44E-01 | 3.44E-01 | 3.61E-01 |
| | | SD | 1.54E-03 | 8.58E-04 | 6.57E-04 | 6.70E-04 | 4.82E-03 | 1.33E-02 | 2.98E-04 |
| | 4 | Avg | 5.65E-01 | 5.62E-01 | 5.53E-01 | 5.52E-01 | 5.66E-01 | 5.65E-01 | 2.52E-01 |
| | | SD | 1.43E-02 | 9.52E-03 | 6.92E-03 | 4.41E-03 | 1.68E-02 | 2.08E-02 | 6.78E-01 |
| | 6 | Avg | 6.65E-01 | 6.20E-01 | 6.69E-01 | 6.57E-01 | 6.62E-01 | 6.58E-01 | 1.03E-01 |
| | | SD | 1.42E-02 | 2.15E-02 | 2.65E-02 | 1.85E-02 | 1.97E-02 | 1.51E-02 | 3.28E+00 |
| No. 3 | 2 | Avg | 3.18E-01 | 3.18E-01 | 3.05E-01 | 3.04E-01 | 3.18E-01 | 3.17E-01 | 3.17E-01 |
| | | SD | 1.78E-03 | 3.87E-03 | 4.31E-04 | 4.72E-04 | 7.19E-03 | 2.10E-02 | 1.99E-04 |
| | 4 | Avg | 5.01E-01 | 4.98E-01 | 4.91E-01 | 4.80E-01 | 5.02E-01 | 5.00E-01 | 2.17E-01 |
| | | SD | 1.82E-02 | 1.87E-02 | 4.55E-03 | 4.66E-03 | 2.45E-02 | 1.58E-02 | 5.23E-01 |
| | 6 | Avg | 6.01E-01 | 5.34E-01 | 6.02E-01 | 6.11E-01 | 6.00E-01 | 5.51E-01 | 1.44E-01 |
| | | SD | 2.56E-02 | 1.70E-02 | 1.62E-02 | 2.03E-02 | 1.66E-02 | 2.95E-02 | 4.16E+00 |
| No. 4 | 2 | Avg | 3.30E-01 | 3.30E-01 | 3.36E-01 | 3.36E-01 | 3.21E-01 | 3.21E-01 | 3.41E-01 |
| | | SD | 6.91E-03 | 2.46E-02 | 1.37E-03 | 1.38E-03 | 1.10E-03 | 1.26E-03 | 3.32E-04 |
| | 4 | Avg | 4.97E-01 | 5.01E-01 | 5.14E-01 | 5.10E-01 | 5.05E-01 | 5.08E-01 | 2.80E-01 |
| | | SD | 2.02E-02 | 2.14E-02 | 4.10E-03 | 6.35E-03 | 1.97E-02 | 2.45E-02 | 8.15E-01 |
| | 6 | Avg | 6.06E-01 | 6.07E-01 | 6.21E-01 | 6.16E-01 | 6.07E-01 | 5.97E-01 | 1.77E-01 |
| | | SD | 2.55E-02 | 1.91E-02 | 9.83E-03 | 1.72E-02 | 2.19E-02 | 2.17E-02 | 2.59E+00 |
| No. 5 | 2 | Avg | 4.06E-01 | 4.06E-01 | 3.98E-01 | 3.99E-01 | 4.03E-01 | 4.02E-01 | 4.04E-01 |
| | | SD | 1.06E-03 | 1.02E-03 | 3.51E-04 | 3.65E-04 | 2.42E-03 | 3.11E-03 | 1.85E-04 |
| | 4 | Avg | 5.97E-01 | 5.97E-01 | 5.89E-01 | 5.88E-01 | 5.97E-01 | 5.88E-01 | 2.93E-01 |
| | | SD | 2.20E-02 | 3.44E-02 | 7.33E-03 | 6.52E-03 | 1.87E-02 | 1.74E-02 | 6.21E-01 |
| | 6 | Avg | 7.08E-01 | 6.94E-01 | 6.99E-01 | 6.48E-01 | 7.01E-01 | 6.92E-01 | 1.89E-01 |
| | | SD | 2.14E-02 | 2.19E-02 | 1.30E-02 | 1.93E-02 | 1.53E-02 | 3.63E-02 | 1.32E+00 |
| Bold number | | | 16 | 8 | 22 | 5 | 21 | 6 | - |
| Optimal values number | | | 6 | 4 | 9 | 3 | 5 | 0 | 8 |
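For context on the SSIM scores above, the single-window SSIM of Wang et al. can be sketched as below; this is an illustrative NumPy snippet with the conventional stabilizers C1 = (0.01L)^2 and C2 = (0.03L)^2, not the authors' implementation (production code typically averages this statistic over sliding windows, as scikit-image's `structural_similarity` does):

```python
import numpy as np

def ssim_global(x, y, max_val=255.0):
    """Single-window structural similarity between two grayscale images."""
    c1 = (0.01 * max_val) ** 2  # stabilizes the luminance term
    c2 = (0.03 * max_val) ** 2  # stabilizes the contrast term
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

SSIM equals 1 only for identical images, which is why larger Avg values (and smaller SDs) mark the better segmentations in the table.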
Table 10. FSIM values.
| Image | K | Parameter | Renyi Entropy (PERSSA) | Renyi Entropy (SSA) | Symmetric Cross Entropy (PERSSA) | Symmetric Cross Entropy (SSA) | Kapur Entropy (PERSSA) | Kapur Entropy (SSA) | FCM |
|---|---|---|---|---|---|---|---|---|---|
| No. 1 | 2 | Avg | 7.00E-01 | 7.00E-01 | 6.95E-01 | 6.95E-01 | 6.85E-01 | 6.86E-01 | 7.07E-01 |
| | | SD | 1.27E-03 | 1.35E-03 | 1.27E-03 | 1.48E-03 | 1.12E-02 | 9.10E-03 | 5.02E-04 |
| | 4 | Avg | 8.53E-01 | 8.53E-01 | 8.46E-01 | 8.43E-01 | 8.53E-01 | 8.52E-01 | 7.02E-01 |
| | | SD | 9.56E-03 | 1.30E-02 | 7.15E-03 | 1.04E-02 | 1.44E-02 | 1.46E-02 | 6.14E-01 |
| | 6 | Avg | 9.11E-01 | 9.06E-01 | 9.08E-01 | 8.87E-01 | 9.13E-01 | 9.11E-01 | 6.68E-01 |
| | | SD | 1.04E-02 | 1.09E-02 | 1.81E-02 | 1.95E-02 | 1.14E-02 | 1.24E-02 | 5.12E+00 |
| No. 2 | 2 | Avg | 7.68E-01 | 7.68E-01 | 7.37E-01 | 7.37E-01 | 7.55E-01 | 7.55E-01 | 7.68E-01 |
| | | SD | 7.79E-04 | 1.17E-04 | 8.22E-04 | 7.59E-04 | 2.29E-03 | 8.40E-03 | 5.68E-04 |
| | 4 | Avg | 8.97E-01 | 8.88E-01 | 8.88E-01 | 8.87E-01 | 8.94E-01 | 8.92E-01 | 7.57E-01 |
| | | SD | 1.05E-02 | 9.30E-03 | 1.08E-02 | 4.74E-03 | 1.30E-02 | 1.31E-02 | 5.48E-01 |
| | 6 | Avg | 9.44E-01 | 9.04E-01 | 9.29E-01 | 9.18E-01 | 9.32E-01 | 9.28E-01 | 6.75E-01 |
| | | SD | 8.62E-03 | 1.45E-02 | 2.49E-02 | 1.82E-02 | 1.26E-02 | 1.19E-02 | 3.25E+00 |
| No. 3 | 2 | Avg | 7.24E-01 | 7.23E-01 | 7.15E-01 | 7.14E-01 | 7.24E-01 | 7.23E-01 | 7.23E-01 |
| | | SD | 4.27E-04 | 1.48E-03 | 1.21E-03 | 1.30E-03 | 3.45E-03 | 1.00E-02 | 5.36E-04 |
| | 4 | Avg | 8.67E-01 | 8.62E-01 | 8.59E-01 | 8.53E-01 | 8.67E-01 | 8.63E-01 | 7.01E-01 |
| | | SD | 1.29E-02 | 1.29E-02 | 4.80E-03 | 4.83E-03 | 1.49E-02 | 1.25E-02 | 7.35E-01 |
| | 6 | Avg | 9.20E-01 | 8.77E-01 | 9.04E-01 | 9.07E-01 | 9.19E-01 | 8.67E-01 | 6.88E-01 |
| | | SD | 1.23E-02 | 1.02E-02 | 1.76E-02 | 1.92E-02 | 1.27E-02 | 1.67E-02 | 4.23E+00 |
| No. 4 | 2 | Avg | 7.38E-01 | 7.38E-01 | 7.46E-01 | 7.47E-01 | 7.28E-01 | 7.26E-01 | 7.44E-01 |
| | | SD | 5.54E-04 | 5.20E-03 | 7.38E-04 | 7.70E-04 | 8.66E-03 | 1.06E-02 | 4.63E-04 |
| | 4 | Avg | 8.77E-01 | 8.76E-01 | 8.73E-01 | 8.72E-01 | 8.75E-01 | 8.76E-01 | 6.90E-01 |
| | | SD | 1.61E-02 | 1.19E-02 | 2.96E-03 | 4.09E-03 | 1.08E-02 | 1.27E-02 | 6.42E-01 |
| | 6 | Avg | 9.37E-01 | 9.32E-01 | 9.23E-01 | 9.14E-01 | 9.34E-01 | 9.15E-01 | 6.08E-01 |
| | | SD | 1.16E-02 | 1.26E-02 | 9.78E-03 | 1.63E-02 | 1.61E-02 | 1.42E-02 | 3.56E+00 |
| No. 5 | 2 | Avg | 7.85E-01 | 7.85E-01 | 7.74E-01 | 7.76E-01 | 7.79E-01 | 7.78E-01 | 7.93E-01 |
| | | SD | 9.03E-04 | 8.74E-04 | 9.98E-04 | 9.30E-04 | 1.14E-03 | 9.20E-04 | 5.34E-04 |
| | 4 | Avg | 9.02E-01 | 9.03E-01 | 8.91E-01 | 8.90E-01 | 8.99E-01 | 8.92E-01 | 7.42E-01 |
| | | SD | 1.53E-02 | 1.89E-02 | 9.04E-03 | 9.13E-03 | 1.22E-02 | 1.06E-02 | 5.13E-01 |
| | 6 | Avg | 9.46E-01 | 9.36E-01 | 9.39E-01 | 8.96E-01 | 9.39E-01 | 9.34E-01 | 6.84E-01 |
| | | SD | 1.42E-02 | 1.12E-02 | 1.35E-02 | 1.90E-02 | 1.31E-02 | 1.62E-02 | 2.30E+00 |
| Bold number | | | 17 | 7 | 21 | 7 | 20 | 8 | - |
| Optimal values number | | | 13 | 6 | 5 | 2 | 2 | 0 | 5 |
Table 11. T values.
| K | Renyi Entropy (PERSSA) | Renyi Entropy (SSA) | Symmetric Cross Entropy (PERSSA) | Symmetric Cross Entropy (SSA) | Kapur Entropy (PERSSA) | Kapur Entropy (SSA) | FCM |
|---|---|---|---|---|---|---|---|
| 2 | 1.70 | 1.77 | 1.78 | 1.84 | 1.75 | 1.87 | 1.69 |
| 4 | 1.85 | 1.81 | 1.84 | 2.00 | 1.95 | 1.73 | 4.37 |
| 6 | 1.91 | 1.75 | 1.89 | 1.91 | 1.86 | 1.88 | 8.43 |
Table 12. OMWP values.
| Algorithm | PERSSA | SSA |
|---|---|---|
| Renyi Entropy | 8.79E-01 | 8.33E-01 |
| Symmetric Cross Entropy | 8.05E-01 | 7.79E-01 |
| Kapur Entropy | 7.96E-01 | 7.23E-01 |
| FCM | 3.96E-01 | - |
Wang, M.; Wang, W.; Li, L.; Zhou, Z. Optimizing Multiple Entropy Thresholding by the Chaotic Combination Strategy Sparrow Search Algorithm for Aggregate Image Segmentation. Entropy 2022, 24, 1788. https://doi.org/10.3390/e24121788
