Abstract
In view of the slow convergence of traditional particle swarm optimization algorithms and their tendency to fall into local optima, this paper proposes OTSU multi-threshold image segmentation based on an improved particle swarm optimization algorithm. After the particle swarm completes its iterative velocity and position updates, a particle contribution degree is calculated to obtain the approximate position and direction of the optimum, which reduces the scope of the particle search. At the same time, an asynchronously monotonically increasing social learning factor and an asynchronously monotonically decreasing individual learning factor are used to balance global and local search. Finally, chaos optimization is introduced to increase the diversity of the population, achieving OTSU multi-threshold image segmentation based on improved particle swarm optimization (IPSO). Twelve benchmark functions are selected to test the performance of the algorithm against traditional meta-heuristic algorithms; the results show the robustness and superiority of the algorithm. Standard dataset images are used for multi-threshold image segmentation experiments, and several traditional meta-heuristic algorithms are selected to compare computational efficiency, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), feature similarity (FSIM), and fitness value. The results show that the running time of the proposed method is generally about 30% shorter than that of the other algorithms, and its accuracy is also better. Experiments show that the proposed algorithm can achieve higher segmentation accuracy and efficiency.
1. Introduction
Image segmentation is widely used as the basis of computer vision. Image segmentation refers to describing an image as a collection of connected regions such that the image features differ between regions. At present, image segmentation methods mainly include the threshold method, the edge detection method, the region method, the morphological watershed method, and so on [1]. The threshold method is at the core of image segmentation applications because of its simple implementation and fast calculation speed [2]. Thresholding takes two forms, namely bi-level thresholding (BT) and multilevel thresholding (MT). Bi-level thresholding uses a single threshold to divide the image into two classes, and multilevel thresholding uses multiple thresholds to divide the image into more than two uniform segments [3]. The threshold can be determined by Kapur entropy [4], Tsallis entropy [5], fuzzy entropy [6], or OTSU variance [7]. These methods use the information in the histogram and do not need any ground truth to classify pixels. However, thresholding is a time-consuming process: as the number of thresholds increases, the running time grows greatly, so this has become a problem that must be solved in future research.
Recently, Houssein et al. proposed an improved equilibrium optimizer to address the imbalance between the exploration and exploitation stages of the equilibrium optimizer, which resolves the tendency of the optimization to fall into local optima and shows excellent performance [8]. Sharma et al. improved the original butterfly optimization algorithm by combining the traditional butterfly algorithm with the mutualism and parasitism stages of the symbiotic organisms search algorithm. Using Kapur entropy as the fitness function, they selected a group of benchmark images to find the optimal thresholds; the results show that the improved butterfly algorithm is superior to other algorithms on all evaluation indicators [9]. In addition, Elaziz et al., in view of the shortcomings of the Harris Hawks Optimizer, proposed an improved version of the HHO algorithm, which addresses the poor search ability of the traditional algorithm and its tendency to fall into local optima [10]. Zhang et al. proposed an improved PSO algorithm to solve the premature convergence of traditional PSO and effectively realize adaptive image segmentation [11]. Zhao et al. proposed a crossover strategy-based ant colony algorithm (CCACO) to solve the continuity problem of the ant colony algorithm, using Kapur entropy as the objective function for image segmentation. Experiments show that the proposed CCACO achieves excellent segmentation results at both low and high numbers of thresholds [12].
Of course, in the process of image segmentation, it is far from enough to rely only on threshold segmentation. Because of the low processing speed and segmentation accuracy of plain threshold segmentation, optimization algorithms are used to optimize the segmentation process. Among the many optimization algorithms, meta-heuristic optimization algorithms are widely used because of their low cost, high accuracy, and fast speed. So far, various optimization algorithms have been introduced to deal with nonlinear and practical applications, such as the genetic algorithm (GA) [13], particle swarm optimization (PSO) [14], the whale optimization algorithm (WOA) [15], the butterfly optimization algorithm (BOA) [9], the sine cosine algorithm (SCA) [16], the crow search algorithm (CSA) [17], the gray wolf optimizer (GWO) [18], and the artificial bee colony algorithm (ABC) [19]. In recent years, Raj et al. applied the whale optimization algorithm (WOA) to optimize TCSC and SVC reactive power planning for the problems of transmission loss and high operating costs. The results show that this method requires fewer iterations, does not fall into local minima, and has good convergence characteristics [15]. Shiva and Gudadappanavar proposed an oppositional crow search algorithm for transmission loss, which performs better in reducing active power loss and system operation cost [16]. In order to solve the problems of insufficient reactive power and unstable voltage in transmission lines, Babu and Kumar proposed an improved sine cosine optimization algorithm, which uses the techniques of the sine cosine algorithm (SCA) and the quasi-oppositional sine cosine algorithm (QOSCA) to minimize transmission losses and operating costs [17]. Xu et al. proposed an improved hunger games search algorithm (IHGS) for solar photovoltaic system parameter identification, which addresses the stability of the algorithm when solving for the global optimal solution.
This shows the feasibility and effectiveness of the improved HGS algorithm [20]. In addition, Trojovsky et al. proposed a new population-based optimization algorithm, the pelican optimization algorithm (POA), which has high exploitation and strong search ability [21]. Shabani and Asgarian put forward the search and rescue optimization algorithm (SAR) for single-objective continuous optimization and demonstrated its feasibility through experiments [22]. Oliva et al. put forward a new solution to the problem that traditional algorithms with small populations easily fall into local optima. They used chaos mapping and opposition-based learning to initialize the solutions of a given problem and improved population diversity by repeatedly generating new initial population positions through an interference operator. The experimental results show that the proposed method is highly efficient in finding optimal solutions [23].
In order to improve the performance of PSO on image threshold segmentation, a new OTSU multi-threshold segmentation method based on improved PSO is proposed. The threshold selection process was optimized and compared with many classical PSO methods. The results show that our algorithm reduces the running time and improves the accuracy of threshold segmentation. The main contributions of this paper can be summarized as follows:
- (1) An improved PSO algorithm is proposed, in which (a) chaos optimization is added to reduce premature convergence; (b) an elite particle search strategy is applied to the particle swarm optimization algorithm to reduce optimization time and improve efficiency; and (c) the learning factors are improved to balance local search and global search.
- (2) Combining PSO with the OTSU algorithm, a gray image segmentation algorithm based on improved particle swarm optimization is proposed. The proposed segmentation algorithm can search for a more accurate threshold, thus promoting better component division of gray-scale images.
- (3) Some classical test functions are selected to verify the robustness, exploitation, and exploration of the algorithm in solving unimodal, multimodal, and fixed-dimension multimodal functions.
- (4) Compared with the multi-threshold segmentation of some standard algorithms, the performance of the improved PSO segmentation algorithm was verified, and its effectiveness was verified through multi-threshold image segmentation experiments on PASCAL 2012 dataset images. Experiments show that the method in this paper is faster than other meta-heuristic algorithms in OTSU threshold segmentation, and the PSNR, FSIM, and SSIM performance indicators verify that it has higher accuracy in image segmentation.
The rest of this paper is organized as follows: Section 2 summarizes the multi-threshold OTSU segmentation model. Section 3 describes the PSO algorithm. Section 4 proposes a multi-threshold segmentation algorithm based on the Improved PSO algorithm. Section 5 describes the experimental results of the segmentation method based on the improved particle swarm optimization. Section 6 presents conclusions and future work.
2. Multi-Threshold OTSU Segmentation Model
The OTSU image segmentation method is the maximum between-class variance method, which aims to determine the optimal threshold for image segmentation. In OTSU, the image is divided into foreground and background by a threshold, and the threshold that maximizes the between-class variance of the two regions is taken as the best segmentation threshold. Assume that the gray-scale range is $[0, L-1]$, representing $L$ different gray levels in a digital image of size $M \times N$ pixels. Let $p_i$ be the proportion of pixels with gray level $i$ in the entire image, calculated as

$$p_i = \frac{n_i}{M \times N}, \qquad \sum_{i=0}^{L-1} p_i = 1,$$

where $n_i$ is the number of pixels with gray level $i$. Let the threshold $t$ divide the image into two parts, foreground $C_0$ (gray levels $0, \ldots, t$) and background $C_1$ (gray levels $t+1, \ldots, L-1$), with

$$\omega_0 = \sum_{i=0}^{t} p_i, \qquad \omega_1 = \sum_{i=t+1}^{L-1} p_i, \qquad \mu_0 = \frac{1}{\omega_0} \sum_{i=0}^{t} i\, p_i, \qquad \mu_1 = \frac{1}{\omega_1} \sum_{i=t+1}^{L-1} i\, p_i,$$

where $\mu$ is the average gray level of the whole image, which can be expressed as

$$\mu = \omega_0 \mu_0 + \omega_1 \mu_1 = \sum_{i=0}^{L-1} i\, p_i,$$

where $\omega_0$ and $\omega_1$ are the proportions of the foreground and background in the image, respectively, and $\mu_0$ and $\mu_1$ are the average gray levels of the foreground and background, respectively. The between-class variance $\sigma^2$ can be expressed as

$$\sigma^2(t) = \omega_0 (\mu_0 - \mu)^2 + \omega_1 (\mu_1 - \mu)^2.$$

When the between-class variance reaches its maximum, $t$ is the optimal threshold. In the multi-threshold case, a threshold set $\{t_1, t_2, \ldots, t_k\}$ divides the image into $k+1$ classes, and the optimal combination of thresholds solves the optimization problem

$$\{t_1^*, \ldots, t_k^*\} = \arg\max_{t_1 < \cdots < t_k} \sum_{j=0}^{k} \omega_j (\mu_j - \mu)^2.$$
The OTSU multi-threshold method uses the exhaustive method to solve for the optimal threshold combination, with a total computational cost of $O(L^k)$. It can be seen that the computational cost increases exponentially with the number of thresholds. In order to improve the efficiency of image segmentation, this paper proposes an improved particle swarm optimization algorithm to find the optimal threshold combination and then perform multi-threshold image segmentation.
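The objective above can be sketched in a few lines; the following Python sketch (function and variable names are illustrative, not from the paper) evaluates the between-class variance for a candidate threshold set, which is exactly the fitness function an optimizer would maximize:

```python
import numpy as np

def otsu_fitness(hist, thresholds):
    """Between-class variance for a set of thresholds (to be maximized).

    hist: 256-bin grayscale histogram (raw counts or probabilities p_i).
    thresholds: sorted gray levels t_1 < ... < t_k splitting [0, 255].
    """
    p = hist / hist.sum()                      # normalize to probabilities
    levels = np.arange(len(p))
    mu_total = (levels * p).sum()              # global mean gray level mu
    edges = [0] + [t + 1 for t in thresholds] + [len(p)]
    variance = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):  # one term per class j
        w = p[lo:hi].sum()                     # class probability omega_j
        if w > 0:
            mu = (levels[lo:hi] * p[lo:hi]).sum() / w   # class mean mu_j
            variance += w * (mu - mu_total) ** 2
    return variance
```

For a bimodal histogram, a threshold between the two modes yields a much larger fitness than a threshold inside one mode, which is what the optimizer exploits.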
3. Particle Swarm Algorithm
Particle swarm optimization (PSO) is a popular intelligent optimization algorithm [24] that originated from research on the predatory behavior of bird flocks. The PSO algorithm is similar to the genetic algorithm in that the solution of the objective function is initialized randomly and optimized by iteration, but without chromosome crossover and mutation. Each particle is abstracted as a bird in the search space and represents a solution of the optimization problem. A particle has two attributes, velocity and position: the former represents the speed of movement and the latter the direction of movement. Substituting a particle's position and velocity information into the fitness function to be optimized yields its fitness value. In the process of optimization, each particle determines its next movement through its own flight experience (the individual extremum) and the group experience (the global extremum).
In a $D$-dimensional search space, the particle population size $m$, particle positions $X_i = (x_{i1}, \ldots, x_{iD})$, and velocities $V_i = (v_{i1}, \ldots, v_{iD})$ are initialized, where $D = 1$, $m = 50$, the range of position $X$ is $[0, 255]$, and the range of velocity $V$ is $[-V_{\max}, V_{\max}]$. The individual extremum $P_i$ of particle $i$ is the optimal position currently searched by that particle, and the global extremum $G$ is the optimal position currently searched by all of the particles. Over the course of each iteration, particles update their velocity and position according to the following equations:

$$v_{id}^{k+1} = \omega v_{id}^{k} + c_1 r_1 \left(p_{id} - x_{id}^{k}\right) + c_2 r_2 \left(g_d - x_{id}^{k}\right)$$
$$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}$$

where $i = 1, 2, \ldots, m$, $d = 1, 2, \ldots, D$, $k$ is the number of iterations, and $r_1$ and $r_2$ are random numbers in $[0, 1]$; these two parameters are used to maintain the diversity of the population. Furthermore, $c_1$ and $c_2$ are learning factors. The inertia weight $\omega$ reflects the ability of particles to inherit the previous velocity and is updated with a linear decrement method, as follows:

$$\omega = \omega_{\max} - \left(\omega_{\max} - \omega_{\min}\right) \frac{k}{k_{\max}}$$

where $\omega_{\max}$ and $\omega_{\min}$ are the maximum and minimum inertia weights, $k$ is the current iteration number, and $k_{\max}$ is the total number of iterations.
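One classical PSO update step can be sketched as follows; the inertia-weight bounds and velocity limit used here are common illustrative values, not parameters taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, k, k_max,
             c1=2.0, c2=2.0, w_max=0.9, w_min=0.4,
             x_range=(0.0, 255.0), v_max=10.0):
    """One synchronous PSO velocity/position update.

    x, v: (m, D) position and velocity arrays; pbest: (m, D); gbest: (D,).
    w_max, w_min, and v_max are illustrative defaults, not from the paper.
    """
    w = w_max - (w_max - w_min) * k / k_max        # linearly decaying inertia
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, -v_max, v_max)                  # keep velocity in range
    x = np.clip(x + v, *x_range)                   # keep position in [0, 255]
    return x, v
```

Clipping the position to the gray-level range keeps every particle a valid candidate threshold.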
4. Multi-Threshold Segmentation Algorithm Based on Improved PSO
The classical PSO algorithm is an effective method for the optimization problem of threshold segmentation. However, due to its mechanism, when used in image processing, the large number of calculations lengthens the processing time, and premature convergence causes the algorithm to fall into local optima, reducing overall efficiency and affecting the accuracy of threshold segmentation. This paper proposes an improved particle swarm optimization algorithm to optimize OTSU multi-threshold image segmentation. The speed and accuracy of multi-threshold image segmentation are improved by optimizing the basic parameters of the particle swarm algorithm and proposing an elite particle search strategy on top of the classical particle swarm optimization algorithm.
4.1. Linear Optimization Learning Factor
To address the low segmentation accuracy of the classical particle swarm algorithm, a linear function is introduced into the parameter update of the particle swarm algorithm to adjust the learning factors of the particles: changes in the learning factors $c_1$ and $c_2$ govern how strongly individual experience and swarm experience, respectively, pull particles toward the optimal solution. As the iterations proceed, the search shifts from global search to local search, and the focus of the particle search differs at different stages. To account for this, this paper uses an asynchronously linearly decreasing individual learning factor and an asynchronously linearly increasing social learning factor to balance global search and local search, so that the learning factors change with the progress of the iterations. According to the characteristics of the PSO algorithm, the early iterations rely more on the movement experience of individual particles for global search, while the later iterations mainly rely on the movement experience of the group for local search. The PSO algorithm can enhance its global search ability by taking a larger value of $c_1$ and a smaller value of $c_2$ in the early iterations, while a smaller value of $c_1$ and a larger value of $c_2$ in the later stages enhance its local search ability. In order to better exploit the global and local search abilities at different stages and improve segmentation accuracy, we use asynchronously changing learning factors. The formulas are:
$$c_1 = c_{1,\max} - \left(c_{1,\max} - c_{1,\min}\right) \frac{k}{k_{\max}} \quad (10)$$
$$c_2 = c_{2,\min} + \left(c_{2,\max} - c_{2,\min}\right) \frac{k}{k_{\max}} \quad (11)$$

where $c_{\cdot,\max}$ and $c_{\cdot,\min}$ are the maximum and minimum values of the learning factors, and $k$ and $k_{\max}$ are the current number of iterations and the maximum number of iterations, respectively. The improved velocity update formula is:

$$v_{id}^{k+1} = \omega v_{id}^{k} + c_1(k)\, r_1 \left(p_{id} - x_{id}^{k}\right) + c_2(k)\, r_2 \left(g_d - x_{id}^{k}\right)$$
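The asynchronous schedule can be sketched in a few lines; the bounds `c_max` and `c_min` below are illustrative assumptions, not values stated in the paper:

```python
def async_learning_factors(k, k_max, c_max=2.5, c_min=0.5):
    """Asynchronous learning factors: c1 decreases, c2 increases linearly.

    c_max and c_min are illustrative bounds (not from the paper).
    """
    c1 = c_max - (c_max - c_min) * k / k_max   # individual factor shrinks
    c2 = c_min + (c_max - c_min) * k / k_max   # social factor grows
    return c1, c2
```

Note that with symmetric bounds the sum c1 + c2 stays constant, so the overall attraction strength is preserved while its balance shifts from individual to social experience.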
4.2. Elite Particle Search Strategy
In view of the low efficiency and slow speed of classical particle swarm optimization in multi-threshold segmentation, this paper improves classical particle swarm optimization by updating the speed and position of particles and proposes a search strategy for elite particles, which is divided into three steps. Firstly, the threshold segmentation contribution of each particle is calculated; secondly, 20% elite particles are selected according to the threshold segmentation contribution of each particle; and finally, the optimal solution of this iteration is found through chaos optimization.
In this paper, 20% of the particles are selected as elite particles according to their contribution. The contribution of each particle is obtained by comparing its fitness value with the fitness value of the global extremum in the current iteration. We calculate the distance between the position of each particle and the position of the global extremum particle: the closer a particle is to the global extremum, the greater its contribution to the optimization. Assuming that the global extremum particle in each iteration is located at $(x_g, y_g)$, the distance $z_i$ between the position of a single particle and the position of the global extremum particle, and hence the threshold segmentation contribution value of each single particle, are obtained according to the following formula:

$$z_i = \sqrt{\left(x_g - x_i\right)^2 + \left(y_g - y_i\right)^2}$$
where $z_i$ represents the distance between the $i$-th particle and the particle corresponding to the global extremum, $x_g$ and $y_g$ are the x-axis and y-axis coordinates of the global extremum particle in the target search space, and $x_i$ and $y_i$ are the x-axis and y-axis coordinates of the $i$-th particle in the target search space. The threshold segmentation contribution (actual contribution degree) of each particle is inversely proportional to the distance $z_i$: the closer the $i$-th particle is to the particle corresponding to the global extremum, the greater its contribution to the threshold segmentation of this optimization. The top 20% of particles, ranked by threshold segmentation contribution from largest to smallest, are selected as elite particles, reducing optimization time. For details, see Figure 1, where the arrows represent the change in the number of thresholds and the three circles indicate the optimization when the threshold takes different values.
Figure 1.
Particle swarm narrows the search range according to the contribution of particles.
Taking the gray value range of the segmented image as the coordinate range of the x and y values, the black dots represent all particles and the central red dot represents the global optimal solution of the current iteration. The search range can thus be reduced to the black circular region, that is, the region where the elite particles are located, and k is the number of threshold values. Each threshold requires a corresponding particle swarm to determine it, so n threshold values require n particle swarms. In the same iteration, the global optimal particle of each of the n particle swarms is determined in turn.
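The elite selection described above can be sketched as follows, assuming (as the text states) that the contribution is inversely proportional to the distance $z_i$, so ranking by ascending distance is equivalent to ranking by descending contribution; names are illustrative:

```python
import numpy as np

def select_elite(positions, gbest, frac=0.2):
    """Pick the top `frac` of particles closest to the global best.

    positions: (m, 2) particle coordinates; gbest: (2,) global extremum.
    Contribution is modeled as inversely proportional to the Euclidean
    distance z_i to gbest, per the elite particle search strategy.
    """
    z = np.linalg.norm(positions - gbest, axis=1)   # distance z_i
    n_elite = max(1, int(len(positions) * frac))
    return np.argsort(z)[:n_elite]                  # smallest z = largest contribution
```

The returned indices identify the elite particles that are then passed to the chaotic local search.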
The selected elite particles are then put into chaos optimization. During each iteration, the global extremum $g_{best}$ is chaotically perturbed and used as the updated position of the particles, so that they search locally around the global optimal solution. The purpose is to increase the diversity of the population, enhance the local optimization ability of the algorithm, and realize a local depth search of the particle swarm algorithm. This addresses the premature convergence of the particle swarm optimization algorithm and its tendency to fall into local optima, thereby improving the accuracy of image segmentation. The specific steps of chaotic optimization of the elite particles are as follows:
Step 1. Set the number of chaotic iterations to $M$ and map the global extremum through Equation (15) into $[0, 1]$, the defining domain of the Logistic equation:

$$y_0 = \frac{g_{best} - x_{\min}}{x_{\max} - x_{\min}} \quad (15)$$

where $y_0$ is the initial sequence element, $g_{best}$ is the global extremum in the whole population, $x_{\min}$ is the minimum value of the particle position, and $x_{\max}$ is the maximum value of the particle position.
Step 2. The chaotic sequence $\{y_1, y_2, \ldots, y_M\}$ is obtained after $M$ iterations of Equation (16):

$$y_{n+1} = \mu y_n \left(1 - y_n\right) \quad (16)$$

where $y_{n+1}$ is the sequence element obtained at the $(n+1)$-th iteration, $\mu$ is the control parameter, and $y_M$ is the sequence element obtained at the $M$-th iteration.
Step 3. The chaotic sequence is inversely mapped back to the original solution space through Equation (17) to obtain the feasible solution sequence of the chaotic variables:

$$x_m = x_{\min} + y_m \left(x_{\max} - x_{\min}\right) \quad (17)$$

where $x_m$ is the sequence element obtained at the $m$-th iteration, $m = 1, 2, \ldots, M$.
Step 4. Calculate the fitness value of each feasible solution vector in the feasible solution sequence and retain the optimal vector, denoted as $X^*$.
Step 5. Randomly select a particle from the current particle swarm and replace the position vector of the selected particle with $X^*$.
Step 6. Iterate until the maximum number of iterations is reached or a sufficiently satisfactory solution is obtained.
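Steps 1 through 5 can be sketched as a logistic-map chaotic local search; the iteration count `M` and the control parameter `mu = 4.0` are illustrative choices (mu = 4 is the standard value giving fully chaotic logistic-map behavior), not values stated in the paper:

```python
import numpy as np

def chaos_refine(gbest, fitness, x_min=0.0, x_max=255.0, M=20, mu=4.0):
    """Logistic-map chaotic local search around the global best.

    fitness: callable to maximize (e.g. OTSU between-class variance).
    """
    # Step 1: map gbest into the logistic domain [0, 1] (Eq. (15))
    y = (gbest - x_min) / (x_max - x_min)
    y = np.clip(y, 0.01, 0.99)                 # avoid the fixed points 0 and 1
    best_x, best_f = gbest, fitness(gbest)
    for _ in range(M):
        # Step 2: one logistic-map iteration (Eq. (16))
        y = mu * y * (1.0 - y)
        # Step 3: inverse map back to the solution space (Eq. (17))
        x = x_min + y * (x_max - x_min)
        # Step 4: keep the best candidate found so far
        f = fitness(x)
        if f > best_f:
            best_x, best_f = x, f
    return best_x, best_f
```

In Step 5 the returned `best_x` would replace the position vector of a randomly chosen particle in the swarm.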
4.3. Our Method and Process
Compared with the classical particle swarm optimization algorithm, our improved particle swarm optimization algorithm is superior in terms of both speed and accuracy. At the same time, it also avoids the tendency of the particle swarm optimization algorithm to fall into local optima, greatly improving the efficiency of image segmentation. The flow chart of our improved particle swarm optimization algorithm is shown in Figure 2.
Figure 2.
Improved particle swarm algorithm flowchart.
Linear learning factors and the elite particle search strategy are added to the classical particle swarm algorithm, which alleviates the problems of slow optimization speed, low precision, low efficiency, and a tendency to fall into local optima in the classical particle swarm algorithm. The detailed process based on the improved particle swarm algorithm is as follows:
Step 1. Set the parameters, which include the population size $m$, dimension $D$, velocity range, position range, and maximum number of iterations $k_{\max}$.
Step 2. Initialize the population. Initialize the dimensions and vectors of the particle individuals according to the threshold number $k$. The range of the position vectors is [0, 255].
Step 3. Update $c_1$ and $c_2$ with the asynchronously monotonically decreasing individual learning factor and the asynchronously monotonically increasing social learning factor according to Equations (10) and (11), thereby improving the accuracy of particle optimization. Calculate the fitness value of each particle, updating the individual extremum of each particle and the global extremum of the swarm.
Step 4. Iteratively update the position of the particle individual and the speed of the particle individual.
Step 5. Select the top 20% of particles as elite particles according to the threshold segmentation contribution of the individual particles.
Step 6. Use chaotic iterative optimization to find the best fitness value for elite particles by comparing the chaotic optimization results.
Step 7. Determine whether the current number of chaotic iterations meets the set maximum number of chaos iterations or whether the chaos optimization reaches preset accuracy. If the current number of chaos iterations meets the maximum number of chaotic iterations or the chaos optimization reaches the set accuracy, go to step 8; otherwise, go to step 2.
Step 8. Map the optimal fitness value inversely back into the particle swarm, calculate the fitness values of all particles in the swarm, and output the optimal solution if the stopping conditions are met; otherwise, go to Step 1.
5. Experimental Analysis
5.1. Experimental Environment
Experiments demonstrate the effectiveness of our improved multi-threshold image segmentation algorithm. The test images are taken from the PASCAL 2012 data set [25]. The experimental environment consists of MATLAB 2018a, Windows 10, and an Intel(R) Core(TM) i5-5600H @ 2.4 GHz CPU with 8 GB of RAM. The population size $m$, the maximum number of iterations, the particle position and velocity ranges, and the individual and social learning factors were set as described in Sections 3 and 4.
5.2. Benchmark Function
This paper selects 12 standard test functions for comparative experiments to test the robustness, exploration, and exploitation performance of the algorithm. These functions are derived from the CEC benchmark functions [26], covering unimodal functions (F1–F4), multimodal functions (F5–F8), and fixed-dimension multimodal functions (F9–F12), all of which have known minimum values. The algorithms compared with this paper are traditional meta-heuristic algorithms, namely the whale optimization algorithm (WOA) [27], the butterfly optimization algorithm [28], and the particle swarm optimization algorithm [14]. In order to make a fair evaluation, this paper conducted 30 runs of the four optimization algorithms under the same conditions, and experimental results below the numerical precision threshold are represented by 0.
Table 1 shows the selected test functions: Dim is the number of variables, and N/A indicates unsolvable. Table 2 shows the average value and standard deviation of each benchmark function calculated by this method and the traditional meta-heuristic algorithms. According to the data in Table 2, our method is robust on the benchmark functions, whether unimodal, multimodal, or fixed-dimension multimodal. From Table 2, we can see that our method performs better than WOA, BOA, and PSO. In particular, it shows smaller errors on the multimodal functions, which indicates that the method in this paper has better exploration capability. We found that our method did not find the global minimum on several functions, but its error was smaller than that of the other algorithms. To sum up, our method is more exploitative and exploratory than the other algorithms. The specific experiments are as follows:
Table 1.
Benchmark Function.
Table 2.
Compare the mean and variance of different algorithms through benchmark functions.
5.3. Segmentation Experiment and Results
We used the optimization time and image segmentation quality to evaluate and compare the performance of the algorithms. The segmentation quality was evaluated with the peak signal-to-noise ratio (PSNR) [29,30], structural similarity (SSIM) [31,32], and feature similarity (FSIM) [33,34], and the fitness values of all algorithms were calculated. PSNR evaluates the degree of image distortion; the higher its value, the smaller the image distortion. SSIM evaluates image segmentation performance based on image contrast and structural information; the larger the value, the better the performance. FSIM evaluates the feature similarity between the original image and the segmented image; the larger the FSIM value, the better the image segmentation quality.
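For reference, PSNR follows directly from the mean squared error between the two images; a minimal sketch using the standard definition (not code from the paper):

```python
import numpy as np

def psnr(original, segmented, max_val=255.0):
    """Peak signal-to-noise ratio between two images, in dB.

    Higher PSNR means less distortion; identical images give infinity.
    """
    mse = np.mean((original.astype(np.float64) - segmented.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

SSIM and FSIM follow analogous "larger is better" conventions but involve local statistics and feature maps; library implementations (e.g. in image-processing toolboxes) are typically used for those.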
In this paper, images from PASCAL 2012 [25] were selected as the experimental data set. Figure 3a–f (numbered 000019, 000436, 001478, 003579, 004423, 006404) are typical images with a relatively concentrated gray distribution and small between-class variance, while Figure 3g–l (numbered 001236, 001876, 002036, 004231, 004610, 006946) are images with obvious differences in gray distribution, that is, with relatively large between-class variance, as shown in Figure 3. Theoretically, the greater the number of segmentation thresholds, the greater the accuracy of the image segmentation. In order to quantitatively analyze the segmentation performance, four groups of different segmentation thresholds, k = 2, 4, 6, and 8, were used in the experiments for index analysis. Our improved particle swarm optimization algorithm was compared with the classical particle swarm optimization algorithm [14], the whale optimization algorithm [27], and the butterfly optimization algorithm [28]. Among these comparison algorithms, the whale optimization algorithm and the butterfly optimization algorithm are both bionic algorithms; the difference is that the whale optimization algorithm achieves its search goal by searching for, surrounding, chasing, and attacking prey, while the butterfly algorithm locates the target source through its own perception. In this way, we can better understand the segmentation effect of the improved particle swarm optimization OTSU image segmentation method compared with other algorithms under different thresholds. Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 show the comparison results of the PSNR, SSIM, FSIM, and fitness values, respectively. It can be seen that as the number of thresholds k increases, the distortion after image segmentation gradually decreases, and the result becomes closer to the original image.
SSIM and FSIM gradually increase, and the segmentation performance improves accordingly, which shows the advantage of multi-threshold image segmentation methods. To verify the universality of the algorithm in this paper, we randomly selected 100 images from the PASCAL 2012 dataset for the same experiment and list the average values of the experimental parameters in Table 11. In addition, the algorithm that performs best on each evaluation criterion is marked in bold.
Figure 3.
Histogram distribution.
Table 3.
Comparison of PSNR in images a–f.
Table 4.
Comparison of SSIM in images a–f.
Table 5.
Comparison of FSIM in images a–f.
Table 6.
Comparison of Fitness in images a–f.
Table 7.
Comparison of PSNR in Images g–l.
Table 8.
Comparison of SSIM in Images g–l.
Table 9.
Comparison of FSIM in Images g–l.
Table 10.
Comparison of Fitness in Images g–l.
Table 11.
Average of 100 image segmentation results.
When the number of thresholds is small (k = 2 or k = 4), the PSNR, SSIM, and FSIM values of the improved particle swarm optimization algorithm are slightly better than those of the other algorithms. After the number of thresholds increases (k = 6 or k = 8), however, the advantages of the improved particle swarm optimization are fully reflected: the calculated PSNR, SSIM, and FSIM values are the highest among the four algorithms, which indicates that the segmentation obtained when the improved particle swarm algorithm solves for the optimal threshold combination has the highest similarity with the original image, reflecting the optimization accuracy of the algorithm. The algorithm running times are shown in Figure 4 and Figure 5.
Figure 4.
Algorithm running time (Images a–f in Figure 3).
Figure 5.
Algorithm running time (Images g–l in Figure 3).
From Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10, it can be seen that the segmentation effect of the proposed algorithm is better. Taking image 1 (000019) as an example, its PSNR value at k = 8 was 21.3771, which was significantly better than those of the other comparison algorithms, and its SSIM value was 0.7333, also slightly better than those of the other algorithms, indicating low image distortion and good image quality after segmentation. While maintaining the segmentation effect, it can be seen from Figure 4 that the running time of the proposed scheme was shorter than that of the other algorithms, especially for a large number of thresholds. When k = 8, the segmentation running time was shortened from 3.7 s to 2.8 s, greatly improving the running speed. To verify the universality of the algorithm in this paper, we randomly selected 100 images from the PASCAL 2012 dataset for parameter comparison tests. The experimental results show that our method is faster and more accurate than the other algorithms in terms of the PSNR, SSIM, FSIM, and runtime parameters. These results show that the proposed method can improve both segmentation accuracy and segmentation efficiency. Table 6 and Table 10 show the fitness values obtained by the different algorithms under different thresholds. The segmented images are shown in Figure 6 and Figure 7.
Figure 6.
When threshold , the original image and the segmented image of different algorithms are shown (images a–f in Figure 3).
Figure 7.
When threshold , the original image and the segmented image of different algorithms are shown (images g–l in Figure 3).
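The fitness values in Table 6 and Table 10 are Otsu between-class variances, the quantity each particle tries to maximize when searching for a threshold combination. A minimal sketch of that fitness function over a grayscale histogram (illustrative only; variable names and the toy histogram are not from the paper):

```python
def otsu_fitness(hist, thresholds):
    """Otsu between-class variance of a grayscale histogram partitioned by
    the given thresholds -- the fitness each particle seeks to maximize."""
    total = sum(hist)
    global_mean = sum(i * h for i, h in enumerate(hist)) / total
    bounds = [0] + sorted(thresholds) + [len(hist)]  # classes are [lo, hi)
    variance = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        weight = sum(hist[lo:hi]) / total            # class probability
        if weight == 0:
            continue                                 # empty class contributes nothing
        mean = sum(i * hist[i] for i in range(lo, hi)) / (weight * total)
        variance += weight * (mean - global_mean) ** 2
    return variance

# A clearly bimodal 6-level histogram: the valley lies between levels 1 and 4.
hist = [10, 8, 0, 0, 8, 10]
best = max(range(1, len(hist)), key=lambda t: otsu_fitness(hist, [t]))
print(best, otsu_fitness(hist, [best]))
```

With k thresholds the search space grows combinatorially, which is why exhaustive evaluation becomes impractical at k = 6 or k = 8 and a swarm-based search pays off.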
6. Conclusions
This paper proposes an improved particle swarm optimization algorithm and applies it to multi-threshold image segmentation. A new elite particle search strategy is introduced to narrow the scope of the particle search, an asynchronous monotonically increasing social learning factor and an asynchronous monotonically decreasing individual learning factor are introduced to balance global and local search, and chaos optimization is introduced to increase population diversity. The improved particle swarm optimization algorithm is then used to find the optimal multi-threshold combination, realizing multi-threshold image segmentation. The benchmark function tests show that the algorithm has good exploitation and exploration ability, and the experimental results show that the proposed algorithm achieves higher segmentation accuracy and efficiency.
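The asynchronous learning factors summarized above can be illustrated with a simple time-varying schedule. The linear form and the coefficient endpoints below are assumptions for illustration; the paper's exact monotone update rules may differ:

```python
def learning_factors(t, t_max, c1_start=2.5, c1_end=0.5,
                     c2_start=0.5, c2_end=2.5):
    """Asynchronously time-varying PSO acceleration coefficients: the
    individual (cognitive) factor c1 shrinks while the social factor c2
    grows, moving the swarm from global exploration toward local
    exploitation as iterations progress."""
    frac = t / t_max
    c1 = c1_start + (c1_end - c1_start) * frac  # monotonically decreasing
    c2 = c2_start + (c2_end - c2_start) * frac  # monotonically increasing
    return c1, c2

# Early iterations weight the particle's own best; late ones the swarm's.
print(learning_factors(0, 100))    # -> (2.5, 0.5)
print(learning_factors(100, 100))  # -> (0.5, 2.5)
```

Because c1 and c2 change at different (asynchronous) rates rather than in lockstep, the balance between individual experience and swarm experience can be tuned independently at each stage of the search.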
Although our method achieves faster speed and higher accuracy in image segmentation, compared with threshold segmentation methods such as fuzzy entropy, the gain is mainly in speed while the segmentation accuracy is slightly insufficient; this is one of the directions we will study in the future. Next, we will consider applying the improved particle swarm optimization algorithm to image segmentation and image processing in other fields (such as new-material image processing, target detection, and lightweight embedded robots), studying the algorithm's ability to solve practical problems and further enhancing its practicability.
Author Contributions
Conceptualization, J.Z. (Jianfeng Zheng) and Y.G.; methodology, J.Z. (Jianfeng Zheng), Y.G. and Y.L.; software, Y.G.; validation, J.Z. (Jianfeng Zheng), Y.G. and J.Z. (Ji Zhang); formal analysis, Y.G. and H.Z.; investigation, Y.G. and Y.L.; resources, J.Z. (Jianfeng Zheng) and J.Z. (Ji Zhang); data curation, Y.G., Y.L. and H.Z.; writing—original draft preparation, Y.G.; writing—review and editing, J.Z. (Jianfeng Zheng) and Y.G.; visualization, Y.G.; supervision, J.Z. (Jianfeng Zheng) and J.Z. (Ji Zhang). All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Postgraduate Research & Practice Innovation Program of Jiangsu Province under the grant number [SJCX21_1282].
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| PSO | Particle swarm optimization |
| PSNR | Peak signal-to-noise ratio |
| SSIM | Structural similarity |
| FSIM | Feature similarity |
References
- Cheng, Y.; Li, B. Image Segmentation Technology and Its Application in Digital Image Processing. In Proceedings of the 2021 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC), Dalian, China, 14–16 April 2021; pp. 1174–1177. [Google Scholar]
- Chakraborty, F.; Roy, P.K.; Nandi, D. Oppositional elephant herding optimization with dynamic Cauchy mutation for multilevel image thresholding. Evol. Intell. 2019, 12, 445–467. [Google Scholar]
- Abd Elaziz, M.; Oliva, D.; Ewees, A.A.; Xiong, S. Multi-level thresholding-based grey scale image segmentation using multi-objective multi-verse optimizer. Expert Syst. Appl. 2019, 125, 112–129. [Google Scholar]
- Upadhyay, P.; Chhabra, J.K. Kapur’s entropy based optimal multilevel image segmentation using crow search algorithm. Appl. Soft Comput. 2020, 97, 105522. [Google Scholar]
- Yazid, H.; Basah, S.N.; Rahim, S.A.; Safar, M.J.A.; Basaruddin, K.S. Performance analysis of entropy thresholding for successful image segmentation. Multimed. Tools Appl. 2022, 81, 6433–6450. [Google Scholar]
- Mahajan, S.; Mittal, N.; Pandit, A.K. Image segmentation using multilevel thresholding based on type II fuzzy entropy and marine predators algorithm. Multimed. Tools Appl. 2021, 80, 19335–19359. [Google Scholar] [CrossRef]
- Wu, B.; Zhou, J.; Ji, X.; Yin, Y.; Shen, X. An ameliorated teaching–learning-based optimization algorithm based study of image segmentation for multilevel thresholding using Kapur’s entropy and Otsu’s between class variance. Inf. Sci. 2020, 533, 72–107. [Google Scholar]
- Houssein, E.H.; Helmy, B.E.d.; Oliva, D.; Jangir, P.; Premkumar, M.; Elngar, A.A.; Shaban, H. An efficient multi-thresholding based COVID-19 CT images segmentation approach using an improved equilibrium optimizer. Biomed. Signal Process. Control 2022, 73, 103401. [Google Scholar]
- Sharma, S.; Saha, A.K.; Majumder, A.; Nama, S. MPBOA—A novel hybrid butterfly optimization algorithm with symbiosis organisms search for global optimization and image segmentation. Multimed. Tools Appl. 2021, 80, 12035–12076. [Google Scholar]
- Elaziz, M.E.A.; Heidari, A.A.; Fujita, H.; Moayedi, H. A competitive chain-based Harris Hawks Optimizer for global optimization and multi-level image thresholding problems. Appl. Soft Comput. 2020, 95, 106347. [Google Scholar] [CrossRef]
- Zhang, L.; Wang, J.; An, Z. FCM fuzzy clustering image segmentation algorithm based on fractional particle swarm optimization. J. Intell. Fuzzy Syst. 2020, 38, 3575–3584. [Google Scholar]
- Zhao, D.; Liu, L.; Yu, F.; Heidari, A.A.; Wang, M.; Oliva, D.; Muhammad, K.; Chen, H. Ant colony optimization with horizontal and vertical crossover search: Fundamental visions for multi-threshold image segmentation. Expert Syst. Appl. 2021, 167, 114122. [Google Scholar] [CrossRef]
- Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
- Shami, T.M.; El-Saleh, A.A.; Alswaitti, M.; Al-Tashi, Q.; Summakieh, M.A.; Mirjalili, S. Particle swarm optimization: A comprehensive survey. IEEE Access 2022, 10, 10031–10061. [Google Scholar] [CrossRef]
- Raj, S.; Bhattacharyya, B. Optimal placement of TCSC and SVC for reactive power planning using Whale optimization algorithm. Swarm Evol. Comput. 2018, 40, 131–143. [Google Scholar]
- Shiva, C.K.; Gudadappanavar, S.S.; Vedik, B.; Babu, R.; Raj, S.; Bhattacharyya, B. Fuzzy-Based Shunt VAR Source Placement and Sizing by Oppositional Crow Search Algorithm. J. Control. Autom. Electr. Syst. 2022, 33, 1576–1591. [Google Scholar]
- Babu, R.; Kumar, V.; Shiva, C.K.; Raj, S.; Bhattacharyya, B. Application of Sine–Cosine Optimization Algorithm for Minimization of Transmission Loss. Technol. Econ. Smart Grids Sustain. Energy 2022, 7, 6. [Google Scholar] [CrossRef]
- Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
- Su, H.; Zhao, D.; Yu, F.; Heidari, A.A.; Zhang, Y.; Chen, H.; Li, C.; Pan, J.; Quan, S. Horizontal and vertical search artificial bee colony for image segmentation of COVID-19 X-ray images. Comput. Biol. Med. 2022, 142, 105181. [Google Scholar] [CrossRef] [PubMed]
- Xu, B.; Heidari, A.A.; Kuang, F.; Zhang, S.; Chen, H.; Cai, Z. Quantum Nelder-Mead Hunger Games Search for optimizing photovoltaic solar cells. Int. J. Energy Res. 2022, 46, 12417–12466. [Google Scholar]
- Trojovskỳ, P.; Dehghani, M. Pelican optimization algorithm: A novel nature-inspired algorithm for engineering applications. Sensors 2022, 22, 855. [Google Scholar] [CrossRef] [PubMed]
- Shabani, A.; Asgarian, B.; Gharebaghi, S.A.; Salido, M.A.; Giret, A. A new optimization algorithm based on search and rescue operations. Math. Probl. Eng. 2019, 2019, 2482543. [Google Scholar] [CrossRef]
- Oliva, D.; Elaziz, M.A. An improved brainstorm optimization using chaotic opposite-based learning with disruption operator for global optimization and feature selection. Soft Comput. 2020, 24, 14051–14072. [Google Scholar] [CrossRef]
- Sun, Y.; Yang, Y. An Adaptive Bi-Mutation-Based Differential Evolution Algorithm for Multi-Threshold Image Segmentation. Appl. Sci. 2022, 12, 5759. [Google Scholar]
- Shetty, S. Application of convolutional neural network for image classification on Pascal VOC challenge 2012 dataset. arXiv 2016, arXiv:1607.03785. [Google Scholar]
- Liang, J.J.; Suganthan, P.N.; Deb, K. Novel composition test functions for numerical global optimization. In Proceedings of the 2005 IEEE Swarm Intelligence Symposium, SIS 2005, Pasadena, CA, USA, 8–10 June 2005; pp. 68–75. [Google Scholar]
- Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar]
- Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar]
- Huynh-Thu, Q.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801. [Google Scholar]
- Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 IEEE 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar]
- Brunet, D.; Vrscay, E.R.; Wang, Z. On the mathematical properties of the structural similarity index. IEEE Trans. Image Process. 2011, 21, 1488–1499. [Google Scholar] [CrossRef] [PubMed]
- Sara, U.; Akter, M.; Uddin, M.S. Image quality assessment through FSIM, SSIM, MSE and PSNR—A comparative study. J. Comput. Commun. 2019, 7, 8–18. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).