Article

Enhancing Pneumonia Segmentation in Lung Radiographs: A Jellyfish Search Optimizer Approach

1 Departamento de Tecnologías de la Información, Universidad Tecnológica de Jalisco, Campus CCD, Guadalajara C.P. 44100, Jalisco, Mexico
2 Departamento de Eléctro-Fotónica, Universidad de Guadalajara, Campus CUCEI, Guadalajara C.P. 44430, Jalisco, Mexico
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(20), 4363; https://doi.org/10.3390/math11204363
Submission received: 21 September 2023 / Revised: 17 October 2023 / Accepted: 17 October 2023 / Published: 20 October 2023

Abstract

Segmentation of pneumonia on lung radiographs is vital for the precise diagnosis and monitoring of the disease. It enables healthcare professionals to locate and quantify the extent of infection, guide treatment decisions, and improve patient care. One of the most-employed approaches to effectively segment pneumonia in lung radiographs is to treat it as an optimization task. By formulating the problem in this manner, it is possible to use the search capabilities of metaheuristic methods to determine the optimal segmentation solution. Although these methods can produce good results, they frequently yield suboptimal solutions owing to insufficient exploration of the search space. In this paper, a new method for segmenting pneumonia in lung radiographs is introduced. The algorithm is based on the jellyfish search optimizer (JSO), which is characterized by its excellent global exploration capability and robustness. This method uses an energy curve based on cross-entropy as a cost function that penalizes misclassified pixels more heavily, leading to a sharper focus on regions where segmentation errors occur. This is particularly important because it allows for the accurate delineation of objects or regions of interest. To validate our proposed approach, we conducted extensive testing on the most widely available datasets. The results of our method were compared with those obtained using other established techniques. The results of our evaluation demonstrate that our approach consistently outperforms the other methods at threshold levels 8, 16, and 32, with a difference of more than 10%.

1. Introduction

Pneumonia has a significant impact on public health worldwide, particularly in vulnerable populations such as children, the elderly, and individuals with compromised immune systems [1]. It is a leading cause of morbidity and mortality, placing a substantial burden on healthcare systems and economies.
The classical methods for diagnosing pneumonia [2] typically involve a combination of clinical assessment, patient history, physical examination, and traditional imaging techniques, such as chest radiography.
Assessing pneumonia visually through X-rays or chest radiographs poses challenges owing to the subtle and variable radiographic features of the disease [3]. Pneumonia can mimic other lung conditions, making it difficult to distinguish it visually. Coexisting medical conditions, overlapping symptoms, and the potential for radiologist variability further complicate accurate diagnosis.
The advantages of using segmentation techniques to detect pneumonia in medical images are significant [3]. These techniques enable the precise identification and isolation of infected regions within the lungs, allowing for an accurate assessment of the extent and severity of the disease. By providing quantitative data, segmentation aids in clinical decision-making, treatment planning, and monitoring of patient progress. This enhances the ability to differentiate pneumonia-related abnormalities from healthy tissues, thereby reducing the risk of misdiagnoses. Automation through segmentation methods improves efficiency and consistency in analysis, offering healthcare professionals valuable support for the early and accurate detection of pneumonia, ultimately leading to improved patient care and outcomes [4].
Segmenting images of pneumonia in X-rays or chest radiographs typically involves image processing and computer vision techniques. Common methods include thresholding to separate infected regions from healthy tissues based on pixel intensity, edge detection to outline the boundaries of lesions, and region growth to cluster pixels with similar characteristics. Recently, approaches have incorporated machine learning, such as convolutional neural networks (CNNs), to automate the segmentation processes [5]. These CNNs are trained on labeled datasets to learn patterns and features indicative of pneumonia [6], thereby allowing for accurate and efficient segmentation [7]. By combining these methods with preprocessing steps to enhance image quality, researchers and healthcare professionals can achieve the reliable and precise segmentation of pneumonia-related abnormalities in radiographic images [8]. UNet [9] is a popular convolutional neural network (CNN) architecture primarily used for image segmentation tasks, particularly in the field of medical image analysis. The original UNet architecture consists of two main parts: an encoder and decoder with a contracting path and an expansive path. Variants of UNet have been developed to improve the original architecture and adapt it to various segmentation tasks. Notable variants include UNet++ [10], ResUNet [11], and DenseUNet [12]. Recently, several methods based on CNNs and deep-learning techniques have been proposed for the detection of pulmonary diseases. Representative examples include those in [13,14]. Although methods based on convolutional neural networks (CNNs) or deep learning have shown promise for segmenting pneumonia in X-rays or chest radiographs, they have some notable disadvantages. CNNs often require substantial amounts of labeled training data, which can be scarce and expensive to obtain, particularly for rare pneumonia cases [15]. Additionally, CNNs may struggle with generalization when applied to diverse patient populations or different imaging settings, potentially leading to inconsistent segmentation results.
An alternative approach to the segmentation of pneumonia images in X-rays or chest radiographs is to consider the segmentation problem as an optimization task and employ metaheuristic methods. These methods iteratively analyze the solution space to determine the optimal segmentation configuration based on an objective function. This approach can effectively tackle the complex nature of the problem, aiding in the precise localization and quantification of pneumonia lesions within radiographic images, ultimately enhancing diagnostic accuracy and clinical decision making. Examples of these approaches include techniques such as particle swarm optimization (PSO) [16], evolutionary arithmetic optimization [17], sunflower optimization [18], and differential evolution [19].
It is important to acknowledge that no single metaheuristic algorithm can solve all optimization problems competitively. The specific strategies employed to solve an optimization problem are influenced by the distinct complexities embedded within its objective function. Some problems require an extensive exploration of the search space, whereas others require a finer focus on the refinement of already explored areas through exploitation mechanisms. Employing a metaheuristic method ill-suited to a specific problem often results in suboptimal solutions. Consequently, identifying the most suitable method requires a comprehensive and systematic evaluation process, wherein several methods are rigorously tested. This evaluation should involve rigorous testing and a statistical assessment of the solutions provided by each method, ultimately leading to the selection of the most appropriate metaheuristic algorithm tailored to the unique characteristics of the optimization problem.
The jellyfish search optimizer (JSO) [20] is a newly developed metaheuristic algorithm for solving complex real-world optimization problems. It is based on the foraging behavior of jellyfish in the ocean. Compared with other metaheuristic methods, the global exploration capability of the JSO is one of its most important strengths as an optimization technique. Owing to this remarkable characteristic, its use has been extended to different areas such as power systems [21], structural optimization [22], and renewable energy sources [23].
When a problem is formulated as an optimization task to obtain its solution, the fundamental challenge is to formulate a suitable objective function. This function must clearly reflect the important details of the solution, as the search algorithm evaluates each candidate solution in terms of this function, providing a value that defines the quality of the solution. Most objective functions used by metaheuristic segmentation methods rely on cross-entropy computed from a histogram of intensity levels. Cross-entropy is a versatile mathematical function employed across fields such as information theory, statistics, and machine learning [24]. It serves as a metric for quantifying disparities between probability distributions (histograms) and models. However, the results obtained with objective functions based on histograms and cross-entropy do not clearly highlight the image details that can be used to contrast and evaluate regions in the images.
In this paper, a novel approach for segmenting pneumonia in lung radiographs is introduced. This method uses the jellyfish search optimizer (JSO) as the optimization method. Unlike other metaheuristic methods, the JSO is known for its strong global exploration capabilities and robustness. With the use of the JSO, there is a better expectation of finding optimal solutions than with the other metaheuristic methods. As an objective function, this approach proposes the use of an energy curve based on cross-entropy as a cost function. This energy curve associates the cross-entropy with the spatial information of the pixels. Therefore, this objective function places greater emphasis on penalizing misclassified pixels, thereby enhancing the precision of delineating regions prone to segmentation errors. This emphasis on accuracy is crucial for identifying objects or regions of interest. To validate the proposed method, extensive testing was conducted using the widely available datasets for pneumonia segmentation. A comparative analysis with established techniques revealed that our approach consistently delivered superior results in terms of both accuracy and robustness, underscoring its potential significance in medical image analysis.
The remainder of this article is organized as follows. Section 2 describes the jellyfish search optimizer (JSO). Section 3 discusses the segmentation procedure under the cross-entropy perspective. Section 4 develops the proposed energy curve as an objective function. Section 5 explains the proposed method. Section 6 presents the experimental results. Finally, Section 7 discusses the conclusions.

2. Jellyfish Search Optimizer (JSO)

The JSO is a novel metaheuristic algorithm inspired by the behavior of jellyfish in the ocean. It was developed by Jui-Sheng Chou and Dinh-Nhat Truong [18,20]. The algorithm simulates the search behavior of jellyfish: following the ocean current, moving inside a jellyfish swarm (active and passive motions), switching among these movements through a time control mechanism, and converging into jellyfish blooms. The algorithm has only two control parameters: the population size and the number of iterations. Therefore, the JSO is very simple to use and is potentially an excellent metaheuristic algorithm for solving optimization problems.
The jellyfish population of n elements can be modeled using the following expression:
$X_i(k+1) = 4 P_0 \left(1 - X_i(k)\right)$  (1)
where $i \in \{1, \ldots, n\}$ and $P_0$ is a factor with values within $[0,1]$. The time control function $CF(k)$ is evaluated following the procedure outlined in Equation (2), and it undergoes a temporal variation, ranging from 0 to 1:
$CF(k) = \left| \left(1 - \frac{k}{k_{max}}\right) \cdot \left(2 \cdot rand(0,1) - 1\right) \right|$  (2)
where $k_{max}$ represents the maximum number of iterations and $rand(0,1)$ represents a uniformly distributed random number in $[0,1]$.
When $CF(k)$ exceeds a fixed value $C_0$ (normally 0.5), Equation (3) is used to calculate the new position of each jellyfish:
$X_i(k+1) = X_i(k) + R \cdot \left(X^{*} - 3 \cdot R \cdot \mu\right)$  (3)
where $X^{*}$ represents the current best location found by the complete population, $\mu$ corresponds to the average position of the population, and $R$ is a uniformly distributed random number in $[0,1]$.
When $CF(k)$ is lower than the $C_0$ threshold, the update of each jellyfish's position depends on its movement within the swarm, as described in Equations (4) and (5).
$X_i(k+1) = X_i(k) + 0.1 \cdot R \cdot \left(U_b - L_b\right)$  (4)
$X_i(k+1) = \begin{cases} X_i(k) + R \cdot \left(X_j(k) - X_i(k)\right) & \text{if } f\left(X_i(k)\right) \ge f\left(X_j(k)\right) \\ X_i(k) + R \cdot \left(X_i(k) - X_j(k)\right) & \text{if } f\left(X_i(k)\right) < f\left(X_j(k)\right) \end{cases}$  (5)
where $U_b$ and $L_b$ are the upper and lower bounds of the decision variables, respectively. $X_j$ symbolizes a randomly selected element of the population such that $j \neq i$.
If an element moves beyond the boundaries of the search zone, it re-enters the search space from the opposite boundary, as described by Equation (6).
In contrast to conventional classical metaheuristic techniques, the JSO stands out because of its distinctive attributes, particularly its exceptional global exploration capability and robustness. Unlike other methods that may struggle to explore diverse regions of the solution space effectively, the JSO excels in its ability to comprehensively search for optimal solutions across a wide range of possibilities. Its robustness ensures that it can adapt and perform consistently in various problem domains and under different conditions.
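To make the update rules above concrete, the following sketch implements one JSO iteration in Python under our reading of Equations (2)–(5). The switch between passive and active motion (rand > 1 − CF) and the greedy replacement follow the original JSO publication rather than the text above, the boundary rule mirrors the "opposite boundary" behavior attributed to Equation (6), and names such as jso_step are illustrative.

```python
import numpy as np

def jso_step(X, f_vals, f, k, k_max, lb, ub, rng, C0=0.5):
    """One illustrative JSO iteration for a minimization objective f."""
    n, d = X.shape
    best = X[np.argmin(f_vals)].copy()      # X*: best location of the population
    mu = X.mean(axis=0)                     # mean position of the swarm
    for i in range(n):
        cf = abs((1.0 - k / k_max) * (2.0 * rng.random() - 1.0))   # Eq. (2)
        R = rng.random(d)
        if cf >= C0:
            # Ocean-current movement, Eq. (3)
            X_new = X[i] + R * (best - 3.0 * R * mu)
        elif rng.random() > (1.0 - cf):
            # Passive motion inside the swarm, Eq. (4)
            X_new = X[i] + 0.1 * R * (ub - lb)
        else:
            # Active motion toward/away from a random jellyfish j != i, Eq. (5)
            j = int(rng.integers(n - 1))
            j = j + 1 if j >= i else j
            if f_vals[i] >= f_vals[j]:
                X_new = X[i] + R * (X[j] - X[i])
            else:
                X_new = X[i] + R * (X[i] - X[j])
        # Boundary handling: re-enter from the opposite side (our reading of Eq. (6))
        X_new = np.where(X_new > ub, lb + (X_new - ub), X_new)
        X_new = np.where(X_new < lb, ub + (X_new - lb), X_new)
        # Keep the new position only if it improves the fitness (greedy update)
        f_new = f(X_new)
        if f_new < f_vals[i]:
            X[i], f_vals[i] = X_new, f_new
    return X, f_vals
```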

3. Image Segmentation with Minimum Cross-Entropy

Cross-entropy is an information-theoretical concept introduced by Kullback [25]. It quantifies the distance between two probability distributions, denoted as $J = \{j_1, j_2, \ldots, j_N\}$ and $G = \{g_1, g_2, \ldots, g_N\}$. This measure, known as cross-entropy or divergence, serves as a valuable tool for assessing the dissimilarity between the two distributions.
The threshold plays a crucial role in effectively distinguishing different regions within an image. It is determined by identifying the point that adequately separates these regions and is often computed by analyzing an image's histogram or energy curve, which represents the probability distribution and appearance of pixel intensity values. Statistical techniques can be applied to discern the differences in the shapes of these distributions. One such technique is the use of cross-entropy as a parametric criterion for distribution analysis, as introduced by Kullback [25]. Cross-entropy compares two distributions, denoted as $J$ and $G$, which are defined over the same set. It quantifies the dissimilarity between these distributions, offering a valuable tool for measuring the differences and assessing the effectiveness of various thresholding methods:
$D(J, G) = \sum_{i=1}^{N} j_i \log\left(\frac{j_i}{g_i}\right)$
In 1993, Li and Lee [26] applied the cross-entropy concept to binary image segmentation. Their approach, known as minimum cross-entropy thresholding (MCET), utilizes the image histogram, dividing it into subsets. The primary objective is to identify the optimal set of thresholds that minimizes the cross-entropy within each partition, effectively enhancing the accuracy of image segmentation. In the context of an 8-bit grayscale digital image, where pixel values range from 0 to 255 and $L$ is the maximum value of 255, a threshold value $th$ is employed to segment the image and separate pixels into distinct regions based on their grayscale intensity levels. This technique has been instrumental in improving image segmentation accuracy and has found practical applications in various fields, particularly in computer vision and image analysis.
$I_{th}(x,y) = \begin{cases} \mu(1, th), & I(x,y) < th \\ \mu(th, L+1), & I(x,y) \ge th \end{cases}$
where:
$\mu(a, b) = \sum_{i=a}^{b-1} i\, h^{Gr}(i) \Big/ \sum_{i=a}^{b-1} h^{Gr}(i)$
Under such conditions, the expression can be rewritten as the objective function:
$f_{Cross}(th) = \sum_{i=1}^{th-1} i\, h^{Gr}(i) \log\left(\frac{i}{\mu(1, th)}\right) + \sum_{i=th}^{L} i\, h^{Gr}(i) \log\left(\frac{i}{\mu(th, L+1)}\right)$
Although minimum cross-entropy (MCET) was originally designed to determine a single threshold value for partitioning image histograms into two classes, many real-world scenarios demand more nuanced segmentation involving multiple classes. To address this, the MCET problem can be transformed into a multilevel formulation. This adaptation allows for the identification of a set of threshold values that can efficiently segment images into multiple regions, accommodating the complexity and diversity of scenes often encountered in image analysis and computer vision applications. This extension of MCET enhances its applicability and utility in tackling more intricate image segmentation challenges.
$f_{Cross}(th) = \sum_{i=1}^{L} i\, h^{Gr}(i) \log(i) - \sum_{i=1}^{th-1} i\, h^{Gr}(i) \log\left(\mu(1, th)\right) - \sum_{i=th}^{L} i\, h^{Gr}(i) \log\left(\mu(th, L+1)\right)$
The multilevel formulation of the minimum cross-entropy considers a set of $k$ thresholds in vector form, $\mathbf{th} = [th_1, th_2, \ldots, th_k]$:
$f_{Cross}(\mathbf{th}) = \sum_{i=1}^{L} i\, h^{Gr}(i) \log(i) - \sum_{q=1}^{k} H_q$
where each $H_q$ is the entropy contribution of the class defined by consecutive thresholds:
$H_1 = \sum_{i=1}^{th_1 - 1} i\, h^{Gr}(i) \log\left(\mu(1, th_1)\right)$
$H_q = \sum_{i=th_{q-1}}^{th_q - 1} i\, h^{Gr}(i) \log\left(\mu(th_{q-1}, th_q)\right), \quad 1 < q < k$
$H_k = \sum_{i=th_k}^{L} i\, h^{Gr}(i) \log\left(\mu(th_k, L+1)\right)$
Among segmentation techniques, thresholding is one of the simplest, most robust, and most accurate approaches [27,28].
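As a concrete illustration of the multilevel formulation above, the following sketch evaluates the multilevel minimum cross-entropy of a gray-level histogram (or, later, an energy curve) for a given threshold vector. The helper names class_mean and multilevel_mce are hypothetical, and the guards against empty classes are an implementation detail not discussed in the text.

```python
import numpy as np

def class_mean(hist, a, b):
    """Histogram-weighted class mean mu(a, b) over intensities a .. b-1."""
    i = np.arange(a, b)
    w = hist[i]
    return np.sum(i * w) / np.sum(w) if np.sum(w) > 0 else 1.0  # guard empty classes

def multilevel_mce(hist, thresholds, L=255):
    """Multilevel minimum cross-entropy objective for a threshold vector.

    hist       : gray-level histogram (or energy curve), numpy array indexed 0..L
    thresholds : threshold values th_1, ..., th_k with 1 < th_q <= L
    """
    edges = [1] + sorted(int(t) for t in thresholds) + [L + 1]
    i = np.arange(1, L + 1)
    constant = np.sum(i * hist[i] * np.log(i))       # first term, independent of th
    entropy_sum = 0.0
    for q in range(len(edges) - 1):
        a, b = edges[q], edges[q + 1]
        if b > a:                                    # skip empty classes
            j = np.arange(a, b)
            entropy_sum += np.sum(j * hist[j] * np.log(class_mean(hist, a, b)))
    return float(constant - entropy_sum)
```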

4. Energy Curve

The introduction of the energy curve in thresholding methods, as described in [29,30], represents a significant advancement by incorporating spatial information along with pixel intensity. Unlike histogram-based segmentation, energy curves consider both the intensity and spatial characteristics of pixels, resulting in smoother curves that faithfully preserve the valleys and peaks. The goal of the image thresholding process is to identify the threshold values corresponding to the valleys in the energy curve. Each valley exists between two adjacent modes, and each mode represents an object within the image. In the context of a digital image, represented as a matrix $I = \{I_{i,j},\ 1 \le i \le m,\ 1 \le j \le n\}$, $I_{i,j}$ denotes the gray level of a pixel at position $(i, j)$, and $L$ represents the maximum gray intensity value in the image. To define a neighborhood $N$ of order $d$ at a given position $(i, j)$, the notation $N_{ij}^{d} = \{(i+u, j+v) : (u,v) \in N^{d}\}$ is used, where the value of $d$ determines the configuration of the neighborhood system [29]. Table 1 shows the configuration of the pixels used in the neighborhood.
The energy of the image at a gray intensity value $l$ ($0 \le l \le L$) is calculated by generating, for every intensity value, a two-dimensional matrix $B_l = \{b_{i,j},\ 1 \le i \le m,\ 1 \le j \le n\}$, where $b_{i,j} = 1$ if the intensity at the current position is greater than the gray level under analysis ($I_{i,j} > l$) and $b_{i,j} = -1$ otherwise. The energy associated with the gray level $l$ is then:
$E_l = -\sum_{i=1}^{m}\sum_{j=1}^{n} \sum_{pq \in N_{ij}^{2}} b_{i,j} \cdot b_{p,q} + \sum_{i=1}^{m}\sum_{j=1}^{n} \sum_{pq \in N_{ij}^{2}} c_{i,j} \cdot c_{p,q}$  (15)
In Equation (15), the right-hand side comprises a constant term, built from the constant matrix $C = \{c_{i,j}\}$, designed to ensure a non-negative energy value, $E_l \ge 0$. This term guarantees that, for a given image, the energy will be zero if all elements of the binary matrix $B_l$ are equal (all 1 or all −1). This approach allows for the calculation of the energy associated with every intensity value in the image, generating a curve that considers the spatial contextual information of the image [31]. By incorporating both the intensity and spatial data, this energy curve provides a more comprehensive representation for image analysis and threshold determination.
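A possible implementation of the energy curve is sketched below, assuming the second-order (8-connected) neighborhood of Table 1 and a constant matrix with $c_{i,j} = 1$ for every pixel, which makes the constant term equal to the total number of neighbor pairs; the function name energy_curve is illustrative.

```python
import numpy as np

def energy_curve(image, L=255):
    """Energy E_l for every gray level l = 0..L, per Equation (15),
    using the order-2 (8-connected) neighborhood."""
    m, n = image.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]

    def neighbor_sum(B):
        """For each pixel, the sum of its 8 neighbors (zero outside the image)."""
        P = np.pad(B, 1, mode="constant")
        S = np.zeros_like(B, dtype=float)
        for du, dv in offsets:
            S += P[1 + du:1 + du + m, 1 + dv:1 + dv + n]
        return S

    # Constant term: energy of an all-ones matrix (c_ij = 1 everywhere)
    ones = np.ones((m, n))
    const = np.sum(ones * neighbor_sum(ones))

    E = np.zeros(L + 1)
    for l in range(L + 1):
        B = np.where(image > l, 1.0, -1.0)   # b_ij = 1 if I_ij > l, else -1
        E[l] = -np.sum(B * neighbor_sum(B)) + const
    return E
```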

5. Proposed Method

Our proposed approach focuses on the segmentation of X-ray digital images using two distinct datasets consisting of five images each, one representing healthy lungs and the other comprising X-ray digital images from lungs affected by pneumonia. In the subsequent subsections, we delve into the essential components and intricacies of our algorithm and elucidate its design and functionality in detail. A graphical description of our approach can be seen in Figure 1.
(a) 
Problem formulation
The method under consideration utilizes a multilevel thresholding technique to perform image segmentation on X-ray digital images. This method entails an analysis of pixel intensities within an image, ultimately resulting in the identification of a limited set of threshold values positioned along the energy curve. To accomplish this, we consider the capabilities of the JSO algorithm to produce a variety of potential segmentation configurations. These configurations are subsequently assessed using the minimum cross-entropy (MCE) criterion to determine their effectiveness. Through a collaborative and iterative process, MCE and JSO work in tandem to pinpoint the optimal threshold values that deliver the most precise segmentation of the digital image. Essentially, the problem of minimum cross-entropy thresholding is formulated as an optimization problem, aiming to minimize the following expression:
$\min_{\mathbf{th} \in X} f_{Cross}(\mathbf{th})$  (16)
where Equation (11), used as the objective function, corresponds to the MCE formulation. The set of restrictions for the feasible space is given by the possible intensity values of a pixel encoded in an 8-bit (0–255) representation, $X = \{\mathbf{th} \in \mathbb{R}^{n} \mid 0 \le th_j \le 255,\ j = 1, 2, \ldots, n\}$.
(b) 
Encoding
In our methodology, we represent the threshold values by encoding them within a designated format. Each candidate solution within the population corresponds to a distinct set of these threshold values, which, in turn, dictate the division of pixel intensities in the image into various segments or categories. This encoding procedure guarantees that the thresholds are presented in a manner that allows for manipulation and optimization by the JSO algorithm.
Within the field of stochastic optimization algorithms, our approach initiates an optimization improvement process by creating an initial population of candidate solutions in a random manner. As the optimization progresses, the JSO method employs search strategies that explore positions likely to contain near-optimal or best-obtained solutions. The solutions within the population are continually updated based on information from the best solution identified during the optimization process. Once the JSO search process satisfies the predetermined termination criterion, it concludes, completing the iterative search for the optimal threshold values.
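As an illustration of this encoding, the sketch below builds a random initial population in which each candidate solution is a vector of threshold values drawn from the admissible intensity range. The function name init_population and the uniform random initialization are illustrative choices (the logistic-map model of Equation (1) could be substituted).

```python
import numpy as np

def init_population(pop_size, n_thresholds, lb=0, ub=255, seed=None):
    """Each row encodes one candidate solution: a vector of n_thresholds
    gray-level values drawn uniformly in [lb, ub]."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(pop_size, n_thresholds))
    return np.sort(pop, axis=1)   # keep thresholds ordered th_1 <= ... <= th_k

# Example: 30 jellyfish, each encoding 5 thresholds
population = init_population(pop_size=30, n_thresholds=5, seed=0)
```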
(c) 
MCE-JSO implementation
The MCE-JSO algorithm was effectively designed to address thresholding challenges in X-ray lung digital images. The workflow is summarized as follows. Initially, the target image I ( x ,   y ) is loaded into memory, and its grayscale energy curve is computed. The iterative process of our approach begins with the introduction of a randomly generated population, and the solutions are progressively improved through JSO operators and a fitness function evaluation based on the MCE criteria. The procedure continues until a predefined stopping criterion is satisfied, which often involves a fixed number of generations. Finally, the optimal set of thresholds, t h b e s t , is employed to generate the segmented image, effectively delineating regions of interest within the X-ray image. A graphical representation of the MCE-JSO approach is shown in Figure 1.
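Putting the pieces together, the following sketch outlines the MCE-JSO workflow described above, reusing the hypothetical helpers energy_curve, multilevel_mce, init_population, and jso_step from the earlier sketches. The population size, iteration budget, and the clipping of thresholds to [1, 255] are illustrative choices, not values prescribed by the text.

```python
import numpy as np

def mce_jso_thresholds(image, n_thresholds, pop_size=30, max_iter=200, seed=0):
    """Illustrative MCE-JSO workflow: build the energy curve of the image and
    minimize the multilevel cross-entropy objective over threshold vectors."""
    rng = np.random.default_rng(seed)
    curve = energy_curve(image)                       # spatially aware "histogram"
    objective = lambda th: multilevel_mce(curve, np.clip(np.sort(th), 1, 255))

    lb = np.full(n_thresholds, 1.0)
    ub = np.full(n_thresholds, 255.0)
    X = init_population(pop_size, n_thresholds, lb=1, ub=255, seed=seed)
    f_vals = np.array([objective(x) for x in X])

    for k in range(1, max_iter + 1):                  # stopping criterion: fixed budget
        X, f_vals = jso_step(X, f_vals, objective, k, max_iter, lb, ub, rng)

    return np.sort(X[np.argmin(f_vals)]).astype(int)  # th_best
```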
(d) 
Segmented image
Following the identification of an optimal set of threshold values by the JSO for a given image, I ( x , y ) , the process advances to performing multilevel thresholding, resulting in the construction of the segmented image, I t h ( x , y ) . The general rule governing this procedure is outlined in Equation (17). The entire methodology of our approach is effectively encapsulated in a clear and concise flowchart, as illustrated in Figure 2, providing a visual representation of the workflow of the algorithm and the sequential steps involved in the image-segmentation process.
$I_{th}(x,y) = \begin{cases} I(x,y) & \text{if } I(x,y) \le th_1 \\ th_{j-1} & \text{if } th_{j-1} < I(x,y) \le th_j, \quad j = 2, 3, \ldots, n \\ I(x,y) & \text{if } I(x,y) > th_n \end{cases}$  (17)
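A minimal sketch of this thresholding rule, assuming sorted threshold values and the reading of Equation (17) given above (the function name apply_thresholds is illustrative):

```python
import numpy as np

def apply_thresholds(image, thresholds):
    """Multilevel thresholding rule of Equation (17): pixels at or below th_1
    and above th_n keep their original value, while a pixel falling in
    (th_{j-1}, th_j] is replaced by th_{j-1}."""
    th = np.sort(np.asarray(thresholds).astype(image.dtype))
    seg = image.copy()
    for j in range(1, len(th)):
        mask = (image > th[j - 1]) & (image <= th[j])
        seg[mask] = th[j - 1]
    return seg
```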

6. Experiments

In this section, we discuss the experiments used to assess the efficacy of our approach in segmenting pneumonia on lung radiographs. These experiments encompassed a comprehensive analysis of the mean fitness and standard deviation results, utilizing an experimental dataset comprising lung radiographs. The dataset consisted of five images representing healthy lungs and another five featuring pneumonia-affected lungs. The experimentation process involved two distinct phases: the initial implementation of algorithms based on histograms extracted from each digital image and, subsequently, the utilization of energy curves derived from each lung image. These experiments aimed to provide a thorough evaluation of the performance of the proposed approach in accurately segmenting pneumonia-affected regions within the radiographs.
We conducted a comparative analysis involving six algorithms to evaluate the effectiveness of the proposed method. These experiments were performed using a dataset comprising ten digital lung images. X-ray images of healthy lungs and lungs with pneumonia are shown in Figure 2 and Figure 3, respectively.
By systematically comparing the outcomes of our approach with those generated by existing algorithms, we aimed to provide a comprehensive assessment of the performance of the proposed method and its potential advantages in the context of segmenting lung images. The algorithms considered in the comparison were as follows:
  • Levy flight distribution (LFD) [32];
  • Particle swarm optimization (PSO) [33,34,35];
  • Arithmetic optimization algorithm (AOA) [36,37];
  • Sunflower optimization (SFO) [38,39];
  • Jellyfish search optimizer (JSO) [40,41];
  • Differential evolution (DE) [42,43].
In our experimental setup, all algorithms employed were configured using specific parameters, as detailed in Table 2. These parameters were selected from the original references of the respective algorithms, in which their best performance was established. Each algorithm utilized these parameter settings to achieve optimal performance, ensuring a standardized and fair comparison of their effectiveness in the context of image segmentation. This methodology allowed us to assess the algorithms under conditions that aligned with their historically successful configurations, facilitating a meaningful evaluation of their performance against one another.
In our experimental setup, all the optimization algorithms utilized minimum cross-entropy (MCE) as the designated fitness function following the approach outlined in [26]. These metaheuristic (stochastic) algorithms employed the minimum cross-entropy criterion as their objective function while maintaining a consistent solution representation. The assessment of these methodologies relied on a range of image quality metrics including the peak signal-to-noise ratio (PSNR) [44], structural similarity index measure (SSIM) [45], and feature similarity index (FSIM) [46]. The experiments were conducted on MATLAB R2018a running on a 2.4 GHz Intel Core i5 CPU with 12 GB of RAM. For the experiments, we considered different threshold levels ($n_t = 2, 3, 4, 5, 8, 16$, and $32$) using the minimum cross-entropy function [26] to segment digital X-ray images. These specific threshold values were chosen because we observed a substantial impact on algorithm performance as the number of thresholds increased, particularly for the 16 and 32 thresholds. This comprehensive evaluation aimed to provide valuable insights into the comparative performance of the algorithms across various threshold scenarios.
The experimental section of our study is divided into three distinct components, each serving a specific purpose. In the first part (Section 6.1), we present the outcomes of our proposed approach and compare its performance with those of six alternative methods. Additionally, we conducted a detailed analysis of the pivotal role of the energy curve in enhancing the performance of our algorithm. To achieve this, we draw comparisons between the results obtained using the energy curve and those obtained using a simpler histogram-based approach. The second part (Section 6.2) of our experimental analysis shifts the focus towards an in-depth assessment of the segmentation quality. Here, we employed a range of metric indices tailored to evaluate the quality of the segmented images. These metrics provide valuable insights into the precision and effectiveness of the proposed approach, offering a comprehensive evaluation of its image segmentation capabilities. Finally, the third part of our experimental section (Section 6.3) is dedicated to rigorous statistical analyses. This crucial component was designed to establish the validity and reliability of the experimental results. By subjecting our findings to statistical scrutiny, we aim to demonstrate that the observed outcomes were not the result of experimental error or chance but are indeed indicative of the inherent mechanisms and capabilities of the algorithms under examination. In essence, this multifaceted experimental approach ensures a thorough and well-rounded assessment of our proposed method for pneumonia segmentation in lung radiographs.

6.1. Results from Lung Radiographs

In this section, we evaluate the efficiency of the proposed approach using the MCE-jellyfish search algorithm. Our experiment involved a reference group of ten lung radiographs, as detailed in the dataset [47], consisting of five images from healthy lungs and five from lungs affected by pneumonia. To rigorously test the effectiveness of our approach, we employed digital images extracted from a chest X-ray images (pneumonia) dataset accessible on Kaggle, an online community of data scientists and machine learning professionals [47]. To assess and compare the performance of our method against five other optimizer algorithms, we performed digital image segmentation at seven different threshold levels: L t h = 2, 3, 4, 5, 8, 16, and 32. This comprehensive analysis allowed us to gauge the robustness and accuracy of our proposed approach across a range of thresholding scenarios, offering valuable insights into its capabilities in the context of lung image segmentation.
The results obtained from our proposed algorithm in conjunction with the outcomes from the other six methods are presented in Table 3. The results were systematically organized considering the various thresholds employed in the segmentation process. These results unequivocally highlight the superior performance of the proposed algorithm in terms of both accuracy and robustness. In terms of accuracy, our approach consistently exhibited the lowest values of the average objective function across all threshold scenarios, surpassing the other six methods. Notably, Table 3 underscores the exceptional robustness of our algorithm, as evidenced by its minimal standard deviation values compared with the alternative methods. This lower standard deviation indicates that our algorithm consistently yielded highly consistent results, with minimal variability. By contrast, the other algorithms exhibited a broader range of values, indicating their inability to consistently deliver the same solution. These findings confirm that our algorithm consistently produced reliable results across all the threshold configurations considered in the experiments, underlining its robust and dependable performance. The competitive results achieved by the proposed approach can be attributed to two distinct and complementary elements. First, the outstanding search capabilities of the JSO algorithm play a pivotal role. This algorithm, with its capacity for systematic exploration and optimization within the solution space, effectively navigates the complex terrain of image segmentation. Second, our approach leverages an innovative energy curve, which is a crucial component. This energy curve, designed to precisely measure the elements, significantly contributes to the quality of segmentation. It not only considers pixel intensities but also incorporates spatial contextual information from the image. This comprehensive approach ensures that the elements critical for accurate segmentation are properly evaluated and utilized, ultimately leading to the robust and competitive results observed in our experiments.
Figure 4 provides a visual representation of the segmentation outcomes achieved by the proposed approach, focusing on thresholds 2, 8, and 16 and featuring two representative images, namely, images 01 and 09. This visual showcase effectively demonstrates the performance of the algorithm in segmenting specific images. After a close examination of the images, it was evident that the proposed approach enhanced the crucial regions within the X-ray images. The segmentation results exhibited a detailed and refined delineation of the key areas, which is of paramount importance in assisting physicians with disease diagnosis. This visual evidence underscores the approach’s ability to provide precise and meaningful segmentation, contributing to more accurate medical assessments, and ultimately improving patient care.
To gain a comprehensive understanding of how each algorithm performs, Figure 5 illustrates its behavior across a range of different thresholding levels. In this visualization, the performance of each algorithm was evaluated in terms of its objective function, and the results were aggregated by averaging across all the images considered in the experiments. The striking observation from this figure is the consistent superiority of the proposed algorithm, which consistently achieved the best results, as indicated by the lowest objective function values. Importantly, Figure 5 indicates that the most substantial disparities among the various methods manifested when they were tasked with segmenting images using a smaller number of thresholds, specifically, 3, 4, 5, and 8. These threshold levels are pivotal for numerous segmentation applications and clinical applications. The exceptional performance of the proposed algorithm in these critical scenarios underscores its effectiveness and reliability in accurately identifying regions of interest within the images. This visual representation provides a clear and compelling depiction of the algorithmic performance across the different thresholding levels, reinforcing the outstanding capabilities of the proposed approach in image segmentation tasks.
To investigate the impact of the energy curve on the segmentation outcomes, we conducted an additional experiment in which the energy curve was replaced by a conventional histogram. The aim of this experiment was to demonstrate the superior performance achieved by using the energy curve. The results of this experiment are presented in Table 4, where it is evident that the values obtained by all the algorithms were generally higher than those reported in Table 3. This shift indicates that the utilization of the energy curve led to solutions that penalized less important elements, consequently highlighting the most critical areas within the X-ray image. As such, it can be deduced that the incorporation of the energy curve is a pivotal factor in achieving competitive results and, more importantly, in assisting physicians in the accurate detection of pneumonia. The ability of the energy curve to consider both the intensity and spatial information elevates its significance in image segmentation tasks, as evidenced by the contrasting results between the two experiments.
To compare the impact of utilizing the energy curve against the conventional histogram in the objective function, Figure 6 provides a comprehensive comparison. This figure highlights the outcomes of the cost function minimization for the proposed algorithm (JSO method), with the results averaged across the 10 images employed in the experiment. The key observation from this illustration is the discernible difference in the performance between the two approaches.
As depicted in the image, the use of the energy curve consistently yielded superior results compared to the use of the histogram, particularly at thresholds 2, 3, 4, and 5. These specific threshold levels are critical in image segmentation tasks because they often correspond to essential regions of interest. The clear contrast in performance underscores the efficacy of incorporating the energy curve, as it led to more accurate and meaningful segmentation results. This visual representation provides compelling evidence of the advantages offered by the energy curve in enhancing the performance of the algorithm, particularly in scenarios in which precise segmentation is paramount.

6.2. Evaluating Segmentation Quality

The assessment of image segmentation results requires an evaluation of various quality indices to provide an objective measure of performance. In this study, we employed a set of well-established metrics to evaluate the quality of segmented images. These metrics included the peak signal-to-noise ratio (PSNR), structural similarity index method (SSIM), and feature similarity index method (FSIM). These indexes were chosen for their ability to objectively and quantitatively assess the quality of the segmentation outcomes. Using these metrics, we could effectively gauge the accuracy and fidelity of the segmented images, providing a robust and objective basis for evaluating the performance of segmentation algorithms.
(A)
Peak Signal-To-Noise Ratio (PSNR)
The peak signal-to-noise ratio (PSNR) [44] is a widely used metric for evaluating the quality of segmented images. It quantifies the level of noise or distortion present in a segmented image by comparing it to the original unsegmented image. A higher PSNR value indicates a lower level of distortion or error, which implies that the segmented image closely resembles the original image. In essence, the PSNR provides a quantitative measure of how faithfully the segmentation process preserves the details and information of the image. This metric is particularly valuable for assessing the accuracy and fidelity of segmentation results, making it an essential tool for image analysis and processing tasks. The PSNR is computed as follows:
$PSNR = 20 \log_{10}\left(\frac{255}{RMSE}\right)$
$RMSE = \sqrt{\dfrac{\sum_{i=1}^{ro}\sum_{j=1}^{co}\left(I^{Gr}(i,j) - I_{th}(i,j)\right)^{2}}{ro \times co}}$
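For reference, a direct implementation of these two formulas for 8-bit images might look as follows; the guard for identical images is an added convenience, not part of the original formulation.

```python
import numpy as np

def psnr(original, segmented):
    """PSNR between the original grayscale image and its segmented version,
    computed from the RMSE of the two formulas above (8-bit images)."""
    original = original.astype(np.float64)
    segmented = segmented.astype(np.float64)
    rmse = np.sqrt(np.mean((original - segmented) ** 2))
    if rmse == 0:
        return np.inf                      # identical images
    return 20.0 * np.log10(255.0 / rmse)
```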
Figure 7 provides a visual representation of the peak signal-to-noise ratio (PSNR) results, offering valuable insights into the quality of the segmented images. These PSNR values were obtained by averaging the index over the ten images utilized in the experiments, resulting in a comprehensive assessment of the performance. This figure reveals several important observations. First, when considering the two thresholds, the results of all algorithms appeared similar, indicating comparable performance in this specific scenario. However, the most significant disparities became evident when examining the outcomes for the 8, 16, and 32 thresholds. In these cases, the proposed approach consistently outperformed its competitors, exhibiting markedly higher PSNR values. This observation underscores the effectiveness of our approach in preserving image quality and minimizing distortion during the segmentation process. The superior PSNR values for the higher threshold configurations reaffirm the capability of the algorithm to deliver accurate and high-quality segmented images, which is a critical factor in medical image analysis and diagnosis. From these results, it can be seen that, for the 32-level case, our method obtained a PSNR value that was, on average, approximately 15% better than those of the competing methods.
(B)
Structural Similarity Index Method (SSIM)
The structural similarity index method (SSIM) [45] is a robust metric used to assess the quality of segmented images. It measures the structural similarity between a segmented image and a reference image, often the original unsegmented image. The SSIM considers various factors, including luminance, contrast, and structure, to evaluate how well the segmented image preserves the important structural information present in the reference image. A higher SSIM score indicates a closer resemblance between the segmented and reference images, implying a better segmentation quality. The SSIM provides a holistic assessment of both local and global structural similarities, making it a valuable tool for objectively evaluating the accuracy and fidelity of segmented images. The SSIM is computed as follows:
$SSIM(I^{Gr}, I_{th}) = \dfrac{\left(2\mu_{I^{Gr}}\mu_{I_{th}} + C_1\right)\left(2\sigma_{I^{Gr}I_{th}} + C_2\right)}{\left(\mu_{I^{Gr}}^{2} + \mu_{I_{th}}^{2} + C_1\right)\left(\sigma_{I^{Gr}}^{2} + \sigma_{I_{th}}^{2} + C_2\right)}$
$\sigma_{I^{Gr}I_{th}} = \dfrac{1}{N-1}\sum_{i=1}^{N}\left(I^{Gr}_{i} - \mu_{I^{Gr}}\right)\left(I_{th,i} - \mu_{I_{th}}\right)$
where $\mu_{I^{Gr}}$ and $\mu_{I_{th}}$ are the mean values of the original and the segmented image, respectively, $\sigma_{I^{Gr}}$ and $\sigma_{I_{th}}$ are the corresponding standard deviations, and $\sigma_{I^{Gr}I_{th}}$ is their covariance. $C_1$ and $C_2$ are constants used to avoid instability when $\mu_{I^{Gr}}^{2} + \mu_{I_{th}}^{2} \approx 0$; in this experiment, both values were set to $C_1 = C_2 = 0.065$.
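A sketch of this SSIM computation is given below, using whole-image means, standard deviations, and covariance together with the constants $C_1 = C_2 = 0.065$ reported above; computing a single global score over the full image, rather than over sliding windows, is our reading of the formulation, and the function name ssim_global is illustrative.

```python
import numpy as np

def ssim_global(original, segmented, C1=0.065, C2=0.065):
    """Global SSIM from the means, standard deviations, and covariance
    of the two whole images (no sliding window)."""
    x = original.astype(np.float64).ravel()
    y = segmented.astype(np.float64).ravel()
    mu_x, mu_y = x.mean(), y.mean()
    sigma_x, sigma_y = x.std(ddof=1), y.std(ddof=1)
    sigma_xy = np.sum((x - mu_x) * (y - mu_y)) / (x.size - 1)   # covariance
    return ((2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x ** 2 + sigma_y ** 2 + C2))
```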
Figure 8 visually presents the results of the structural similarity index method (SSIM) index, providing an insightful assessment of the quality of the segmented images. These SSIM values represent the average index across the ten images involved in the experiments, offering a comprehensive evaluation of the performance. This figure presents several noteworthy observations.
Initially, when considering scenarios with only two thresholds, the results of all the algorithms appeared uniform, indicating similar performance levels in this particular context. However, significant disparities emerged when evaluating the outcomes for the 8, 16, and 32 thresholds. In these cases, the proposed approach consistently outperformed its competitors and demonstrated substantially higher SSIM values. This underscores the proficiency of the algorithm in preserving both the local and global structural similarities between the segmented and reference images. The superior SSIM values for the higher-threshold configurations reaffirm the effectiveness of the proposed approach in delivering accurate and high-quality segmented images, which is crucial for applications in medical image analysis and diagnosis. From these results, it can be seen that, for the 32-level case, our method obtained an SSIM value that was, on average, approximately 8% better than those of the competing methods.
(C)
Feature Similarity Index Method (FSIM)
The feature similarity index method (FSIM) [46] is employed to evaluate the quality of segmented images. The FSIM focuses on assessing the quality of structural and textural information in a segmented image by comparing it to a reference image, typically the original unsegmented image. Unlike traditional metrics that consider luminance and structural factors, the FSIM places strong emphasis on capturing the perceptual quality of a segmented image. It analyzes features, such as edges, textures, and patterns, to determine how faithfully these essential aspects are preserved during the segmentation process. A higher FSIM score signifies closer similarity between the segmented and reference images, reflecting better segmentation quality in terms of preserving critical visual features and textures. The FSIM is particularly valuable for evaluating the perceptual quality of segmented images, making it a crucial tool for various image analysis applications. The FSIM index is computed as follows:
$FSIM = \dfrac{\sum_{w \in \Omega} S_L(w)\, PC_m(w)}{\sum_{w \in \Omega} PC_m(w)}$
where Ω denotes the domain of the image.
$S_L(w) = S_{PC}(w)\, S_G(w)$
$S_G(w) = \dfrac{2 G_1(w) G_2(w) + T_2}{G_1^{2}(w) + G_2^{2}(w) + T_2}$
$G = \sqrt{G_x^{2} + G_y^{2}}$
$PC(w) = \dfrac{E(w)}{\varepsilon + \sum_n A_n(w)}$
Figure 9 illustrates the outcomes of the feature similarity index method (FSIM) applied to segmented lung X-ray images generated by each of the six algorithms. This presentation offers valuable insights into the perceptual quality of the segmented images. In the case of the FSIM test, there was a notable consistency in the results produced by each optimization algorithm, particularly when considering the scenarios with lower threshold counts. However, a striking contrast emerged when the number of thresholds increased. In these instances, the proposed algorithm consistently exhibited superior results, with FSIM scores approaching the maximum value of 1. This observation emphasizes the effectiveness of the algorithm in preserving essential visual features, textures, and patterns within segmented images, even when faced with the challenges posed by higher-threshold configurations. The close proximity of the results of the proposed algorithm to the maximum FSIM value underscores its capacity to deliver perceptually high-quality segmented images, which is of paramount importance in medical image analysis and clinical diagnosis. From these results, it can be seen that, for the 32-level case, our method obtained an FSIM value that was, on average, approximately 6% better than those of the competing methods.

6.3. Statistical Analysis

The primary objective of the statistical analysis in this study was to establish that the superior results achieved by the proposed algorithm in comparison to the other methods were attributable to the algorithm’s inherent capabilities rather than random chance or experimental variability. To address this objective rigorously, we employed the Kruskal–Wallis test [48].
The Kruskal–Wallis test is a non-parametric statistical test used to assess whether there are significant differences between three or more independent groups. It is often employed when the data do not meet the assumptions of the normal distribution or homogeneity of variances required for parametric tests such as analysis of variance (ANOVA) [49].
In our comprehensive analysis, data were collected from a series of experiments, each comprising thirty independent runs for each of the six algorithms under evaluation. These experiments encompassed a wide range of threshold values, including 2, 3, 4, 5, 8, 16, and 32 thresholds, ensuring a thorough assessment of the algorithm performance across the various configurations. To determine the statistical significance of the observed results, we established a predefined significance level denoted as α. This value played a crucial role in the hypothesis testing process. The hypothesis under consideration was the null hypothesis ( H 0 ), which posits that there were no significant differences among the algorithms, implying that the observed variations were merely due to chance or randomness. To ascertain whether we can reject this null hypothesis, we conducted a Kruskal–Wallis test and compared its output with α . If the test yielded a p -value lower than α   ( p < α ) , it signified that the algorithms exhibited statistically significant differences in their performances. This outcome empowered us to conclude that the variations in the results were not mere chance occurrences but rather reflect genuine distinctions in the capabilities of the evaluated algorithms. This rigorous statistical approach ensured the validity and reliability of the comparative analyses.
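As an illustration of this testing procedure, the snippet below applies the Kruskal–Wallis test to per-algorithm samples of fitness values. The data here are random placeholders standing in for the thirty recorded runs per algorithm, and the significance level of 0.05 is an assumed value, since the text denotes it only as α.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder samples standing in for the objective-function values recorded
# over the 30 independent runs of each algorithm (one array per method).
fitness_runs = {
    "JSO": rng.normal(10.0, 0.1, 30),
    "PSO": rng.normal(10.5, 0.4, 30),
    "DE":  rng.normal(10.4, 0.3, 30),
}

alpha = 0.05  # assumed significance level
h_stat, p_value = stats.kruskal(*fitness_runs.values())
print(f"H = {h_stat:.3f}, p = {p_value:.4g}")
if p_value < alpha:
    print("Reject H0: at least one algorithm performs significantly differently.")
else:
    print("Fail to reject H0: no significant difference detected.")
```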
Based on the comprehensive statistical analysis, a robust conclusion was drawn regarding the performance of the proposed method in comparison with the other algorithms. These findings unequivocally established that the superior results achieved by the proposed method were not a consequence of experimental error or random variability. Instead, they were attributed to the inherent capabilities and efficacy of the proposed algorithm.
To gain a deeper understanding of the statistical results that elucidate the performance of the proposed approach, Figure 10 provides a comprehensive visual representation through a box plot. This graphical presentation offers an insightful depiction of each algorithm’s performance, featuring central tendencies (visualized as red lines) denoting the average results and the dispersion represented by the width of the box. After close examination of the figure, a compelling pattern emerged. The proposed algorithm conspicuously stands out as the method with the lowest average value compared with the other methods. This signifies that on average, the proposed method consistently outperformed its counterparts, offering superior solutions to the segmentation problem at hand. A noteworthy observation relates to the dispersion of the results among the methods. Although the other methods exhibited a considerable spread in their outcomes, indicating substantial variability in terms of performance, the proposed method distinguished itself by consistently yielding results with minimal dispersion. In other words, the proposed algorithm consistently provided results that were closely clustered, underscoring its remarkable stability and reliability. In summary, Figure 10 provides a compelling visual testament to the exceptional performance and robustness of the proposed approach, highlighting its capacity to consistently deliver superior solutions while maintaining a high degree of result consistency, which is a crucial attribute in the realms of optimization and image segmentation.

7. Conclusions

In this study, we introduced a novel and highly effective segmentation method designed for the precise identification of pneumonia on lung radiographs. Our approach uses the JSO, a metaheuristic algorithm renowned for its exceptional global exploration capabilities and robust performance. The core of our methodology is the utilization of an energy curve as the cost function, which hinged upon the principles of cross-entropy. This energy curve plays a pivotal role in enhancing segmentation accuracy by imposing heavier penalties on misclassified pixels, thus directing the algorithm’s attention towards regions where segmentation errors are most likely to occur. The strategic application of this energy-based cross-entropy cost function is of paramount importance, as it ensures the precise delineation of objects or regions of interest within radiographs—a critical requirement for the accurate diagnosis and monitoring of pneumonia. Our approach not only demonstrates the capacities of JSO but also underscores the significance of advanced cost functions in the area of medical image segmentation.
To validate the proposed approach, a comprehensive assessment was conducted using a diverse dataset of X-ray images sourced from both healthy individuals and patients with pneumonia. In these experiments, our MCE-JSO method was subjected to a thorough comparative analysis of several alternative stochastic optimization techniques. The algorithms used in the comparison included levy flight distribution (LFD), particle swarm optimization (PSO), the arithmetic optimization algorithm (AOA), sunflower optimization (SFO), and differential evolution (DE). The evaluation process revolved around an in-depth examination of the quality of image segmentation achieved through these algorithms. To evaluate the performance objectively, we employed key image quality metrics, such as the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and feature similarity index. These metrics provided a robust quantitative assessment of the segmentation results, allowing us to discern the nuances in algorithm performance. To ensure the robustness and reliability of our findings, the results were subjected to statistical scrutiny, encompassing multiple tests and analyses. This multifaceted approach to validation not only reaffirmed the effectiveness of our proposed method but also offered a comprehensive and data-driven evaluation of its performance in the context of pneumonia detection in X-ray images.
We conducted a dedicated experiment to expand our analysis and to elucidate the pivotal role of the energy curve in the segmentation process. This experiment served as a crucial means to highlight the significance of our chosen approach compared to a more straightforward method using a simple histogram.
The exceptional performance achieved by our proposed approach can be attributed to the synergy between the two distinct yet highly complementary elements. First, the JSO algorithm takes the spotlight owing to its remarkable search capabilities. With its systematic and exhaustive exploration of the solution space, JSO effectively navigates the intricate landscape of image segmentation and optimizes the process with precision. Second, our approach strategically incorporates an innovative energy curve, which is a critical component that significantly enhances segmentation quality. Unlike traditional methods that rely solely on pixel intensities, this energy curve goes a step further by incorporating the spatial contextual information of an image. Considering these spatial nuances, the curve ensures that elements crucial for accurate segmentation are meticulously evaluated and leveraged. This comprehensive and holistic approach, encompassing both advanced search capabilities and the energy curve, synergizes to produce the robust and competitive results observed in our experiments.

Author Contributions

Software, O.Z.; Validation, O.Z.; Formal analysis, D.Z.; Investigation, E.C. and M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by SICyT-Jalisco and COECYTJAL, FODECYTJAL 2022, DISEÑO DE NUEVOS VEHÍCULOS SUSTENTABLES E INTELIGENTES EN JALISCO, grant number 10300.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. MCE-JSO flow chart of the proposed methodology.
Figure 2. Five X-rays of healthy lungs.
Figure 3. Five radiographs of lungs with pneumonia.
Figure 4. Visual results of the segmentation under the proposed method for two representative images.
Figure 5. Performance comparison among the methods considering the energy curve.
Figure 6. Comparative results of the energy curve vs. the histogram under the JSO.
Figure 7. Comparative analysis in terms of PSNR.
Figure 8. Comparative analysis in terms of SSIM.
Figure 9. Comparative analysis in terms of FSIM.
Figure 10. Statistical results of averaged values and dispersion through a box plot.
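Figures 7-9 compare the competing methods in terms of PSNR, SSIM, and FSIM. For readers who wish to reproduce this style of evaluation, the snippet below shows how the first two metrics can be computed with scikit-image; it is an illustrative example only, not the evaluation code used in the study, and FSIM is omitted because scikit-image does not provide it.

```python
# Illustrative computation of PSNR and SSIM (Figures 7 and 8) with scikit-image.
# This is an example, not the evaluation code used in the study; FSIM is omitted
# because scikit-image has no implementation of it.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_metrics(original, segmented):
    """Compare a thresholded image against the original 8-bit radiograph."""
    psnr = peak_signal_noise_ratio(original, segmented, data_range=255)
    ssim = structural_similarity(original, segmented, data_range=255)
    return psnr, ssim
```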
Table 1. The spatial representation of the neighborhood system, N².

(i-1, j-1) | (i-1, j) | (i-1, j+1)
(i, j-1)   | (i, j)   | (i, j+1)
(i+1, j-1) | (i+1, j) | (i+1, j+1)
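Table 1 lists the eight neighbors that form the second-order neighborhood N² of pixel (i, j). The sketch below shows one way an energy value E(l) can be computed for every gray level l over this neighborhood, following the context-sensitive energy-curve formulation common in the thresholding literature; it is an assumed formulation given for illustration, not the authors' code, and border pixels are handled by wrap-around for brevity.

```python
# Sketch of an energy curve E(l) over the N^2 neighborhood of Table 1, assuming
# the context-sensitive formulation common in the thresholding literature: for
# each gray level l the image is mapped to a +1/-1 field and disagreement
# between neighboring pixels is accumulated.  Borders wrap around for brevity.
import numpy as np

N2_OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
              ( 0, -1),          ( 0, 1),
              ( 1, -1), ( 1, 0), ( 1, 1)]

def energy_curve(image, levels=256):
    img = np.asarray(image, dtype=int)
    curve = np.zeros(levels)
    for l in range(levels):
        b = np.where(img > l, 1, -1)
        e = 0.0
        for di, dj in N2_OFFSETS:
            neighbor = np.roll(np.roll(b, di, axis=0), dj, axis=1)
            e -= np.sum(b * neighbor)      # -sum of b(i,j) * b(p,q) over N^2
        curve[l] = e + 8 * b.size          # constant term keeps E(l) >= 0
    return curve
```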
Table 2. Parameter settings of the algorithms.

Algorithm | Parameters | Value
Levy flight distribution (LFD) | Search agents no. | 30
 | Max iterations | 1000
Particle swarm optimization (PSO) | Social coefficient | 2
 | Cognitive coefficient | 2
 | Velocity clamp | 2
 | Maximum inertia value | 0.2
 | Minimum inertia value | 0.9
Arithmetic optimization algorithm (AOA) | Materials number | 30
 | Max iterations | 1000
 | Optimization functions | 2, 6
Sunflower optimization (SFO) | Number of sunflowers | 60
 | Number of experiments | 30
 | Pollination values | 0.05
 | Mortality rate, best values | 0.1
 | Survival rate | 1 - (p + m)
 | Iterations/generations | 1000
Jellyfish search optimizer (JSO) | Number of decisions | 30
 | Maximum number of iterations | 1000
 | Population size | 30
Differential evolution (DE) | Crossover rate | 0.5
 | Number of experiments | 30
 | Scale factor | 0.2
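The JSO settings in Table 2 (a population of 30 jellyfish and 1000 iterations) correspond to the canonical algorithm of Chou and Truong. The following sketch of that update loop is given for illustration under the usual constants (beta = 3 for the ocean-current trend, gamma = 0.1 for passive motion); it is an assumed, simplified rendering rather than the authors' implementation, and `objective` can be, for instance, the cross-entropy cost sketched earlier.

```python
# Minimal sketch of the canonical jellyfish search optimizer (JSO), assuming the
# standard formulation: a time-control function switches between following the
# ocean current and swarm motions (passive or active).  beta = 3, gamma = 0.1,
# and population/iteration counts mirror Table 2.  Simplified, illustrative only.
import numpy as np

def jso(objective, bounds, pop_size=30, max_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    lb = np.array([b[0] for b in bounds], dtype=float)
    ub = np.array([b[1] for b in bounds], dtype=float)
    dim = len(bounds)
    X = lb + rng.random((pop_size, dim)) * (ub - lb)          # random initial swarm
    F = np.array([objective(x) for x in X])
    for t in range(max_iter):
        best = X[np.argmin(F)]
        c = abs((1 - t / max_iter) * (2 * rng.random() - 1))  # time-control function
        for i in range(pop_size):
            if c >= 0.5:                                      # follow the ocean current
                trend = best - 3.0 * rng.random() * X.mean(axis=0)
                x_new = X[i] + rng.random(dim) * trend
            elif rng.random() > 1 - c:                        # passive motion (type A)
                x_new = X[i] + 0.1 * rng.random(dim) * (ub - lb)
            else:                                             # active motion (type B)
                j = rng.integers(pop_size)
                direction = X[j] - X[i] if F[j] < F[i] else X[i] - X[j]
                x_new = X[i] + rng.random(dim) * direction
            x_new = np.clip(x_new, lb, ub)                    # simplified boundary handling
            f_new = objective(x_new)
            if f_new < F[i]:                                  # keep the better position
                X[i], F[i] = x_new, f_new
    k = int(np.argmin(F))
    return X[k], F[k]
```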
Table 3. Averaged fitness and standard deviation values for lung X-ray images using energy curve from LFD, PSO, AOA, SFO, JSO, and DE. Each cell reports Mean, Std.

nt | LFD | PSO | AOA | SFO | JSO | DE

Image 01
2 | 0.745, 0.005 | 0.854, 0.072 | 0.744, 0.004 | 0.744, 0.001 | 0.743, 0.000 | 0.752, 0.017
3 | 0.462, 0.027 | 0.687, 0.112 | 0.457, 0.015 | 0.542, 0.074 | 0.445, 0.000 | 0.486, 0.044
4 | 0.329, 0.018 | 0.471, 0.076 | 0.324, 0.012 | 0.418, 0.074 | 0.309, 0.000 | 0.357, 0.044
5 | 0.251, 0.019 | 0.401, 0.086 | 0.247, 0.019 | 0.334, 0.046 | 0.226, 0.000 | 0.302, 0.040
8 | 0.142, 0.016 | 0.243, 0.034 | 0.142, 0.015 | 0.192, 0.029 | 0.118, 0.000 | 0.171, 0.024
16 | 0.055, 0.010 | 0.092, 0.017 | 0.065, 0.024 | 0.072, 0.012 | 0.039, 0.005 | 0.072, 0.009
32 | 0.018, 0.004 | 0.035, 0.006 | 0.042, 0.032 | 0.024, 0.003 | 0.015, 0.002 | 0.029, 0.004

Image 02
2 | 0.926, 0.004 | 0.985, 0.025 | 0.926, 0.006 | 0.926, 0.002 | 0.924, 0.000 | 0.930, 0.009
3 | 0.593, 0.018 | 0.728, 0.105 | 0.589, 0.019 | 0.658, 0.078 | 0.576, 0.000 | 0.607, 0.038
4 | 0.421, 0.028 | 0.567, 0.086 | 0.409, 0.013 | 0.488, 0.070 | 0.394, 0.000 | 0.428, 0.024
5 | 0.312, 0.023 | 0.475, 0.092 | 0.315, 0.043 | 0.375, 0.064 | 0.277, 0.000 | 0.337, 0.033
8 | 0.169, 0.015 | 0.259, 0.034 | 0.181, 0.054 | 0.209, 0.031 | 0.135, 0.000 | 0.199, 0.028
16 | 0.065, 0.011 | 0.103, 0.016 | 0.082, 0.036 | 0.072, 0.012 | 0.050, 0.004 | 0.080, 0.011
32 | 0.023, 0.003 | 0.038, 0.005 | 0.043, 0.021 | 0.024, 0.004 | 0.018, 0.003 | 0.032, 0.004

Image 03
2 | 1.508, 0.016 | 1.653, 0.099 | 1.516, 0.028 | 1.506, 0.003 | 1.504, 0.000 | 1.519, 0.020
3 | 0.962, 0.024 | 0.996, 0.012 | 0.960, 0.028 | 0.985, 0.044 | 0.932, 0.000 | 0.963, 0.032
4 | 0.653, 0.065 | 0.796, 0.112 | 0.651, 0.060 | 0.638, 0.046 | 0.574, 0.000 | 0.630, 0.056
5 | 0.469, 0.048 | 0.614, 0.089 | 0.465, 0.045 | 0.469, 0.062 | 0.395, 0.000 | 0.482, 0.035
8 | 0.232, 0.034 | 0.327, 0.028 | 0.237, 0.059 | 0.242, 0.039 | 0.176, 0.000 | 0.256, 0.026
16 | 0.086, 0.013 | 0.130, 0.023 | 0.121, 0.049 | 0.090, 0.015 | 0.060, 0.005 | 0.103, 0.010
32 | 0.032, 0.005 | 0.043, 0.007 | 0.096, 0.060 | 0.028, 0.006 | 0.021, 0.002 | 0.038, 0.004

Image 04
2 | 1.129, 0.007 | 1.294, 0.159 | 1.127, 0.006 | 1.127, 0.001 | 1.125, 0.000 | 1.135, 0.014
3 | 0.717, 0.026 | 0.882, 0.090 | 0.718, 0.020 | 0.788, 0.106 | 0.700, 0.000 | 0.741, 0.050
4 | 0.502, 0.038 | 0.707, 0.089 | 0.492, 0.021 | 0.558, 0.075 | 0.465, 0.000 | 0.520, 0.038
5 | 0.395, 0.037 | 0.536, 0.077 | 0.371, 0.028 | 0.428, 0.058 | 0.335, 0.000 | 0.419, 0.047
8 | 0.208, 0.028 | 0.312, 0.055 | 0.209, 0.053 | 0.238, 0.031 | 0.162, 0.000 | 0.233, 0.027
16 | 0.078, 0.010 | 0.122, 0.014 | 0.104, 0.043 | 0.085, 0.014 | 0.056, 0.005 | 0.096, 0.012
32 | 0.027, 0.004 | 0.040, 0.005 | 0.057, 0.042 | 0.027, 0.004 | 0.019, 0.002 | 0.035, 0.004

Image 05
2 | 1.007, 0.005 | 1.141, 0.115 | 1.009, 0.025 | 1.007, 0.002 | 1.004, 0.000 | 1.016, 0.028
3 | 0.673, 0.020 | 0.792, 0.093 | 0.666, 0.014 | 0.711, 0.056 | 0.655, 0.000 | 0.685, 0.028
4 | 0.470, 0.034 | 0.614, 0.080 | 0.464, 0.025 | 0.500, 0.070 | 0.432, 0.000 | 0.467, 0.029
5 | 0.362, 0.029 | 0.491, 0.071 | 0.355, 0.023 | 0.396, 0.047 | 0.325, 0.000 | 0.387, 0.035
8 | 0.189, 0.019 | 0.279, 0.037 | 0.203, 0.047 | 0.211, 0.030 | 0.155, 0.000 | 0.226, 0.029
16 | 0.073, 0.012 | 0.111, 0.015 | 0.097, 0.034 | 0.070, 0.009 | 0.048, 0.004 | 0.087, 0.009
32 | 0.025, 0.005 | 0.039, 0.006 | 0.071, 0.032 | 0.024, 0.003 | 0.017, 0.002 | 0.033, 0.004

Image 06
2 | 0.767, 0.005 | 0.840, 0.057 | 0.765, 0.001 | 0.765, 0.001 | 0.764, 0.000 | 0.768, 0.005
3 | 0.448, 0.029 | 0.595, 0.107 | 0.436, 0.017 | 0.481, 0.054 | 0.422, 0.000 | 0.445, 0.026
4 | 0.316, 0.024 | 0.452, 0.065 | 0.303, 0.015 | 0.358, 0.053 | 0.285, 0.000 | 0.329, 0.035
5 | 0.237, 0.027 | 0.333, 0.055 | 0.242, 0.031 | 0.280, 0.035 | 0.205, 0.000 | 0.266, 0.036
8 | 0.132, 0.012 | 0.194, 0.038 | 0.156, 0.050 | 0.149, 0.025 | 0.106, 0.003 | 0.150, 0.017
16 | 0.050, 0.008 | 0.078, 0.013 | 0.067, 0.017 | 0.055, 0.008 | 0.034, 0.003 | 0.062, 0.011
32 | 0.017, 0.003 | 0.030, 0.004 | 0.039, 0.015 | 0.019, 0.003 | 0.014, 0.002 | 0.023, 0.002

Image 07
2 | 0.839, 0.008 | 0.933, 0.064 | 0.835, 0.002 | 0.836, 0.002 | 0.834, 0.000 | 0.842, 0.013
3 | 0.554, 0.017 | 0.703, 0.087 | 0.548, 0.010 | 0.599, 0.056 | 0.541, 0.000 | 0.569, 0.032
4 | 0.412, 0.022 | 0.545, 0.054 | 0.398, 0.014 | 0.430, 0.033 | 0.382, 0.000 | 0.430, 0.030
5 | 0.313, 0.020 | 0.436, 0.054 | 0.307, 0.025 | 0.329, 0.033 | 0.281, 0.000 | 0.339, 0.035
8 | 0.161, 0.018 | 0.238, 0.033 | 0.162, 0.031 | 0.176, 0.027 | 0.130, 0.001 | 0.189, 0.025
16 | 0.058, 0.008 | 0.096, 0.013 | 0.081, 0.034 | 0.062, 0.010 | 0.042, 0.002 | 0.075, 0.008
32 | 0.021, 0.004 | 0.033, 0.004 | 0.044, 0.022 | 0.023, 0.004 | 0.014, 0.001 | 0.027, 0.003

Image 08
2 | 1.039, 0.004 | 1.187, 0.144 | 1.036, 0.000 | 1.038, 0.003 | 1.036, 0.000 | 1.046, 0.021
3 | 0.666, 0.024 | 0.819, 0.099 | 0.652, 0.011 | 0.695, 0.056 | 0.643, 0.000 | 0.665, 0.021
4 | 0.451, 0.023 | 0.634, 0.091 | 0.449, 0.020 | 0.482, 0.051 | 0.421, 0.000 | 0.477, 0.049
5 | 0.339, 0.040 | 0.490, 0.064 | 0.320, 0.030 | 0.358, 0.045 | 0.292, 0.000 | 0.381, 0.059
8 | 0.171, 0.026 | 0.285, 0.042 | 0.171, 0.031 | 0.184, 0.023 | 0.137, 0.004 | 0.207, 0.028
16 | 0.064, 0.010 | 0.112, 0.019 | 0.073, 0.019 | 0.075, 0.011 | 0.047, 0.003 | 0.083, 0.011
32 | 0.022, 0.004 | 0.039, 0.006 | 0.054, 0.039 | 0.026, 0.005 | 0.016, 0.001 | 0.032, 0.004

Image 09
2 | 1.028, 0.004 | 1.153, 0.121 | 1.029, 0.007 | 1.028, 0.002 | 1.026, 0.000 | 1.033, 0.011
3 | 0.666, 0.020 | 0.830, 0.092 | 0.659, 0.017 | 0.719, 0.075 | 0.646, 0.000 | 0.673, 0.036
4 | 0.476, 0.032 | 0.615, 0.091 | 0.463, 0.022 | 0.509, 0.076 | 0.430, 0.000 | 0.469, 0.026
5 | 0.344, 0.024 | 0.470, 0.064 | 0.360, 0.039 | 0.394, 0.067 | 0.307, 0.000 | 0.372, 0.039
8 | 0.183, 0.021 | 0.275, 0.034 | 0.186, 0.029 | 0.209, 0.033 | 0.148, 0.000 | 0.213, 0.024
16 | 0.073, 0.013 | 0.110, 0.016 | 0.114, 0.064 | 0.072, 0.010 | 0.051, 0.003 | 0.086, 0.011
32 | 0.025, 0.005 | 0.038, 0.005 | 0.058, 0.027 | 0.023, 0.003 | 0.018, 0.002 | 0.034, 0.003

Image 10
2 | 0.964, 0.003 | 0.996, 0.010 | 0.962, 0.000 | 0.963, 0.001 | 0.962, 0.000 | 0.968, 0.009
3 | 0.603, 0.016 | 0.795, 0.103 | 0.598, 0.010 | 0.651, 0.057 | 0.588, 0.000 | 0.617, 0.026
4 | 0.418, 0.026 | 0.564, 0.117 | 0.419, 0.015 | 0.467, 0.049 | 0.395, 0.000 | 0.439, 0.041
5 | 0.322, 0.024 | 0.437, 0.045 | 0.322, 0.020 | 0.350, 0.031 | 0.293, 0.000 | 0.356, 0.037
8 | 0.177, 0.015 | 0.264, 0.041 | 0.172, 0.028 | 0.203, 0.038 | 0.138, 0.000 | 0.199, 0.023
16 | 0.066, 0.013 | 0.107, 0.021 | 0.094, 0.045 | 0.067, 0.009 | 0.046, 0.004 | 0.079, 0.009
32 | 0.021, 0.004 | 0.036, 0.005 | 0.048, 0.037 | 0.024, 0.004 | 0.016, 0.002 | 0.030, 0.003
Table 4. Averaged fitness and standard deviation values for lung X-ray images using histogram from LFD, PSO, AOA, SFO, JSO, and DE. Each cell reports Mean, Std.

nt | LFD | PSO | AOA | SFO | JSO | DE

Image 01
2 | 1.1776, 0.0064 | 1.3242, 0.1246 | 1.1759, 0.0033 | 1.1766, 0.0018 | 1.1748, 0.0000 | 1.1887, 0.0246
3 | 0.8071, 0.0266 | 0.8920, 0.0615 | 0.8176, 0.0228 | 0.8236, 0.0443 | 0.7741, 0.0000 | 0.8036, 0.0332
4 | 0.5395, 0.0657 | 0.6869, 0.0913 | 0.5298, 0.0396 | 0.5262, 0.0628 | 0.4704, 0.0000 | 0.5163, 0.0395
5 | 0.3713, 0.0540 | 0.5286, 0.0770 | 0.3731, 0.0511 | 0.3758, 0.0481 | 0.3183, 0.0000 | 0.4015, 0.0541
8 | 0.1894, 0.0324 | 0.2956, 0.0439 | 0.1993, 0.0338 | 0.2141, 0.0458 | 0.1441, 0.0002 | 0.2144, 0.0317
16 | 0.0690, 0.0093 | 0.1186, 0.0163 | 0.1111, 0.0824 | 0.0787, 0.0131 | 0.0527, 0.0045 | 0.0948, 0.0116
32 | 0.0283, 0.0045 | 0.0404, 0.0050 | 0.0695, 0.0357 | 0.0264, 0.0040 | 0.0200, 0.0019 | 0.0351, 0.0045

Image 02
2 | 2.3187, 0.0417 | 2.5176, 0.2269 | 2.3022, 0.0387 | 2.2925, 0.0032 | 2.2896, 0.0000 | 2.3127, 0.0428
3 | 1.2947, 0.0794 | 1.5137, 0.1800 | 1.3436, 0.1210 | 1.3396, 0.0889 | 1.2642, 0.0000 | 1.3318, 0.0932
4 | 0.8603, 0.0801 | 0.9726, 0.0510 | 0.9114, 0.1640 | 0.8097, 0.0417 | 0.7645, 0.0001 | 0.8431, 0.0525
5 | 0.6201, 0.0890 | 0.8095, 0.1272 | 0.6301, 0.1201 | 0.5829, 0.0601 | 0.5110, 0.0000 | 0.6096, 0.0681
8 | 0.3142, 0.0516 | 0.4328, 0.0714 | 0.3624, 0.1678 | 0.2821, 0.0385 | 0.2314, 0.0002 | 0.3676, 0.0459
16 | 0.1076, 0.0182 | 0.1687, 0.0278 | 0.1750, 0.0601 | 0.0997, 0.0121 | 0.0785, 0.0045 | 0.1366, 0.0197
32 | 0.0370, 0.0053 | 0.0593, 0.0077 | 0.1239, 0.0545 | 0.0331, 0.0058 | 0.0251, 0.0023 | 0.0497, 0.0063

Image 03
2 | 2.4743, 0.0579 | 2.6656, 0.2290 | 2.4645, 0.0472 | 2.4494, 0.0038 | 2.4457, 0.0000 | 2.4564, 0.0244
3 | 1.3733, 0.0635 | 1.7253, 0.2550 | 1.4008, 0.1516 | 1.3780, 0.0495 | 1.3219, 0.0000 | 1.3950, 0.0830
4 | 0.9710, 0.0651 | 0.9953, 0.0169 | 0.9723, 0.1114 | 0.9545, 0.0452 | 0.8893, 0.0000 | 0.9889, 0.0697
5 | 0.7469, 0.0637 | 0.9300, 0.0915 | 0.7401, 0.1024 | 0.7204, 0.0420 | 0.6595, 0.0001 | 0.7568, 0.0645
8 | 0.4304, 0.0422 | 0.5392, 0.0722 | 0.4940, 0.1470 | 0.3816, 0.0474 | 0.3107, 0.0007 | 0.4630, 0.0361
16 | 0.1633, 0.0231 | 0.2189, 0.0302 | 0.3118, 0.1492 | 0.1420, 0.0207 | 0.0999, 0.0043 | 0.1817, 0.0237
32 | 0.0555, 0.0081 | 0.0841, 0.0105 | 0.2387, 0.1231 | 0.0486, 0.0086 | 0.0323, 0.0022 | 0.0705, 0.0091

Image 04
2 | 2.9575, 0.0265 | 3.1800, 0.1497 | 2.9484, 0.0177 | 2.9448, 0.0031 | 2.9424, 0.0000 | 2.9695, 0.0484
3 | 1.6539, 0.1259 | 1.8682, 0.2265 | 1.6062, 0.0484 | 1.6274, 0.0790 | 1.5613, 0.0001 | 1.6252, 0.0827
4 | 1.0546, 0.0718 | 1.0976, 0.2086 | 1.1092, 0.1502 | 1.0562, 0.0672 | 0.9777, 0.0000 | 1.1278, 0.1465
5 | 0.7976, 0.0831 | 0.9739, 0.0546 | 0.8455, 0.1374 | 0.7401, 0.0548 | 0.6769, 0.0001 | 0.7977, 0.0947
8 | 0.3875, 0.0555 | 0.5586, 0.0961 | 0.4342, 0.1049 | 0.3553, 0.0493 | 0.2886, 0.0004 | 0.4239, 0.0519
16 | 0.1316, 0.0242 | 0.2116, 0.0321 | 0.2572, 0.1239 | 0.1125, 0.0106 | 0.0845, 0.0033 | 0.1607, 0.0183
32 | 0.0430, 0.0066 | 0.0665, 0.0088 | 0.1977, 0.1859 | 0.0366, 0.0062 | 0.0268, 0.0016 | 0.0573, 0.0074

Image 05
2 | 2.6902, 0.0433 | 2.9984, 0.4064 | 2.6679, 0.0037 | 2.6676, 0.0031 | 2.6652, 0.0000 | 2.6840, 0.0382
3 | 1.5643, 0.0891 | 1.9372, 0.2832 | 1.5931, 0.1841 | 1.5654, 0.0830 | 1.4956, 0.0000 | 1.5672, 0.0839
4 | 1.0150, 0.0890 | 1.0000, 0.0000 | 1.0314, 0.1375 | 0.9610, 0.0637 | 0.8911, 0.0000 | 1.0470, 0.1239
5 | 0.7420, 0.0495 | 0.9475, 0.0771 | 0.7330, 0.1328 | 0.7226, 0.0730 | 0.6633, 0.0004 | 0.8292, 0.0900
8 | 0.4242, 0.0554 | 0.5849, 0.0931 | 0.4735, 0.2178 | 0.3879, 0.0507 | 0.2961, 0.0018 | 0.4588, 0.0525
16 | 0.1489, 0.0343 | 0.2428, 0.0402 | 0.2525, 0.0501 | 0.1280, 0.0252 | 0.0932, 0.0055 | 0.1824, 0.0292
32 | 0.0514, 0.0091 | 0.0802, 0.0096 | 0.1680, 0.0898 | 0.0428, 0.0079 | 0.0307, 0.0020 | 0.0665, 0.0083

Image 06
2 | 2.3123, 0.0279 | 2.5947, 0.2414 | 2.2991, 0.0041 | 2.3001, 0.0037 | 2.2968, 0.0000 | 2.3144, 0.0275
3 | 1.2900, 0.0972 | 1.7208, 0.2684 | 1.2855, 0.0642 | 1.3007, 0.1017 | 1.2079, 0.0001 | 1.2994, 0.0759
4 | 0.8801, 0.1110 | 0.9972, 0.0152 | 0.8824, 0.1116 | 0.8218, 0.0779 | 0.7305, 0.0000 | 0.8545, 0.1110
5 | 0.6220, 0.1001 | 0.8507, 0.1469 | 0.6455, 0.1103 | 0.6107, 0.0940 | 0.5029, 0.0000 | 0.6501, 0.1031
8 | 0.3101, 0.0477 | 0.4769, 0.0964 | 0.3402, 0.0855 | 0.2965, 0.0459 | 0.2172, 0.0078 | 0.3439, 0.0674
16 | 0.1106, 0.0200 | 0.1872, 0.0358 | 0.1979, 0.1120 | 0.1058, 0.0155 | 0.0686, 0.0037 | 0.1460, 0.0185
32 | 0.0363, 0.0072 | 0.0655, 0.0110 | 0.1284, 0.0593 | 0.0361, 0.0053 | 0.0223, 0.0016 | 0.0513, 0.0069

Image 07
2 | 2.9113, 0.0252 | 3.3521, 0.3679 | 2.9029, 0.0029 | 2.9067, 0.0085 | 2.9015, 0.0000 | 2.9546, 0.0767
3 | 1.5846, 0.0734 | 2.1718, 0.4024 | 1.6081, 0.0860 | 1.6325, 0.0894 | 1.5407, 0.0001 | 1.6051, 0.0685
4 | 1.0540, 0.0973 | 1.3216, 0.4585 | 1.0361, 0.0753 | 1.1058, 0.1222 | 0.9607, 0.0001 | 1.1419, 0.1740
5 | 0.7592, 0.0917 | 0.9908, 0.0253 | 0.7746, 0.0775 | 0.7749, 0.0817 | 0.6492, 0.0001 | 0.8810, 0.1170
8 | 0.3907, 0.0544 | 0.6640, 0.1495 | 0.4343, 0.1039 | 0.3873, 0.0493 | 0.3034, 0.0005 | 0.4777, 0.0788
16 | 0.1495, 0.0281 | 0.2403, 0.0456 | 0.2108, 0.0751 | 0.1446, 0.0193 | 0.0898, 0.0037 | 0.1819, 0.0254
32 | 0.0489, 0.0114 | 0.0875, 0.0180 | 0.1337, 0.0762 | 0.0505, 0.0114 | 0.0291, 0.0017 | 0.0692, 0.0093

Image 08
2 | 2.0988, 0.0172 | 2.4682, 0.4061 | 2.0896, 0.0011 | 2.0929, 0.0036 | 2.0890, 0.0000 | 2.1075, 0.0382
3 | 1.2293, 0.0812 | 1.6543, 0.3332 | 1.1898, 0.0495 | 1.2408, 0.0999 | 1.1590, 0.0000 | 1.2053, 0.0645
4 | 0.8540, 0.0813 | 0.9994, 0.0031 | 0.8480, 0.0792 | 0.9161, 0.1234 | 0.7731, 0.0001 | 0.8908, 0.1053
5 | 0.6197, 0.0220 | 0.9118, 0.0995 | 0.6314, 0.0356 | 0.7229, 0.1384 | 0.5832, 0.0001 | 0.7515, 0.0984
8 | 0.3752, 0.0390 | 0.5784, 0.0900 | 0.3579, 0.0460 | 0.4212, 0.0407 | 0.2841, 0.0003 | 0.4322, 0.0418
16 | 0.1438, 0.0230 | 0.2408, 0.0416 | 0.2034, 0.0772 | 0.1670, 0.0305 | 0.0948, 0.0046 | 0.1863, 0.0253
32 | 0.0521, 0.0092 | 0.0910, 0.0147 | 0.1301, 0.0721 | 0.0580, 0.0084 | 0.0309, 0.0024 | 0.0757, 0.0108

Image 09
2 | 2.8558, 0.0611 | 3.1297, 0.3625 | 2.8293, 0.0299 | 2.8211, 0.0028 | 2.8185, 0.0000 | 2.8489, 0.0414
3 | 1.6839, 0.0901 | 1.9900, 0.2170 | 1.7462, 0.1912 | 1.6826, 0.0625 | 1.6280, 0.0000 | 1.6725, 0.0401
4 | 1.0999, 0.1216 | 1.0000, 0.0000 | 1.1148, 0.1660 | 0.9950, 0.0582 | 0.9394, 0.0000 | 1.0522, 0.1032
5 | 0.7789, 0.1009 | 0.9393, 0.0869 | 0.8177, 0.2085 | 0.7065, 0.0634 | 0.6319, 0.0001 | 0.7969, 0.0994
8 | 0.3694, 0.0612 | 0.5488, 0.0796 | 0.4150, 0.1176 | 0.3434, 0.0440 | 0.2731, 0.0007 | 0.4114, 0.0566
16 | 0.1298, 0.0185 | 0.1996, 0.0296 | 0.1876, 0.0671 | 0.1083, 0.0131 | 0.0843, 0.0053 | 0.1652, 0.0173
32 | 0.0436, 0.0091 | 0.0641, 0.0085 | 0.1971, 0.1078 | 0.0356, 0.0052 | 0.0270, 0.0016 | 0.0561, 0.0071

Image 10
2 | 2.6470, 0.0332 | 2.8925, 0.2618 | 2.6332, 0.0019 | 2.6349, 0.0030 | 2.6319, 0.0000 | 2.6606, 0.0425
3 | 1.5502, 0.0888 | 1.9824, 0.3786 | 1.5326, 0.0624 | 1.5780, 0.1018 | 1.4794, 0.0000 | 1.5685, 0.0842
4 | 1.0529, 0.1155 | 1.0253, 0.1387 | 1.0201, 0.0602 | 1.0055, 0.0740 | 0.9257, 0.0000 | 1.0823, 0.1358
5 | 0.7528, 0.0909 | 0.9525, 0.0698 | 0.7338, 0.0921 | 0.7027, 0.0491 | 0.6327, 0.0000 | 0.7883, 0.0925
8 | 0.3933, 0.0775 | 0.5720, 0.1159 | 0.4557, 0.2239 | 0.3352, 0.0301 | 0.2788, 0.0008 | 0.4419, 0.0620
16 | 0.1342, 0.0199 | 0.2135, 0.0321 | 0.2130, 0.0746 | 0.1174, 0.0148 | 0.0845, 0.0032 | 0.1614, 0.0199
32 | 0.0426, 0.0082 | 0.0700, 0.0084 | 0.1639, 0.1411 | 0.0413, 0.0088 | 0.0281, 0.0021 | 0.0576, 0.0084
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

