Article

Particle Swarm Optimization and Two-Way Fixed-Effects Analysis of Variance for Efficient Brain Tumor Segmentation

1 Department of Electrical Engineering, University Mohamed Khider of Biskra, Biskra 07000, Algeria
2 Electrical Engineering Department, University of Skikda, BP 26, El Hadaiek, Skikda 21000, Algeria
3 University of Tours, 60 rue du Plat D’Etain, CEDEX 1, 37020 Tours, France
4 College of Engineering, Royal University for Women, West Riffa 37400, Bahrain
5 Department of Physics, Benyoucef Benkhedda University of Algiers, Algiers 16000, Algeria
6 UMR 1253, iBrain, INSERM, University of Tours, 37000 Tours, France
* Author to whom correspondence should be addressed.
Cancers 2022, 14(18), 4399; https://doi.org/10.3390/cancers14184399
Submission received: 12 July 2022 / Revised: 4 September 2022 / Accepted: 7 September 2022 / Published: 10 September 2022
(This article belongs to the Section Cancer Causes, Screening and Diagnosis)

Simple Summary

Segmentation of brain tumor images from magnetic resonance imaging (MRI) is a challenging topic in medical image analysis. The brain tumor can take many shapes, and MRI images vary considerably in intensity, making lesion detection difficult for radiologists. This paper proposes a three-step approach to solving this problem: (1) pre-processing, based on morphological operations, is applied to remove the skull bone from the image; (2) the particle swarm optimization (PSO) algorithm, with a two-way fixed-effects analysis of variance (ANOVA)-based fitness function, is used to find the optimal block containing the brain lesion; (3) the K-means clustering algorithm is adopted, to classify the detected block as tumor or non-tumor. An extensive experimental analysis, including visual and statistical evaluations, was conducted, using two MRI databases: a private database provided by the Kouba imaging center—Algiers (KICA)—and the multimodal brain tumor segmentation challenge (BraTS) 2015 database. The results show that the proposed methodology achieved impressive performance, compared to several competing approaches.

Abstract

Segmentation of brain tumor images, to refine the detection and understanding of abnormal masses in the brain, is an important research topic in medical imaging. This paper proposes a new segmentation method, consisting of three main steps, to detect brain lesions using magnetic resonance imaging (MRI). In the first step, the parts of the image delineating the skull bone are removed, to exclude insignificant data. In the second step, which is the main contribution of this study, the particle swarm optimization (PSO) technique is applied, to detect the block that contains the brain lesions. The fitness function, used to determine the best block among all candidate blocks, is based on a two-way fixed-effects analysis of variance (ANOVA). In the last step of the algorithm, the K-means segmentation method is used in the lesion block, to classify it as a tumor or not. A thorough evaluation of the proposed algorithm was performed, using: (1) a private MRI database provided by the Kouba imaging center—Algiers (KICA); (2) the multimodal brain tumor segmentation challenge (BraTS) 2015 database. Estimates of the selected fitness function were first compared to those based on the sum-of-absolute-differences (SAD) dissimilarity criterion, to demonstrate the efficiency and robustness of the ANOVA. The performance of the optimized brain tumor segmentation algorithm was then compared to the results of several state-of-the-art techniques. The results obtained, by using the Dice coefficient, Jaccard distance, correlation coefficient, and root mean square error (RMSE) measurements, demonstrated the superiority of the proposed optimized segmentation algorithm over equivalent techniques.

1. Introduction

1.1. What Is a Brain Tumor?

A brain tumor is a cluster of uncontrolled cancer cells that grow in or around the brain. Brain tumors are divided into two categories: primary tumors, originating in the brain or spinal cord, and secondary tumors, also called brain metastases, which develop elsewhere in the body and spread to the brain [1]. In the first category (i.e., primary tumors), the probability of a person developing this type of tumor in their lifetime is less than 1% [2]. This probability is low; however, in 2020, for example, it still represented just over 308,000 people diagnosed worldwide [3]. Another figure that should alert us is the increased incidence of brain tumors at all ages over the last 20 years; in adults, for example, the incidence has increased by more than 40%. In the second category (i.e., secondary tumors), the cancers that most often spread to the brain are breast, kidney, and lung cancers, as well as leukemia, lymphoma, and melanoma [4].
A brain tumor can take many forms, and medical images can vary considerably in intensity; it is therefore difficult for radiologists and physicians to diagnose a tumor unambiguously. Several approaches for detecting and segmenting brain tumors from magnetic resonance imaging (MRI) images have been proposed in the literature, to help practitioners make their diagnoses [5,6].
In addition to MRI, functional ultrasound is a modality that is gaining recognition in medicine. Functional ultrasound can allow the imaging of the neuronal activity of the brain in small, awake, and mobile animals. Nevertheless, such a modality requires long ultrasound acquisitions at high frequency, to have an acceptable sensitivity; hence, possible material constraints [7].
During brain tumor surgery, two types of difficulty may arise: (i) identification of the tumor and its boundaries with the healthy brain; (ii) identification of functional brain regions, i.e., those involved in neurological functions (motor function, sensitivity, language, vision, cognition, etc.). The current gold standard for improving the quality of brain tumor resection, while minimizing neurological risk, is so-called ‘awake’ surgery with direct electrical brain stimulation. Practitioners commonly use ultrasound to localize the tumor in the brain; however, to date, there are no pre- or intra-operative imaging tools to identify functional brain regions [8]; hence the need for innovative imaging in this area, such as high-frequency Doppler ultrasound, in the surgical management of awake patients with brain tumors. Ultrahigh-frequency ultrasound achieves a spatial resolution of 30 μm, more than five times better than MRI. The Doppler mode [9,10,11,12] detects microvascular flows at velocities of less than 1 mm/second.
Gliomas are the most common primary brain tumors in adults. Nearly 3000 new cases are diagnosed each year in France. Men are more frequently affected. Most cases are sporadic, but in rare cases they are associated with certain family cancers [13,14].
About 75% of gliomas diagnosed are high-grade (III or IV of the World Health Organization (WHO) classification) [15].
Gliomas can develop in any region of the brain. They progressively infiltrate the brain parenchyma, and cause a mass effect.
Today, if the clinical examination suggests a tumoral process, the diagnosis of a brain tumor relies on magnetic resonance imaging (MRI), owing to its ability to acquire images in any orientation, its intrinsically three-dimensional nature, its avoidance of ionizing radiation, and its precision.

1.2. MRI Sequences for Brain Tumors

Brain MRI, with or without the injection of contrast products such as gadolinium, is systematic in cases of a suspected brain tumor. Brain MRI enables:
-
the localization of the expansive process of the tumor, and the specification of its local extension;
-
the specification of its characteristics, e.g., is it homogeneous or heterogeneous; is there perilesional edema, calcifications, necrosis, or intratumoral hemorrhage?;
-
the establishment of a differential diagnosis between a brain tumor and a circumscribed lesion of another nature, e.g., an abscess;
-
the establishment of the diagnosis of certain evolving tumor complications (hemorrhage, hydrocephalus, tumor meningitis, etc.);
-
the establishment of the histological grade, in cases of a glial tumor;
-
the definition of the quality of the tumor removal, and the continuation of the therapeutic strategy after the surgical time.
The most common MRI sequences are T1- and T2-weighted scans, where T1 and T2 are tissue-specific time constants.
T1-weighted images are produced using short TE and TR times, and vice versa for T2-weighted images, where TR is the repetition time, defined as the time interval between two excitations, and TE is the echo time, defined as the interval between the excitation and the appearance of the MRI signal. Generally, T1- and T2-weighted images can easily be differentiated by observing the cerebrospinal fluid (CSF): the CSF is dark on T1-weighted images and bright on T2-weighted images.
A third sequence that we will use in our work is the FLAIR (fluid-attenuated inversion recovery) sequence, an inversion-recovery sequence well suited to brain imaging, in which the cerebrospinal fluid signal is suppressed or strongly attenuated and a long TE is used to give it a strong T2 weighting. The FLAIR sequence has significantly improved the detection of brain parenchymal lesions, particularly those located at the parenchymal–CSF interface. White-matter pathologies (softening, demyelination processes, etc.) appear hyperintense. This sequence is particularly useful for the early diagnosis of ischemic events; it provides an image of excellent definition in a few minutes and, unlike the diffusion or perfusion sequences that we will not use in this work, can be performed on all MRI machines. Currently available in 3D volume acquisition, it is part of the basic MRI workup of the brain.
Table 1 compares T1, T2, and FLAIR sequences in the context of brain tissue consisting of gray matter, CSF, and white matter.
Figure 1A shows an example of a FLAIR MRI sequence where a cyst is highlighted (see arrow); Figure 1B illustrates a cross-section in the axial plane of the human brain. Interestingly, the FLAIR sequence is very ‘sensitive’ to pathology, and clearly detects the cyst.
Figure 2 shows an example of three MRI sequences: T1-Weighted, T2-Weighted, and FLAIR; in Figure 3, we have represented a T1-Weighted MRI sequence (noted simply ‘T1’) with and without contrast medium. The T1 sequence with contrast medium is noted ‘T1c’—sometimes noted ‘T1-Gd’, when the contrast agent is Gadolinium. This image represents a metastatic malignant melanoma. T1-weighted MRI shows multiple secondary lesions that are spontaneously hyper-signal. After injection of a contrast medium, the T1c image shows that the lesions have been enhanced, and are therefore better visualized, and new lesions are detected.
Figure 4 shows an example of MRI sequences from the BraTS 2015 database [16], representing brain tumor pathologies. These sequences are FLAIR, T1, T1c (or, more precisely, T1-Gd, because the contrast agent used is Gadolinium), T2, and Ground Truth performed by specialists on which the FLAIR sequence is superimposed. The colors represent classes of tumors: red for necrosis; green for edema; and yellow for tumor.

1.3. Why Should We Be Interested in Brain Tumor Segmentation?

Image segmentation is the action of grouping pixels according to predefined criteria, in order to build regions or classes of pixels. There are several categories of image segmentation methods: those based on contours, regions, classification, or hybrid combinations of these. Segmentation, and its automation, remains one of the major challenges in MRI, particularly for brain tumor images, where it can help practitioners cope with a huge volume of images in their daily practice. Segmentation methods have long been manual or semi-automatic but, with the advent of new methods, some are now fully automated.
The segmentation of brain tumors is vital, because the patient’s life depends on it; the direct aim of our research is to propose efficient and safe methods that fully address this delicate task. Indeed, the segmentation of brain tumors from MRI images, as presented in our work, has direct practical implications for establishing an efficient diagnosis, following the tumor’s progression, and evaluating the relevance of the prescribed treatment and therapy. Manual segmentation and analysis of structural MRI images of brain tumors is tedious and time-consuming; therefore, automated and robust segmentation of brain tumors will have a significant impact on disease management, by providing essential information about the nature, volume, location, and shape of the tumor [17,18].
Figure 5 shows an example of successful segmentation. The Ground Truth segmentation is a manual segmentation performed by experts, and globally it corresponds to the proposed automatic segmentation (see Section 4 on the experimental analysis). The visualized MRI sequences are from the BraTS 2015 database, and correspond to T1, T2, T1c, and FLAIR sequences.
Figure 6 shows the detection result of our approach (center image), obtained from a T1 MRI sequence with contrast medium (left image), together with the binarized segmentation in green (right image).

1.4. Brain Tumor Segmentation Algorithms

As reported in [19], brain tumor segmentation algorithms can be grouped into three main categories: (1) conventional techniques; (2) classification and clustering techniques; and (3) deformable model techniques [20,21]. Threshold-based techniques, which compare pixel intensity to one or more intensity thresholds, belong to the first category: conventional techniques [22,23]. For example, an Otsu thresholding approach, combined with some morphological operations (i.e., dilation and erosion), was proposed in [24], to detect brain tumor diseases from MRI images. An extended method that would give more accurate thresholding performance was proposed in [25]. In addition, region-based techniques, in which disjoint regions are formed by merging neighboring pixels, based on a similarity criterion, are also classified in the first category [26]; these techniques include region-growth and watershed segmentation techniques. For example, an adaptive region-growing approach was proposed in [27], to solve the problem of manual threshold selection and weakness against noise; this approach was based on the variances and gradients of the inter- and intra-boundary curves. In another work, [28], the authors presented a region-growth approach based on a fixed threshold value for MRI segmentation, enhanced by an efficient pre-processing framework. In work presented by Biratu et al. [29], the authors modified the principle of the classical region-growing segmentation method; they applied it to the detection of abnormality regions in brain images. The seed point initialization, in the suggested method, was designed to be generated automatically for any input brain images, in contrast to the classical approach, where the seed point should be initialized manually. Khosravanian et al. [30] suggested a level set segmentation technique, based on the super-pixel fuzzy clustering and lattice Boltzmann method for autonomously segmenting brain tumors, which is strongly resistant to image intensity and noise.
The second category of brain tumor segmentation algorithms—classification and clustering techniques—includes several effective algorithms [31], such as K-means, support vector machines (SVMs), Markov random fields, artificial neural networks, convolutional neural networks (CNNs), and fuzzy C-means. For example, an SVM classification scheme, combined with a kernel-space feature selection methodology, was proposed in [32] for brain tumor segmentation. An unsupervised framework, based on random forests, was proposed in [33], to extract the tumor location, followed by a pattern classification phase. To define the tumor area, 86 features were used to create a training dataset, to be presented as input to the classifier. Currently, CNNs, or deep-learning-based models, have performed impressively in several medical imaging applications [34,35,36], as they assist in understanding complex patterns precisely. For example, a fully supervised system for brain tumor segmentation, based on a CNN architecture that exploits local and global contextual features, was proposed in [37]. In a recently published paper [38], fast convolutional neural networks were used to train, classify, and distinguish tumor from non-tumor patterns; training focused on patches and slices of axial, coronal, and sagittal brain views. Hussain et al. [39] created a correlation architecture of a parallel CNN layer and a linear CNN layer, by including an induction structure. Segmenting MR images of brain tumors, using this structure, has produced positive results. Using unpaired adversarial training, Li et al. [40] developed the innovative framework called TumorGAN, to provide efficient image segmentation of pairs. They added a regional perceptual loss, to improve the discriminator’s performance and the quality of the output images. Additionally, they created a localized L1 loss, to limit the color of the observed brain tissue. Arora et al. [41] proposed an automatic system to tackle the task of segmenting gliomas from MRI scans, based on a U-Net-based deep-learning model. Before presenting the input image to the deep model, the system transformed it by applying various approaches, including feature scaling, subset division, restricted object region, category brain slicing, and watershed segmentation. An approach for automatic segmentation, based on texture and contour, was proposed by Nabizadeh and Kubat [42]. The machine learning classifier was trained using landmark points, after determining high-level features. In the present work, we opted for K-means, which is an unsupervised learning model applying non-hierarchical data partitioning. This algorithm categorizes the data into multiple clusters, respecting the principle of exclusivity of membership: a single observation can only belong to one cluster. The advantage of K-means lies in its simplicity and the fact that it is used daily in the socio-economic world for data segmentation.
The third category of brain tumor segmentation algorithms consists of deformable model techniques, including parametric and geometric deformable models. These techniques have been proposed, among other things, to support intuitive interaction and high variability of mechanisms. For example, metaheuristic techniques have also been used for brain tumor segmentation, exploiting their ability to solve challenging optimization problems in minimal time, while avoiding local optima. Several techniques for brain tumor segmentation based on metaheuristics have been reported in the literature [43,44]. In [43], the Cuckoo search algorithm, an efficient optimization model, was applied to brain tumor segmentation, from MRI images. The ant colony optimization metaheuristic and a fuzzy classification approach were combined in [44], to segment and extract the suspicious region from the brain MRI image containing the tumor position. In a recent paper, [6], the authors proposed a novel region of interest (ROI)-based brain tumor segmentation method. The region of interest was first identified, using grid decomposition, and then only the region of interest was segmented, using the spectral clustering method. Segmenting the brain tumor from a region of interest, rather than from the entire image, is an attractive concept. Nevertheless, this approach was limited by the increased computational complexity resulting from the ROI identification step. In this study, we performed an analysis of variance (ANOVA) on the data, to quantify the differences between the results. ANOVA is a statistical method generally used to assess the similarity of means in different groups, by comparing variances [45,46]. The main advantages of ANOVA over other statistical methods lie in the following four points:
  • It is easy to implement, using simple algebra.
  • It can be used to compare more than two samples.
  • It can be applied to groups with different numbers of observations.
  • It has been widely used, and has proven effective in various research fields, such as pharmacology and medicine.

1.5. Main Contributions

This paper proposes an original brain tumor segmentation method, based on the particle swarm optimization (PSO) technique that uses fixed two-way ANOVA as the fitness function. The segmentation of a brain tumor is vital, because the patient’s life depends on it, and therefore the fundamental motivation of our work was to identify efficient and safe methods of responding to this delicate operation fully. This objective was achieved by our choice of PSO with ANOVA. The proposed algorithm consisted of three main steps:
  • The first step was to remove the skull bones from the image, to eliminate unnecessary information.
  • In the second step, which was the main contribution of this study, the PSO technique was applied, to detect the image block containing the brain lesion. The two-way fixed-effects ANOVA was used as the fitness function to determine the best block among all candidate blocks, resulting in automatic brain tumor segmentation comparable to the Ground Truth produced by radiologists. All candidate blocks were evaluated, and the one that maximized the fitness function (i.e., the variance ratio with respect to the healthy brain image) was retained. To overcome the computational complexity, PSO was used as a metaheuristic technique that identifies the best block in minimum time. The choice of PSO was based on the high performance of this optimization technique when applied to many real-world applications. The satisfactory solution of a complex optimization problem, which includes many sub-optimal solutions, justified using a powerful metaheuristic like PSO. The PSO algorithm, which is simple to understand, program, and use in minimal time, is particularly effective for practical optimization problems, such as image segmentation [47]. The problem was therefore posed as the maximization of a fitness function, and the well-known ANOVA method was chosen to measure the variance between the candidate block and the non-diseased block.
  • In the final step, K-means clustering—an efficient and straightforward partitioning technique—was applied to the lesion block, to classify it as tumor or non-tumor.
Our approach is original: to the best of our knowledge, no other research has combined the chosen techniques to provide a satisfactory answer to the segmentation of brain tumors. The strengths of our approach therefore lie in its originality and in the quality of the results obtained.
To illustrate the essential role of ANOVA in the proposed algorithm, the experimental results obtained with the ANOVA-based fitness function were compared to those obtained with the well-known dissimilarity criterion, the sum of absolute differences (SAD). The comparison results—set against classical segmentation algorithms and recently published papers (i.e., state-of-the-art approaches), and using a private database provided by the Kouba imaging center, Algiers (KICA), and the multimodal brain tumor segmentation challenge (BraTS) 2015 database—demonstrated the efficiency and robustness of ANOVA.
The remainder of the paper is organized as follows: Section 2 presents the background of the method we propose here; in Section 3, the proposed algorithm is presented in detail, as well as the reasons for using PSO and ANOVA as the underlying techniques; the experimental results, and a comparison to the state-of-the-art, are presented in Section 4; a conclusion, summarizing the work, is given in Section 5.

2. Review of the Background of the Proposed Approach

The proposed algorithm combines different methods to segment the brain tumor image: the PSO metaheuristic technique is applied to identify the ROI; the fitness function that determines the best block to be considered as the ROI is based on ANOVA; and, finally, the K-means method is used to segment the ROI. This section presents the main concepts of these methods in detail.

2.1. Particle Swarm Optimization

Metaheuristic techniques have been developed to solve complex optimization problems, when mathematical techniques fail or require high computational time. The process of any metaheuristic technique starts with one or more random solutions initialized in the search-space. Then, powerful tools, inspired by natural phenomena, are used to iteratively converge the solution(s) to the optimal solution. The success of a metaheuristic technique relies on its ability to explore the search-space in depth, and to exploit promising areas.
Metaheuristics are grouped into three categories, based on the type of inspired natural phenomenon: (1) evolutionary algorithms inspired by genetic inheritance for survival, such as genetic algorithms (GAs) [48]; (2) swarm intelligence that mimics the social behavior of a group of animals, such as particle swarm optimization (PSO) [49]; (3) physics-based techniques, which are based on physical rules, such as simulated annealing (SA) [50]. Among the many metaheuristics developed in the literature, PSO has proven its efficiency in many applications [51,52].
PSO is a swarm intelligence technique proposed by Eberhart and Kennedy in 1995 [49], inspired by the flight behavior of birds. It has been successfully used in many application areas, due to its simplicity of implementation and effectiveness.
In PSO, many particles or candidate solutions are randomly initialized in the search-space, and then each particle iteratively adjusts its position, according to its own and its colleagues’ flight experience [49,53,54]. PSO’s fundamental concept is to randomly initialize $N_p$ candidate solutions in the search-space. Their velocities and positions are then updated, using (1) and (2):
$$V_i = \omega V_i + c_1 R_1 (P_i - X_i) + c_2 R_2 (G - X_i) \quad (1)$$
$$X_i = X_i + V_i \quad (2)$$
where $V_i$ and $X_i$ are the velocity and position of particle $i$; $P_i$ and $G$ are the individual and global best solutions; $R_1$ and $R_2$ are two random numbers; and $\omega$, $c_1$, and $c_2$ are the weights of the inertial, cognitive, and social influences, respectively, which control the exploration and exploitation phases in PSO.
Each candidate solution is evaluated through a fitness function. The individual and global best candidate solutions are updated, based on the fitness function values.
A pseudo-code for PSO is described in Algorithm 1:
Algorithm 1. PSO algorithm
1: Initialize the total number of candidate solutions $N_p$ and the maximum number of iterations $t_{max}$
2: Randomly initialize the candidate solutions
3: for $t = 1:t_{max}$ do
4:   Update the velocities $V$ with (1)
5:   Update the positions $X$ with (2)
6:   Evaluate the positions $X$ with the fitness function
7:   Update the individual ($P_i$) and global ($G$) best solutions
8: end for
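For illustration, a minimal Python/NumPy sketch of the PSO loop of Algorithm 1 is given below. The parameter values (number of particles, ω, c1, c2), the bounds, and the toy fitness function are illustrative assumptions, not the settings used in this study.

```python
import numpy as np

def pso(fitness, dim, bounds, n_particles=30, t_max=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (maximization), following Algorithm 1."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    V = np.zeros((n_particles, dim))                   # velocities
    P = X.copy()                                       # individual best positions
    p_val = np.array([fitness(x) for x in X])          # individual best fitness values
    G = P[np.argmax(p_val)].copy()                     # global best position
    g_val = p_val.max()
    for _ in range(t_max):
        R1, R2 = rng.random((2, n_particles, dim))
        V = w * V + c1 * R1 * (P - X) + c2 * R2 * (G - X)   # Equation (1)
        X = np.clip(X + V, lo, hi)                          # Equation (2)
        vals = np.array([fitness(x) for x in X])
        better = vals > p_val
        P[better], p_val[better] = X[better], vals[better]  # update individual bests
        if p_val.max() > g_val:                             # update global best
            G, g_val = P[np.argmax(p_val)].copy(), p_val.max()
    return G, g_val

# Toy usage: maximize -||x - 3||^2 (optimum at x = [3, 3])
best_x, best_f = pso(lambda x: -np.sum((x - 3.0) ** 2), dim=2, bounds=(-10, 10))
print(best_x, best_f)
```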

2.2. Analysis of Variance (ANOVA)

Analysis of variance (ANOVA) is a classical statistical method (see, e.g., Larson [45]) for analyzing the variation of a response variable measured under different conditions defined by discrete factors. More precisely, the principle of ANOVA is to determine, using a statistical test, whether the share of dispersion attributable to the factor under study is significantly greater than the residual share. If the factorial dispersion is significantly larger than the residual dispersion, the dispersion of the data around the means of each modality is small compared to the dispersion of those means around the overall mean. In this case, if the means relative to each modality are highly dispersed while the intra-class variability is low, the group means are globally different. Conversely, if the factorial dispersion is of the same order of magnitude as the residual dispersion, the means are not globally different. In short, ANOVA tests the equality of the means of the different groups by comparing their variances.
The ANOVA technique has two models that we will detail: the one-way fixed-effects ANOVA and the two-way fixed-effects ANOVA [46].

2.2.1. One-Way Fixed-Effects ANOVA

The results of a one-way ANOVA are only valid if the following three assumptions are met:
-
Each sample comes from a normally distributed population.
-
The variances of the populations from which the samples come are equal.
-
The observations in each group are independent, and the observations in the groups were obtained by random sampling.
The null and alternative hypotheses defined in a one-way ANOVA are as follows:
-
Null hypothesis ($H_0$): the means of the $k$ groups in the study population are equal.
-
Alternative hypothesis ($H_1$): at least one group mean differs from the others.
A statistical test, for example Fisher’s F-test with $k-1$ (factorial part) and $N-k$ (residual part, where $N$ is the size of the study population) degrees of freedom, provided that the normality and homogeneity of the residuals are respected, at a given risk $\alpha$ (generally 5%), allows us to reject, or not reject, the null hypothesis. The key elements of the one-way ANOVA method are summarized in Table 2. The main definitions associated with this table are given below:
-
The total sum of squares ($SST$) is the sum of the squared distances between each observed value and the overall mean; it is the sum of the share attributable to the factor ($SSF$) and the share attributable to the residues ($SSE$); it can therefore be summarized by (3).
-
The factorial sum of squares ($SSF$), which measures the differences between the group means and the overall mean, is defined in (4).
-
The residual sum of squares ($SSR$, also denoted $SSE$) is defined in (5).
-
$p$ is the $p$-value corresponding to $F_{k-1,\,N-k}$.
Ultimately, suppose that the $p$-value is less than the threshold value that has been defined (usually 5%): in that case, the null hypothesis can be rejected, implying that at least one of the group means differs from the others.
$$SST = SSF + SSE = \sum_{i=1}^{k}\sum_{j=1}^{n_i}\left(y_{ij} - \bar{y}\right)^2 \quad (3)$$
$$SSF = \sum_{i=1}^{k} n_i\left(\bar{y}_i - \bar{y}\right)^2 \quad (4)$$
$$SSR = \sum_{i=1}^{k}\sum_{j=1}^{n_i}\left(y_{ij} - \bar{y}_i\right)^2 \quad (5)$$
where:
-
$SSF$: factorial sum of squares;
-
$SSE$: error (residual) sum of squares;
-
$i$: index of the modalities (groups), from 1 to $k$;
-
$j$: index of the observations within a modality;
-
$y_{ij}$: observations;
-
$\bar{y}$: overall mean of the observations;
-
$n_i$: number of observations for each modality;
-
$\bar{y}_i$: mean of the $n_i$ values of the considered modality.
Table 2. Key elements of one-way ANOVA calculations.
Source of Variation | Sum of Squares | Degrees of Freedom | Mean Squares | F-Value | p-Value
Factor | $SSF$ (attributable to the factor) | $k-1$ | $SSF/(k-1)$ | $F_{k-1,\,N-k} = \dfrac{SSF/(k-1)}{SSE/(N-k)}$ | $p$
Residues or error | $SSE$ (attributable to the residues) | $N-k$ | $SSE/(N-k)$ | |
Total | $SST = SSF + SSE$ | $N-1$ | | |
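As a concrete illustration of Table 2, the following Python sketch computes SSF, SSE, the F-statistic, and the p-value for a list of groups, and checks the result against SciPy’s reference implementation; the three small groups are arbitrary demonstration data, not data from this study.

```python
import numpy as np
from scipy import stats

def one_way_anova(groups):
    """Compute SSF, SSE, the F statistic and the p-value for a list of 1-D samples."""
    all_y = np.concatenate(groups)
    grand_mean = all_y.mean()
    k, N = len(groups), all_y.size
    ssf = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)   # factorial sum of squares, Eq. (4)
    sse = sum(((g - g.mean()) ** 2).sum() for g in groups)             # residual sum of squares, Eq. (5)
    F = (ssf / (k - 1)) / (sse / (N - k))                              # F-value of Table 2
    p = stats.f.sf(F, k - 1, N - k)                                    # p-value of F(k-1, N-k)
    return ssf, sse, F, p

g1 = np.array([5.1, 4.9, 5.3])
g2 = np.array([6.2, 6.0, 6.4])
g3 = np.array([5.0, 5.2, 4.8])
print(one_way_anova([g1, g2, g3]))
# Cross-check with SciPy's reference implementation (same F and p):
print(stats.f_oneway(g1, g2, g3))
```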

2.2.2. Two-Way Fixed-Effects ANOVA

The principle of the two-way fixed effects ANOVA model is, first, to decompose the total dispersion of the data into four sources: the contribution attributable to the first factor, noted A ; the contribution attributable to the second factor, noted B ; the contribution attributable to the interaction of the two factors; and the unexplained or residual share. In this method, each level of a factor is combined with the other factor: the two factors are said to be crossed. Then, in a second step, it is necessary to evaluate, with the help of a statistical test, if the factorial shares, and the one linked to the interaction, are significantly higher than the residual share.
Before going into the details of the method, it is necessary to define the following parameters:
-
The first categorical variable studied (often called factor $A$) has $a$ modalities (we also say that factor $A$ has $a$ levels). The index of the modalities of this first categorical variable, noted $i$, goes from 1 to $a$. Similarly, the second categorical variable studied (often called factor $B$) has $b$ modalities. The index of the modalities of this second categorical variable, noted $j$, goes from 1 to $b$.
-
The total number of observations is still noted $N$, but the number of observations in each cell of the factorial design is noted $n_{ij}$. Equation (6) defines the relationship between $N$ and $n_{ij}$. In the following, $n$ is the number of repetitions per cell.
-
The observations in each cell of the factorial design (i.e., in each factorial combination) are denoted by $y_{ijk}$, where $k$ is the replication index in each cross. The overall mean of the responses is defined in (7), where the two dots (“..”) stand for the indices of the first and second categorical variables.
-
The means of each cross of modalities are noted $\bar{y}_{ij}$ (see (8)).
-
The marginal means of the modalities of the first variable and those of the second variable are respectively noted $\bar{y}_{i\cdot}$ and $\bar{y}_{\cdot j}$, and are defined in Equations (9) and (10).
-
The marginal counts of the modalities of the first variable and those of the second variable are respectively noted $n_{i\cdot}$ and $n_{\cdot j}$, and are defined in Equations (11) and (12).
$$N = \sum_{i=1}^{a}\sum_{j=1}^{b} n_{ij} \quad (6)$$
$$\bar{y}_{..} = \frac{1}{N}\sum_{i=1}^{a}\sum_{j=1}^{b}\sum_{k=1}^{n_{ij}} y_{ijk} \quad (7)$$
$$\bar{y}_{ij} = \frac{1}{n_{ij}}\sum_{k=1}^{n_{ij}} y_{ijk} \quad (8)$$
$$\bar{y}_{i\cdot} = \frac{1}{b}\sum_{j=1}^{b} \bar{y}_{ij} \quad (9)$$
$$\bar{y}_{\cdot j} = \frac{1}{a}\sum_{i=1}^{a} \bar{y}_{ij} \quad (10)$$
$$n_{i\cdot} = \sum_{j=1}^{b} n_{ij} \quad (11)$$
$$n_{\cdot j} = \sum_{i=1}^{a} n_{ij} \quad (12)$$
Like the one-way ANOVA method, and as shown in Table 3, the two-way ANOVA technique first measures the total dispersion of the data by calculating the total sum of squares ($SST$), as defined in (13). Then, the total dispersion is decomposed into the part attributable to the first factor, noted $SSA$ (see (14)); the part attributable to the second factor, noted $SSB$ (see (15)); the part attributable to the interaction between the two factors, noted $SSAB$ (see (16)); and, finally, the part attributable to the residues, noted $SSR$ (see (17)). After calculating the variances of the factors, interaction, and residuals (referred to as mean squares in Table 3), statistical hypothesis tests, i.e., F-tests of the ratio of two variances, are performed, to assess whether each of the three variance shares is significantly greater than the residual variance. Under the assumptions of normality and homogeneity of the residuals, the F-test statistic follows a Fisher distribution with:
-
$a-1$ and $ab(n-1)$ degrees of freedom for the test related to factor $A$;
-
$b-1$ and $ab(n-1)$ degrees of freedom for the test related to factor $B$;
-
$(a-1)(b-1)$ and $ab(n-1)$ degrees of freedom for the test related to the interaction between the two factors, $A$ and $B$.
Finally, as in the previous section, and in a classical way, we must calculate the probability, under the null hypothesis H 0 , of observing such an F -value: this is the p -value. This p -value is compared to the chosen level of significance (generally set at 5%). If the p -value is lower than the significance level, we conclude that the effect is significant. If not, the conclusion is not that there is no effect, but that there is no evidence of an effect. It is possible, for example, that the sample sizes are too small to show a significant difference.
$$SST = \sum_{i=1}^{a}\sum_{j=1}^{b}\sum_{k=1}^{n_{ij}} \left(y_{ijk} - \bar{y}_{..}\right)^2 \quad (13)$$
$$SSA = bn\sum_{i=1}^{a}\left(\bar{y}_{i\cdot} - \bar{y}_{..}\right)^2 \quad (14)$$
$$SSB = an\sum_{j=1}^{b}\left(\bar{y}_{\cdot j} - \bar{y}_{..}\right)^2 \quad (15)$$
$$SSAB = n\sum_{i=1}^{a}\sum_{j=1}^{b}\left(\bar{y}_{ij} - \bar{y}_{..}\right)^2 - SSA - SSB \quad (16)$$
$$SSR = \sum_{i=1}^{a}\sum_{j=1}^{b}\sum_{k=1}^{n_{ij}}\left(y_{ijk} - \bar{y}_{ij}\right)^2 \quad (17)$$
Table 3. Key elements of two-way ANOVA calculations.
Source of Variation | Sum of Squares | Degrees of Freedom | Mean Squares | F-Value | p-Value
Factor $A$ | $SSA$ (attributable to factor $A$) | $a-1$ | $SSA/(a-1)$ | $F_A = \dfrac{SSA/(a-1)}{SSE/(ab(n-1))}$ | $p_A$
Factor $B$ | $SSB$ (attributable to factor $B$) | $b-1$ | $SSB/(b-1)$ | $F_B = \dfrac{SSB/(b-1)}{SSE/(ab(n-1))}$ | $p_B$
Interaction $AB$ | $SSAB$ (attributable to the interaction $AB$) | $(a-1)(b-1)$ | $SSAB/((a-1)(b-1))$ | $F_{AB} = \dfrac{SSAB/((a-1)(b-1))}{SSE/(ab(n-1))}$ | $p_{AB}$
Residues or error | $SSE$ (attributable to the residues) | $ab(n-1)$ | $SSE/(ab(n-1))$ | |
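The balanced two-way decomposition of Table 3 can be coded directly from Equations (13)–(17). The sketch below assumes a balanced design stored as an array of shape (a, b, n) and uses random demonstration data; it is an illustrative implementation under those assumptions, not the authors’ code.

```python
import numpy as np
from scipy import stats

def two_way_anova(y):
    """Balanced two-way fixed-effects ANOVA; y has shape (a, b, n)."""
    a, b, n = y.shape
    grand = y.mean()                       # overall mean, Eq. (7)
    mean_ij = y.mean(axis=2)               # cell means, Eq. (8)
    mean_i = mean_ij.mean(axis=1)          # marginal means of factor A, Eq. (9)
    mean_j = mean_ij.mean(axis=0)          # marginal means of factor B, Eq. (10)
    ssa = b * n * np.sum((mean_i - grand) ** 2)              # Eq. (14)
    ssb = a * n * np.sum((mean_j - grand) ** 2)              # Eq. (15)
    ssab = n * np.sum((mean_ij - grand) ** 2) - ssa - ssb    # Eq. (16)
    sse = np.sum((y - mean_ij[:, :, None]) ** 2)             # Eq. (17)
    df_e = a * b * (n - 1)
    msa, msb = ssa / (a - 1), ssb / (b - 1)
    msab, mse = ssab / ((a - 1) * (b - 1)), sse / df_e
    return {"MSA": msa, "MSB": msb, "MSAB": msab, "MSE": mse,
            "p_A": stats.f.sf(msa / mse, a - 1, df_e),
            "p_B": stats.f.sf(msb / mse, b - 1, df_e),
            "p_AB": stats.f.sf(msab / mse, (a - 1) * (b - 1), df_e)}

rng = np.random.default_rng(0)
print(two_way_anova(rng.normal(size=(3, 4, 5))))   # 3 levels of A, 4 of B, 5 replicates
```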

2.3. K-Means Clustering Technique

K-means is an unsupervised clustering technique that partitions the data into K clusters. Each data point is assigned to the cluster whose center minimizes the Euclidean distance to that point. The centers are then updated, and the data points are reassigned to the closest center over several iterations. The iterations are repeated until the centers no longer move or the data points no longer change the cluster to which they are assigned [55,56]. Algorithm 2 describes the K-means clustering technique.
Algorithm 2. K-means algorithm
1: Initialize the number of clusters $K$
2: Choose the initial cluster centers
3: while the stopping criterion is not satisfied do
4:   Assign each data point to one cluster
5:   Update the center of each cluster
6: end while
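A compact NumPy sketch of Algorithm 2 follows, assuming Euclidean distance, random initial centers, and center stability as the stopping criterion; in practice a library implementation such as scikit-learn’s KMeans could be used instead.

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Minimal K-means clustering on data X of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # step 2: initial centers
    for _ in range(max_iter):
        # step 4: assign each point to the nearest center (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # step 5: update each center as the mean of its assigned points
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):   # stopping criterion: centers stable
            break
        centers = new_centers
    return labels, centers

# Example: two clusters of synthetic pixel intensities
X = np.concatenate([np.random.normal(0.2, 0.05, (50, 1)),
                    np.random.normal(0.8, 0.05, (50, 1))])
labels, centers = kmeans(X, k=2)
print(centers.ravel())
```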

3. Proposed Segmentation Method

The proposed segmentation method consists of three main stages. In the first stage, a pre-processing of the image is performed, to eliminate any unnecessary data. Then, PSO is applied, to identify the ROI, and K-means is used, to segment the ROI. An illustrative diagram is shown in Figure 7, and a simple flowchart of the proposed method is presented in Figure 8, where the detection and localization of the tumor are marked by the red square.

3.1. Image Pre-Processing

Image pre-processing is vital for removing noisy, inconsistent, incomplete, and irrelevant data [57]. Several approaches can improve image quality during transmission or storage [58]; we can mention, for example, those based on the concept of compressed sensing, where image enhancement is carried out during acquisition [59,60,61,62,63]. An alternative approach is to perform simple and efficient de-noising (close to optimality) using first- or second-generation wavelets [64,65,66].
Skull-stripping is a crucial step that eliminates all non-brain tissue, such as skull bone, fat, and skin, from the brain image [67]. To this end, several approaches have been developed [68,69]. The first step of the proposed technique is a skull-stripping procedure, in which the gray-scale image is converted into a binary image, using a fixed threshold. Then, two morphological operations—filling and erosion—are applied to the binary image. Finally, the original image is masked by the obtained binary image; the generated image is the skull-stripped image (see Figure 9) [60,70,71,72].
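This pre-processing chain can be sketched with standard morphological tools; the threshold value, the number of erosion iterations, and the largest-component heuristic below are illustrative assumptions rather than the exact settings used in the paper.

```python
import numpy as np
from scipy import ndimage

def skull_strip(image, threshold=0.3, erosion_iters=5):
    """Rough skull-stripping: threshold, fill holes, erode, keep largest component, mask."""
    binary = image > threshold * image.max()              # gray-scale -> binary (fixed threshold)
    filled = ndimage.binary_fill_holes(binary)            # morphological filling
    eroded = ndimage.binary_erosion(filled, iterations=erosion_iters)  # erosion detaches the skull rim
    labeled, n = ndimage.label(eroded)
    if n > 0:
        sizes = ndimage.sum(eroded, labeled, range(1, n + 1))
        mask = labeled == (np.argmax(sizes) + 1)           # keep the largest connected region (brain)
    else:
        mask = eroded
    return image * mask                                    # masked (skull-stripped) image

# Usage with a synthetic 2-D slice
slice_2d = np.random.rand(256, 256)
brain_only = skull_strip(slice_2d)
```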

3.2. ROI Detection

PSO is a powerful metaheuristic technique, designed to solve complex optimization problems. The proposed algorithm uses PSO to search for the optimal block containing the tumor within the MRI image. The principal concept of this stage is described as follows.
Firstly, several candidate blocks are randomly initialized within the MRI image, and each candidate block is evaluated with a fitness function, to determine the best block (see Figure 10). The proposed algorithm’s fitness function is based on the two-way ANOVA, used here to analyze variance. The data analyzed are the candidate block and the MRI image of a disease-free brain. The fitness function is given by (18).
$$fitness = \frac{MSA}{MSE} + \frac{MSB}{MSE} \quad (18)$$
MSA and MSB represent the variability among group means, and MSE represents the within-group variability, each divided by its degrees of freedom; high values of MSA and MSB therefore correspond to significant variability between the candidate block and the disease-free brain image, which means that the block contains abnormal tissue. In other words, a large variability between the candidate block and the disease-free brain image indicates that this candidate block contains a tumor. Therefore, the candidate block corresponding to the maximum value of the fitness function is considered the best block found. Once the fitness function has been evaluated and the global and individual best blocks have been updated, the positions of the candidate blocks are also updated, using (1) and (2). The fitness evaluation and block-updating processes are repeated until the maximum number of iterations is reached. As some solutions can be evaluated more than once in a metaheuristic technique, the fitness value and the position of each candidate solution are stored in a matrix, to reduce the computational time: if the PSO algorithm attempts to evaluate an already-evaluated solution, its fitness value is taken directly from the matrix. This idea has been shown to decrease the computational complexity of block-matching problems [69].
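For illustration, the fitness of Equation (18) can be computed from a two-way decomposition of the candidate block and the corresponding disease-free block. The factor layout below (factor A = image identity, factor B = block row, with the pixels of a row acting as replicates) is a hypothetical arrangement chosen for this sketch, since the exact factor assignment is not detailed here; the synthetic blocks are demonstration data only.

```python
import numpy as np

def block_fitness(candidate_block, healthy_block):
    """Fitness of Eq. (18): MSA/MSE + MSB/MSE from a two-way ANOVA.
    Hypothetical layout: factor A = image (candidate vs. healthy),
    factor B = block row; pixels within a row act as replicates."""
    y = np.stack([candidate_block, healthy_block])   # shape (a=2, b=rows, n=cols)
    a, b, n = y.shape
    grand = y.mean()
    mean_ij = y.mean(axis=2)
    mean_i = mean_ij.mean(axis=1)
    mean_j = mean_ij.mean(axis=0)
    ssa = b * n * np.sum((mean_i - grand) ** 2)
    ssb = a * n * np.sum((mean_j - grand) ** 2)
    sse = np.sum((y - mean_ij[:, :, None]) ** 2)
    msa = ssa / (a - 1)
    msb = ssb / (b - 1)
    mse = sse / (a * b * (n - 1))
    return msa / mse + msb / mse

# A synthetic bright "lesion" block scores much higher than a near-identical block
healthy = np.random.normal(0.4, 0.05, (32, 32))
lesion = healthy + 0.5 * (np.hypot(*np.mgrid[-16:16, -16:16]) < 8)
print(block_fitness(lesion, healthy), block_fitness(healthy + 0.01, healthy))
```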

3.3. Tumor Segmentation

For tumor segmentation, the K-means clustering technique is applied to the global best block, found in the ROI detection stage, containing the tumor. Because we classify the blocks as tumor or non-tumor, the K-means method uses two clusters. Figure 11 illustrates the identified ROI and its segmentation using K-means.
The segmentation results are then evaluated, by measuring the similarity or dissimilarity between them and the Ground Truth. Figure 12 illustrates the segmentation result of our approach and the Ground Truth.
Over the years, several similarity and dissimilarity metrics have been formulated and reported in the literature. To evaluate our algorithm, we used the following metrics [73]:
1.
Dice similarity coefficient:
$$Dice = \frac{2\,TP}{2\,TP + FP + FN}$$
2.
Jaccard distance:
$$Jaccard = \frac{TP}{TP + FP + FN}$$
3.
Correlation coefficient:
$$r = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\frac{\left(I_s(i,j) - \bar{I_s}\right)\left(I_g(i,j) - \bar{I_g}\right)}{\sigma_{I_s}\,\sigma_{I_g}}$$
4.
Root mean squared error (RMSE):
$$RMSE = \sqrt{\frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_s(i,j) - I_g(i,j)\right)^2}$$
where $TP$ is the number of true positives, $FP$ the number of false positives, $FN$ the number of false negatives, $I_s$ the image segmented with our technique, $I_g$ the Ground Truth image, and $M$ and $N$ the image dimensions.
As the Dice coefficient and Jaccard distance are two metrics of similarity, a powerful method should maximize these criteria. The correlation criterion is also a similarity metric, and varies between −1 and +1; the perfect positive correlation is achieved when the coefficient equals +1. RMSE is, on the other hand, a dissimilarity metric; the minimum value of RMSE is indicative of a robust segmentation technique [74].
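The four metrics above can be computed directly from the binary segmentation and the Ground Truth mask, as in the following sketch; the function and variable names are illustrative, and the small rectangular masks are toy data.

```python
import numpy as np

def segmentation_metrics(seg, gt):
    """Dice, Jaccard, correlation coefficient, and RMSE between two binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    tp = np.sum(seg & gt)     # true positives
    fp = np.sum(seg & ~gt)    # false positives
    fn = np.sum(~seg & gt)    # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    jaccard = tp / (tp + fp + fn)
    s, g = seg.astype(float), gt.astype(float)
    corr = np.mean((s - s.mean()) * (g - g.mean())) / (s.std() * g.std())
    rmse = np.sqrt(np.mean((s - g) ** 2))
    return dice, jaccard, corr, rmse

# Example on small overlapping masks
seg = np.zeros((64, 64), bool); seg[20:40, 20:40] = True
gt = np.zeros((64, 64), bool); gt[22:42, 22:42] = True
print(segmentation_metrics(seg, gt))
```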

4. Experimental Analysis

The proposed brain tumor segmentation approach was evaluated using the private KICA database [75] and the challenging BraTS 2015 database [16]. This section presents the specifications of each database. Furthermore, we analyze the results obtained with our proposed approach, and compare them with those of other classical approaches.

4.1. Experiments on the KICA Database

4.1.1. Database Description

The proposed algorithm was evaluated on various MRI images and Ground Truths obtained from a database provided by the Kouba imaging center—Algiers (KICA) [75]—which included the brain tumor images and the corresponding Ground Truth images (complete tumor areas). In total, 223 subjects contributed to the constitution of the database (120 for training/103 for testing). The disease-free brain images used were in Digital Imaging and Communications in Medicine (DICOM) format, and were selected in the same sections as the brain tumor images. We selected several MRI images, to perform our different experiments. The presented work was a collaboration between several institutions from Algeria and France. The Ground Truth MRI models were defined with the help of radiologists, neurologists, and biomedical engineers from the Kouba imaging center in Algiers (Algeria) and the Hospital of Tours (France).

4.1.2. Experiments

This section is divided into two parts. The first part illustrates the essential role of ANOVA in our algorithm. The ANOVA-based fitness function results were compared to those obtained with the SAD fitness function. Following the same concept of variability explained above, the candidate block that gave the maximum SAD value was considered the tumor block. In the second part of this work, we compared the experimental results of our algorithm with several well-known segmentation techniques, such as fuzzy C-means (FCM), K-means, Otsu thresholding, local thresholding, and watershed segmentation.
A.
Experiment #1
The robustness of any metaheuristic-based technique depends on the fitness function used: the latter is a decisive parameter in evaluating all the candidate solutions and determining the global optimal solution. In our segmentation technique, the fitness function measured the variability, or difference, between the candidate blocks in the brain tumor image and the corresponding blocks in the disease-free image; a significant difference between them indicated the existence of a tumor. We resorted to the statistical ANOVA method, using the fitness function expressed in (18). As this difference could also be measured with any dissimilarity criterion, we replaced the fitness function of (18) with the SAD criterion, in order to prove the efficiency of our ANOVA-based fitness function. The SAD metric, also called the L1 norm or Manhattan norm, is a dissimilarity criterion used to compare the intensities of two blocks or images [74]; it is defined as follows:
$$SAD = \sum_{i=1}^{M}\sum_{j=1}^{N}\left|I_1(i,j) - I_2(i,j)\right|$$
where $I_1$ and $I_2$ represent the candidate block in the brain tumor image and the corresponding block in the disease-free image.
A high SAD value showed a substantial difference between the candidate block and the no-disease image, indicating the presence of a tumor. Therefore, the candidate block that gave the maximum SAD value was considered the tumor block. Figure 13 shows the original images before and after the pre-processing and segmentation procedures, where the used fitness function was based on ANOVA and the SAD criterion.
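A minimal sketch of the SAD-based alternative fitness follows, for comparison with the ANOVA-based fitness; the block contents are synthetic demonstration data.

```python
import numpy as np

def sad(block_tumor, block_healthy):
    """Sum of absolute differences (L1 / Manhattan norm) between two equally sized blocks."""
    return np.abs(block_tumor.astype(float) - block_healthy.astype(float)).sum()

# A block containing a bright lesion yields a larger SAD than a near-identical block
healthy = np.full((32, 32), 0.4)
lesion = healthy.copy(); lesion[10:20, 10:20] += 0.5
print(sad(lesion, healthy), sad(healthy + 0.01, healthy))
```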
Table 4 shows the Dice, Jaccard distance, correlation, and RMSE values obtained with the two fitness functions tested in our segmentation method. From Figure 13, we can observe that using SAD as a fitness function gave irrelevant segmentation results in several cases, whereas the segmentation results with ANOVA were close to the Ground Truths for most of the tested images. In addition, we can observe from Table 4 that the segmentation results with ANOVA were higher, and close to 100%, in most of the tested cases for the Dice similarity coefficient, Jaccard distance, and correlation coefficient, and close to 0% for the RMSE metric. In contrast, SAD gave some promising results, but the majority were not satisfactory. In summary, it is clear that using ANOVA as a fitness function yielded much better results than those obtained with the SAD fitness function, demonstrating the robustness and effectiveness of the technique in evaluating candidate blocks.
B.
Experiment #2
We assessed our algorithm’s effectiveness against several well-known segmentation techniques in the second experiment. The segmented images obtained with our method, and other segmentation methods, are shown in Figure 14. In addition, Table 5, Table 6, Table 7 and Table 8 highlight the statistical results and comparisons of our method and other related methods, using the following metrics: Dice similarity coefficient; Jaccard distance; correlation coefficient; and RMSE metric.
It can be observed, from the qualitative and quantitative comparisons, that the proposed method segments the brain tumor very efficiently, in contrast to the classical methods, whose segmentations are not relevant or not satisfactory in the majority of cases. The power of our method resides primarily in the following point: our algorithm first determines the ROI and then segments only this region, making the brain tumor segmentation results free of any irrelevant information, such as skull bone. With other segmentation techniques, such as those based on Otsu or local thresholding, some extraneous information remains in the segmented image.
To complement the above results, Figure 15 further highlights the robustness and effectiveness of our proposed method. As shown in Figure 15a–d, the results of our algorithm (blue bars in each figure) are highest in Dice, Jaccard, and correlation, and lowest in RMSE, compared to other competing techniques, which proves the efficiency and robustness of our method. Thus, all the experimental results demonstrate that the proposed method outperforms all other competing methods.

4.2. Experiments on the BraTS 2015 Database

To ensure that our previous findings could be generalized, we used an external and very challenging dataset to test the performance of the PSO–ANOVA-based segmentation approach.

4.2.1. Database Description

The Medical Image Computing and Computer Assisted Intervention (MICCAI) conference provided the BraTS database [16]; it is the official database for the conference’s brain tumor MRI segmentation challenge, and it is also commonly used by researchers working on brain tumor MRI segmentation. The BraTS database has been updated annually since the challenge was launched in 2012.
The dataset used for segmenting brain tumor images was BraTS 2015. It consisted of 54 low-grade glioma (LGG) and 220 high-grade glioma (HGG) MRI scans. These 274 images formed the training set, while a further 110 were reserved for the testing set. Each MRI volume had a size of 240 × 240 × 155 voxels. The four MRI modalities were T1, T1c, T2, and T2-FLAIR. Four intra-tumoral classes (edema, enhancing tumor, non-enhancing tumor, and necrosis) were provided as the segmented ‘Ground Truth’. The Dice similarity coefficient was used in this experiment to evaluate and compare the segmentation results with some recently published state-of-the-art methods. The three tumor regions (complete, core, and enhancing tumors) were considered in computing the performance measure. The complete (or whole) tumor region comprised the enhancing and non-enhancing cores, edema, and necrosis. The core region comprised the necrosis and the enhancing and non-enhancing cores. The enhancing tumor referred only to the enhancing region. Figure 16 highlights the three tumor regions. The Dice score was determined by superimposing the predicted output image over the manually segmented label (i.e., Ground Truth).

4.2.2. Experiments

As with the KICA dataset, we conducted visual and quantitative experiments on the BraTS 2015 dataset, to assess the performance of our proposed brain tumor segmentation approach. A subset of four T2-FLAIR images, representing the most clinically encountered tumors, was chosen to illustrate the visual assessment; Figure 17 shows the results of our proposed brain tumor segmentation approach on HGG and LGG MRI images. According to the WHO, grades I and II are considered low-grade glioma (LGG), while grades III and IV are highly malignant and are called high-grade glioma (HGG). We note that the segmentation considered in this demonstration is based on complete tumors, which include necrosis, edema, and the enhancing and non-enhancing cores. Furthermore, Table 9 presents our approach’s performance, using the Dice similarity coefficient as a statistical criterion, and compares our results against those of recently published approaches on the challenging BraTS 2015 dataset. For comparison against the state-of-the-art, we report in Table 9 the segmentation results for the three tumor regions (i.e., complete, core, and enhancing tumors).
From the visual and statistical evaluations, we can observe that our approach gave relevant segmentation results in most tested cases, and that the segmentation results were close to the Ground Truth images. Moreover, we can observe from Table 9 that the segmentation results were high (mean ≈ 87%) for the three tumor regions, using the Dice similarity coefficient. For the complete region, our approach outperformed all compared state-of-the-art approaches, such as CNN [76,77,78] and ILinear [39], while producing equivalent and competitive results for the core and enhancing tumor regions. Finally, as these findings were similar to the results obtained with the KICA dataset, we can deduce that our approach is a robust and effective technique.
As the PSO–ANOVA results were largely consistent across the private and public datasets, we can conclude that the performance of our brain tumor segmentation approach is satisfactory, and we are confident that it can be implemented in real-world applications, to help doctors make clinical decisions.

5. Conclusions

In this paper, a new method for brain tumor segmentation has been proposed. This method consists of four steps:
-
The skull bone is precisely removed from the image, to exclude irrelevant data.
-
The particle swarm optimization (PSO) technique is then applied, to detect the region of interest (ROI) that contains the brain lesion.
-
The fitness function used to evaluate the candidate blocks is based on a two-way fixed-effects analysis of variance (ANOVA).
-
Finally, in the last step of the method, the K-means segmentation method is used in the lesion block, to classify it into two possible categories: tumor and non-tumor.
An evaluation study was performed using extensive magnetic resonance imaging (MRI) databases; a visual assessment and four statistical measures were used to evaluate the performance of the tumor segmentation. Images representing the most clinically encountered tumors were used for the visual assessment. The results show that competing approaches do not provide usable segmentation results in some cases, whereas our approach is a promising solution for clinical decision support. Statistically, the proposed method gives a tumor segmentation accuracy of 96%, outperforming other state-of-the-art methods. Moreover, comparing our method with the careful manual segmentation performed by experts to obtain the so-called ‘Ground Truth’ does not show significant differences; indeed, the difference is about 1%.
The different results, and the comparison with state-of-the-art methods, show that our approach can be a useful tool for brain cancer detection, diagnosis, and radiotherapy treatment planning. The future direction of our research in brain tumor segmentation must address the limitations of the unsupervised approach by: (1) combining PSO, ANOVA, and a CNN model [84,85,86,87,88,89,90]; (2) using generative adversarial networks [91,92,93,94] to pre-process, colorize, correct, and enhance images before presenting them to the segmentation algorithm.

Author Contributions

Conceptualization, K.E.K. and A.O.; formal analysis, A.B. (Ayache Bouakaz); funding acquisition, A.B. (Amir Benzaoui); investigation, N.A. and S.J.; methodology, A.O. and K.E.K.; project administration, A.B. (Ayache Bouakaz) and A.O.; supervision, K.E.K. and A.O.; validation, A.B. (Amir Benzaoui), M.H. and A.O.; writing—original draft preparation, N.A. and M.H.; writing—review and editing, A.B. (Amir Benzaoui), S.J. and A.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data can be shared upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations and acronyms are used in this manuscript:
ANOVA   Analysis of Variance
BraTS   Multimodal Brain Tumor Segmentation Challenge
CNN     Convolutional Neural Network
CSF     Cerebrospinal Fluid
DICOM   Digital Imaging and Communications in Medicine
FCM     Fuzzy C-Means
FLAIR   Fluid-Attenuated Inversion Recovery
FN      False Negatives
FP      False Positives
GA      Genetic Algorithm
Gd      Gadolinium-based Contrast Agent
HGG     High-Grade Glioma
LGG     Low-Grade Glioma
KICA    Kouba Imaging Center (Algiers)
MRI     Magnetic Resonance Imaging
RMSE    Root Mean Squared Error
PSO     Particle Swarm Optimization
ROI     Region of Interest
SVM     Support Vector Machine
SA      Simulated Annealing
SAD     Sum of Absolute Differences
SSE     Error Sum of Squares
SSF     Factorial Sum of Squares
SSR     Residual Sum of Squares
SST     Total Sum of Squares
T1      T1-Weighted Imaging Sequence
T2      T2-Weighted Imaging Sequence
T1c     T1-Weighted Contrast-Enhanced Sequence
TE      Echo Time
TR      Repetition Time
TP      True Positives
WHO     World Health Organization

References

  1. Park, J.H.; de Lomana, A.L.G.; Marzese, D.M.; Juarez, T.; Feroze, A.; Hothi, P.; Cobbs, C.; Patel, A.P.; Kesari, S.; Huang, S.; et al. A Systems Approach to Brain Tumor Treatment. Cancers 2021, 13, 3152. [Google Scholar] [CrossRef] [PubMed]
  2. Sandler, C.X.; Matsuyama, M.; Jones, T.L.; Bashford, J.; Langbeker, D.; Hayes, S.C. Physical activity and exercise in adults diagnosed with primary brain cancer: A systematic review. J. Neuro-Oncol. 2021, 153, 1–14. [Google Scholar] [CrossRef] [PubMed]
  3. Kanmounye, U.S.; Karekezi, C.; Nyalundja, A.S.; Awad, A.K.; Laeke, T.; Balogun, J.A. Adult brain tumors in Sub–Saharan Africa: A scoping review. Neuro-Oncol. 2022, noac098. [Google Scholar] [CrossRef] [PubMed]
  4. Ali, S.; Li, J.; Pei, Y.; Khurram, R.; Rehman, K.U.; Mahmood, T. A Comprehensive Survey on Brain Tumor Diagnosis Using Deep Learning and Emerging Hybrid Techniques with Multi–modal MR Image. Arch. Computat. Methods Eng. 2022. [Google Scholar] [CrossRef]
  5. Bai, X.; Zhang, Y.; Liu, H.; Wang, Y. Intuitionistic Center–Free FCM Clustering for MR Brain Image Segmentation. IEEE J. Biomed. Health Inform. 2019, 23, 2039–2051. [Google Scholar] [CrossRef]
  6. Li, S.; Liu, J.; Song, Z. Brain tumor segmentation based on region of interest–aided localization and segmentation U–Net. Int. J. Mach. Learn. Cyber. 2022, 13, 2435–2445. [Google Scholar] [CrossRef]
  7. Di Ianni, T.; Airan, R.D. Deep–fUS: A Deep Learning Platform for Functional Ultrasound Imaging of the Brain Using Sparse Data. IEEE Trans. Med. Imaging 2022, 41, 1813–1825. [Google Scholar] [CrossRef]
  8. Sastry, R.; Bi, W.L.; Pieper, S.; Frisken, S.; Kapur, T.; Wells, W.; Golby, A.J. Applications of Ultrasound in the Resection of Brain Tumors. J. Neuroimaging 2017, 27, 5–15. [Google Scholar] [CrossRef]
  9. Guetbi, C.; Kouamé, D.; Ouahabi, A.; Remenieras, J.P. New emboli detection methods [Doppler ultrasound]. In Proceedings of the 1997 IEEE Ultrasonics Symposium Proceedings, An International Symposium (Cat. No.97CH36118), Toronto, ON, Canada, 5–8 October 1997; pp. 1119–1122. [Google Scholar]
  10. Girault, J.M.; Kouamé, D.; Ouahabi, A.; Patat, F. Estimation of the blood Doppler frequency shift by a time–varying parametric approach. Ultrasonics 2000, 38, 682–687. [Google Scholar] [CrossRef]
  11. Girault, J.M.; Ossant, F.; Ouahabi, A.; Kouame, D.; Patat, F. Time–varying autoregressive spectral estimation for ultrasound attenuation in tissue characterization. IEEE Trans. Ultrason Ferroelectr. Freq. Control 1998, 45, 650–659. [Google Scholar] [CrossRef] [Green Version]
  12. Girault, M.; Kouame, D.; Ouahabi, A.; Patat, F. Micro–emboli detection: An ultrasound Doppler signal processing viewpoint. IEEE Trans. Biomed. Eng. 2000, 47, 1431–1439. [Google Scholar] [CrossRef]
  13. Mesfin, F.B.; Al–Dhahir, M.A. Gliomas. Treasure Island (FL), 2022. Available online: https://www.ncbi.nlm.nih.gov/books/NBK441874/ (accessed on 5 June 2022).
  14. Available online: https://www.arcagy.org/infocancer/localisations/autres–types–de–cancers/tumeurs–cerebrales/formes–de–la–maladie/les–gliomes.html/ (accessed on 6 July 2022).
  15. WHO. Central Nervous System Tumours. In WHO Classification of Tumours, 5th ed.; WHO: Geneve, Switzerland, 2022; Volume 8. [Google Scholar]
  16. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans. Med. Imaging 2015, 34, 1993–2024. [Google Scholar] [CrossRef]
  17. Le, N.Q.K.; Hung, T.N.K.; Do, D.T.; Lam, L.H.T.; Dang, L.H.; Huynh, T.T. Radiomics–based machine learning model for efficiently classifying transcriptome subtypes in glioblastoma patients from MRI. Comput. Biol. Med. 2021, 132, 104320. [Google Scholar] [CrossRef]
  18. Lam, L.H.T.; Do, D.T.; Diep, D.T.N.; Nguyet, D.L.N.; Truong, Q.D.; Tri, T.T.; Thanh, H.N.; Le, N.Q.K. Molecular subtype classification of low–grade gliomas using magnetic resonance imaging–based radiomics and machine learning. NMR Biomed. 2022, e4792. [Google Scholar] [CrossRef]
  19. Yaseen, S.F.; Al–Araji, A.S.; Humaidi, A.J. Brain tumor segmentation and classification: A one–decade review. Int. J. Nonlinear Anal. Appl. 2022, 13, 1879–1891. [Google Scholar]
  20. Ouahabi, A. Image Denoising using Wavelets: Application in Medical Imaging. In Advances in Heuristic Signal Processing and Applications; Chatterjee, A., Nobahari, H., Siarry, P., Eds.; Springer: Basel, Switzerland, 2013; pp. 287–313. [Google Scholar]
  21. Razzak, I.; Imran, M.; Xu, G. Efficient Brain Tumor Segmentation with Multiscale Two–Pathway Group Conventional Neural Networks. IEEE J. Biomed. Health Inform. 2019, 23, 1911–1919. [Google Scholar] [CrossRef]
  22. Zehani, S.; Ouahabi, A.; Oussalah, M.; Mimi, M.; Taleb–Ahmed, A. Bone microarchitecture characterization based on fractal analysis in spatial frequency domain imaging. Int. J. Imaging Syst. Technol. 2021, 31, 141–159. [Google Scholar] [CrossRef]
  23. Vishnuvarthanan, G.; Rajasekaran, M.P.; Subbaraj, P.; Vishnuvarthanan, A. An unsupervised learning method with a clustering approach for tumor identification and tissue segmentation in magnetic resonance brain images. Appl. Soft Comput. 2016, 38, 190–212. [Google Scholar] [CrossRef]
  24. Sujan, M.; Alam, N.; Abdullah, S.; Jahirul, M. A segmentation based automated system for brain tumor detection. Int. J. Comput. Appl. 2016, 153, 41–49. [Google Scholar] [CrossRef]
  25. Ilhan, U.; Ilhan, A. Brain tumor segmentation based on a new threshold approach. Procedia Comput. Sci. 2017, 120, 580–587. [Google Scholar] [CrossRef]
  26. Djeddi, M.; Ouahabi, A.; Batatia, H.; Basarab, A.; Kouame, D. Discrete wavelet transform for multifractal texture classification: Application to ultrasound imaging. In Proceedings of the 2010 IEEE International Conference on Image Processing (ICIP), Hong Kong, China, 26–29 September 2010; pp. 637–640. [Google Scholar]
  27. Deng, W.; Xiao, W.; Deng, H.; Liu, J. MRI brain tumor segmentation with region growing method based on the gradients and variances along and inside of the boundary curve. In Proceedings of the 2010 3rd International Conference on Biomedical Engineering and Informatics, Yantai, China, 16–18 October 2010; pp. 393–396. [Google Scholar]
  28. Węgliński, T.; Fabijańska, A. Brain tumor segmentation from MRI data sets using region growing approach. In Proceedings of the 2011 Perspective Technologies and Methods in MEMS Design, Polyana, Ukraine, 11–14 May 2011; pp. 185–188. [Google Scholar]
  29. Biratu, E.S.; Schwenker, F.; Debelee, T.G.; Kebede, S.R.; Negera, W.G.; Molla, H.T. Enhanced Region Growing for Brain Tumor MR Image Segmentation. J. Imaging 2021, 7, 22. [Google Scholar] [CrossRef]
  30. Khosravanian, A.; Rahmanimanesh, M.; Keshavarzi, P.; Mozaffari, S. Fast Level Set Method for Glioma Brain Tumor Segmentation Based on Super Pixel Fuzzy Clustering and Lattice Boltzmann Method. Comput. Methods Programs Biomed. 2021, 198, 105809. [Google Scholar] [CrossRef]
  31. Hamiane, M.; Saeed, F. SVM Classification of MRI Brain Images for Computer–Assisted Diagnosis. Int. J. Electr. Comput. Eng. 2017, 7, 2555–2564. [Google Scholar] [CrossRef]
  32. Zhang, N.; Ruan, S.; Lebonvallet, S.; Liao, Q.; Zhu, Y. Kernel feature selection to fuse multi–spectral MRI images for brain tumor segmentation. Comput. Vis. Image Underst. 2011, 115, 256–269. [Google Scholar] [CrossRef]
  33. Koley, S.; Sadhu, A.K.; Mitra, P.; Chakraborty, B.; Chakraborty, C. Delineation and diagnosis of brain tumors from post contrast T1–weighted MR images using rough granular computing and random forest. Appl. Soft Comput. 2016, 41, 453–465. [Google Scholar] [CrossRef]
  34. Srinivasu, P.N.; SivaSai, J.G.; Ijaz, M.F.; Bhoi, A.K.; Kim, W.; Kang, J.J. Classification of Skin Disease Using Deep Learning Neural Networks with MobileNet V2 and LSTM. Sensors 2021, 21, 2852. [Google Scholar] [CrossRef]
  35. Ijaz, M.F.; Attique, M.; Son, Y. Data–Driven Cervical Cancer Prediction Model with Outlier Detection and Over–Sampling Methods. Sensors 2020, 20, 2809. [Google Scholar] [CrossRef]
  36. Ali, F.; El–Sappagh, S.; Islam, S.M.R.; Kwak, D.; Ali, A.; Imran, A.; Kwak, K.S. A Smart Healthcare Monitoring System for Heart Disease Prediction Based On Ensemble Deep Learning and Feature Fusion. Inf. Fusion 2020, 63, 208–222. [Google Scholar] [CrossRef]
  37. Havaei, M.; Davy, A.; Warde–Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Pal, C.; Jodoin, P.M.; Larochelle, H. Brain tumor segmentation with Deep Neural Networks. Med. Image Anal. 2017, 35, 18–31. [Google Scholar] [CrossRef]
  38. Ma, C.; Luo, G.; Wang, K. Concatenated and connected random forests with multiscale patch driven active contour model for automated brain tumor segmentation of MR images. IEEE Trans. Med. Imaging 2018, 37, 1943–1954. [Google Scholar] [CrossRef]
  39. Hussain, S.; Anwar, S.M. Segmentation of Glioma Tumors in Brain Using Deep Convolutional Neural Network. Neurocomputing 2018, 282, 248–261. [Google Scholar] [CrossRef] [Green Version]
  40. Li, Q.; Yu, Z.; Wang, Y.; Zheng, H. TumorGAN: A Multi–Modal Data Augmentation Framework for Brain Tumor Segmentation. Sensors 2020, 20, 4203. [Google Scholar] [CrossRef] [PubMed]
  41. Arora, A.; Jayal, A.; Gupta, M.; Mittal, P.; Satapathy, S.C. Brain Tumor Segmentation of MRI Images Using Processed Image Driven U–Net Architecture. Computers 2021, 10, 139. [Google Scholar] [CrossRef]
  42. Nabizadeh, N.; Kubat, M. Automatic Tumor Segmentation in Single–spectral MRI Using A Texture–based and Contour–based Algorithm. Expert Syst. Appl. 2017, 77, 1–10. [Google Scholar] [CrossRef]
  43. Ben George, E.; Rosline, G.; Rajesh, D. Brain tumor segmentation using Cuckoo search optimization for magnetic resonance images. In Proceedings of the 2015 IEEE 8th GCC Conference & Exhibition, Muscat, Oman, 1–4 February 2015; pp. 1–6. [Google Scholar]
  44. Karnan, M.; Logheshwari, T. Improved implementation of brain MRI image segmentation using ant colony system. In Proceedings of the 2010 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), Coimbatore, India, 28–29 December 2010; pp. 1–4. [Google Scholar]
  45. Larson, M.G. Analysis of variance. Circulation 2008, 117, 115–121. [Google Scholar] [CrossRef] [PubMed]
  46. Sun, H.; Wang, W. A new algorithm for unsupervised image segmentation based on D–MRF model and ANOVA. In Proceedings of the 2009 IEEE International Conference on Network Infrastructure and Digital Content (IC–NIDC), Beijing, China, 6–8 November 2009; pp. 754–758. [Google Scholar]
  47. Farshi, T.R.; Drake, J.H.; Özcan, E. A multimodal particle swarm optimization–based approach for image segmentation. Expert Syst. Appl. 2020, 149, 113233. [Google Scholar] [CrossRef]
  48. Bonabeau, E.; Dorigo, M.; Theraulaz, G. Swarm intelligence: From natural to artificial systems. Connect. Sci. 2002, 14, 163–164. [Google Scholar]
  49. Eberhart, R.C.; Kennedy, J. Particle Swarm Optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks (ICNN), Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  50. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  51. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  52. Salleh, M.N.M.; Hussain, K.; Cheng, S.; Shi, Y.; Muhammad, A.; Ullah, G.; Naseem, R. Exploration and Exploitation Measurement in Swarm–Based Metaheuristic Algorithms: An Empirical Analysis. In Advances in Intelligent Systems and Computing; Ghazali, R., Deris, M.M., Nawi, N.M., Abawajy, J.H., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 24–32. [Google Scholar]
  53. Boussaid, I.; Lepagnot, J.; Siarry, P. A survey on optimization metaheuristics. Inf. Sci. 2013, 237, 82–117. [Google Scholar] [CrossRef]
  54. Thangaraj, R.; Pant, M.; Abraham, A.; Bouvry, P. Particle swarm optimization: Hybridization perspectives and experimental illustrations. Appl. Math. Comput. 2011, 217, 5208–5226. [Google Scholar] [CrossRef]
  55. Abdulraqeb, A.R.A.; Al–Haidri, W.A.; Sushkova, L.T. A novel segmentation algorithm for MRI brain tumor images. In Proceedings of the 2018 Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT), Yekaterinburg, Russia, 7–8 May 2018; pp. 1–4. [Google Scholar]
  56. Dhanve, V.; Kumar, M. Detection of brain tumor using k–means segmentation based on object labeling algorithm. In Proceedings of the 2017 IEEE International Conference on Power, Control, Signals and Instrumentation Engineering (ICPCSI), Chennai, India, 21–22 September 2017; pp. 944–951. [Google Scholar]
  57. Girault, J.M.; Kouame, D.; Ouahabi, A. Analytical formulation of the fractal dimension of filtered stochastic signals. Signal Processing 2010, 90, 2690–2697. [Google Scholar] [CrossRef] [Green Version]
  58. Ferroukhi, M.; Ouahabi, A.; Attari, M.; Habchi, Y.; Taleb–Ahmed, A. Medical video coding based on 2nd–generation wavelets: Performance evaluation. Electronics 2019, 8, 88. [Google Scholar] [CrossRef]
  59. Mahdaoui, A.E.; Ouahabi, A.; Moulay, M.S. Image Denoising Using a Compressive Sensing Approach Based on Regularization Constraints. Sensors 2022, 22, 2199. [Google Scholar] [CrossRef]
  60. Haneche, H.; Ouahabi, A.; Boudraa, B. New mobile communication system design for Rayleigh environments based on compressed sensing–source coding. IET Commun. 2019, 13, 2375–2385. [Google Scholar] [CrossRef]
  61. Haneche, H.; Boudraa, B.; Ouahabi, A. A new way to enhance speech signal based on compressed sensing. Measurement 2020, 151, 107–117. [Google Scholar] [CrossRef]
  62. Haneche, H.; Ouahabi, A.; Boudraa, B. Compressed sensing–speech coding scheme for mobile communications. Circuits Syst. Signal Process. 2021, 40, 5106–5126. [Google Scholar] [CrossRef]
  63. Kim, J.; Wang, Q.; Zhang, S.; Yoon, S. Compressed Sensing–Based Super–Resolution Ultrasound Imaging for Faster Acquisition and High Quality Images. IEEE Trans. Biomed. Eng. 2021, 68, 3317–3326. [Google Scholar] [CrossRef]
  64. Ouahabi, A. A review of wavelet denoising in medical imaging. In Proceedings of the 2013 8th International Workshop on Systems, Signal Processing and their Applications (WoSSPA), Algiers, Algeria, 12–15 May 2013; pp. 19–26. [Google Scholar]
  65. Sidahmed, S.; Messali, Z.; Ouahabi, A.; Trépout, S.; Messaoudi, C.; Marco, S. Nonparametric Denoising Methods Based on Contourlet Transform with Sharp Frequency Localization: Application to Low Exposure Time Electron Microscopy Images. Entropy 2015, 17, 3461–3478. [Google Scholar]
  66. Ouahabi, A. Signal and Image Multiresolution Analysis, 1st ed.; ISTE–Wiley: London, UK, 2012. [Google Scholar]
  67. Demirhan, A.; Törü, M.; Güler, I. Segmentation of tumor and edema along with healthy tissues of brain using wavelets and neural networks. IEEE J. Biomed. Health Inform. 2015, 19, 1451–1458. [Google Scholar] [CrossRef]
  68. Smith, S.M. Fast robust automated brain extraction. Hum. Brain Mapp. 2002, 17, 143–155. [Google Scholar] [CrossRef]
  69. Iglesias, J.E.; Liu, C.Y.; Thompson, P.M.; Tu, Z. Robust brain extraction across datasets and comparison with publicly available methods. IEEE Trans. Med. Imaging 2011, 30, 1617–1634. [Google Scholar] [CrossRef]
  70. Ashburner, J.; Friston, K.J. Unified segmentation. Neuroimage 2005, 26, 839–851. [Google Scholar] [CrossRef]
  71. Vishnuvarthanan, A.; Rajasekaran, M.P.; Vishnuvarthanan, G.; Zhang, Y.; Thiyagarajan, A. An automated hybrid approach using clustering and nature inspired optimization technique for improved tumor and tissue segmentation in magnetic resonance brain images. Appl. Soft Comput. 2017, 57, 399–426. [Google Scholar] [CrossRef]
  72. Akkus, A.; Galimzianova, A.; Hoogi, A.; Rubin, D.L.; Erickson, B.J. Deep learning for brain MRI segmentation: State of the art and future directions. J. Digit Imaging 2017, 30, 449–459. [Google Scholar] [CrossRef]
  73. Cai, J.; Pan, W.D. On fast and accurate block–based motion estimation algorithms using particle swarm optimization. Inf. Sci. 2012, 197, 53–64. [Google Scholar] [CrossRef]
  74. Goshtasby, A. Image Registration: Principles, Tools and Methods; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  75. Center of Imaging of Kouba, Algeria Database. Available online: https://figshare.com/articles/brain_tumor_dataset/1512427 (accessed on 24 March 2022).
  76. Havaei, M.; Dutil, F.; Pal, C.; Larochelle, H.; Jodoin, P.M. A Convolutional Neural Network Approach to Brain Tumor Segmentation. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries; Springer: Cham, Switzerland, 2016; pp. 195–208. [Google Scholar]
  77. Pereira, S.; Pinto, A.; Alves, V.; Silva, C.A. Deep Convolutional Neural Networks for the Segmentation of Gliomas in Multisequence MRI. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries; Springer: Cham, Switzerland, 2016; pp. 131–143. [Google Scholar]
  78. Tseng, K.L.; Lin, Y.L.; Hsu, W.; Huang, C.Y. Joint Sequence Learning and Cross–Modality Convolution for 3D Biomedical Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  79. Iqbal, S.; Ghani, M.U.; Saba, T.; Rehman, A. Brain Tumor Segmentation in Multi–spectral MRI Using Convolutional Neural Networks (CNN). Microsc. Res. Technol. 2018, 81, 419–427. [Google Scholar] [CrossRef]
  80. Liu, D.; Zhang, H.; Zhao, M.; Yu, X.; Yao, S.; Zhou, W. Brain Tumor Segmentation Based on Dilated Convolution Refine Networks. In Proceedings of the 16th IEEE International Conference on Software Engineering Research, Management and Application, Kunming, China, 13–15 June 2018; pp. 113–120. [Google Scholar]
  81. Hu, K.; Deng, S.H. Brain Tumor Segmentation Using Multi–Cascaded Convolutional Neural Networks and Conditional Random Field. IEEE Access 2019, 7, 2615–2629. [Google Scholar] [CrossRef]
  82. Li, H.C.; Li, A.; Wang, M.H. A Novel End–to–end Brain Tumor Segmentation Method Using Improved Fully Convolutional Networks. Comput. Biol. Med. 2019, 108, 150–160. [Google Scholar] [CrossRef]
  83. Elmezain, M.; Mahmoud, A.; Mosa, D.T.; Said, W. Brain Tumor Segmentation Using Deep Capsule Network and Latent–Dynamic Conditional Random Fields. J. Imaging 2022, 8, 190. [Google Scholar] [CrossRef]
  84. Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Taleb–Ahmed, A. Past, present, and future of face recognition: A Review. Electronics 2020, 9, 1188. [Google Scholar] [CrossRef]
  85. Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Jacques, S. Multi–block color–binarized statistical images for single–sample face recognition. Sensors 2021, 21, 728. [Google Scholar] [CrossRef] [PubMed]
  86. El Morabit, S.; Rivenq, A.; Zighem, M.E.; Hadid, A.; Ouahabi, A.; Taleb–Ahmed, A. Automatic Pain Estimation from Facial Expressions: A Comparative Analysis Using Off–the–Shelf CNN Architectures. Electronics 2021, 10, 1926. [Google Scholar] [CrossRef]
  87. Khaldi, Y.; Benzaoui, A.; Ouahabi, A.; Jacques, S.; Taleb–Ahmed, A. Ear recognition based on deep unsupervised active learning. IEEE Sens. J. 2021, 21, 20704–20713. [Google Scholar] [CrossRef]
  88. Arbaoui, A.; Ouahabi, A.; Jacques, S.; Hamiane, M. Concrete Cracks Detection and Monitoring Using Deep Learning–Based Multiresolution Analysis. Electronics 2021, 10, 1772. [Google Scholar] [CrossRef]
  89. Arbaoui, A.; Ouahabi, A.; Jacques, S.; Hamiane, M. Wavelet–based multiresolution analysis coupled with deep learning to efficiently monitor cracks in concrete. Frat. Integrita Strutt. 2021, 58, 33–47. [Google Scholar] [CrossRef]
  90. Benlamoudi, A.; Bekhouche, S.E.; Korichi, M.; Bensid, K.; Ouahabi, A.; Hadid, A.; Taleb–Ahmed, A. Face Presentation Attack Detection Using Deep Background Subtraction. Sensors 2022, 22, 3760. [Google Scholar] [CrossRef]
  91. Souibgui, M.A.; Kessentini, Y. DE–GAN: A Conditional Generative Adversarial Network for Document Enhancement. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 1180–1191. [Google Scholar] [CrossRef]
  92. Gui, J.; Sun, Z.; Wen, Y.; Tao, T.; Ye, J. A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications. IEEE Trans. Knowl. Data Eng. 2021. [Google Scholar] [CrossRef]
  93. Khaldi, Y.; Benzaoui, A. A new framework for grayscale ear images recognition using generative adversarial networks under unconstrained conditions. Evol. Syst. 2021, 12, 923–934. [Google Scholar] [CrossRef]
  94. Creswell, A.; Bharath, A.A. Inverting the Generator of a Generative Adversarial Network. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1967–1974. [Google Scholar] [CrossRef] [Green Version]
Figure 1. (A) MRI of the brain: axial FLAIR section. A thin-walled cyst-like image (arrow) consistent with an ependymal cyst can be seen in the occipital extension of the left lateral ventricle. (B) Brain cross-section (illustration).
Figure 2. Example of 3 MRI sequences: T1-Weighted, T2-Weighted, and FLAIR.
Figure 3. Comparison between a T1-Weighted MRI sequence without a contrast agent (T1) and the same sequence with a contrast agent (T1c).
Figure 4. MRI sequences of brain tumors: FLAIR, T1, T1 contrasted by Gadolinium injection, T2, and Ground Truth superimposed on the FLAIR sequence.
Figure 5. Segmentation result (see Section 4): MRI sequences: T1, T2, T1c, FLAIR, Ground Truth, and segmentation.
Figure 6. Segmentation result (see Section 4): T1c, automatic detection of brain tumors, segmentation, and binarization.
Figure 7. Diagram of the proposed approach.
Figure 8. Flowchart of the proposed segmentation method.
Figure 9. Pre-processing step: (a) original brain image; (b) brain image after pre-processing.
Figure 10. ROI identification, using PSO and ANOVA: (a) candidate blocks (in red) after 5 iterations; (b) candidate blocks after 15 iterations; (c) candidate blocks after 35 iterations; (d) candidate blocks after 50 iterations.
Figure 11. ROI segmentation using K-means: (a) identified ROI; (b) ROI segmented with K-means.
Figure 12. Performance comparison between the proposed brain tumor segmentation and Ground Truth: (a) segmentation using our method; (b) Ground Truth.
Figure 13. The efficiency of brain tumor segmentation on the KICA dataset: comparison of ANOVA and SAD-based methods. (a) Original images. (b) Pre-processing. (c) Our method with ANOVA. (d) Our method with SAD.
Figure 14. The efficiency of brain tumor segmentation on the KICA dataset: comparison between the proposed ANOVA-based method and other well-known methods.
Figure 15. Comparison of segmentation results on the KICA dataset based on: (a) Dice similarity coefficient; (b) Jaccard distance; (c) correlation coefficient; (d) RMSE metric.
Figure 16. The three modalities of tumor regions: complete, core, and enhancing tumors.
Figure 17. The efficiency of our proposed brain tumor segmentation method on the BraTS 2015 dataset: (top row) original T2-FLAIR images; (bottom row) segmentation results of complete tumors.
Table 1. Analysis of basic MRI sequences in the context of brain tumors.
Tissue | T1-Weighted | T2-Weighted | FLAIR
White Matter | Light | Dark Gray | Dark Gray
Fat | Bright | Light | Light
CSF | Dark | Bright | Dark
Inflammation | Dark | Bright | Bright
Cortex | Gray | Light Gray | Light Gray
Table 4. The results of our segmentation method on the KICA dataset, using the Dice similarity coefficient, Jaccard distance, correlation coefficient, and RMSE metric with two fitness functions: ANOVA and SAD.
Images | Dice (ANOVA) | Dice (SAD) | Jaccard (ANOVA) | Jaccard (SAD) | Correlation (ANOVA) | Correlation (SAD) | RMSE (ANOVA) | RMSE (SAD)
Image 1 | 62.008% | NRS | 44.936% | NRS | 0.673064 | 0.475478 | 0.0006647 | 0.0010605
Image 2 | 78.997% | NRS | 65.285% | NRS | 0.790116 | 0.386167 | 0.0000158 | 0.0014934
Image 3 | 73.276% | 24.432% | 57.823% | 13.916% | 0.761069 | 0.532032 | 0.0003579 | 0.0005049
Image 4 | 83.265% | 78.242% | 71.329% | 64.26% | 0.870553 | 0.820469 | 0.0000657 | 0.0001091
Image 5 | 88.252% | 87.581% | 78.974% | 77.907% | 0.897922 | 0.888510 | 0.0000398 | 0.0000464
Image 6 | 87.844% | 85.748% | 78.323% | 75.051% | 0.891440 | 0.865518 | 0.0000243 | 0.0000361
Image 7 | 91.919% | 88.809% | 85.046% | 79.871% | 0.925806 | 0.903027 | 0.0000686 | 0.0001325
Image 8 | 94.535% | 93.593% | 89.637% | 87.959% | 0.952524 | 0.942536 | 0.0001223 | 0.0002402
Image 9 | 96.211% | 90.469% | 92.699% | 82.598% | 0.953663 | 0.66343 | 0.0000498 | 0.0004337
Image 10 | 95.154% | NRS | 90.756% | NRS | 0.950669 | 0.94286 | 0.0000414 | 0.0133088
NRS signifies "non-relevant segmentation". Dice and Jaccard values are percentages; the correlation coefficient ranges from −1 to +1.
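To make the comparison in Table 4 concrete, the sketch below shows, under simplifying assumptions, how the two block-level scores could be computed: the SAD between a candidate block and a reference block, and a two-way fixed-effects ANOVA decomposition in which the rows and columns of the block play the role of the two factors, yielding the total, factor, and residual sums of squares listed in the Abbreviations. The exact formulation used as the PSO fitness function is the one defined earlier in the paper; the helper names two_way_anova_block and sad are purely illustrative.

```python
import numpy as np

def two_way_anova_block(block: np.ndarray) -> dict:
    """Two-way fixed-effects ANOVA with one observation per cell, where the
    rows and columns of the block are the two factors (block must be at
    least 2 x 2)."""
    r, c = block.shape
    grand = block.mean()
    row_means = block.mean(axis=1)
    col_means = block.mean(axis=0)
    sst = ((block - grand) ** 2).sum()                 # total sum of squares
    ss_rows = c * ((row_means - grand) ** 2).sum()     # row-factor sum of squares
    ss_cols = r * ((col_means - grand) ** 2).sum()     # column-factor sum of squares
    sse = sst - ss_rows - ss_cols                      # residual sum of squares
    mse = sse / ((r - 1) * (c - 1))
    return {"SST": sst, "SS_rows": ss_rows, "SS_cols": ss_cols, "SSE": sse,
            "F_rows": (ss_rows / (r - 1)) / mse,
            "F_cols": (ss_cols / (c - 1)) / mse}

def sad(block: np.ndarray, reference: np.ndarray) -> float:
    """Sum of absolute differences between a candidate block and a reference."""
    return float(np.abs(block.astype(float) - reference.astype(float)).sum())

# Example on a noisy 8x8 block with a strong vertical (row-wise) intensity gradient.
rng = np.random.default_rng(0)
block = np.tile(np.arange(8.0), (8, 1)).T + rng.normal(scale=0.1, size=(8, 8))
print(two_way_anova_block(block)["F_rows"])            # large: strong row effect
print(sad(block, np.zeros_like(block)))
```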
Table 5. Comparison between the results of our segmentation method on the KICA dataset and some well-known segmentation techniques using the Dice similarity coefficient.
Images | Our Method | FCM | K-Means | Otsu Thresholding | Local Thresholding | Watershed Thresholding
Image 1 | 62.008% | 2.9106% | 6.858% | 0.044% | 0.303% | NRS
Image 2 | 78.997% | NRS | NRS | NRS | NRS | NRS
Image 3 | 73.276% | 1.127% | 6.690% | NRS | NRS | NRS
Image 4 | 83.265% | 25.948% | 24.823% | 5.750% | 0.906% | 8.297%
Image 5 | 88.252% | 38.213% | 38.137% | 6.130% | 20.762% | 7.738%
Image 6 | 87.844% | 42.716% | 35.910% | 4.683% | 4.088% | 5.357%
Image 7 | 91.919% | 44.635% | 40.291% | 10.921% | 35.751% | 13.968%
Image 8 | 94.535% | 50.052% | 79.763% | 25.940% | 11.712% | 27.638%
Image 9 | 96.211% | 1.476% | 56.53% | 25.038% | 23.014% | 25.643%
Image 10 | 95.154% | 0.810% | 48.957% | 18.878% | NRS | 28.530%
NRS signifies "non-relevant segmentation".
Table 6. Comparison between the results of our segmentation method on the KICA dataset and some well-known segmentation techniques using the Jaccard distance.
Images | Our Method | FCM | K-Means | Otsu Thresholding | Local Thresholding | Watershed Thresholding
Image 1 | 44.936% | 1.4768% | 3.551% | 0.022% | 0.151% | NRS
Image 2 | 65.285% | NRS | NRS | NRS | NRS | NRS
Image 3 | 57.823% | 0.566% | 3.461% | NRS | NRS | NRS
Image 4 | 71.329% | 14.908% | 14.17% | 2.96% | 0.455% | 4.328%
Image 5 | 78.974% | 23.619% | 23.561% | 3.161% | 11.583% | 4.025%
Image 6 | 78.323% | 27.159% | 21.885% | 2.398% | 2.087% | 2.752%
Image 7 | 85.046% | 28.729% | 25.228% | 5.776% | 21.766% | 7.508%
Image 8 | 89.637% | 33.38% | 66.339% | 14.903% | 6.220% | 16.035%
Image 9 | 92.699% | 0.7438% | 39.402% | 14.310% | 13.003% | 14.707%
Image 10 | 90.756% | 0.406% | 32.413% | 10.423% | NRS | 16.639%
NRS signifies "non-relevant segmentation".
Table 7. Comparison between the results of our segmentation method on the KICA dataset and some well-known segmentation techniques using the correlation coefficient.
Images | Our Method | FCM | K-Means | Otsu Thresholding | Local Thresholding | Watershed Thresholding
Image 1 | 0.673064 | 0.157563 | 0.293867 | 0.275414 | 0.196262 | 0.214095
Image 2 | 0.790116 | 0.239039 | 0.250315 | 0.219895 | 0.138782 | 0.160484
Image 3 | 0.761069 | 0.182389 | 0.253495 | 0.243609 | 0.129738 | 0.243366
Image 4 | 0.870553 | 0.289513 | 0.417104 | 0.159479 | 0.175391 | 0.192943
Image 5 | 0.897922 | 0.378542 | 0.561895 | 0.164128 | 0.249723 | 0.177235
Image 6 | 0.891440 | 0.502674 | 0.557390 | 0.135842 | 0.154660 | 0.142385
Image 7 | 0.925806 | 0.537026 | 0.560681 | 0.224682 | 0.356487 | 0.281344
Image 8 | 0.952524 | 0.474860 | 0.822597 | 0.328776 | 0.188492 | 0.3892
Image 9 | 0.966158 | 0.255578 | 0.593773 | 0.333411 | 0.236488 | 0.3951
Image 10 | 0.953663 | 0.300080 | 0.602117 | 0.271452 | 0.084133 | 0.3824
Table 8. Comparison between the results of our segmentation method on the KICA dataset and some well-known segmentation techniques using the RMSE metric.
Images | Our Method | FCM | K-Means | Otsu Thresholding | Local Thresholding | Watershed Thresholding
Image 1 | 0.0006647 | 0.0012972 | 0.0021345 | 0.0057836 | 0.0020789 | 0.0031144
Image 2 | 0.0000158 | 0.0000920 | 0.0005815 | 0.0040139 | 0.0001327 | 0.0011636
Image 3 | 0.0003579 | 0.0008458 | 0.0046264 | 0.0242370 | 0.0024103 | 0.0155236
Image 4 | 0.0000657 | 0.0005798 | 0.0013554 | 0.0915980 | 0.0010545 | 0.0589136
Image 5 | 0.0000399 | 0.0005223 | 0.0006572 | 0.0973590 | 0.0006525 | 0.0915326
Image 6 | 0.0000243 | 0.0002296 | 0.0003564 | 0.1246628 | 0.0005443 | 0.1213210
Image 7 | 0.0000686 | 0.0007301 | 0.0018533 | 0.0672948 | 0.0017184 | 0.0423277
Image 8 | 0.0001223 | 0.0049038 | 0.0004566 | 0.0636407 | 0.0068672 | 0.0722
Image 9 | 0.0000498 | 0.0073548 | 0.0018346 | 0.1033327 | 0.0065827 | 0.0887
Image 10 | 0.0000414 | 0.0060980 | 0.0040885 | 0.1220065 | 0.0062485 | 0.0533
Table 9. The results of our segmentation method on the BraTS 2015 dataset, using the Dice similarity coefficient applied on complete, core, and enhancing tumors, and comparison to the state-of-the-art.
Authors | Year | Methods | Dice (Complete) | Dice (Core) | Dice (Enhancing)
Havaei et al. [76] | 2016 | CNN (Two-Phase Patch-Wise Training Procedure) | 88% | 79% | 73%
Pereira et al. [77] | 2016 | CNN | 87% | 73% | 68%
Tseng et al. [78] | 2017 | CNN (Encoder-Decoder Architecture) | 85% | 68% | 68%
Hussain et al. [39] | 2018 | ILinear | 86% | 87% | 90%
Iqbal et al. [79] | 2018 | CNN (Sequential Multiple Neural Network Layers) | 87% | 86% | 79%
Liu et al. [80] | 2018 | CNN (ResNet-50) | 87% | 62% | 68%
Hu and Deng [81] | 2019 | MCCNN + CRFs | 87% | 76% | 75%
Li et al. [82] | 2019 | CNN (Modified U-Net Architecture) | 89% | 73% | 73%
Elmezain et al. [83] | 2022 | CapsNet + LDCRF + Post-processing | 91% | 86% | 85%
Atia et al. | 2022 | Our Method | 91% | 87% | 86%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
