Article

A Bayesian Neo-Normal Mixture Model (Nenomimo) for MRI-Based Brain Tumor Segmentation

by Anindya Apriliyanti Pravitasari 1,2, Nur Iriawan 2,*, Kartika Fithriasari 2, Santi Wulan Purnami 2, Irhamah 2 and Widiana Ferriastuti 3

1 Department of Statistics, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran, Jl. Raya Bandung-Sumedang KM. 21, Bandung 45363, Indonesia
2 Department of Statistics, Faculty of Science and Data Analytics, Institut Teknologi Sepuluh Nopember, Jl. Arif Rahman Hakim Surabaya 60111, Indonesia
3 Department of Radiology, Faculty of Medicine, Universitas Airlangga, Surabaya 60132, Indonesia
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(14), 4892; https://doi.org/10.3390/app10144892
Submission received: 9 June 2020 / Revised: 11 July 2020 / Accepted: 13 July 2020 / Published: 16 July 2020

Abstract
The detection of a brain tumor through magnetic resonance imaging (MRI) is still challenging when the image is of low quality. Image segmentation can be performed to delineate a clear brain tumor area as the region of interest. In this study, we propose an improved model-based clustering approach for MRI-based image segmentation. The main contribution is the use of adaptive neo-normal distributions in the form of a finite mixture model that can handle both symmetrical and asymmetrical patterns in an MRI image. The neo-normal mixture model (Nenomimo) also resolves the limitations of the Gaussian mixture model (GMM) and the generalized GMM (GGMM), which are constrained by the short tails of their distributions and their sensitivity to noise. Model estimation is carried out through an optimization process using the Bayesian method coupled with a Markov chain Monte Carlo (MCMC) approach, and it employs the silhouette coefficient to find the optimum number of clusters. The performance of the Nenomimo was evaluated against the GMM and the GGMM using the misclassification ratio (MCR). This study found that the Nenomimo provides better segmentation results for both simulated and real datasets, with an average MCR for MRI brain tumor image segmentation of less than 3%.

1. Introduction

We chose brain tumor detection as the topic of this study because brain tumors rank as the 15th deadliest disease in Indonesia among all types of cancer. The World Health Organization noted that during 2018, Indonesia had 5323 cases of brain and nervous system tumors, with 4229 deaths [1]. This high mortality also occurs in Surabaya, where brain tumor cases are increasing from year to year. The Dr. Soetomo General Hospital (RSUD) in Surabaya provides radiological examination services for brain tumor patients, including brain scanning with 3 Tesla and 1.5 Tesla magnetic resonance imaging (MRI) scanners. Higher Tesla power yields better-quality MRI images. However, the 1.5 Tesla service is still in high demand due to its low cost; moreover, this service is covered by the Indonesian government's social security (BPJS). A high-quality MRI image is needed to provide an accurate tumor location for medical surgery. This study employs image segmentation to enhance brain tumor images from the 1.5 Tesla MRI scanner. Segmentation separates the tumor area (the region of interest, ROI) from the healthy brain, which helps surgery by providing a clear boundary of the tumor area to be removed without damaging healthy parts of the brain.
Various image segmentation techniques have been proposed in the form of clustering algorithms. Traditional methods include hierarchical clustering and partitioning algorithms [2]. In hierarchical clustering, objects are grouped by forming a cluster tree or dendrogram, while partitioning algorithms optimize objective functions to obtain the clusters and cluster centers. These methods are mostly heuristic, so they are not based on formal models. Hierarchical clustering for image segmentation was implemented by Bruse et al. [3] and Pestunov et al. [4]. Partitioning techniques used in image segmentation include K-means [5], K-medoids [6], and fuzzy-based clustering techniques such as Fuzzy C-means (FCM) [7]. Oh and Raftery [8] stated that traditional methods are in great demand because of their simplicity. However, Greve et al. [9] evaluated the traditional methods and found that they were unable to represent real data patterns. Model-based clustering is the most appropriate tool to overcome this problem, especially when facing limited data. One such method is the Gaussian mixture model (GMM) [10].
In image segmentation, the GMM has several limitations. One of them is its use of Gaussian distributions, which are symmetric and short-tailed; this makes the GMM too inflexible for the diverse patterns that arise in image segmentation [11]. Methods developed to overcome this problem include the use of distributions other than the Gaussian, e.g., the Student's t mixture model (SMM) [12] and the Laplace mixture model (LMM) [13]. A similar idea appears in the generalized GMM (GGMM), which exploits the flexibility of the shape parameters of the Gaussian distribution [14]. However, the use of a Gaussian distribution still leaves the problem of its short tails: a long-tailed pattern must be approximated by adding more Gaussian components to the mixture, which makes the model less parsimonious. Moreover, the models mentioned above still apply symmetric distributions and set the mean as the cluster center; therefore, their sensitivity to noise remains high.
Another problem that often arises in image segmentation is the presence of different distributions within the mixture model, so a mixture of components from a single distribution family is not adequate. Motivated by previous studies by Rasmussen [10] and Deledalle et al. [14], we propose mixture models with adaptive distributions that can accommodate a variety of symmetrical and asymmetrical patterns. These distributions also address the limitations related to the short tails and the lack of robustness of the Gaussian distribution. The adaptive distributions, called nearly normal or neo-normal, have symmetrical properties like the Gaussian distribution but can also be more peaked or skewed. Distributions in the neo-normal family include the Fernandez–Steel skew normal (FSSN) [15] and the modified stable to normal from Burr (MSNBurr) [16]. This study used the FSSN and MSNBurr distributions to develop our mixture model, which we named the Nenomimo. The FSSN mixture model follows the work of Iriawan et al. [17], who also applied it to MRI image segmentation. The Bayesian method coupled with the Markov chain Monte Carlo (MCMC) approach was employed for model estimation, since analytical solutions are intractable. The proposed models were compared with the GMM and the GGMM in segmenting simulated and MRI brain tumor image sets. Cluster validation was done by calculating the silhouette coefficient, and the best model was decided by the minimum value of the misclassification ratio.
In the next section, we discuss the Nenomimo; in Section 3, we continue with the estimation of the proposed models (the FSSN and MSNBurr mixture models). Section 4 discusses the cluster validation and model comparison tools. Section 5 discusses the segmentation results of the normal models and the Nenomimo on simulated and real data sets. Finally, Section 6 presents conclusions and directions for future work.

2. Neo-Normal Mixture Model

In this section, we present the mixture model based on neo-normal distributions that we applied to segment the MRI brain tumor images. We first describe the finite mixture model and then the neo-normal family of distributions.

2.1. Finite Mixture Model

This study used MRI images as input data. MRI images are grayscale digital images in which each pixel carries a gray-level intensity describing the level of color from black to white. The lower the intensity value, the darker the gray level, and vice versa.
Suppose we have a random variable $Y = (Y_1, Y_2, \ldots, Y_N)$, where $y_i$ denotes the grayscale intensity of the $i$-th pixel observation, and assume that the $N$ pixels in an image group into $K$ clusters; then the finite mixture model is a superposition of $K$ specific distributions whose form is given by [11]:

$$f(y_i \mid \boldsymbol{\theta}, \mathbf{p}) = \sum_{j=1}^{K} p_j\, f_j(y_i \mid \theta_j), \tag{1}$$

where $f(y_i \mid \boldsymbol{\theta}, \mathbf{p})$ is the mixture density with parameters $\boldsymbol{\theta} = (\theta_1, \theta_2, \ldots, \theta_K)$ and $\mathbf{p} = (p_1, p_2, \ldots, p_K)$, and $f_j(y_i \mid \theta_j)$ is the density of a specific univariate distribution with parameter $\theta_j$. The $p_j$ are the proportions of the mixture component densities, which must satisfy the constraints:

$$\sum_{j=1}^{K} p_j = 1 \quad \text{and} \quad 0 \le p_j \le 1, \quad j = 1, 2, \ldots, K.$$

The likelihood of the $N$ data points can be written as:

$$L(\mathbf{y} \mid \boldsymbol{\theta}, \mathbf{p}) = \prod_{i=1}^{N} f(y_i \mid \boldsymbol{\theta}, \mathbf{p}), \tag{2}$$

where $Y_i$ and $Y_n$ are assumed to be conditionally independent for all $i \neq n$ and identically distributed. This formulation is the general finite $K$-component mixture model. A specific model is obtained by choosing the component density $f_j(y_i \mid \theta_j)$: if it is a normal density, the model becomes the normal (Gaussian) mixture model; likewise, if it is a neo-normal density, the model becomes the neo-normal mixture model, the Nenomimo.
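To make Equation (1) concrete, the following minimal Python sketch (an illustration, not the authors' MATLAB implementation) evaluates a finite mixture density as a weighted sum of component densities; the two-component weights and parameters in the example are hypothetical:

```python
import numpy as np

def mixture_pdf(y, weights, component_pdfs):
    """Finite mixture density of Equation (1):
    f(y | theta, p) = sum_j p_j * f_j(y | theta_j)."""
    y = np.asarray(y, dtype=float)
    total = np.zeros_like(y)
    for p_j, f_j in zip(weights, component_pdfs):
        total += p_j * f_j(y)
    return total

def gaussian_pdf(mu, sigma):
    """Component density factory; any univariate pdf can be plugged in."""
    return lambda y: np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Hypothetical two-component GMM over grayscale intensities 0-255.
intensities = np.arange(256, dtype=float)
density = mixture_pdf(intensities, weights=[0.8, 0.2],
                      component_pdfs=[gaussian_pdf(90.0, 8.0), gaussian_pdf(165.0, 7.0)])
```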

2.2. Neo-Normal Distribution

The neo-normal distribution family consists of adaptive distributions that have symmetrical properties but can also accommodate skewed (asymmetrical) patterns and can be sharper or flatter than the normal [18]. These distributions deviate slightly from the normal (Gaussian) distribution but keep their mode as the location parameter. Distributions included in the neo-normal family are the exponential power distribution and the Azzalini skew normal distribution. The exponential power distribution is similar to the Gaussian but can be more peaked and has thicker tails [19]. The Azzalini skew normal distribution can accommodate symmetric or skewed characteristics, but its mode is not stable at its location parameter [20].
Fernandez and Steel [15] overcame the weakness of Azzalini's distribution by introducing another skew normal distribution. By adding a transformation parameter, this distribution maintains the stability of the mode at its location parameter, whether the density is symmetric or skewed. This distribution is called the FSSN. Another distribution that is also stable in mode is the MSNBurr by Iriawan [16]; its advantage is that it can accommodate long-tailed data, as found in MRI image patterns. This study used the FSSN and MSNBurr distributions to build the mixture models; the cluster centers are more robust since these distributions are stable at their modes.
Supposing the random variable $Y_i$ follows the FSSN distribution, Iriawan et al. [17] give the density as:

$$f(y_i \mid \mu, \sigma, \gamma) = \frac{2}{\gamma + \frac{1}{\gamma}} \cdot \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{1}{2}\left(\frac{y_i - \mu}{\sigma}\right)^{2}\left(\frac{1}{\gamma^{2}}\, I_{[\mu,\, 255)}(y_i) + \gamma^{2}\, I_{(0,\, \mu)}(y_i)\right)\right), \tag{3}$$

where $\mu$ is a location parameter, $\sigma$ is a dispersion parameter, and $\gamma$ is a skewness parameter. This distribution transforms the symmetric normal distribution using inverse scale factors on the positive and negative axes. It loses its symmetry when $\gamma \neq 1$: it skews to the left when $\gamma < 1$ and to the right when $\gamma > 1$. By substituting Equation (3) as $f_j(y_i \mid \theta_j)$ in Equation (1), we obtain the FSSN mixture model.
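As an illustration, Equation (3) can be transcribed almost directly into Python; this sketch ignores the truncation of the support to the grayscale range (0, 255) for simplicity:

```python
import numpy as np

def fssn_pdf(y, mu, sigma, gamma):
    """Fernandez-Steel skew normal density of Equation (3):
    gamma < 1 skews left, gamma > 1 skews right, gamma = 1 is Gaussian."""
    y = np.asarray(y, dtype=float)
    # Inverse scale factor: 1/gamma^2 above the mode, gamma^2 below it.
    scale = np.where(y >= mu, 1.0 / gamma**2, gamma**2)
    norm_const = 2.0 / (gamma + 1.0 / gamma)
    kernel = np.exp(-0.5 * ((y - mu) / sigma) ** 2 * scale)
    return norm_const * kernel / (sigma * np.sqrt(2.0 * np.pi))
```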
The MSNBurr distribution was designed as a skewed distribution that keeps the mode of the Burr II distribution at its location parameter for all values of its skewness parameter, $\gamma$. Iriawan [16] defined the density function (p.d.f.) of the MSNBurr family as:

$$f(y_i \mid \mu, \sigma, \gamma, \kappa) = \frac{\kappa}{\sigma}\exp\!\left(-\kappa\!\left(\frac{y_i - \mu}{\sigma}\right)\right)\left(1 + \frac{\exp\!\left(-\kappa\!\left(\frac{y_i - \mu}{\sigma}\right)\right)}{\gamma}\right)^{-(\gamma + 1)}, \tag{4}$$

where $\mu$ and $\sigma$ are location and dispersion parameters, respectively, and $\kappa$ is a scale factor. The distribution in Equation (4) is called the MSNBurr($\gamma, \mu, \sigma$), with the value of $\kappa$ given by Equation (5):

$$\kappa = \frac{1}{\sqrt{2\pi}}\left(1 + \frac{1}{\gamma}\right)^{\gamma + 1}, \quad \kappa > 0. \tag{5}$$

The MSNBurr mixture model is built by substituting Equation (4) as $f_j(y_i \mid \theta_j)$ in Equations (1) and (2) with parameter $\theta_j = (\gamma_j, \mu_j, \sigma_j, \kappa_j)$.
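A corresponding sketch of Equations (4) and (5), evaluated in log space to avoid overflow far below the mode, might look as follows (again an illustration, not the authors' code):

```python
import numpy as np

def msnburr_pdf(y, mu, sigma, gamma):
    """MSNBurr density of Equations (4)-(5); the mode stays at mu for
    every value of the skewness parameter gamma."""
    y = np.asarray(y, dtype=float)
    # Scale factor kappa of Equation (5).
    kappa = (1.0 / np.sqrt(2.0 * np.pi)) * (1.0 + 1.0 / gamma) ** (gamma + 1.0)
    z = -kappa * (y - mu) / sigma
    # log(1 + exp(z)/gamma) computed stably via logaddexp.
    log_pdf = np.log(kappa / sigma) + z - (gamma + 1.0) * np.logaddexp(0.0, z - np.log(gamma))
    return np.exp(log_pdf)
```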

3. Bayesian Coupled with Markov Chain Monte Carlo Approach

The fully Bayesian method coupled with the MCMC approach was employed to estimate the model parameters and conduct inference. Bayesian MCMC estimates parameters from the full conditional posterior distributions through a stochastically optimized sample-average approach [21]. In this section, we present Bayesian inference for the FSSN and MSNBurr mixture models and construct the corresponding algorithms.
Suppose $N$ denotes the number of pixels in the MRI image and $K$ denotes the number of mixture components; we assume that each pixel $i$ belongs to a single cluster indexed by the label $z_i$. For each observation, the label $z_i$ is assumed independent, and its value is given by:

$$z_i = (z_{i1}, z_{i2}, \ldots, z_{iK}), \tag{6}$$

where:

$$z_{ij} = \begin{cases} 1, & \text{if } y_i \text{ belongs to the } j\text{-th mixture component}, \\ 0, & \text{otherwise}. \end{cases}$$

For each $z_i$, exactly one $z_{ij}$ equals 1 and the rest are 0; therefore, $\sum_{j=1}^{K} z_{ij} = 1$ for every $i$. A single trial assigns each $z_i$ to exactly one of the $K$ possible mixture components with probabilities $p_1, p_2, \ldots, p_K$, $p_j \in (0, 1)$, $j = 1, 2, \ldots, K$. Since the $z_i$ are independent, their joint density is:

$$f(\mathbf{z} \mid \mathbf{p}) = \prod_{i=1}^{N} f(z_i \mid \mathbf{p}) = \prod_{i=1}^{N}\prod_{j=1}^{K} p_j^{z_{ij}}. \tag{7}$$

Let $Y = (Y_1, Y_2, \ldots, Y_N)$ be an i.i.d. random sample drawn from the neo-normal mixture density, whose likelihood is given by Equation (2). Augmenting the likelihood with the labels $z_i$, we can rewrite it as:

$$L(\mathbf{y}, \mathbf{z} \mid \boldsymbol{\theta}, \mathbf{p}) = f(\mathbf{y} \mid \mathbf{z}, \boldsymbol{\theta}, \mathbf{p})\, f(\mathbf{z} \mid \mathbf{p}) = \prod_{i=1}^{N}\prod_{j=1}^{K}\left(p_j f_j(y_i \mid \theta_j)\right)^{z_{ij}}, \tag{8}$$

where $\theta_j$ is the set of neo-normal parameters.

3.1. Bayesian Approach for FSSN Mixture Model

The Bayesian approach was employed to perform the estimation, owing to the non-closed form of the posterior distribution, which is proportional to the product of the likelihood in Equation (8) and the prior distributions. We designed the priors for the FSSN mixture model (FSSN-MM) following Iriawan et al. [17]; prior distributions for mixture parameters are discussed by Gelman et al. [22]:

$$\mu_j \sim \text{Gaussian}(\eta_j, \varphi_j^2), \quad \sigma_j \sim \text{Inverse Gamma}(\alpha_j, \beta_j), \quad \gamma_j \sim \text{Gamma}(a_j, b_j), \quad \mathbf{p} \sim \text{Dirichlet}(\delta_1, \delta_2, \ldots, \delta_K).$$

The hyperparameter values were calculated from the data: $\eta_j = \text{mean}(y_{ij})$ and $\varphi_j = \text{var}(y_{ij})$, where $y_{ij}$ denotes the data belonging to the $j$-th cluster. We also set $\alpha_j = \varphi_j$, $\beta_j = \eta_j/(\alpha_j - 1)$, $a_j = \eta_j^2/\varphi_j$, and $b_j = \eta_j/\varphi_j$. Treating these priors as independent and combining them with the likelihood, the joint posterior is constructed by multiplying them together. This derivation and the Gibbs sampling algorithm for the FSSN mixture model were adopted from Iriawan et al. [17].
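A small sketch of these data-driven hyperparameter settings is given below; it assumes an initial cluster assignment (e.g., from K-means), which is an assumption of this sketch rather than a step specified in the paper:

```python
import numpy as np

def empirical_hyperparameters(y, labels, K):
    """Hyperparameters computed from the data as described in the text:
    eta_j = mean and phi_j = variance of cluster j, alpha_j = phi_j,
    beta_j = eta_j / (alpha_j - 1), a_j = eta_j^2 / phi_j, b_j = eta_j / phi_j."""
    hyper = []
    for j in range(K):
        y_j = y[labels == j]
        eta, phi = y_j.mean(), y_j.var()
        alpha = phi
        beta = eta / (alpha - 1.0)
        a, b = eta**2 / phi, eta / phi
        hyper.append(dict(eta=eta, phi=phi, alpha=alpha, beta=beta, a=a, b=b))
    return hyper
```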
The full conditional posteriors used to generate the parameter values were derived by multiplying the likelihood by the prior of the parameter being estimated [23]. The full conditional posterior of each parameter is given below (for simplicity, all equations are written in logarithmic form, and all parts of an equation that do not contain the estimated parameter are collected into a constant).
(a) The full conditional posterior for the location parameter $\mu_j$, $j = 1, 2, \ldots, K$, is given by:

$$\ln f(\mu_j \mid \sigma_j, \gamma_j, p_j, \mathbf{z}, \mathbf{y}) = \text{constant} - \frac{1}{2}\left(\frac{\mu_j - \eta_j}{\varphi_j}\right)^{2} - \frac{1}{2}\sum_{i=1}^{N} z_{ij}\left(\frac{y_{ij} - \mu_j}{\sigma_j}\right)^{2}. \tag{9}$$

(b) The full conditional posterior for the parameter $\sigma_j$, $j = 1, 2, \ldots, K$, is given by Equation (10):

$$\ln f(\sigma_j \mid \mu_j, \gamma_j, p_j, \mathbf{z}, \mathbf{y}) = \text{constant} - (\alpha_j + 1)\ln(\sigma_j) - \frac{\beta_j}{\sigma_j} - n\ln\!\left(\sigma_j\gamma_j\sqrt{2\pi} + \frac{\sigma_j\sqrt{2\pi}}{\gamma_j}\right) - \frac{1}{2}\sum_{i=1}^{N} z_{ij}\left(\frac{y_{ij} - \mu_j}{\sigma_j}\right)^{2}. \tag{10}$$

(c) Equation (11) shows the full conditional posterior for the parameter $\gamma_j$, $j = 1, 2, \ldots, K$:

$$\ln f(\gamma_j \mid \mu_j, \sigma_j, p_j, \mathbf{z}, \mathbf{y}) = \text{constant} + (a_j - 1)\ln(\gamma_j) - b_j\gamma_j + \sum_{i=1}^{N} z_{ij}\ln\!\left(\frac{1}{\gamma_j^{2}}\, I_{[\mu_j,\, 255)}(y_{ij}) + \gamma_j^{2}\, I_{(0,\, \mu_j)}(y_{ij})\right) - n\ln\!\left(\sigma_j\gamma_j\sqrt{2\pi} + \frac{\sigma_j\sqrt{2\pi}}{\gamma_j}\right). \tag{11}$$

(d) Equation (12) shows the full conditional posterior for the parameter $p_j$, $j = 1, 2, \ldots, K$:

$$f(p_j \mid \sigma_j, \gamma_j, \mu_j, \mathbf{z}, \mathbf{y}) \propto \prod_{j=1}^{K} p_j^{\sum_{i=1}^{N} z_{ij}} = \prod_{j=1}^{K} p_j^{\left(\sum_{i=1}^{N} z_{ij} + 1\right) - 1}, \tag{12}$$

which is a $\text{Dirichlet}\!\left(1 + \sum_{i=1}^{N} z_{i1}, \ldots, 1 + \sum_{i=1}^{N} z_{iK}\right)$ distribution.

(e) The label $z_{ij}$, $i = 1, 2, \ldots, N$, $j = 1, 2, \ldots, K$, takes only the values 0 or 1. For each $z_i$ there are $K$ possible mixture components, so $z_i = (z_{i1}, z_{i2}, \ldots, z_{iK})$ follows the $\text{multinomial}(1, \omega_{i1}, \ldots, \omega_{iK})$ distribution, where:

$$\omega_{ij} = \frac{p_j\, f(y_{ij} \mid \mu_j, \sigma_j, \gamma_j)}{f(y_{ij})}. \tag{13}$$
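The label update in (e) can be sketched as follows: compute the responsibilities of Equation (13) and draw each label from the resulting multinomial distribution. The component density is passed in as a function, e.g., the fssn_pdf sketch above:

```python
import numpy as np

rng = np.random.default_rng(0)

def update_labels(y, p, mu, sigma, gamma, pdf):
    """Draw each z_i from multinomial(1, omega_i1, ..., omega_iK)."""
    K = len(p)
    dens = np.stack([p[j] * pdf(y, mu[j], sigma[j], gamma[j])
                     for j in range(K)], axis=1)       # N x K: p_j * f_j(y_i)
    omega = dens / dens.sum(axis=1, keepdims=True)      # Equation (13)
    return np.array([rng.choice(K, p=w) for w in omega])
```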
To estimate the parameters of the FSSN-MM, the Gibbs sampling framework was applied as the Bayesian computational procedure. Algorithm 1 states the steps of the Gibbs sampling algorithm.
Algorithm 1: Gibbs sampling for the Fernandez–Steel skew normal mixture model (FSSN-MM)
1: Initialize parameters $\mu_1^{(0)}, \ldots, \mu_K^{(0)}$, $\sigma_1^{(0)}, \ldots, \sigma_K^{(0)}$, $\gamma_1^{(0)}, \ldots, \gamma_K^{(0)}$, $p_1^{(0)}, \ldots, p_K^{(0)}$, $z_1^{(0)}, \ldots, z_N^{(0)}$; set $t = 1$ and determine $T$.
2: while $t \le T$ do
3:   Update $\mu_j$, $j = 1, 2, \ldots, K$, by generating $\mu_j^{(t)}$ according to Equation (9).
4:   Update $\sigma_j$, $j = 1, 2, \ldots, K$, by generating $\sigma_j^{(t)}$ according to Equation (10).
5:   Update $\gamma_j$, $j = 1, 2, \ldots, K$, by generating $\gamma_j^{(t)}$ according to Equation (11).
6:   Update $p_j$, $j = 1, 2, \ldots, K$, by generating $p_j^{(t)}$ from the Dirichlet distribution with parameters $\left(1 + \sum_{i=1}^{N} z_{i1}, \ldots, 1 + \sum_{i=1}^{N} z_{iK}\right)$.
7:   Update $z_i$ by generating $z_i^{(t)}$ from the $\text{multinomial}(1, \omega_{i1}, \ldots, \omega_{iK})$ distribution, with $\omega_{ij}$, $i = 1, 2, \ldots, N$, $j = 1, 2, \ldots, K$, as in Equation (13).
8:   Set $t = t + 1$.
9: Return $\mu_1^{(T)}, \ldots, \mu_K^{(T)}$, $\sigma_1^{(T)}, \ldots, \sigma_K^{(T)}$, $\gamma_1^{(T)}, \ldots, \gamma_K^{(T)}$, $p_1^{(T)}, \ldots, p_K^{(T)}$, $z_1^{(T)}, \ldots, z_N^{(T)}$.
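A minimal Python skeleton of Algorithm 1 is sketched below (the published implementation is in MATLAB). The draws from the non-standard conditionals in Equations (9)-(11) are left as user-supplied samplers, empty-cluster safeguards are omitted, and fssn_pdf refers to the sketch in Section 2.2:

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_fssn_mm(y, K, T, sample_mu, sample_sigma, sample_gamma):
    """Skeleton of Algorithm 1 for the FSSN-MM."""
    N = len(y)
    z = rng.integers(0, K, size=N)                      # step 1: initialize labels
    mu = np.array([y[z == j].mean() for j in range(K)])
    sigma = np.array([y[z == j].std() + 1e-6 for j in range(K)])
    gamma = np.ones(K)
    for t in range(T):                                  # step 2: while t <= T
        for j in range(K):
            mu[j] = sample_mu(j, y, z, mu, sigma, gamma)        # step 3, Eq. (9)
            sigma[j] = sample_sigma(j, y, z, mu, sigma, gamma)  # step 4, Eq. (10)
            gamma[j] = sample_gamma(j, y, z, mu, sigma, gamma)  # step 5, Eq. (11)
        p = rng.dirichlet(1 + np.bincount(z, minlength=K))      # step 6, Eq. (12)
        dens = np.stack([p[j] * fssn_pdf(y, mu[j], sigma[j], gamma[j])
                         for j in range(K)], axis=1)
        omega = dens / dens.sum(axis=1, keepdims=True)          # Eq. (13)
        z = np.array([rng.choice(K, p=w) for w in omega])       # step 7
    return mu, sigma, gamma, p, z                               # step 9
```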

3.2. Bayesian Approach for MSNBurr Mixture Model

The prior distributions for the MSNBurr were discussed by Iriawan [16]. The proportions of the mixture components follow the Dirichlet distribution, as in the FSSN-MM. The prior distributions for each parameter of the MSNBurr mixture model (MSNBurr-MM) are:

$$\mu_j \sim \text{Gaussian}(\eta_j, \varphi_j^2), \quad \sigma_j \sim \text{Inverse Gamma}(\alpha_j, \beta_j), \quad \gamma_j \sim \text{Generalized Symmetrical Beta}(\tau_j, \tau_j, l_j, u_j), \quad \mathbf{p} \sim \text{Dirichlet}(\delta_1, \delta_2, \ldots, \delta_K).$$

The hyperparameter values are set as in the FSSN-MM, except that for the prior on $\gamma_j$ we set $\tau_j = \varphi_j/2$, $l_j = 0$, and $u_j = 2$. Treating these priors as independent and multiplying them by the likelihood, the joint posterior follows easily (see Appendix A). The full conditional posterior distributions of each parameter are discussed below. For simplicity, the formulas are shown in logarithmic form (some derivations are given in Appendix B), and all parts of an equation that do not contain the estimated parameter are collected into a constant.
(a) The full conditional posterior for $\mu_j$, $j = 1, 2, \ldots, K$, is shown in Equation (14):

$$\ln f(\mu_j \mid \sigma_j, \gamma_j, p_j, \kappa_j, \mathbf{z}, \mathbf{y}) = \text{constant} - \sum_{i=1}^{N} z_{ij}\,\kappa_j\!\left(\frac{y_{ij} - \mu_j}{\sigma_j}\right) - \frac{1}{2}\left(\frac{\mu_j - \eta_j}{\varphi_j}\right)^{2} - \sum_{i=1}^{N} z_{ij}\,(\gamma_j + 1)\ln\!\left(1 + \frac{\exp\!\left(-\kappa_j\!\left(\frac{y_{ij} - \mu_j}{\sigma_j}\right)\right)}{\gamma_j}\right). \tag{14}$$

(b) Equation (15) shows the full conditional posterior for $\sigma_j$, $j = 1, 2, \ldots, K$:

$$\ln f(\sigma_j \mid \mu_j, \gamma_j, p_j, \kappa_j, \mathbf{z}, \mathbf{y}) = \text{constant} - (\alpha_j + 1)\ln\sigma_j - \frac{\beta_j}{\sigma_j} + \sum_{i=1}^{N} z_{ij}\ln\!\left(\frac{\kappa_j}{\sigma_j}\right) - \sum_{i=1}^{N} z_{ij}\,(\gamma_j + 1)\ln\!\left(1 + \frac{\exp\!\left(-\kappa_j\!\left(\frac{y_{ij} - \mu_j}{\sigma_j}\right)\right)}{\gamma_j}\right) - \sum_{i=1}^{N} z_{ij}\,\kappa_j\!\left(\frac{y_{ij} - \mu_j}{\sigma_j}\right). \tag{15}$$

(c) The full conditional posterior of $\gamma_j$, $j = 1, 2, \ldots, K$, is given below:

$$\ln f(\gamma_j \mid \mu_j, \sigma_j, p_j, \kappa_j, \mathbf{z}, \mathbf{y}) = \text{constant} + (\tau_j - 1)\ln\!\left((\gamma_j - l_j)(u_j - \gamma_j)\right) - \sum_{i=1}^{N} z_{ij}\,(\gamma_j + 1)\ln\!\left(1 + \frac{\exp\!\left(-\kappa_j\!\left(\frac{y_{ij} - \mu_j}{\sigma_j}\right)\right)}{\gamma_j}\right) - \sum_{i=1}^{N} z_{ij}\,\kappa_j\!\left(\frac{y_{ij} - \mu_j}{\sigma_j}\right). \tag{16}$$

(d) The full conditional posterior for $p_j$, $j = 1, 2, \ldots, K$, is as shown in Equation (12).

(e) The full conditional posterior of the label $z_{ij}$ follows the $\text{multinomial}(1, \omega_{i1}, \ldots, \omega_{iK})$ distribution, with $\omega_{ij}$, $i = 1, 2, \ldots, N$, $j = 1, 2, \ldots, K$, as in Equation (13).
The Gibbs sampling algorithm was constructed from the full conditional posterior of each parameter. The steps of Gibbs sampling for the MSNBurr-MM are presented in Algorithm 2.
Algorithm 2: Gibbs sampling for the modified stable to normal from Burr mixture model (MSNBurr-MM)
1: Initialize parameters $\mu_1^{(0)}, \ldots, \mu_K^{(0)}$, $\sigma_1^{(0)}, \ldots, \sigma_K^{(0)}$, $\gamma_1^{(0)}, \ldots, \gamma_K^{(0)}$, $p_1^{(0)}, \ldots, p_K^{(0)}$, $z_1^{(0)}, \ldots, z_N^{(0)}$; set $t = 1$ and determine $T$.
2: while $t \le T$ do
3:   Update $\mu_j$, $j = 1, 2, \ldots, K$, by generating $\mu_j^{(t)}$ according to Equation (14).
4:   Update $\sigma_j$, $j = 1, 2, \ldots, K$, by generating $\sigma_j^{(t)}$ according to Equation (15).
5:   Update $\gamma_j$, $j = 1, 2, \ldots, K$, by generating $\gamma_j^{(t)}$ according to Equation (16).
6:   Update $p_j$, $j = 1, 2, \ldots, K$, by generating $p_j^{(t)}$ as in Equation (12).
7:   Update $\kappa_j$, $j = 1, 2, \ldots, K$, as in Equation (5).
8:   Update $z_i$ by generating $z_i^{(t)}$ from the $\text{multinomial}(1, \omega_{i1}, \ldots, \omega_{iK})$ distribution, with $\omega_{ij}$, $i = 1, 2, \ldots, N$, $j = 1, 2, \ldots, K$, as in Equation (13).
9:   Set $t = t + 1$.
10: Return $\mu_1^{(T)}, \ldots, \mu_K^{(T)}$, $\sigma_1^{(T)}, \ldots, \sigma_K^{(T)}$, $\gamma_1^{(T)}, \ldots, \gamma_K^{(T)}$, $p_1^{(T)}, \ldots, p_K^{(T)}$, $z_1^{(T)}, \ldots, z_N^{(T)}$.
Both Algorithm 1 and Algorithm 2 were implemented in MATLAB. To generate samples within both algorithms, we used the acceptance-rejection sampling method. All programs were run on a computer with an Intel Core i7 processor, 8 GB RAM, and a 256 GB SSD, without GPU or VRAM. For the sake of reproducibility of our results, we have made our MATLAB code for the MSNBurr-MM available online at https://github.com/anindya364/nenomimo.
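A generic acceptance-rejection sampler follows the pattern sketched below; the envelope constant M and the proposal density are problem-specific and are passed in as arguments here (this is a textbook sketch, not the authors' exact sampler):

```python
import numpy as np

rng = np.random.default_rng(2)

def accept_reject(log_target, propose, log_proposal, log_M, size=100):
    """Draw x from the proposal g and accept it with probability
    f(x) / (M g(x)); requires log f(x) <= log_M + log g(x) for all x."""
    samples = []
    while len(samples) < size:
        x = propose()
        if np.log(rng.uniform()) < log_target(x) - log_M - log_proposal(x):
            samples.append(x)
    return np.array(samples)
```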

4. Cluster Validation and Comparison Tools

Cluster validation was done by employing the silhouette coefficient ($SC$), whose value determines the optimum number of clusters. $SC_{ij}$ is formulated in Equation (17) [24]:

$$SC_{ij} = \frac{\xi_{ij} - \zeta_{ij}}{\max(\zeta_{ij}, \xi_{ij})}, \tag{17}$$

where $SC_{ij}$ is the $SC$ value of the $i$-th datum in the $j$-th cluster, $\zeta_{ij}$ is the average squared Euclidean distance denoting how similar the $i$-th datum is to its own cluster, and $\xi_{ij}$ indicates how similar the $i$-th datum is to the other clusters. The $SC_j$ of a cluster, Equation (18), is the average of $SC_{ij}$ over all data in the $j$-th cluster; the overall $SC$ ($OSC$) of the image, Equation (19), is the average of $SC_j$ over all clusters:

$$SC_j = \frac{1}{n_j}\sum_{i=1}^{n_j} SC_{ij}, \tag{18}$$

$$OSC = \frac{1}{K}\sum_{j=1}^{K} SC_j. \tag{19}$$

The $OSC$ ranges from -1 to 1; the closer the $OSC$ is to 1, the more likely the data are in the right cluster.
The comparison measure for the segmentation results was the misclassification ratio ($MCR$), which counts the disagreements between the segmentation result and the ground-truth (GT) image; the smaller the $MCR$, the better the segmentation [17]. The $MCR$ is given by Equation (20):

$$MCR = \frac{\sum_{i=1}^{N}\left|\text{pixel cluster output}_i - \text{pixel ground truth}_i\right|}{N}. \tag{20}$$
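The two measures can be sketched in Python as follows; the silhouette here uses squared distances to the cluster centers as a simple stand-in for the average pairwise distances, which is an assumption of this sketch:

```python
import numpy as np

def mcr(labels, ground_truth):
    """Fraction of pixels whose label disagrees with the ground truth;
    for binary (ROI vs. non-ROI) labels this equals Equation (20)."""
    return np.mean(np.asarray(labels) != np.asarray(ground_truth))

def overall_silhouette(y, labels, K):
    """OSC of Equations (17)-(19) for 1-D grayscale intensities."""
    centers = np.array([y[labels == j].mean() for j in range(K)])
    sc_per_cluster = []
    for j in range(K):
        y_j = y[labels == j]
        zeta = (y_j - centers[j]) ** 2                           # own cluster
        other = np.array([k for k in range(K) if k != j])
        xi = ((y_j[:, None] - centers[other]) ** 2).min(axis=1)  # nearest other
        sc = (xi - zeta) / np.maximum(np.maximum(zeta, xi), 1e-12)
        sc_per_cluster.append(sc.mean())                         # SC_j, Eq. (18)
    return float(np.mean(sc_per_cluster))                        # OSC, Eq. (19)
```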

5. Results and Discussion

In this section, we compare the proposed models with the GMM of Rasmussen [10] and the GGMM of Deledalle et al. [14]. The GMM and the GGMM represent the normal-distribution-based models, while the FSSN-MM and the MSNBurr-MM represent the Nenomimo.

5.1. Application for Data Simulation

The simulation data used in this study were synthetic and natural images. For the synthetic images, the original images also served as the GT. To make the segmentation more challenging, we added noise (Gaussian, salt-and-pepper, and speckle noise) to each image. The synthetic images (SI01 and SI02) and natural images (NI01 and NI02), together with their GT, are visualized in Figure 1.
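The three noise types can be injected with a sketch like the one below, for a grayscale image scaled to [0, 1]; the noise levels are hypothetical, since the paper does not state them:

```python
import numpy as np

rng = np.random.default_rng(3)

def add_noise(img, kind="gaussian", amount=0.05):
    """Add Gaussian, salt-and-pepper, or speckle noise to an image in [0, 1]."""
    img = np.asarray(img, dtype=float)
    if kind == "gaussian":
        noisy = img + rng.normal(0.0, amount, img.shape)
    elif kind == "salt_pepper":
        noisy = img.copy()
        mask = rng.uniform(size=img.shape)
        noisy[mask < amount / 2] = 0.0          # pepper
        noisy[mask > 1 - amount / 2] = 1.0      # salt
    elif kind == "speckle":
        noisy = img * (1.0 + rng.normal(0.0, amount, img.shape))
    else:
        raise ValueError(f"unknown noise type: {kind}")
    return np.clip(noisy, 0.0, 1.0)
```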
The segmentation results of each method for SI01, SI02, NI01, and NI02 are shown in Figure 2, Figure 3, Figure 4 and Figure 5. The methods were run with two clusters for SI01 and NI01, and with three clusters for SI02 and NI02.
The accuracy of the segmentation results is measured by the MCR. As visualized in Figure 6, the minimum MCR was achieved by the Nenomimo (FSSN-MM and MSNBurr-MM). Across the different noise types, the Nenomimo performed better at segmenting both the synthetic and natural images, indicating that it is more robust to noise than the GMM and the GGMM. Overall, for the simulated dataset, the Nenomimo segmented the images more precisely than the normal-distribution-based models.
The next application was for a real dataset. The MRI-based brain tumor segmentation was also done under the Nenomimo, the GMM, and the GGMM.

5.2. Application for MRI-Based Brain Tumor

The real datasets were from the Dr. Soetomo General Hospital (RSUD), Surabaya. The MRI sequences in this study were T1 MEMP + C and T2 FLAIR, both from the axial point of view. T1 MEMP + C is the MRI sequence in which the tumor area is enhanced by contrast media, a material given to patients either orally or by injection before MRI scanning. T2 FLAIR is the MRI sequence without contrast media; it visualizes the swelling or edema of the brain tumor more clearly, so the ROI of T2 FLAIR is usually larger than that of T1 MEMP + C.
We segmented 75 images from the T1 MEMP + C and T2 FLAIR sequences. The tumor area was set as the ROI, while the remaining area was set as non-ROI. All sequences used in this study passed medical approval, and the GT was also constructed under medical judgment. All MRI images were analyzed under the normal and neo-normal models. This paper displays only four sample images. The sample datasets are visualized in Figure 7, while the histograms are shown in Figure 8.
Figure 7a,c shows samples from the T1 MEMP + C sequence, while Figure 7b,d shows samples from the T2 FLAIR sequence. The sample images are named IM01, IM02, IM03, and IM04. The first procedure applied was preprocessing, which eliminates the fat around the skull that could interfere with the segmentation process. Figure 8 shows the image histograms after preprocessing. Unlike Figure 8b–d, the multimodality in Figure 8a is not clearly visible since the frequencies of the other modes are very small. Given the multimodality of the MRI data pattern, the mixture model framework is appropriate for this case.
The 75 datasets were segmented under the GMM, the GGMM, the FSSN-MM, and the MSNBurr-MM. The OSC was calculated to determine the optimum number of clusters. Table 1 shows the OSC for samples IM01–IM04; the maximum OSC indicates the optimum number of clusters. The optimum number of clusters for the Nenomimo was K = 3 for IM01 and IM03, while IM02 and IM04 reached the optimum at K = 4. The GMM gave varied results: for IM01 the optimum number of clusters was K = 7, IM02 and IM04 reached the optimum at K = 4, and IM03 at K = 5. The GGMM had its optimum at K = 4 for IM01 and IM04, K = 5 for IM02, and K = 3 for IM03. The segmentation results for the optimum clusters are visualized in Figure 9.
Based on the segmentation results in Figure 9, we can see that the ROI from the GMM still contained more noise than those from the GGMM, the FSSN-MM, and the MSNBurr-MM. For an empirical comparison, the MCR was computed for all 75 datasets. Figure 10 compares the MCR for all datasets and methods. The MCRs in Figure 10 show that the minimum value was reached by the MSNBurr-MM, with the FSSN-MM giving similar results. This indicates that the Nenomimo performs better than the normal-distribution-based models. The average MCR for the FSSN-MM was about 0.02761, while the MSNBurr-MM had an average MCR of about 0.02644; that is, the Nenomimo segmented the real dataset with a misclassification ratio of less than 3%.
The Nenomimo parameters for the optimum clusters are shown in Table 2, where the ROI of each dataset is the last cluster. For example, in IM01 under the FSSN-MM, the ROI had a cluster center at a grayscale intensity of 165 with an intensity dispersion of about 6.719, and the ROI covered about 5.4% of the MRI area. The MSNBurr-MM gave slightly different results: a cluster center at a grayscale intensity of about 163, an intensity dispersion of 7.138, and an ROI covering just 6.3% of the MRI area. Both the FSSN-MM and the MSNBurr-MM had skewness parameters γ > 1 in the first and third clusters of IM01, indicating that the distributions of the grayscale intensities are right-skewed; the skewness parameter γ < 1 in the second cluster of IM01 indicates a left-skewed distribution.
Figure 11 shows the trace plots of the generated Nenomimo parameters over the iterations. On a computer with an Intel Core i7 processor, 8 GB RAM, and a 256 GB SSD, without GPU or VRAM, clustering the MRI images took on average about six minutes with the FSSN-MM and about four minutes with the MSNBurr-MM. From Figure 11 we can see that the parameters of both the FSSN-MM and the MSNBurr-MM converged quickly; the historical means plotted in Figure 11 reached their steady state before 100 iterations.

6. Conclusions

In this paper, we presented an improved method for image segmentation, namely the neo-normal mixture model (Nenomimo). Parameter estimation for the Nenomimo employs the Bayesian method coupled with an MCMC framework via Gibbs sampling. The algorithm ran well on a computer with an Intel Core i7 processor, 8 GB RAM, and a 256 GB SSD, without GPU or VRAM, and it reached convergence quickly. The proposed method successfully overcame the limitations of mixture models based on the normal distribution: the Nenomimo performed better than the GMM and the GGMM on both simulated and real datasets. For segmenting MRI brain tumor images, the Nenomimo showed powerful results, with a misclassification ratio of less than 3%.
In the present work, the segmentation results for MRI brain tumors using the Nenomimo still contained a little noise that was identified as ROI. Therefore, future research should develop the Nenomimo to account for pixel spatial dependencies in image segmentation.

Author Contributions

A.A.P., N.I., K.F., S.W.P., I., and W.F. conceived and designed the research; A.A.P. collected the data, analyzed the data, and drafted the paper. All authors critically read and revised the draft and approved the final paper. All authors have read and agreed to the published version of the manuscript.

Funding

The authors are grateful to the Directorate for Research and Community Service (DRPM), Ministry of Research, Technology, and Higher Education, Indonesia, which supported this research under PT research grant no. 1311/PKS/ITS/2020. This research was also partially funded by the Indonesian Ministry of Research, Technology, and Higher Education under the World Class University (WCU) Program managed by Institut Teknologi Bandung.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Joint Posterior Density for Nenomimo

Assuming that each parameter of the Nenomimo, $\mu$, $\sigma$, $\gamma$, and $\mathbf{p}$, is independent, where $\mu = (\mu_1, \ldots, \mu_K)$, $\sigma = (\sigma_1, \ldots, \sigma_K)$, $\gamma = (\gamma_1, \ldots, \gamma_K)$, and $\mathbf{y} = (y_1, \ldots, y_N)$ denotes the pixels of the MRI image, the joint prior density for each of $\mu$, $\sigma$, and $\gamma$ is given by:

$$g(\mu) = \prod_{j=1}^{K} g(\mu_j), \quad g(\sigma) = \prod_{j=1}^{K} g(\sigma_j), \quad g(\gamma) = \prod_{j=1}^{K} g(\gamma_j).$$

Via Bayes' theorem, the joint posterior density of the Nenomimo is:

$$f(\mu, \sigma, \gamma, \mathbf{p}, \mathbf{z} \mid \mathbf{y}) = \frac{f(\mathbf{y}, \mu, \sigma, \gamma, \mathbf{p}, \mathbf{z})}{f(\mathbf{y})} \propto f(\mathbf{y}, \mu, \sigma, \gamma, \mathbf{p}, \mathbf{z}). \tag{A1}$$

Substituting the joint density of each prior for the MSNBurr-MM, we then obtain:

$$f(\mathbf{y}, \mu, \sigma, \gamma, \mathbf{p}, \mathbf{z}) = f(\mathbf{y}, \mathbf{z} \mid \mu, \sigma, \gamma, \mathbf{p})\, g(\mu)\, g(\sigma)\, g(\gamma)\, g(\mathbf{p}) = \prod_{i=1}^{N}\prod_{j=1}^{K}\left(p_j\,\frac{\kappa_j}{\sigma_j}\exp\!\left(-\kappa_j\!\left(\frac{y_i - \mu_j}{\sigma_j}\right)\right)\left(1 + \frac{\exp\!\left(-\kappa_j\!\left(\frac{y_i - \mu_j}{\sigma_j}\right)\right)}{\gamma_j}\right)^{-(\gamma_j + 1)}\right)^{z_{ij}} \times \prod_{j=1}^{K} g(\mu_j)\, g(\sigma_j)\, g(\gamma_j) \times (K - 1)!.$$

Since

$$\mu_j \sim \text{Gaussian}(\eta_j, \varphi_j^2), \quad \sigma_j \sim \text{Inverse Gamma}(\alpha_j, \beta_j), \quad \gamma_j \sim \text{Generalized Symmetrical Beta}(\tau_j, \tau_j, l_j, u_j), \quad \mathbf{p} \sim \text{Dirichlet}(\delta_1, \delta_2, \ldots, \delta_K),$$

we obtain the following in detail.
(a) The prior density for $\mu_j$, $j = 1, 2, \ldots, K$, is:

$$g(\mu_j \mid \eta_j, \varphi_j^2) = \frac{1}{\sqrt{2\pi}\,\varphi_j}\exp\!\left(-\frac{1}{2}\left(\frac{\mu_j - \eta_j}{\varphi_j}\right)^{2}\right),$$

where $\eta_j$ and $\varphi_j^2$ are the mean and variance from the data, respectively; $0 \le \eta_j \le 255$ and $\varphi_j > 0$.
(b) The prior density for $\sigma_j$, $j = 1, 2, \ldots, K$, is given by:

$$g(\sigma_j \mid \alpha_j, \beta_j) = \frac{\beta_j^{\alpha_j}}{\Gamma(\alpha_j)}\,\sigma_j^{-(\alpha_j + 1)}\exp\!\left(-\frac{\beta_j}{\sigma_j}\right).$$

(c) The prior density for $\gamma_j$, $j = 1, 2, \ldots, K$, is:

$$g(\gamma_j \mid \tau_j, \tau_j, l_j, u_j) = \frac{\Gamma(2\tau_j)}{\Gamma(\tau_j)^{2}}\,\frac{\left((\gamma_j - l_j)(u_j - \gamma_j)\right)^{\tau_j - 1}}{R^{2\tau_j - 1}},$$

where $R = u_j - l_j$, $u_j > l_j \ge 0$, $\tau_j \ge 1$, and $\gamma_j > 0$.
(d) The prior density for $p_j$, $j = 1, 2, \ldots, K$: assume that $\mathbf{p} = (p_1, p_2, \ldots, p_K) \sim \text{Dirichlet}(\delta_1, \delta_2, \ldots, \delta_K)$ with $\delta_j = 1$, $j = 1, 2, \ldots, K$; then the prior density is:

$$g(p_1, \ldots, p_K \mid \delta_1, \ldots, \delta_K) = \frac{1}{B(\delta)}\prod_{j=1}^{K} p_j^{\delta_j - 1} = \frac{\Gamma\!\left(\sum_{j=1}^{K} 1\right)}{\prod_{j=1}^{K}\Gamma(1)}\prod_{j=1}^{K} p_j^{0} = \frac{\Gamma(K)}{\prod_{j=1}^{K}\Gamma(1)} = (K - 1)!,$$

where the normalizing constant $B(\delta)$ is the multinomial beta function, expressible in terms of the gamma function as:

$$B(\delta) = \frac{\prod_{j=1}^{K}\Gamma(\delta_j)}{\Gamma\!\left(\sum_{j=1}^{K}\delta_j\right)}, \quad \delta = (\delta_1, \delta_2, \ldots, \delta_K).$$

Appendix B. Some Direction for the Derivation of Full Conditional Posterior

Appendix B gives an example of the derivation of the full conditional posterior for one of the MSNBurr-MM parameters, namely $\mu_j$, $j = 1, 2, \ldots, K$. The full conditional posterior of $\mu_j$ for a certain $j$-th cluster is given by:

$$f(\mu_j \mid \sigma_j, \gamma_j, p_j, \mathbf{z}, \mathbf{y}) \propto f(\mathbf{y} \mid \mu, \sigma, \gamma, \mathbf{z})\, g(\mu_j) = \prod_{i=1}^{N}\left(\frac{\kappa_j}{\sigma_j}\exp\!\left(-\kappa_j\!\left(\frac{y_i - \mu_j}{\sigma_j}\right)\right)\left(1 + \frac{\exp\!\left(-\kappa_j\!\left(\frac{y_i - \mu_j}{\sigma_j}\right)\right)}{\gamma_j}\right)^{-(\gamma_j + 1)}\right)^{z_{ij}} \times \frac{1}{\sqrt{2\pi}\,\varphi_j}\exp\!\left(-\frac{1}{2}\left(\frac{\mu_j - \eta_j}{\varphi_j}\right)^{2}\right). \tag{A2}$$

To simplify the derivation, we process Equation (A2) in logarithmic form:

$$\ln f(\mu_j \mid \sigma_j, \gamma_j, p_j, \mathbf{z}, \mathbf{y}) = \sum_{i=1}^{N} z_{ij}\left(\ln\!\left(\frac{\kappa_j}{\sigma_j}\right) - \kappa_j\!\left(\frac{y_i - \mu_j}{\sigma_j}\right) - (\gamma_j + 1)\ln\!\left(1 + \frac{\exp\!\left(-\kappa_j\!\left(\frac{y_i - \mu_j}{\sigma_j}\right)\right)}{\gamma_j}\right)\right) + \ln\!\left(\frac{1}{\sqrt{2\pi}\,\varphi_j}\right) - \frac{1}{2}\left(\frac{\mu_j - \eta_j}{\varphi_j}\right)^{2}. \tag{A3}$$

Components that do not contain $\mu_j$ can be collected into a constant; therefore, Equation (A3) can be written as:

$$\ln f(\mu_j \mid \sigma_j, \gamma_j, p_j, \mathbf{z}, \mathbf{y}) = \text{constant} - \sum_{i=1}^{N} z_{ij}\,\kappa_j\!\left(\frac{y_i - \mu_j}{\sigma_j}\right) - \frac{1}{2}\left(\frac{\mu_j - \eta_j}{\varphi_j}\right)^{2} - \sum_{i=1}^{N} z_{ij}\,(\gamma_j + 1)\ln\!\left(1 + \frac{\exp\!\left(-\kappa_j\!\left(\frac{y_i - \mu_j}{\sigma_j}\right)\right)}{\gamma_j}\right).$$

In the same way, the full conditional posteriors for the parameters $\sigma_j$ and $\gamma_j$ can be obtained.
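For a numerical check, the final expression (which matches Equation (14)) can be coded directly and used as the log target in an acceptance-rejection step; this sketch assumes arrays of current parameter values and hyperparameters:

```python
import numpy as np

def log_post_mu_j(mu_j, j, y, z, sigma, gamma, eta, phi):
    """Unnormalized log full conditional of mu_j from the derivation above."""
    kappa = (1.0 / np.sqrt(2.0 * np.pi)) * (1.0 + 1.0 / gamma[j]) ** (gamma[j] + 1.0)
    s = -kappa * (y[z == j] - mu_j) / sigma[j]          # -kappa * (y - mu) / sigma
    return (s.sum()
            - 0.5 * ((mu_j - eta[j]) / phi[j]) ** 2     # Gaussian prior term
            - (gamma[j] + 1.0) * np.logaddexp(0.0, s - np.log(gamma[j])).sum())
```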

References

  1. Global Cancer Observatory. Available online: http://gco.iarc.fr/today/data/factsheets/populations/360-indonesia-fact-sheets.pdf (accessed on 16 April 2020).
  2. Grün, B. Model-Based Clustering. In Handbook of Mixture Analysis; CRC Press: Boca Raton, FL, USA, 2018; pp. 163–198.
  3. Bruse, J.L.; Zuluaga, M.A.; Khushnood, A.; McLeod, K.; Ntsinjana, H.N.; Hsia, T.; Sermesant, M.; Pennec, X.; Taylor, A.M.; Schievano, S. Detecting Clinically Meaningful Shape Clusters in Medical Image Data: Metrics Analysis for Hierarchical Clustering Applied to Healthy and Pathological Aortic Arches. IEEE Trans. Biomed. Eng. 2017, 64, 2373–2383.
  4. Pestunov, I.A.; Rylov, S.A.; Berikov, V.B. Hierarchical Clustering Algorithms for Segmentation of Multispectral Images. Optoelectron. Instrum. Data Process. 2015, 51, 329–338.
  5. Rohith, J.; Ramesh, H. Colour Based Segmentation of a Landsat Image Using K-Means Clustering Algorithm. J. Imag. Process. Pattern Recogn. Progress 2017, 4, 31–38.
  6. Muruganandham, S.K.; Sobya, D.; Nallusamy, S.; Mandal, D.K.; Chakraborty, P.S. Study on Leaf Segmentation Using K-Means and K-Medoid Clustering Algorithm for Identification of Disease. Indian J. Public Health Res. Dev. 2018, 9, 289–293.
  7. Huang, H.; Meng, F.; Zhou, S.; Jiang, F.; Manogaran, G. Brain Image Segmentation Based on FCM Clustering Algorithm and Rough Set. IEEE Access 2019, 7, 12386–12396.
  8. Oh, M.S.; Raftery, A.E. Model-Based Clustering with Dissimilarities: A Bayesian Approach. J. Comput. Graph. Stat. 2007, 16, 559–585.
  9. Greve, B.; Pigeot, I.; Huybrechts, I.; Pala, V.; Börnhorst, C. A Comparison of Heuristic and Model-Based Clustering Methods for Dietary Pattern Analysis. Public Health Nutr. 2016, 19, 255–264.
  10. Rasmussen, C.E. The Infinite Gaussian Mixture Model. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2000; pp. 554–560.
  11. Ji, Z.; Huang, Y.; Sun, Q.; Cao, G. A Spatially Constrained Generative Asymmetric Gaussian Mixture Model for Image Segmentation. J. Vis. Commun. Image Represent. 2016, 40, 600–626.
  12. Zhu, H.; Pan, S.; Xie, Q. Image Segmentation by Student's-t Mixture Models Based on Markov Random Field and Weighted Mean Template. Int. J. Signal Process. Image Process. Pattern Recogn. 2016, 9, 313–322.
  13. Franczak, B.C.; Browne, P.; McNicholas, P. Mixtures of Shifted Asymmetric Laplace Distributions. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1149–1157.
  14. Deledalle, C.A.; Parameswaran, S.; Nguyen, T.Q. Image Denoising with Generalized Gaussian Mixture Model Patch Priors. SIAM J. Imag. Sci. 2018, 11, 2568–2609.
  15. Fernandez, C.; Steel, M.F.J. On Bayesian Modelling of Fat Tails and Skewness. J. Am. Stat. Assoc. 1998, 93, 359–371.
  16. Iriawan, N. Computationally Intensive Approaches to Inference in Neo-Normal Linear Models. Ph.D. Thesis, Curtin University of Technology, Perth, Australia, 2000.
  17. Iriawan, N.; Pravitasari, A.A.; Fithriasari, K.; Irhamah; Purnami, S.W.; Ferriastuti, W. Comparative Study of Brain Tumor Segmentation Using Different Segmentation Techniques in Handling Noise. In Proceedings of the 2018 International Conference on Computer Engineering, Network and Intelligent Multimedia (CENIM), Surabaya, Indonesia, 26–27 November 2018; IEEE: Surabaya, Indonesia, 2018; pp. 289–293.
  18. Choir, A.S.; Iriawan, N.; Ulama, B.; Dokhi, M. MSEpBurr Distribution: Properties and Parameter Estimation. Pakistan J. Stat. Oper. Res. 2019, 15, 179–193.
  19. Box, G.E.P.; Tiao, G.C. Bayesian Inference in Statistical Analysis, 1st ed.; Addison-Wesley: Reading, MA, USA, 1973.
  20. Azzalini, A. A Class of Distributions Which Includes the Normal Ones. Scand. J. Stat. 1985, 12, 171–178.
  21. Anderson, E.; Nguyen, H. When Can We Improve on Sample Average Approximation for Stochastic Optimization? Oper. Res. Lett. 2020, 48, 566–572.
  22. Gelman, A.; Carlin, J.B.; Stern, H.S.; Dunson, D.B.; Vehtari, A.; Rubin, D.B. Bayesian Data Analysis, 3rd ed.; CRC Press: Boca Raton, FL, USA, 2014.
  23. Prasetyo, R.B.; Kuswanto, H.; Iriawan, N.; Ulama, B.S.S. Binomial Regression Models with a Flexible Generalized Logit Link Function. Symmetry 2020, 12, 221.
  24. Pravitasari, A.A.; Iriawan, N.; Safa, M.A.I.; Irhamah; Fithriasari, K.; Purnami, S.W.; Ferriastuti, W. MRI-Based Brain Tumor Segmentation Using Modified Stable Student's t from Burr Mixture Model with Bayesian Approach. Malays. J. Math. Sci. 2019, 13, 297–310.
Figure 1. Synthetic images: (a) SI01 and (b) SI02. Natural images: (c) NI01 and (d) NI02.
Figure 2. Segmentation result of SI01 for different methods.
Figure 3. Segmentation result of SI02 for different methods.
Figure 4. Segmentation result of NI01 for different methods.
Figure 5. Segmentation result of NI02 for different methods.
Figure 6. The misclassification ratio (MCR) comparison for SI01, SI02, NI01, and NI02 segmentation results.
Figure 7. The real dataset, used with medical consent: (a) IM01, (b) IM02, (c) IM03, and (d) IM04.
Figure 8. The histograms of (a) IM01, (b) IM02, (c) IM03, and (d) IM04 after preprocessing.
Figure 9. The region of interest (ROI) of (a) input slice in different layers compared with (b) the ground truth (GT) for the (c) GMM, (d) GGMM, (e) FSSN-MM, and (f) MSNBurr-MM algorithms.
Figure 10. The comparison of the MCR for each image and each method.
Figure 11. Trace plots of the generated Nenomimo parameters for the sample MRI images.
Table 1. The summary of overall silhouette coefficients (OSCs) for the sample images. GMM: Gaussian mixture model; GGMM: generalized Gaussian mixture model.

Model        Algorithm    Dataset    K=3        K=4        K=5        K=6        K=7        K=8        K=9
Normal       GMM          IM01       0.5498     0.3488     0.539      0.4553     0.6949 *   0.6936     0.654
                          IM02       0.904      0.9133 *   0.748      0.6909     0.758      0.795      0.748
                          IM03       0.7924     0.803      0.8356 *   0.8034     0.7654     0.7497     0.7138
                          IM04       0.8875     0.9275 *   0.8764     0.8759     0.8437     0.7868     0.7745
             GGMM         IM01       0.8021     0.8263 *   0.7412     0.7406     0.6824     0.6211     0.5488
                          IM02       0.7682     0.8487     0.8935 *   0.8204     0.7581     0.6328     0.7004
                          IM03       0.9002 *   0.8955     0.7324     0.7101     0.6048     0.5545     0.6266
                          IM04       0.8333     0.8859 *   0.8479     0.8022     0.7643     0.7614     0.7321
Neo-Normal   FSSN-MM      IM01       0.8725 *   0.6412     0.6752     0.6351     0.5897     0.6325     0.6606
                          IM02       0.8109     0.8636 *   0.8464     0.812      0.8467     0.7929     0.8292
                          IM03       0.9145 *   0.8014     0.7512     0.7049     0.6854     0.7046     0.6699
                          IM04       0.8347     0.9010 *   0.9009     0.8794     0.8695     0.8529     0.7558
             MSNBurr-MM   IM01       0.9300 *   0.8349     0.7412     0.7218     0.714      0.7331     0.7435
                          IM02       0.8428     0.9119 *   0.9106     0.7729     0.7322     0.6731     0.7091
                          IM03       0.9157 *   0.8014     0.7491     0.7072     0.6938     0.6931     0.6912
                          IM04       0.8364     0.9230 *   0.8647     0.8009     0.801      0.7647     0.7577
* indicates the maximum OSC value.
Table 2. The parameters of the Nenomimo for the sample images.

                          FSSN-MM                                    MSNBurr-MM
Sample Image   j          p_j      μ_j        σ_j      γ_j          p_j      μ_j        σ_j      γ_j
IM01           1          0.084    1.569      4.856    26.468       0.076    1.575      3.774    14.024
               2          0.862    89.569     7.992    0.904        0.861    87.891     6.678    0.864
               3          0.054    165.001    6.719    3.566        0.063    163.257    7.138    2.555
IM02           1          0.163    2.684      4.777    4.625        0.186    2.887      4.576    4.128
               2          0.487    113.521    7.562    11.061       0.501    112.372    5.123    10.784
               3          0.196    158.01     5.324    7.225        0.196    154.753    4.852    8.211
               4          0.154    197.731    7.789    0.916        0.117    198.262    5.476    0.913
IM03           1          0.079    0.379      2.149    10.251       0.074    0.375      2.14     9.321
               2          0.813    103.168    6.611    1.083        0.839    103.194    6.62     0.992
               3          0.108    197.113    7.708    13.934       0.087    197.263    7.685    18.858
IM04           1          0.054    0.475      2.379    17.343       0.069    0.445      3.469    16.321
               2          0.665    96.015     5.555    7.236        0.564    122.981    6.278    5.048
               3          0.136    125.176    5.355    6.725        0.245    161.062    3.283    4.769
               4          0.145    200.36     7.181    2.089        0.122    199.908    4.476    8.866
