Article

Breast Abnormality Boundary Extraction in Mammography Image Using Variational Level Set and Self-Organizing Map (SOM)

by Noor Ain Syazwani Mohd Ghani 1, Abdul Kadir Jumaat 1,2,*, Rozi Mahmud 3, Mohd Azdi Maasar 4, Farizuwana Akma Zulkifle 5 and Aisyah Mat Jasin 6
1 School of Mathematical Sciences, College of Computing, Informatics and Media, Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia
2 Institute for Big Data Analytics and Artificial Intelligence (IBDAAI), Universiti Teknologi MARA (UiTM), Shah Alam 40450, Selangor, Malaysia
3 Radiology Department, Faculty of Medicine and Health Sciences, Universiti Putra Malaysia, Serdang 43400, Selangor, Malaysia
4 Mathematical Sciences Studies, College of Computing, Informatics and Media, Seremban Campus, Universiti Teknologi MARA (UiTM) Negeri Sembilan Branch, Seremban 70300, Negeri Sembilan, Malaysia
5 Computing Sciences Studies, College of Computing, Informatics and Media, Kuala Pilah Campus, Universiti Teknologi MARA (UiTM) Negeri Sembilan Branch, Kuala Pilah 72000, Negeri Sembilan, Malaysia
6 Computing Sciences Studies, College of Computing, Informatics and Media, Pahang Branch, Raub Campus, Universiti Teknologi MARA (UiTM), Raub 27600, Pahang, Malaysia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(4), 976; https://doi.org/10.3390/math11040976
Submission received: 31 December 2022 / Revised: 31 January 2023 / Accepted: 11 February 2023 / Published: 14 February 2023

Abstract
Mammography provides a grayscale image of the breast. The main challenge in analyzing mammography images is to extract the boundary of the breast abnormality region for further analysis; in computer vision, this task is known as image segmentation. The variational level set mathematical model has proven effective for image segmentation. Several selective variational level set models have recently been formulated to accurately segment a specific object in an image. However, these models cannot handle images with complex intensity inhomogeneity, and their segmentation process tends to be slow. Therefore, this study formulated a new selective variational level set model for segmenting mammography images that incorporates a machine learning algorithm known as the Self-Organizing Map (SOM). In addition, a Gaussian function was applied in the model as a regularizer to speed up the processing time. The accuracy of the segmentation output was then evaluated using the Jaccard, Dice, Accuracy and Error metrics, while efficiency was assessed by recording the computational time. Experimental results indicate that the proposed model segments mammography images with the highest segmentation accuracy and fastest computational speed compared with other iterative models.

Graphical Abstract

1. Introduction

Cancer has become a leading cause of death worldwide. The Ministry of Health [1] stated that breast cancer is a serious disease that primarily affects women. Breast cancer arises in breast tissue and occurs when cells in the breast grow in an uncontrolled way. Spreafico et al. [2] noted that this cancer can affect both males and females, although it is far more common among females. Mammography, MRI and ultrasound are breast cancer diagnostic techniques that detect breast abnormalities at an early stage and help improve the chance of successful treatment [3]. Among these, mammography, a breast cancer screening technology that provides grayscale images of the breast, is the gold-standard technique for early-stage detection of breast abnormalities.
In medicine, segmentation plays an important role and has been widely developed in medical imaging technologies such as mammography, since it enables an automated diagnostic system to extract the boundary of the abnormality region [4]. Segmentation is the process of splitting an image into several segments or objects, which reduces the image's complexity for subsequent analysis. Variational segmentation approaches are normally derived in a level set mathematical framework by minimizing a cost energy function using the calculus of variations [5,6], while non-variational segmentation approaches are usually heuristic [7]. There is a wide range of literature on non-variational (non-level set) segmentation methods for extracting the abnormality region in mammography images, for instance, the fuzzy technique [6], Intuitionistic Fuzzy Image Processing [8], clustering-based methods [9,10,11,12] and neural networks [13]. In addition, deep learning methods such as the U-net model [14] are frequently used for mammogram image segmentation [15]. Non-variational approaches have had tremendous use and success and can obtain fast solutions; however, they rely heavily on the amount of available data [16], which makes them hard to apply under tight time constraints, and their heuristic nature brings drawbacks in terms of accuracy [7].
On the other hand, variational level set-based segmentation approaches are more structured and capable of achieving high speeds, accuracy and performance stability according to [7,17]. According to [18], variational level set-based segmentation techniques can be divided into two main categories: level set-based global segmentation (global variational) and level set-based selective segmentation (selective variational). Global segmentation necessitates segmenting the boundary of all objects in an input image. Meanwhile, selective segmentation only segments the desired object from an input image according to specific geometrical restrictions.
In the literature, well-known global variational active contour models were implemented by [19] on grayscale mammography images. In 2017, Ciecholewski [20] applied the Active Contour Without Edges (ACWE) model [21] but was unable to produce satisfactory results on images with strong intensity inhomogeneity. Because of that, the authors in [22] combined the ACWE model with the Fuzzy C-means clustering method to handle intensity inhomogeneity and reduce the presence of noise. In addition, Soomro and Choi [23] introduced a novel shifted Heaviside signed pressure force (SPF) function. However, the SPF function fails to segment a targeted object that lies close to a neighboring object or whose boundary is fuzzy. Two other related works on level set-based global segmentation of mammography images are [24,25].
Indeed, all the studies mentioned above are for level set-based global segmentation, as all features in an image should be segmented. However, the result produced by global segmentation may have poor segmentation quality when the targeted abnormality regions have almost similar intensities or are very close to healthy tissue boundaries, have fuzzy contours, low contrast and the presence of noise [26]. Therefore, variational level set-based selective image segmentation is more convenient to implement as this method aims to extract a single target object from an image using additional geometric constraint information.
Selective segmentation is concerned with segmenting or extracting a specific object in a given image based on minimal user input [27,28,29]. Research involving variational level set-based selective segmentation of grayscale mammography images is scarce. Related level set-based selective models for grayscale images were introduced by [26,30,31]. According to [31,32], the state-of-the-art model in selective segmentation is the Interactive Image Segmentation (IIS) model [33], which uses two sets of geometric constraints, namely geometric points inside and outside a targeted object, and is capable of segmenting both grayscale and vector-valued images. The most recent and effective research on selective segmentation techniques was proposed by [34]. They reformulated the model of [18] by incorporating an image enhancement technique into the fitting term to segment mammography images. While the result was successful, the total variation (TV) term used in the formulation is computationally expensive, which slows down the segmentation process [35]. In addition, their model cannot segment mammography images with complex intensity inhomogeneity.
One effective approach to segment an image with intensity inhomogeneity is to incorporate an unsupervised neural network machine learning algorithm, namely the Self-Organizing Map (SOM), in a variational level set formulation called SOMCV as proposed by Abdelsamea et al. [36]. Although the SOMCV model was formulated in a global segmentation framework, we found that the SOMCV is capable of selectively segmenting a targeted object in an image due to the advantage of using SOM in the formulation. This can be achieved by placing the initial contour relatively close to the targeted object.
Therefore, motivated by these problems, the aim of this study is to formulate a new selective variational level set model for segmenting mammography images that incorporates the idea of selective segmentation from [34] and the idea of using the unsupervised neural network algorithm, SOM, from [36]. The next section of this paper provides a brief overview of the models related to this study, followed by formulations of the proposed models. Then, the experimental outcomes of the existing and proposed models are presented.

2. Review of the Existing Models

This section provides a brief review of the models that are significant to this study.

2.1. Chan and Vese (CV) Model

The Active Contour Without Edges model by [21], formulated based on [37], is very important in variational image segmentation. The model assumes that the image u_0 = u_0(x, y) is composed of two regions of approximately piecewise-constant intensities with unknown values d_1 and d_2, separated by an unknown curve or contour D. Let the image domain be Ω. Assume the detected object is represented by the region Ω_1, where the intensity of u_0 is approximated by the value d_1 inside the curve D, whereas the intensity is approximated by the value d_2 in Ω_2 = Ω \ Ω_1, outside the curve D.
The level set technique, developed by [38], is applied, in which the unknown curve D is represented by the zero level set of the Lipschitz function φ . Thus, the CV model is defined as:
$$\min_{\varphi, d_1, d_2} CV(\varphi, d_1, d_2), \qquad CV(\varphi, d_1, d_2) = \mu \int_{\Omega} |\nabla H(\varphi)|\, d\Omega + \alpha^{+} \int_{\Omega} (u_0 - d_1)^2 H(\varphi)\, d\Omega + \alpha^{-} \int_{\Omega} (u_0 - d_2)^2 \big(1 - H(\varphi)\big)\, d\Omega. \tag{1}$$
Here, the non-negative parameters μ, α⁺ and α⁻ represent the weights of the regularizing term and the fitting terms, respectively. A regularized Heaviside function H(φ(x, y)) and Dirac delta function δ(φ(x, y)) with a small (near-zero) constant ε are introduced for curve stability [39]. Then, keeping d_1 and d_2 fixed in CV(φ, d_1, d_2) leads to the following Euler-Lagrange (EL) equation for φ:
$$\begin{cases} \mu\, \delta(\varphi)\, \nabla \cdot \left( \dfrac{\nabla \varphi}{|\nabla \varphi|} \right) - \alpha^{+} \delta(\varphi)(u_0 - d_1)^2 + \alpha^{-} \delta(\varphi)(u_0 - d_2)^2 = 0 & \text{in } \Omega, \\[4pt] \dfrac{\delta(\varphi)}{|\nabla \varphi|} \dfrac{\partial \varphi}{\partial \vec{n}} = 0 & \text{on } \partial\Omega, \end{cases} \tag{2}$$
where |∇φ| is the norm of the gradient of the level set function φ, whose integral is also known as the TV term. The finite difference method is then applied to solve the EL equation. Nonetheless, the CV model produces unsatisfactory results when segmenting a targeted object in intensity-inhomogeneous images and has a high computational cost due to the highly non-linear curvature term ∇·(∇φ/|∇φ|) in Equation (2).

2.2. SOM-Based Chan–Vese (SOMCV) Model

Abdelsamea et al. [36] successfully developed a global segmentation model based on the unsupervised neural network SOM approach, called the SOM-based Chan–Vese model (SOMCV). It works by directly incorporating information from the prototype neurons in a trained SOM to decide whether to shrink or expand the existing contour during the iterative optimization process.
During the training process, the neurons of each SOM are topologically arranged in the corresponding map based on their prototypes (weights), and the neurons at a certain geometric distance from them are moved toward the current input using the classical self-organization learning rule of the SOM, expressed by:
$$s_p(t+1) := s_p(t) + \eta(t)\, g_{cp}(t)\left[ u_0^{(tm)}(x_t, y_t) - s_p(t) \right], \tag{3}$$
where η ( t ) is the learning rate defined as:
$$\eta(t) := \eta_0 \exp\left( -\frac{t}{\kappa_\eta} \right). \tag{4}$$
The intensity u_0^{(tm)}(x_t, y_t) of a randomly extracted pixel (x_t, y_t) of a training image is applied as the input to the SOM at time t = 0, 1, 2, …, t_max^{(tm)} − 1, where t_max^{(tm)} is the number of iterations in the SOM's training. The function g_{cp}(t) is the neighborhood kernel at time t of the neuron p around the Best-Matching Unit (BMU) neuron c, defined as
$$g_{cp}(t) := \exp\left( -\frac{\| m_b - m_n \|^2}{2 m^2(t)} \right), \tag{5}$$
where m b , m n R 2 are the location vectors of neurons b and n in the output neural map, and m ( t ) is a time-decreasing neighborhood radius which is expressed as follows:
$$m(t) := m_0 \exp\left( -\frac{t}{\kappa_m} \right), \tag{6}$$
where m_0 > 0 is the initial neighborhood radius of the map and κ_m > 0 is another time constant. Once the SOM's training is complete, the trained network is adapted online in the testing session to estimate and describe globally the foreground and background intensity distributions of an identical test image u_0(x, y) during the evolution of the contour D. For each neuron p, the quantities
$$s_{k^+} := \operatorname*{argmin}_p \left| s_p - \operatorname{mean}\big( u_0(x, y) \mid (x, y) \in \mathrm{in}(D) \big) \right|, \tag{7}$$
$$s_{k^-} := \operatorname*{argmin}_p \left| s_p - \operatorname{mean}\big( u_0(x, y) \mid (x, y) \in \mathrm{out}(D) \big) \right|, \tag{8}$$
which are the distances of the associated prototype s p from the mean intensities of the current foreground and background approximations, respectively, are also calculated repeatedly throughout the testing session. Therefore, the energy function of the proposed SOMCV model is defined as:
$$E_{SOMCV}(\varphi) := \alpha^{+} \int_{\Omega} e^{+} H(\varphi(x, y))\, d\Omega + \alpha^{-} \int_{\Omega} e^{-} \big(1 - H(\varphi(x, y))\big)\, d\Omega, \tag{9}$$
where α⁺, α⁻ ≥ 0, φ is the segmentation level set function, H is the Heaviside function, e⁺(x, y, D) := (u_0(x, y) − s_{k⁺}(D))² and e⁻(x, y, D) := (u_0(x, y) − s_{k⁻}(D))². The model can be solved iteratively using the finite difference method.
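The training rule of Equations (3)–(6) can be sketched as follows. This is a minimal one-dimensional Python illustration (not the authors' MATLAB implementation); the map size and the values of η₀, m₀, κ_η and κ_m are illustrative only:

```python
import numpy as np

def train_som(intensities, n_neurons=16, t_max=2000, eta0=0.5,
              m0=4.0, kappa_eta=1000.0, kappa_m=1000.0, seed=0):
    """SOM learning rule of Eq. (3): prototypes s_p are pulled toward
    randomly drawn pixel intensities, weighted by a Gaussian neighborhood
    kernel around the best-matching unit (BMU)."""
    rng = np.random.default_rng(seed)
    s = rng.random(n_neurons)                   # random initial prototypes
    pos = np.arange(n_neurons, dtype=float)     # 1-D map coordinates
    for t in range(t_max):
        u = rng.choice(intensities)             # random training pixel
        c = np.argmin(np.abs(s - u))            # BMU index
        eta = eta0 * np.exp(-t / kappa_eta)     # learning rate, Eq. (4)
        m = m0 * np.exp(-t / kappa_m)           # neighborhood radius, Eq. (6)
        g = np.exp(-(pos - pos[c]) ** 2 / (2.0 * m ** 2))  # kernel, Eq. (5)
        s += eta * g * (u - s)                  # prototype update, Eq. (3)
    return np.sort(s)
```

After training, the sorted prototypes summarize the intensity distribution of the training pixels and can be queried as in Equations (7) and (8).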

2.3. Primal-Dual Selective Segmentation 2 (PD2) Model

Recently, Ghani et al. [34] developed a selective segmentation model, namely the Primal-Dual Selective Segmentation 2 (PD2). This model is an improvement on the prior model proposed by Jumaat and Chen [18], termed the Primal-Dual Selective Segmentation (PD) model. The PD model may yield disappointing results for low-contrast images. Therefore, Ghani et al. [34] modified the PD model by replacing the fitting term with an image enhancement algorithm which can enhance low-contrast images. Now, we will introduce the PD model.
Assume u_0 = u_0(x, y) is the image in the domain Ω. Here, the marker set A = {m_j = (x_j, y_j) ∈ Ω, 1 ≤ j ≤ n} is introduced to generate the polygon S with n (≥ 3) marker points placed close to the targeted object. R_d(x, y) is the Euclidean distance of each point (x, y) ∈ Ω from its nearest point (x_s, y_s) ∈ S:
$$R_d(x, y) = \sqrt{(x - x_s)^2 + (y - y_s)^2}. \tag{10}$$
Then, the PD model is defined as:
$$\min_{a, b \in [0,1]} \left\{ PD(a, b) = \mu \int_{\Omega} |\nabla a|_g\, d\Omega + \int_{\Omega} r\, b\, d\Omega + \theta \int_{\Omega} R_d\, b\, d\Omega + \frac{1}{2\tau} \int_{\Omega} (a - b)^2\, d\Omega \right\}, \tag{11}$$
where μ, θ and τ are the weighting parameters that control the TV function |∇a|_g, the distance function R_d and the penalty term (a − b)², respectively. In addition, the function r is the fitting term, defined as r = (k_1 − u_0)² − (k_2 − u_0)², b is a dual variable and g(x, y) is the edge detector function. k_1 and k_2 are unknown constants that specify the average intensity of the input image inside and outside the contour a.
Let u_HS be the output image after applying the image enhancement approach, which enhances the contrast of an input image so that hidden information is revealed for a better segmentation result. The modified PD model, termed PD2, is defined by replacing the fitting term image u_0 in Equation (11) with the enhanced image u_HS as follows:
$$PD2(a, b) = \mu \int_{\Omega} |\nabla a|_g\, d\Omega + \int_{\Omega} \left[ (k_1 - u_{HS})^2 - (k_2 - u_{HS})^2 \right] b\, d\Omega + \theta \int_{\Omega} R_d\, b\, d\Omega + \frac{1}{2\tau} \int_{\Omega} (a - b)^2\, d\Omega. \tag{12}$$
Equation (12) is solved using an alternating minimization approach.
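The distance term R_d of Equation (10) can be computed directly on a pixel grid. The following is a minimal Python sketch (not the authors' implementation), assuming the marker points are given as (row, column) pixel coordinates:

```python
import numpy as np

def distance_fitting_term(shape, markers):
    """R_d(x, y): Euclidean distance from each pixel to its nearest
    user-placed marker point (Eq. (10))."""
    rows, cols = np.indices(shape)
    pts = np.asarray(markers, dtype=float)
    # distance from every pixel to every marker, then keep the minimum
    d = np.sqrt((rows[..., None] - pts[:, 0]) ** 2 +
                (cols[..., None] - pts[:, 1]) ** 2)
    return d.min(axis=-1)
```

The resulting map is zero at the markers and grows with distance from the polygon, which is what penalizes the contour for drifting away from the user-selected region.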

3. The Proposed Models

The PD2 model [34] may lead to a slow segmentation process due to the total variation function. Moreover, the model cannot handle images with complex intensity inhomogeneity. Therefore, the main idea of this study is to formulate a new selective variational level set model for segmenting mammography images that incorporates the idea of selective segmentation from [34] and the idea of using SOM from [36].
Thus, the variational energy functional minimization problem of the proposed model, termed the Selective Self-Organizing Map (SSOM), is defined as:
$$E_{SSOM}(D) := \alpha^{+} \int_{\mathrm{in}(D)} \big(u_0(x, y) - s_{k^+}(D)\big)^2\, dx\, dy + \alpha^{-} \int_{\mathrm{out}(D)} \big(u_0(x, y) - s_{k^-}(D)\big)^2\, dx\, dy + \theta \int_{\mathrm{inside}(D)} R_d(x, y)\, dx\, dy. \tag{13}$$
The functions s_{k⁺} and s_{k⁻} are defined as in Equations (7) and (8), respectively. The parameters α⁺, α⁻ ≥ 0 represent the weights of the two image energy terms inside and outside the contour, respectively, while θ > 0 is the area parameter of the distance fitting term. The value of θ changes from image to image according to the targeted object: if the area parameter is set too large, the outcome will simply be the polygon S, which is undesirable. To compute Equation (13) over the whole image domain Ω, the contour curve D is replaced with the level set function φ, obtaining:
$$\min_{\varphi} E_{SSOM}(\varphi) := \alpha^{+} \int_{\Omega} H(\varphi)\big(u_0(x, y) - s_{k^+}(D)\big)^2\, d\Omega + \alpha^{-} \int_{\Omega} \big(1 - H(\varphi)\big)\big(u_0(x, y) - s_{k^-}(D)\big)^2\, d\Omega + \theta \int_{\Omega} H(\varphi)\, R_d\, d\Omega, \tag{14}$$
where φ(x, y) and R_d(x, y) are written as φ and R_d, respectively, for simplicity. The Heaviside step function H and Dirac function δ are defined as
$$H(\varphi(x, y)) = \frac{1}{2}\left[ 1 + \frac{2}{\pi} \tan^{-1}\left( \frac{\varphi}{\varepsilon} \right) \right], \tag{15}$$
$$\delta(\varphi(x, y)) = H'(\varphi(x, y)) = \frac{\varepsilon}{\pi\left( \varepsilon^2 + \varphi^2 \right)}, \tag{16}$$
where ε is a constant used to avoid values of H and δ tending to zero, which could cause the extraction to fail if the targeted object is far from the initial contour.
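The regularized Heaviside and Dirac functions of Equations (15) and (16) translate directly into code. A minimal Python sketch (ε = 1 is an illustrative default):

```python
import numpy as np

def heaviside(phi, eps=1.0):
    """Regularized Heaviside function of Eq. (15)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def dirac(phi, eps=1.0):
    """Regularized Dirac delta of Eq. (16), the derivative of H."""
    return eps / (np.pi * (eps ** 2 + phi ** 2))
```

Smaller ε sharpens the transition of H around the zero level set, while the nonzero tails of δ are exactly what let the model act on pixels far from the current contour.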

3.1. Derivation of Euler Lagrange (EL) Equation

The optimization problem of the energy function in Equation (14) can be solved using the calculus of variations to obtain the EL equation, ∂E_SSOM(φ)/∂φ. The evolution of the level set function φ(x, y) should satisfy the EL equation. To derive it, let the integrands be I_1(φ) = H(φ) and I_2(φ) = 1 − H(φ), and recall the Taylor expansion about i = 0:
$$f(i) = f(0) + f'(0)\, i + O(i^2). \tag{17}$$
Afterwards, adding the variation ην to the level set function φ such that φ → φ + ην, where ν is an arbitrary test function and η is a close-to-zero real parameter, gives
$$I_1(\varphi + \eta\nu) = H(\varphi + \eta\nu), \qquad I_2(\varphi + \eta\nu) = 1 - H(\varphi + \eta\nu). \tag{18}$$
Next, we differentiate I 1 ( φ + η ν ) = H ( φ + η ν ) and I 2 ( φ + η ν ) = 1 H ( φ + η ν ) with respect to η as follows:
$$\frac{d}{d\eta} I_1(\varphi + \eta\nu) = \frac{d}{d\eta} H(\varphi + \eta\nu) = H'(\varphi + \eta\nu)\,\nu = \delta_\varepsilon(\varphi + \eta\nu)\,\nu, \tag{19}$$
$$\frac{d}{d\eta} I_2(\varphi + \eta\nu) = \frac{d}{d\eta}\big( 1 - H_\varepsilon(\varphi + \eta\nu) \big) = -H_\varepsilon'(\varphi + \eta\nu)\,\nu = -\delta_\varepsilon(\varphi + \eta\nu)\,\nu. \tag{20}$$
Therefore, applying Taylor expansion, which is Equation (17), at η = 0 will give
$$I_1(\varphi + \eta\nu) = I_1(\varphi) + I_1'(\varphi)\,\eta + O(\eta^2) = H_\varepsilon(\varphi) + \delta_\varepsilon(\varphi)\,\nu\,\eta + O(\eta^2), \tag{21}$$
and
$$I_2(\varphi + \eta\nu) = I_2(\varphi) + I_2'(\varphi)\,\eta + O(\eta^2) = \big(1 - H_\varepsilon(\varphi)\big) - \delta_\varepsilon(\varphi)\,\nu\,\eta + O(\eta^2). \tag{22}$$
Now, the first variation of E S S O M ( φ ) in Equation (14) is defined as
$$\frac{\partial E_{SSOM}(\varphi)}{\partial \varphi} = \lim_{\eta \to 0} \frac{E_{SSOM}(\varphi + \eta\nu) - E_{SSOM}(\varphi)}{\eta} = 0. \tag{23}$$
Before evaluating Equation (23), we first compute
$$\frac{E_{SSOM}(\varphi + \eta\nu) - E_{SSOM}(\varphi)}{\eta} = \frac{1}{\eta} \int_{\Omega} \Big[ \alpha^{+} \big(u_0(x, y) - s_{k^+}(D)\big)^2 \big( H_\varepsilon(\varphi) + \delta_\varepsilon(\varphi)\,\nu\,\eta + O(\eta^2) - H_\varepsilon(\varphi) \big) + \alpha^{-} \big(u_0(x, y) - s_{k^-}(D)\big)^2 \big( 1 - H_\varepsilon(\varphi) - \delta_\varepsilon(\varphi)\,\nu\,\eta + O(\eta^2) - (1 - H_\varepsilon(\varphi)) \big) + \theta R_d \big( H_\varepsilon(\varphi) + \delta_\varepsilon(\varphi)\,\nu\,\eta + O(\eta^2) - H_\varepsilon(\varphi) \big) \Big]\, d\Omega. \tag{24}$$
Simplifying Equation (24), it becomes
$$\frac{E_{SSOM}(\varphi + \eta\nu) - E_{SSOM}(\varphi)}{\eta} = \frac{1}{\eta} \int_{\Omega} \Big[ \alpha^{+} \big(u_0(x, y) - s_{k^+}(D)\big)^2 \big( \delta_\varepsilon(\varphi)\,\nu\,\eta + O(\eta^2) \big) - \alpha^{-} \big(u_0(x, y) - s_{k^-}(D)\big)^2 \big( \delta_\varepsilon(\varphi)\,\nu\,\eta + O(\eta^2) \big) + \theta R_d \big( \delta_\varepsilon(\varphi)\,\nu\,\eta + O(\eta^2) \big) \Big]\, d\Omega. \tag{25}$$
Next, we evaluate Equation (23) using information from Equation (25):
$$\int_{\Omega} \delta_\varepsilon(\varphi)\,\nu \left( \alpha^{+} \big(u_0(x, y) - s_{k^+}(D)\big)^2 - \alpha^{-} \big(u_0(x, y) - s_{k^-}(D)\big)^2 + \theta R_d \right) d\Omega = 0. \tag{26}$$
The integrand in Equation (26) will be zero if
$$\delta_\varepsilon(\varphi)\,\nu \left( \alpha^{+} \big(u_0(x, y) - s_{k^+}(D)\big)^2 - \alpha^{-} \big(u_0(x, y) - s_{k^-}(D)\big)^2 + \theta R_d \right) = 0. \tag{27}$$
Finally, since this must hold for an arbitrary test function ν, the EL equation for the SSOM model is
$$\delta_\varepsilon(\varphi) \left( \alpha^{+} \big(u_0(x, y) - s_{k^+}(D)\big)^2 - \alpha^{-} \big(u_0(x, y) - s_{k^-}(D)\big)^2 + \theta R_d \right) = 0. \tag{28}$$
Hence, applying the gradient descent method will obtain the following gradient descent flow:
$$\frac{\partial \varphi}{\partial t} = -\frac{\partial E_{SSOM}(\varphi)}{\partial \varphi} = -\delta_\varepsilon(\varphi) \left( \alpha^{+} \big(u_0(x, y) - s_{k^+}(D)\big)^2 - \alpha^{-} \big(u_0(x, y) - s_{k^-}(D)\big)^2 + \theta R_d \right). \tag{29}$$
Equation (29) is discretized and solved using the forward finite difference method. Here, ∂φ/∂t denotes the progression of the level set function φ(x, y) with respect to the artificial time t. Note that φ(x, y) evolves in the direction opposite to the EL equation, i.e., −∂E_SSOM(φ)/∂φ, which is the steepest descent direction of the energy function E_SSOM(φ).
To preserve the regularity of the function φ(x, y), which is essential for producing a smooth segmentation contour, we replaced the traditional TV regularization term used in the PD2 model with the Gaussian function G_σ = e^{−(x² + y²)/(2σ²)}, where σ is the standard deviation controlling the smoothness of the contour.
The Gaussian function is convolved with the level set function φ(x, y), and the output of each iteration is used as the initial condition for the next. As a result, the need to solve the highly non-linear curvature term, which is computationally expensive, is eliminated, making the evolution of φ(x, y) in our proposed SSOM model significantly more efficient.
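One explicit update of the gradient descent flow in Equation (29), followed by the Gaussian regularization just described, can be sketched as follows. This is a simplified Python illustration, not the authors' MATLAB code; all parameter values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssom_step(phi, u0, sk_pos, sk_neg, Rd, alpha_pos=1.0, alpha_neg=1.0,
              theta=0.5, dt=0.1, eps=1.0, sigma=1.0):
    """One explicit time step of the flow in Eq. (29), with Gaussian
    smoothing of phi in place of the TV curvature term."""
    delta = eps / (np.pi * (eps ** 2 + phi ** 2))        # regularized Dirac
    force = (alpha_pos * (u0 - sk_pos) ** 2
             - alpha_neg * (u0 - sk_neg) ** 2 + theta * Rd)
    phi = phi - dt * delta * force                       # phi_t = -dE/dphi
    return gaussian_filter(phi, sigma)                   # phi <- G_sigma * phi
```

The convolution step replaces the curvature-driven regularization, so each iteration costs only a pointwise update plus a separable Gaussian filter.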

3.2. A New Variant of the SSOM Model

Mammography images are known to be low contrast, which can lead to unsatisfactory segmentation results. The histogram of a grayscale mammography image is a graph indicating how many times each intensity value occurs in the image, and a great deal about the appearance of an image can be deduced from it. For example, Figure 1 shows a mammography image with its histogram profile.
The mammography image in Figure 1a has low contrast because its intensity values are clustered at the upper end, as indicated in the histogram in Figure 1b. A low-contrast image may affect the segmentation output. In a well-contrasted image, the intensity values would be spread out over much of the intensity (gray level) range.
We can spread out the intensity values in a specified range by applying the piecewise linear function defined as
$$y = \frac{d - c}{b - a}(x - a) + c. \tag{30}$$
Based on this function, pixel values less than a are all mapped to c, and pixel values greater than b are all mapped to d. The output intensity y between c and d is computed using Equation (30), where x is the input intensity between a and b. This procedure stretches the intensity values of the input image over the desired output range. In this study, we set a to the bottom 1% of all input intensity values, b to the top 1%, and c and d to 0 and 1, respectively. These settings produced satisfactory results, as demonstrated in [34].
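With these settings, the piecewise linear stretch of Equation (30) reduces to a percentile-based normalization. A minimal Python sketch (not the authors' implementation):

```python
import numpy as np

def stretch(u0, low_pct=1.0, high_pct=99.0):
    """Piecewise linear contrast stretch of Eq. (30): a and b are the
    bottom/top 1% input intensities, with c = 0 and d = 1."""
    a, b = np.percentile(u0, [low_pct, high_pct])
    y = (u0 - a) / (b - a)        # (d - c)/(b - a) * (x - a) + c, c=0, d=1
    return np.clip(y, 0.0, 1.0)   # values below a -> c, values above b -> d
```

Clipping implements the two flat pieces of the transform, so the bottom and top 1% of intensities saturate at 0 and 1 while the middle range is stretched linearly.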
Figure 2 demonstrates the output intensity, the corresponding histogram of the mammogram image in Figure 1a and the output image after applying the piecewise linear function.
As shown in Figure 2a, the input intensity values of the mammography image in Figure 1a are transformed according to the piecewise linear function; the results of the transformation are indicated as the output intensity in Figure 2a, and the corresponding histogram is illustrated in Figure 2b. Based on the histogram, the intensity values after the transformation are more spread out than in the original histogram profile in Figure 1b. This indicates that the output image, shown in Figure 2c, has greater contrast than the original.
Here, by applying the idea of spreading out the intensity values using the piecewise linear function in Equation (30), we propose a modified version of SSOM termed the SSOMH (Selective Self-Organizing Map Histogram) segmentation model. Let u_0 = u_0(x, y) denote the input image and u_HS the output image after applying the piecewise linear function. Then, the modified model is defined as follows:
$$E_{SSOMH}(D) := \alpha^{+} \int_{\mathrm{in}(D)} \big(u_{HS}(x, y) - s_{k^+}(D)\big)^2\, dx\, dy + \alpha^{-} \int_{\mathrm{out}(D)} \big(u_{HS}(x, y) - s_{k^-}(D)\big)^2\, dx\, dy + \theta \int_{\mathrm{inside}(D)} R_d(x, y)\, dx\, dy. \tag{31}$$
The contour curve D is then replaced with the level set function φ , obtaining:
$$E_{SSOMH}(\varphi) := \alpha^{+} \int_{\Omega} H(\varphi)\big(u_{HS}(x, y) - s_{k^+}(D)\big)^2\, d\Omega + \alpha^{-} \int_{\Omega} \big(1 - H(\varphi)\big)\big(u_{HS}(x, y) - s_{k^-}(D)\big)^2\, d\Omega + \theta \int_{\Omega} H(\varphi)\, R_d\, d\Omega. \tag{32}$$
Then, the associated EL equation by calculus of variation is defined as follows:
$$\delta_\varepsilon(\varphi) \left( \alpha^{+} \big(u_{HS}(x, y) - s_{k^+}(D)\big)^2 - \alpha^{-} \big(u_{HS}(x, y) - s_{k^-}(D)\big)^2 + \theta R_d \right) = 0, \tag{33}$$
with the following gradient descent flow:
$$\frac{\partial \varphi}{\partial t} = -\delta_\varepsilon(\varphi) \left( \alpha^{+} \big(u_{HS}(x, y) - s_{k^+}(D)\big)^2 - \alpha^{-} \big(u_{HS}(x, y) - s_{k^-}(D)\big)^2 + \theta R_d \right). \tag{34}$$
Finally, Equation (34) is solved and discretized using the forward finite differences method.

3.3. Steps of the Algorithm for the Proposed SSOM and SSOMH Models

This section presents the steps involved in implementing the proposed SSOM and SSOMH models. The solutions were computed in MATLAB R2017b on an 11th Gen Intel(R) Core(TM) i5-1155G7 CPU @ 2.5 GHz with 8 GB of installed memory (RAM). Algorithm 1 is described as follows:
Algorithm 1: Algorithm for the SSOM Model.
  • Procedure
    • Input
      -
      Training and testing grayscale mammography images.
      -
      Number of neurons and network topology.
      -
Number of iterations t_max^(tm) for neural map training.
      -
Maximum number of iterations t_max^(evol) for the contour evolution.
      -
      η 0 > 0 ; initial learning rate.
      -
      m 0 > 0 ; initial neighborhood radius of the map.
      -
κ_η, κ_m > 0; time constants of the learning rate and the neighborhood radius.
      -
α⁺, α⁻ ≥ 0; weights of the image energy terms inside and outside the contour, respectively.
      -
σ > 0; Gaussian contour smoothing parameter.
      -
      β > 0 ; binary approximation constant of the level set function.
    • Output
      -
      Segmentation result.
      •          TRAINING SESSION
  • Initialize the weights of the neurons in the output layer at random.
  • Repeat
  •    Choose a pixel (x_t, y_t) at random in the image domain Ω and determine the winner neuron for the input intensity u_0^(tm)(x_t, y_t).
  •    Update the weights of neuron s p using Equations (3)–(6).
  • Until the learning of weights (prototypes) is complete (i.e., reached the iterations number t max   ( t m ) ).
    •              TESTING SESSION
  • Choose a subset Ω 0 (e.g., square) in the image domain Ω with boundary Ω 0 . Then, initialize the level set function as:
    $$\varphi(x, y) = \begin{cases} \beta, & (x, y) \in \Omega_0 \setminus \partial\Omega_0, \\ 0, & (x, y) \in \partial\Omega_0, \\ -\beta, & (x, y) \in \Omega \setminus (\Omega_0 \cup \partial\Omega_0). \end{cases}$$
  • Minimize the functional SSOM based on Equation (14).
  • Repeat
  •    Calculate s k + and s k from Equations (7) and (8).
  •    Evolve the level set function φ based on the finite difference of Equation (29).
  •    Perform the update at each iteration of the finite difference framework to reinitialize the current level set function to be binary.
    $$\varphi \leftarrow \beta\left( H(\varphi) - H(-\varphi) \right),$$
       Then, regularize the obtained level set function via convolution:
    $$\varphi \leftarrow G_\sigma * \varphi,$$
  • Until the evolution of the curve converges (i.e., the stopping criterion ‖φ^{n+1} − φ^n‖ / ‖φ^n‖ ≤ tol is met or the maximum number of iterations t_max^(evol) is reached).
  • End procedure.
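The testing session of Algorithm 1 can be sketched end-to-end as follows. This is a simplified Python illustration (the original implementation is in MATLAB), assuming a trained prototype vector s from the training session and a rectangular initial region; all parameter values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssom_testing_session(u0, Rd, s, box, beta=1.0, eps=0.1, sigma=1.0,
                         alpha_pos=1.0, alpha_neg=1.0, theta=0.0,
                         dt=0.5, tol=1e-5, t_max=100):
    """Testing session of Algorithm 1 (sketch): s holds the trained SOM
    prototypes and box = (r0, r1, c0, c1) places the initial contour."""
    H = lambda p: 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(p / eps))
    r0, r1, c0, c1 = box
    phi = -beta * np.ones_like(u0)      # -beta outside the initial region
    phi[r0:r1, c0:c1] = beta            # +beta inside the initial region
    for _ in range(t_max):
        h = H(phi)
        fg = (u0 * h).sum() / h.sum()               # current foreground mean
        bg = (u0 * (1 - h)).sum() / (1 - h).sum()   # current background mean
        sk_pos = s[np.argmin(np.abs(s - fg))]       # Eq. (7)
        sk_neg = s[np.argmin(np.abs(s - bg))]       # Eq. (8)
        delta = eps / (np.pi * (eps ** 2 + phi ** 2))  # regularized Dirac
        force = (alpha_pos * (u0 - sk_pos) ** 2
                 - alpha_neg * (u0 - sk_neg) ** 2 + theta * Rd)
        phi_new = phi - dt * delta * force            # descent step, Eq. (29)
        phi_new = beta * (H(phi_new) - H(-phi_new))   # binary reinitialization
        phi_new = gaussian_filter(phi_new, sigma)     # phi <- G_sigma * phi
        if np.linalg.norm(phi_new - phi) <= tol * np.linalg.norm(phi):
            phi = phi_new
            break
        phi = phi_new
    return phi >= 0                     # segmentation mask
```

Placing the initial box close to the targeted object is what makes the globally formulated SOM energy act selectively, as discussed in Section 1.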
The stopping criteria for both models are a tolerance of tol = 1 × 10⁻⁵ and a maximum number of iterations t_max^(evol) = 100. Next, Algorithm 2, the algorithm for the SSOMH model, is discussed. All steps in Algorithm 2 are equivalent to those in Algorithm 1 except Step 8 and Step 11: in Step 8, the functional is minimized based on Equation (32), while in Step 11, the level set function is evolved based on Equation (34). Algorithm 2 is described as follows:
Algorithm 2: Algorithm for SSOMH Model.
  • Step 1 to Step 7 are identical to Algorithm 1.
  • For Step 8, minimize the functional SSOMH based on Equation (32).
  • Then, follow Step 9 and Step 10 from Algorithm 1.
  • Next, evolve the level set function φ based on the finite difference approximation of Equation (34).
  • For Step 12 to Step 14, the flow is similar to Algorithm 1.

3.4. Convergence Analysis

Based on the proposed SSOM model in Equation (14), let Ω = {φ̄ | ∇E_SSOM(φ̄) = 0} be the solution set. The gradient algorithm of our method can be regarded as a composite mapping A = M ∘ P. Here, P(φ) = (φ, −∇E_SSOM(φ)) is a mapping from R^n to R^n × R^n, while M is a mapping from R^n × R^n to R^n. Thus, applying P to a given point φ yields the point φ_k together with its negative gradient, where
$$\nabla E_{SSOM}(\varphi_k) = \delta_\varepsilon(\varphi_k) \left( \alpha^{+} \big(u_0(x, y) - s_{k^+}(D)\big)^2 - \alpha^{-} \big(u_0(x, y) - s_{k^-}(D)\big)^2 + \theta R_d \right). \tag{35}$$
To prove the convergence of Algorithm 1, we need to show that these five sufficient conditions are met
  • M is a closed mapping;
  • P is continuous;
  • A is closed at φ (∇E_SSOM(φ) ≠ 0);
  • E_SSOM(φ) is a descent function of A and φ;
  • The sequence { φ ( k ) } is contained in a compact set, Τ .
We now verify these conditions. First, through the mapping M, given a current point φ_k and a direction d = −∇E_SSOM(φ_k), we can obtain an updated solution φ_{k+1} = φ_k + d Δt, whose energy E_SSOM(φ_{k+1}) is lower than that of the point φ_k from the previous iteration. When ∇E_SSOM(φ) ≠ 0, M is a closed mapping by Lemma 1 of [40].
Next, the mapping P is continuous because the energy function E_SSOM(φ) is continuous and differentiable. The mapping A is closed at φ (∇E_SSOM(φ) ≠ 0) according to Inference 1 of [40]. When φ ∉ Ω, we obtain d = −∇E_SSOM(φ) ≠ 0 and ∇E_SSOM(φ)^T d < 0, indicating that E_SSOM(φ) is a descent function of A and φ.
Furthermore, the evolution of our algorithm’s level set function can be described in the following limits:
$$\varphi(x,y) = \begin{cases} \varphi(x,y) & \text{if } |\varphi(x,y)| < L, \\ -L & \text{if } \varphi(x,y) < -L, \\ L & \text{if } \varphi(x,y) > L, \end{cases}$$
where L is a positive number. As a result, we can define a compact set:
$$T := \left\{ \varphi = (\varphi_1, \ldots, \varphi_N) : \varphi \in \mathbb{R}^N,\; |\varphi_i| \le L \;\; \forall i \in [1, N] \right\},$$
where N denotes the number of pixels in the input image. The sequence {φ^(k)} is obviously contained in T. Thus, Algorithm 1 is convergent according to Lemma 3 from [40]. To prove the convergence of Algorithm 2, a similar approach is taken. The only differences are: (1) the term E_SSOM is changed to E_SSOMH and (2) Equation (35) is replaced by the following Equation (36):
$$\nabla E_{SSOMH}(\varphi_k) = \delta_\varepsilon(\varphi)\left( \alpha^{+}\left(u_{HS}(x,y) - s_k^{+}(D)\right)^2 - \alpha^{-}\left(u_{HS}(x,y) - s_k^{-}(D)\right)^2 + \theta R_d \right). \tag{36}$$
The proof is completed. □
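The roles of the mappings P and M and of the compact set T can be illustrated numerically on a toy one-dimensional problem. This is only a schematic check of the descent and truncation properties, using an assumed quadratic stand-in for E_SSOM, not the actual model:

```python
import numpy as np

def energy(phi):
    # Stand-in quadratic energy with minimizer phi = 2 (assumption).
    return 0.5 * (phi - 2.0) ** 2

def grad(phi):
    return phi - 2.0

def step(phi, dt=0.5, L=10.0):
    d = -grad(phi)                              # mapping P: point and descent direction
    return float(np.clip(phi + dt * d, -L, L))  # mapping M, truncated into T = [-L, L]

phi, energies = -8.0, []
for _ in range(50):
    energies.append(energy(phi))
    phi = step(phi)
energies.append(energy(phi))

# The energy sequence is non-increasing and every iterate stays in [-L, L],
# matching the descent and compactness conditions above.
assert all(e2 <= e1 for e1, e2 in zip(energies, energies[1:]))
```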

4. Experimental Results

In this section, the accuracy and efficiency of the SSOM and SSOMH models are compared with those of iterative models and a deep learning-based method. The iterative models are the SOMCV model, the PD2 model and the state-of-the-art IIS model, while the deep learning method is U-Net. The SSOM, SSOMH, PD2 and U-Net algorithms are implemented in MATLAB, while the IIS model is run using the software provided by its authors [33].
To test the performances of all methods, two experiments were conducted. The first experiment was on segmenting the region of interest (ROI) images from the INbreast database, while the second experiment was on segmenting ROI images from the CBIS-DDSM database. Both datasets are publicly available datasets of breast cancer abnormalities with ground-truth annotations from [41,42], respectively. Due to the limited number of ROI in each dataset, we have augmented the original ROIs by applying the Contrast Enhancement method and rotating them with the angles Δ = {0°, 90°, 180°, 270°}. Thus, a total of 500 ROIs for each database were prepared. There were 400 ROIs (80%) used for training, 50 ROIs (10%) used for testing and 50 ROIs (10%) used for validation.
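The augmentation and 80/10/10 split described above could be implemented roughly as follows. This is an illustrative Python/NumPy sketch: `stretch_contrast` is a simple min-max stretch standing in for the paper's Contrast Enhancement method, which is an assumption.

```python
import numpy as np

def stretch_contrast(img):
    # Min-max contrast stretch to [0, 1]; a simple stand-in for the
    # Contrast Enhancement step (assumption, not the exact method used).
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / (hi - lo) if hi > lo else img

def augment(roi):
    # One contrast-enhanced copy rotated by 0, 90, 180 and 270 degrees.
    enhanced = stretch_contrast(roi)
    return [np.rot90(enhanced, k) for k in range(4)]

def split_80_10_10(items, seed=0):
    # Shuffle, then split into 80% training, 10% testing, 10% validation.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(items))
    n_train, n_test = int(0.8 * len(items)), int(0.1 * len(items))
    train = [items[i] for i in idx[:n_train]]
    test = [items[i] for i in idx[n_train:n_train + n_test]]
    val = [items[i] for i in idx[n_train + n_test:]]
    return train, test, val
```

Applied to 500 ROIs per database, the split yields the 400/50/50 partition used in the experiments.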
In each experiment, the parameter values η_0 = 0.1 and σ = 1 and the weights α^+ = α^− = 1 are fixed for all problems. In addition, r_0 := max(J, K)/2, where J and K are the numbers of rows and columns of the neural map, t_max(tm) = 100, t_max(evol) = 100, κ_m := t_max(tm)/ln(m_0) and β = 1. A 1-dimensional neural map is preferable and has been chosen for the SOM network of grayscale images (i.e., J = 5, N = 1). These settings are suggested in [36] to produce good results. In addition, the same parameter values are used for all models to avoid bias.
For the selective segmentation models, the value of the parameter θ that functions to restrict only the target object will be different for each trial. In this study, the value of θ varies between 18 and 10,000. The third experiment in this study was conducted to demonstrate the sensitivity of this parameter to our proposed model.

4.1. Segmentation Results of Test Images from the INbreast Database

In this first experiment, the segmentation performance for all models was compared using two methods. The first method is a qualitative method in which the performances are evaluated using visual observation, while the second method is a quantitative method in which the computation time, Dice similarity coefficients (DSCs), Jaccard similarity coefficients (JSCs), also known as IoU, Accuracy and Error metrics of the output images in the models are calculated using the following formulas:
$$DSC = \frac{2 \times TP}{2 \times TP + FP + FN}, \qquad JSC = \frac{DSC}{2 - DSC},$$
$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}, \qquad Error = \frac{FP + FN}{TP + TN + FP + FN},$$
where TP is the number of true positives (foreground pixels of the segmented image correctly extracted), FP the number of false positives (background pixels wrongly retrieved as foreground), FN the number of false negatives (foreground pixels mistakenly erased), and TN the number of true negatives (background pixels correctly removed).
Basically, a low computation time indicates efficient processing. For the metric evaluations, values of JSC, DSC and Accuracy approaching 1 indicate that the model segments the input images accurately, while values of Error approaching 0 indicate better segmentation of the test images. The DSC and JSC coefficients are overlap metrics between two sets, while the Accuracy and Error metrics measure the closeness of the prediction and the proportion of wrong predictions, respectively [43]. All metrics are scaled from 0 to 1.
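For reference, all four metrics can be computed directly from a pair of binary masks. A minimal NumPy sketch (an illustrative helper, not the authors' code):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """DSC, JSC, Accuracy and Error from two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # foreground correctly extracted
    fp = np.sum(pred & ~truth)   # background wrongly retrieved
    fn = np.sum(~pred & truth)   # foreground mistakenly erased
    tn = np.sum(~pred & ~truth)  # background correctly removed
    dsc = 2 * tp / (2 * tp + fp + fn)
    jsc = dsc / (2 - dsc)        # equivalently IoU = tp / (tp + fp + fn)
    acc = (tp + tn) / (tp + tn + fp + fn)
    err = (fp + fn) / (tp + tn + fp + fn)
    return dsc, jsc, acc, err
```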
Figure 3 demonstrates the results of the segmentation performed using each model for 4 chosen image samples out of 50 test images from the INbreast database. To remove unnecessary information and speed up the segmentation process, the images are cropped to the ROI with a size of 256 × 256 pixels.
Based on Figure 3, the first column shows the original images with green markers indicating the targeted object. The second column shows the ground-truth images as benchmarks for the segmentation output. The results for the SOMCV, IIS, U-Net, PD2, SSOM and SSOMH models are shown in the third to eighth columns, respectively. The findings are presented as binary images. As can be seen, the result for SOMCV is over-segmented, mainly because the targeted region is too close to the surrounding healthy breast tissue. The result generated using U-Net is better than SOMCV; however, the dataset is too small for a deep learning approach to reach its typical performance.
The IIS produces a smooth result, but some regions are under-segmented due to inhomogeneous intensity. The segmentation region generated using the PD2 model contains many small particles and is less smooth than those of SSOM and SSOMH because of the intensity inhomogeneity of the mammography images. Note that the segmentation result of the SSOM model is almost identical to that of the SSOMH model, thanks to the unsupervised SOM neural network and the distance fitting term in their variational level set formulations, which are vital for segmenting an image with intensity inhomogeneity and for capturing the boundary of the abnormality region, respectively.
In addition, we also provide a quantitative evaluation of the segmentation accuracy based on the JSC, DSC, Accuracy and Error metrics. Table 1 shows the average values of JSC, DSC, Accuracy and Error of the segmentation results for each model using the INbreast Database. The data in Table 1 are visualized in Figure 4.
Based on Table 1 and Figure 4, the JSC, DSC and Accuracy metrics of the SSOMH model achieved the highest values with the lowest value of the Error metric, which indicates the highest segmentation accuracy in segmenting the targeted objects compared to the SOMCV, IIS, U-NET, PD2 and SSOM. Again, this is evidence of the advantage of using the combination of SOM-based machine learning approach, distance fitting term and the idea of spreading out the intensity values using a piecewise linear function as applied in this model.
In addition, a quantitative analysis of the segmentation speed is also performed in this experiment. We clarify that the efficiency comparison cannot be performed for IIS because the interactive software provided by its authors [33] has no built-in function or tool for recording processing time. Table 2 illustrates the computation speed of the segmentation for each model.
Based on Table 2, the U-Net method achieved the fastest computational speed in the testing phase but the slowest speed during the training phase. Note that the computation time for SOMCV, SSOM and SSOMH during training are the same, while the testing speeds are almost similar because the Gaussian function was used in the models to efficiently speed up the computation time. In addition, the PD2 model has the slowest testing speed in segmenting the region, which is due to the computationally expensive TV term in the model that slows down the segmentation process.
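The testing times above are per-image wall-clock averages, which can be measured with a simple harness such as the following (an illustrative sketch; `segment` is any hypothetical segmentation callable, not a function from the paper):

```python
import time

def average_testing_time(segment, images):
    # Average wall-clock time per image, in seconds.
    start = time.perf_counter()
    for img in images:
        segment(img)
    return (time.perf_counter() - start) / len(images)
```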
Therefore, based on the experiments above, the proposed SSOMH model is more recommended than the SSOM model due to its efficiency and effectiveness in segmenting breast abnormality with inhomogeneous intensity. The SSOMH model is able to increase the contrast of the test images to reveal detailed information about hidden abnormalities present in the given images for better segmentation. Thus, the SSOMH is chosen to be compared with the SOMCV, IIS, U-Net and PD2 models in the next experiment.

4.2. Segmentation Results of Test Images from the CBIS-DDSM Database

In this second experiment, 50 mammography images of size 256 × 256 pixels from the CBIS-DDSM database were tested. The SSOMH is compared with SOMCV, IIS, U-Net and PD2 models. Figure 5 demonstrates 4 samples of input images (out of 50 test images) with green markers indicating the targeted object and the ground truth images as benchmarks for the segmentation. The results were presented in the form of binary images.
By visual observation, the result of the SOMCV model was over-segmented while the U-Net and IIS were under-segmented for test images 2, 3 and 4. The PD2 and SSOMH models could selectively segment the targeted region; however, the result delivered using SSOMH is smoother compared to PD2. In addition, the quantitative evaluation of the segmentation result based on the JSC, DSC, Accuracy and Error metrics are also provided. Table 3 below shows the average values of JSC, DSC, Accuracy, and Error of the segmentation results for each model using the CBIS-DDSM database, which are visualized in Figure 6.
From Table 3 and Figure 6, the SSOMH model achieved the highest values of the JSC, DSC and Accuracy metrics and the lowest value of Error, which indicate the highest segmentation accuracy in segmenting the targeted objects compared to the SOMCV, IIS, U-NET and PD2 models. The lowest values of JSC, DSC and Accuracy are obtained with the SOMCV model.
Similar to the first experiment on the INbreast Database, a quantitative analysis on the segmentation speed is also performed in this experiment for all tested models, except the IIS model, when segmenting test images using the CBIS-DDSM database, as illustrated in the following Table 4.
Based on Table 4, the result for U-Net is consistent with the first experiment: the method achieved the fastest testing time but was slower during the training phase than the other models. On the other hand, the testing-phase computation times of SOMCV and SSOMH are comparable to U-Net. The segmentation time for PD2 is slower than those of the other models.

4.3. Results of SSOMH Model with Different Values of Area Parameter θ

In this final experiment, the important area parameter θ is tested to determine how it affects the segmentation accuracy of the recommended SSOMH model. Figure 7 demonstrates the segmentation results of the SSOMH model for test image 3 from Figure 3 with different values of θ .
We set the values of the parameter θ for (a–e) to 100, 300, 1000, 2000 and 4000, respectively. The corresponding results are shown in (a1–e1). By visual observation, (c1) with θ = 1000 shows a better segmentation result than (a1, b1, d1, e1) according to the benchmark in Figure 3. In addition, the quantitative evaluation of the segmentation accuracy is also provided based on the JSC, DSC, Accuracy and Error metrics. Figure 8 illustrates the values of JSC, DSC, Accuracy and Error for different values of θ.
Figure 8 shows the JSC, DSC, Accuracy and Error metrics for different values of θ. The image with θ = 1000 has the highest JSC, DSC and Accuracy values and the lowest Error value, indicating a more accurate segmentation result than the other θ values shown above. Thus, the value of θ must be tuned by trial and error to achieve accurate segmentation, which is the main limitation of the proposed model. As a general guide, the value of θ should be large when the targeted object is very close to normal tissue, while a smaller value of θ is required for a clearly separated object.
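When ground truth is available, as on a validation set, the trial-and-error search over θ can be automated by sweeping candidate values and keeping the one with the highest DSC. A sketch, where `segment(image, theta)` is a hypothetical callable wrapping the SSOMH segmentation and returning a binary mask:

```python
import numpy as np

def dice(pred, truth):
    # Dice similarity coefficient between two binary masks.
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2 * np.sum(pred & truth) / (pred.sum() + truth.sum())

def best_theta(segment, image, truth, thetas=(100, 300, 1000, 2000, 4000)):
    # Sweep candidate theta values and keep the one with the highest DSC.
    scores = {t: dice(segment(image, t), truth) for t in thetas}
    return max(scores, key=scores.get), scores
```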

5. Conclusions

In this research work, we focused on the extraction of the abnormality region in mammography images using the selective segmentation technique. Two models were proposed, namely SSOM and SSOMH. Both models adopted the idea of using distance fitting terms to capture the targeted region, the SOM machine learning-based approach to segment images with intensity inhomogeneity and a Gaussian function for curve regularization in the formulations. In the SSOMH model, the idea of spreading out the intensity values using a piecewise linear function is applied to increase the contrast of the mammography images. To minimize the energy functions of the SSOM and SSOMH models, the Euler–Lagrange equations were established using calculus of variations. Then, the equations were solved in MATLAB software using the gradient descent algorithm. The efficiency of each model was evaluated in terms of computational time, and segmentation accuracy was measured by evaluating the JSC, DSC, Accuracy and Error values for each image.
Based on the first experiment using the INbreast database, the SSOMH model is recommended for segmenting mammography images due to its efficiency and higher accuracy compared to the SSOM model. In the same experiment, SSOMH outperformed the SOMCV, IIS, U-Net and PD2 models in terms of Accuracy. A similar observation can be made in the second experiment using the CBIS-DDSM database. In both experiments, U-Net had a faster testing time but a slower training time than SSOMH. However, the segmentation process of SSOMH was faster than those of the other iterative models (SSOM, SOMCV and PD2).
As demonstrated in the last experiment, this study has the drawback that the parameter θ must be set manually because each image requires a distinct value. The parameter must be adjusted by trial and error until the targeted segmented image is achieved.
For future research, it is recommended to investigate how to choose a suitable value of the parameter θ . On the other hand, the recommended model, i.e., the SSOMH model, can be extended into a vector-valued (color) framework and a three-dimensional (3D) framework for segmentation of color and 3D mammography images, respectively. The reason is that vector-valued (color) and 3D images provide more significant details that are beneficial in the evaluation of medical and non-medical images. Moreover, as the SSOMH model is non-convex, it may be sensitive to initialization. Future directions will try to formulate a convex formulation of the model.

Author Contributions

Conceptualization, N.A.S.M.G., A.K.J. and R.M.; methodology, N.A.S.M.G., A.K.J. and R.M.; investigation, N.A.S.M.G.; validation, N.A.S.M.G., A.K.J., R.M., M.A.M., F.A.Z. and A.M.J.; formal analysis, N.A.S.M.G.; writing—original draft preparation, N.A.S.M.G.; writing—review and editing, N.A.S.M.G., A.K.J., R.M., M.A.M., F.A.Z. and A.M.J.; visualization, N.A.S.M.G.; supervision, A.K.J., R.M., M.A.M., F.A.Z. and A.M.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Higher Education (MOHE) and Universiti Teknologi MARA, Shah Alam, grant number FRGS/1/2021/STG06/UITM/02/3.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ministry of Health. Malaysia National Cancer Registry 2012–2016; Ministry of Health: Putrajaya, Malaysia, 2019. Available online: https://drive.google.com/file/d/1BuPWrb05N2Jez6sEP8VM5r6JtJtlPN5W/view (accessed on 1 November 2021).
  2. Spreafico, F.S.; Cardoso-Filho, C.; Cabello, C.; Sarian, L.O.; Zeferino, L.C.; Vale, D.B. Breast Cancer in Men: Clinical and Pathological Analysis of 817 Cases. Am. J. Men's Health 2020, 14, 1–6.
  3. Shamsi, M.; Islamian, J.P. Breast cancer: Early diagnosis and effective treatment by drug delivery tracing. Nucl. Med. Rev. 2017, 20, 45–48.
  4. Yasiran, S.S.; Jumaat, A.K.; Manaf, M.; Ibrahim, A.; Wan Eny Zarina, W.A.R.; Malek, A.; Laham, M.F.; Mahmud, R. Comparison between GVF Snake and ED Snake in Segmenting Microcalcifications. In Proceedings of the 2011 IEEE International Conference on Computer Applications and Industrial Electronics (ICCAIE), Penang, Malaysia, 4–7 December 2011; pp. 597–601.
  5. Chen, K. Introduction to variational image-processing models and applications. Int. J. Comput. Math. 2013, 90, 1–8.
  6. Rick, A.; Bothorel, S.; Bouchon-Meunier, B.; Muller, S.; Rifqi, M. Fuzzy techniques in mammographic image processing. In Fuzzy Techniques in Image Processing; Physica: Heidelberg, Germany, 2000; pp. 308–336.
  7. Yearwood, A.B. A Brief Survey on Variational Methods for Image Segmentation. 2010, pp. 1–7. Available online: researchgate.net/profile/Abdu-Badru-Yearwood/publication/323971382_A_Brief_Survey_on_Variational_Methods_for_Image_Segmentation/links/5abd38e8a6fdcccda6581b05/A-Brief-Survey-on-Variational-Methods-for-Image-Segmentation.pdf (accessed on 1 November 2021).
  8. Vlachos, I.K.; Sergiadis, G.D. Intuitionistic Fuzzy Image Processing. In Soft Computing in Image Processing; Springer: Berlin/Heidelberg, Germany, 2007; pp. 383–414.
  9. Chowdhary, C.L.; Acharjya, D.P. Segmentation of mammograms using a novel intuitionistic possibilistic fuzzy c-mean clustering algorithm. In Nature Inspired Computing; Advances in Intelligent Systems and Computing; Springer: Singapore, 2018; pp. 75–82.
  10. Ghosh, S.K.; Mitra, A.; Ghosh, A. A novel intuitionistic fuzzy soft set entrenched mammogram segmentation under Multigranulation approximation for breast cancer detection in early stages. Expert Syst. Appl. 2021, 169, 114329.
  11. Chowdhary, C.L.; Mittal, M.; P., K.; Pattanaik, P.A.; Marszalek, Z. An efficient segmentation and classification system in medical images using intuitionist possibilistic fuzzy C-mean clustering and fuzzy SVM algorithm. Sensors 2020, 20, 3903.
  12. Chaira, T. An Intuitionistic Fuzzy Clustering Approach for Detection of Abnormal Regions in Mammogram Images. J. Digit. Imaging 2021, 34, 428–439.
  13. Atiqah, N.; Zaman, K.; Eny, W.; Wan, Z.; Rahman, A.; Jumaat, A.K.; Yasiran, S.S. Classification of Breast Abnormalities Using Artificial Neural Network. AIP Conf. Proc. 2015, 1660, 050038.
  14. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; Part III, pp. 234–241.
  15. Michael, E.; Ma, H.; Li, H.; Kulwa, F.; Li, J. Breast Cancer Segmentation Methods: Current Status and Future Potentials. Biomed Res. Int. 2021, 2021, 9962109.
  16. Saravanan, R.; Sujatha, P. A State of Art Techniques on Machine Learning Algorithms: A Perspective of Supervised Learning Approaches in Data Classification. In Proceedings of the 2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 14–15 June 2018; pp. 945–949.
  17. Barbu, T.; Marinoschi, G.; Moroanu, C.; Munteanu, I. Advances in Variational and Partial Differential Equation-Based Models for Image Processing and Computer Vision. Math. Probl. Eng. 2018, 2018, 1701052.
  18. Jumaat, A.K.; Chen, K. A Reformulated Convex and Selective Variational Image Segmentation Model and its Fast Multilevel Algorithm. Numer. Math. Theory Methods Appl. 2019, 12, 403–437.
  19. Rahmati, P.; Adler, A.; Hamarneh, G. Mammography segmentation with maximum likelihood active contours. Med. Image Anal. 2012, 16, 1167–1186.
  20. Ciecholewski, M. Malignant and benign mass segmentation in mammograms using active contour methods. Symmetry 2017, 9, 277.
  21. Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277.
  22. Saraswathi, D.; Srinivasan, E.; Ranjitha, P. An Efficient Level Set Mammographic Image Segmentation using Fuzzy C Means Clustering. Asian J. Appl. Sci. Technol. 2017, 1, 7–11.
  23. Somroo, S.; Choi, K.N. Robust active contours for mammogram image segmentation. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 2149–2153.
  24. Hmida, M.; Hamrouni, K.; Solaiman, B.; Boussetta, S. Mammographic mass segmentation using fuzzy contours. Comput. Methods Programs Biomed. 2018, 164, 131–142.
  25. Radhi, E.A.; Kamil, M.Y. Segmentation of breast mammogram images using level set method. AIP Conf. Proc. 2022, 2398, 020071.
  26. Badshah, N.; Atta, H.; Ali Shah, S.I.; Attaullah, S.; Minallah, N.; Ullah, M. New local region based model for the segmentation of medical images. IEEE Access 2020, 8, 175035–175053.
  27. Jumaat, A.K.; Chen, K. An optimization-based multilevel algorithm for variational image segmentation models. Electron. Trans. Numer. Anal. 2017, 46, 474–504.
  28. Jumaat, A.K.; Chen, K. Three-Dimensional Convex and Selective Variational Image Segmentation Model. Malays. J. Math. Sci. 2020, 14, 81–92.
  29. Jumaat, A.K.; Chen, K. A fast multilevel method for selective segmentation model of 3-D digital images. Adv. Stud. Math. J. 2022, 127–152. Available online: https://tcms.org.ge/Journals/ASETMJ/Special%20issue/10/PDF/asetmj_SpIssue_10_9.pdf (accessed on 1 November 2021).
  30. Acho, S.N.; Rae, W.I.D. Interactive breast mass segmentation using a convex active contour model with optimal threshold values. Phys. Med. 2016, 32, 1352–1359.
  31. Ali, H.; Faisal, S.; Chen, K.; Rada, L. Image-selective segmentation model for multi-regions within the object of interest with application to medical disease. Vis. Comput. 2020, 37, 939–955.
  32. Ghani, N.A.S.M.; Jumaat, A.K. Selective Segmentation Model for Vector-Valued Images. J. Inf. Commun. Technol. 2022, 5, 149–173. Available online: http://e-journal.uum.edu.my/index.php/jict/article/view/8062 (accessed on 1 November 2021).
  33. Nguyen, T.N.A.; Cai, J.; Zhang, J.; Zheng, J. Robust Interactive Image Segmentation Using Convex Active Contours. IEEE Trans. Image Process. 2012, 21, 3734–3743.
  34. Ghani, N.A.S.M.; Jumaat, A.K.; Mahmud, R. Boundary Extraction of Abnormality Region in Breast Mammography Image using Active Contours. ESTEEM Acad. J. 2022, 18, 115–127.
  35. Zhang, K.; Song, H.; Zhang, L. Active contours driven by local image fitting energy. Pattern Recognit. 2010, 43, 1199–1206.
  36. Abdelsamea, M.M.; Gnecco, G.; Gaber, M.M. A SOM-based Chan-Vese model for unsupervised image segmentation. Soft Comput. 2017, 21, 2047–2067.
  37. Mumford, D.; Shah, J. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 1989, 42, 577–685.
  38. Osher, S.; Sethian, J.A. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 1988, 79, 12–49.
  39. Altarawneh, N.M.; Luo, S.; Regan, B.; Sun, C.; Jia, F. Global Threshold and Region-Based Active Contour Model For Accurate Image Segmentation. Signal Image Process. 2014, 5, 1–11.
  40. Chen, B.L. Optimization Theory and Algorithms, 2nd ed.; Tsinghua University Press: Beijing, China, 1989.
  41. Moreira, I.C.; Amaral, I.; Domingues, I.; Cardoso, A.; Cardoso, M.J.; Cardoso, J.S. INbreast: Toward a Full-field Digital Mammographic Database. Acad. Radiol. 2012, 19, 236–248.
  42. Lee, R.S.; Gimenez, F.; Hoogi, A.; Miyake, K.K.; Gorovoy, M.; Rubin, D.L. Data Descriptor: A curated mammography data set for use in computer-aided detection and diagnosis research. Sci. Data 2017, 4, 170177.
  43. Azam, A.S.B.; Malek, A.A.; Ramlee, A.S.; Suhaimi, N.D.S.M.; Mohamed, N. Segmentation of Breast Microcalcification Using Hybrid Method of Canny Algorithm with Otsu Thresholding and 2D Wavelet Transform. In Proceedings of the 2020 10th IEEE International Conference on Control System, Computing and Engineering (ICCSCE), Penang, Malaysia, 21–22 August 2020; pp. 91–96.
Figure 1. A mammography image (a) with its histogram profile (b).
Figure 2. The output intensity (a), the corresponding histogram (b) and the output image (c).
Figure 3. Segmentation Results of Four Samples of Test Images (INbreast Database) for the SOMCV, IIS, U-NET, PD2, SSOM and SSOMH Models.
Figure 4. Average Values of JSC, DSC, Accuracy and Error for All Models.
Figure 5. Segmentation Results of Four Samples of Test Images (CBIS-DDSM Database) for the SOMCV, IIS, U-NET, PD2, SSOM and SSOMH Models.
Figure 6. Average Values of JSC, DSC, Accuracy and Error for All Models.
Figure 7. Segmentation Results for SSOMH Model with Different Values of θ.
Figure 8. Performance Evaluations of the SSOMH Model with Different Values of θ.
Table 1. Average Values of JSC, DSC, Accuracy and Error for All Models.

| Model | JSC   | DSC   | Accuracy | Error |
|-------|-------|-------|----------|-------|
| SOMCV | 0.434 | 0.584 | 0.735    | 0.265 |
| IIS   | 0.801 | 0.887 | 0.961    | 0.040 |
| U-Net | 0.519 | 0.674 | 0.847    | 0.153 |
| PD2   | 0.819 | 0.899 | 0.962    | 0.038 |
| SSOM  | 0.883 | 0.937 | 0.976    | 0.024 |
| SSOMH | 0.884 | 0.938 | 0.977    | 0.023 |
Table 2. Average Computation Time for All Models.

| Model | Training (s) | Testing (s) |
|-------|--------------|-------------|
| SOMCV | 0.05         | 1.67        |
| U-Net | 327.00       | 0.76        |
| PD2   | Not related  | 71.03       |
| SSOM  | 0.05         | 1.42        |
| SSOMH | 0.05         | 1.39        |
Table 3. Average Values of JSC, DSC, Accuracy and Error for All Models.

| Model | JSC   | DSC   | Accuracy | Error |
|-------|-------|-------|----------|-------|
| SOMCV | 0.425 | 0.576 | 0.695    | 0.304 |
| IIS   | 0.449 | 0.616 | 0.778    | 0.222 |
| U-Net | 0.569 | 0.712 | 0.893    | 0.107 |
| PD2   | 0.768 | 0.867 | 0.945    | 0.055 |
| SSOMH | 0.856 | 0.920 | 0.964    | 0.036 |
Table 4. Average Computation Time for All Models.

| Model | Training (s) | Testing (s) |
|-------|--------------|-------------|
| SOMCV | 0.05         | 1.79        |
| U-Net | 307.00       | 0.73        |
| PD2   | Not related  | 98.54       |
| SSOMH | 0.05         | 0.80        |