Article

A Morphological Post-Processing Approach for Overlapped Segmentation of Bacterial Cell Images

1 Department of Computer Science, University of Nebraska Omaha, Omaha, NE 68182, USA
2 Department of Chemical and Biological Engineering, South Dakota School of Mines & Technology, Rapid City, SD 57701, USA
* Author to whom correspondence should be addressed.
Mach. Learn. Knowl. Extr. 2022, 4(4), 1024-1041; https://doi.org/10.3390/make4040052
Submission received: 25 September 2022 / Revised: 5 November 2022 / Accepted: 10 November 2022 / Published: 17 November 2022

Abstract

Scanning electron microscopy (SEM) techniques have been extensively used to image and study bacterial cells at high resolution. Bacterial image segmentation in SEM images is an essential task for distinguishing an object of interest and its specific region. These segmentation results can then be used to retrieve quantitative measures (e.g., cell length, area, cell density) to support an accurate decision-making process on cellular objects. However, the complexity of the bacterial segmentation task is a barrier, as the intensity and texture of the foreground and background are similar, and most clustered bacterial cells in images partially overlap with each other. The traditional approaches for identifying cell regions in microscopy images are labor intensive and heavily dependent on the professional knowledge of researchers. To mitigate the aforementioned challenges, in this study, we tested a U-Net-based semantic segmentation architecture followed by a post-processing step of morphological over-segmentation resolution to achieve accurate cell segmentation of SEM-acquired images of bacterial cells grown in a rotary culture system. The approach achieved an 89.52% Dice similarity score on bacterial cell segmentation with lower segmentation error rates, and was validated against several overlapping-object segmentation approaches with significant performance improvement.

1. Introduction

Microscopic observation of cellular morphology is a preliminary step in microbiology research. Cell size variability in microbes is indicative of cellular response to environmental stimuli through physiological and gene expression changes [1,2]. It has also been reported that cellular housekeeping, nutrient transport, and cell reproduction are associated with cell size variations [3]. Automated cell segmentation techniques in microscopy imaging are essential for measuring cellular characteristics such as alteration in size for the evaluation of the impact of specific environmental change [4,5]. Analysis of a large number of cells requires automated cell recognition techniques to enable faster decision-making.
To gain valuable information for semantic understanding [6] of images, object detection techniques should precisely estimate the shapes and locations of objects in each image. In machine learning, semantic understanding is defined as the ability of a machine to understand the meaning and context behind real-world information [7]. Understanding an image containing a large number of objects of interest is a task of great importance for monitoring in multiple domains, such as cells of in vitro cultures and developing embryos [8], grain analysis [9], and nuclei analysis [10]. However, the complexity of the bacterial cell segmentation task is a barrier, as the intensity and texture of the foreground and background are similar, and most clustered bacterial cells in images partially overlap with each other. Many practical approaches have been proposed in the literature to tackle the overlapping-object segmentation problem, such as objects in different colors [11], level sets [12], ellipse fitting [13], and the watershed algorithm [14,15]. The traditional approaches for identifying cell regions in microscopy images are labor intensive and heavily dependent on the professional knowledge of researchers.
Cell segmentation and growth rate measures of bacteria under changed growth conditions are crucial for developing instruments with different materials for bacterial cell growth, for pattern recognition, and for further understanding the behavior and impact on cell cycle progression. A considerable research effort is being made to develop methods for extracting cell regions from the surface and background, segmenting the cell surfaces, and counting/measuring cell growth proliferation [12]. Direct imaging techniques, such as confocal laser microscopy, electron microscopy, light microscopy, bioluminescence imaging, and macroscale photography, have been frequently used in experiments on bacterial cell detection and measurement extraction. These imaging techniques have become convenient, rapid, and standard diagnostic methods for analyzing morphological disruption [16]. Scanning electron microscopy (SEM) techniques have been extensively used to image and study bacterial cells at high resolution [17,18]. However, distinguishing the cells in a cluster remains challenging in many SEM images because of the similar pixel intensities and the touching and overlapping cell clusters, leading to significant difficulties in extracting quantitative information.
The overall objective of this study is to develop a machine learning pipeline to automatically extract cellular instances in SEM images for accurate quantitative measurements, i.e., cell counts. The challenges of instance segmentation of touching or overlapping bacterial cell clusters in SEM images are as follows: (1) the similarity between foreground and background: the study surface’s intensities and textures are similar to the material’s surface; (2) complicated clustered objects: there are overlapping bacterial cells in SEM images; and (3) the lack of regular patterns: clusters of cells are irregularly shaped and have different orientations on a surface. In this paper, to overcome the aforementioned challenges of bacterial cell aggregates, we propose a U-Net-based semantic segmentation architecture followed by a post-processing step of morphological over-segmentation resolution to achieve accurate cellular segmentation and mitigate the weakness. The contributions of our method can be summarized in two aspects:
  • A bacterial cell segmentation pipeline comprising deep semantic segmentation architecture and morphological post-processing technique is proposed to accommodate the above-mentioned cell extraction complications to retrieve accurate quantitative measures in SEM images.
  • Benchmarking the segmentation performance against other mature overlapping-object segmentation approaches, the proposed method achieves an 89.52% Dice similarity score on bacterial segmentation, with a significant performance improvement over the compared methods.
The rest of this paper is organized as follows. Section 2 describes the related work on different cell segmentation techniques. Section 3 describes our overall approach, including a brief overview of the basic U-Net network and our morphological post-processing step. Section 4 discusses our experimental setup on the SEM image dataset, and Section 5 presents results and discussions. Finally, Section 6 presents the conclusions of the proposed approach and some possibilities for future directions.

2. Related Work

Many methodological studies have made impressive progress on automated image segmentation and counting of touching and overlapping cells. This section reviews the traditional approaches for overlapping object segmentation in Section 2.1, contour-based methods for corner point detection in Section 2.2, ellipse-fitting methods for the object segmentation task in Section 2.3, and existing deep learning-based methods in Section 2.4, followed by our contributions in the field of bacterial image analysis in Section 2.5.

2.1. Traditional Approaches for Overlapping Objects Segmentation

Traditional early-stage image segmentation tasks have applied thresholding techniques, where each pixel is classified as either part of an object or part of the background based on a chosen threshold value. Well-known approaches include morphological operations [19,20], watershed segmentation [21], level-set methods [22,23], graph-based approaches [24,25], and their variations [26,27]. These methods mainly focus on features such as gradient, color, and/or structural distributions. However, due to the variability of cell segmentation tasks, e.g., heterogeneous object/cell structures and shared features between foreground and background, the parameters in these methods need to be customized per application by proper selection of the structuring elements. Similarly, traditional machine learning approaches have been used for cell segmentation [28,29], but most were evaluated in situations with uniform cell appearance and high contrast between foreground and background. Furthermore, these approaches are too labor-intensive and time consuming (especially for large datasets) to achieve generalized segmentation performance and robustness for cell segmentation tasks [30]. In addition, optimization algorithms such as swarm intelligence and genetic algorithms have been extended to medical image segmentation applications [31,32]. However, these approaches have not been used in complex segmentation applications.
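To make the thresholding family concrete, the sketch below implements Otsu's between-class-variance criterion in plain NumPy; it is a simplified stand-in for the library routines used in practice (e.g., `skimage.filters.threshold_otsu`), not part of the proposed pipeline:

```python
import numpy as np

def otsu_threshold(img):
    """Pick the grey level that maximizes between-class variance (Otsu)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                       # pixels at or below each level
    cum_mu = np.cumsum(hist * np.arange(256))     # intensity mass at or below
    mu_total = cum_mu[-1]
    best_t, best_var = 0, 0.0
    for t in range(255):
        w0, w1 = cum_w[t], total - cum_w[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mu[t] / w0
        mu1 = (mu_total - cum_mu[t]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

img = np.zeros((20, 20), dtype=np.uint8)
img[5:15, 5:15] = 200                             # bright "cell" on dark background
thr = otsu_threshold(img)
mask = img > thr
```

As the section notes, such global thresholds break down when foreground and background intensities overlap, which motivates the learned approach adopted later.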

2.2. Contour-Based Methods

The contour-based method employs a set of steps to represent the curve evolution of segmented regions in an image, including curvature, skeleton, and polygon approximation. The method treats instance segmentation as a regression task in which a contour can be reduced to a series of discrete vertex coordinates. Contour detection has attracted extensive attention in the field of image segmentation for overlapping or touching object segmentation [33,34,35,36]. Many contour detection algorithms have been introduced in the literature to understand the nature of overlapping objects. Refs. [37,38] adopted sliding window-based approaches to extract the contours and foreground objects from the background. Similarly, Ref. [39] proposed a bottleneck detector that identifies a pair of splitting points with a minimum Euclidean distance transform (EDT) while maximizing their distance over the contours for concave corner identification. However, these methods are highly prone to noise [40] and tend to retrieve false corner points, leading to low accuracy. Hence, extensive preprocessing is required to suppress noise before the ellipse-fitting stage.
Besides the aforementioned corner detection approaches with manual object segmentation, general corner identification algorithms, such as k-curvature [41,42], have been successfully used to extract corners for overlapping object instances as well. For example, Ref. [10] proposed a corner point identification method that uses the Harris corner detection algorithm to generate candidate points and then extracts obvious and uncertain concave points from the candidates. However, the method is constrained by its parameter settings, which is reflected in the low degree of generalizability of the models. Furthermore, these algorithms cannot perform well when object boundaries appear blurry. Such situations require deep learning methods for feature extraction to identify precise object boundaries under different conditions. For example, Ref. [43] adopted a k-curvature technique to determine the candidate curve points and further improve the detection of corner points.

2.3. Ellipse-Fitting Methods

Most ellipse-fitting approaches have been proposed to address touching elliptical-shaped objects in segmentation tasks [13,44]. For example, segmentation of overlapping elliptical grains [45] and cell nuclei [46] utilizes the multi-ellipse fitting method with a minimum threshold on the expected area of each cell to automatically detect and split touching cells. Despite its great potential, the lack of generalizability has hindered widespread adoption across applications with specific object types, owing to the requirement of task-specific rules and parameters.
Some recent studies have extended ellipse-fitting techniques to achieve more accurate segmentation results. For example, Ref. [13] proposed a modified ellipse-fitting technique that generates candidate ellipses and selects the optimal ellipse from the candidate pool to recognize overlapping elliptical objects in a binary image, given concave points extracted by a polygon approximation algorithm. Ref. [47] proposed a parameter-free decremental ellipse fitting algorithm (DEFA) for automatically estimating the number and parameters of ellipses by exploiting the skeleton of a shape. However, the approach requires binarization of elliptical-shaped objects in an image, or images with high contrast between foreground and background. Ref. [48] proposed an improved version of DEFA, called region-based fitting of overlapping ellipses (RFOVE), which automatically determines the number of possibly overlapping ellipses by optimizing the area of shape coverage through unsupervised learning and operating on previously unknown shapes. Furthermore, the methods of Refs. [46,49] have been successfully applied to overlapping cell segmentation tasks to retrieve accurate quantitative measures in SEM images.
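The core fitting step underlying these methods can be illustrated by fitting the general conic ax² + bxy + cy² + dx + ey = 1 to boundary points with unconstrained least squares. This is a rough stand-in only; the cited methods use constrained or decremental fits (e.g., DEFA/RFOVE), and the synthetic boundary points below are illustrative:

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    to boundary points (unconstrained; a simplified ellipse fit)."""
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coeffs

# Synthetic ellipse centred at the origin with semi-axes 4 and 2
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x, y = 4 * np.cos(t), 2 * np.sin(t)
a, b, c, d, e = fit_conic(x, y)     # expect a ~ 1/16, c ~ 1/4, others ~ 0
```

Recovering the centre, axes, and rotation from the conic coefficients is a standard follow-up step that the cited algorithms perform internally.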

2.4. Deep Learning Methods

Owing to the huge success of deep learning (DL), it has become the de facto choice for advanced segmentation tasks. Recently, because of its capability for better feature extraction and accurate segmentation quality, many cell segmentation methods and applications with DL techniques have been widely adopted in cell segmentation [15,50,51]. DL methods for object segmentation can be broadly classified into two categories:
  • Instance segmentation: Mask R-CNN [52] is a well-known deep neural network architecture for multi-object detection that extends Faster R-CNN [14] by adding an extra branch for predicting segmentation masks while simultaneously recognizing the bounding box from the existing branch. Mask R-CNN uses region proposal-based object detection and a high-quality segmentation mask technique to achieve instance segmentation results. However, this method cannot perform well in situations with heavily overlapping object instances or closely spaced object occurrences, due to its greedy non-maximum suppression post-processing.
  • Semantic segmentation: U-Net [53] deep learning architecture is recognized as another popular semantic segmentation approach that neither employs region proposals nor reuses pooling indices. Instead, it uses encoder–decoder-based neural network architecture to predict a class-based object segmentation output. The U-Net architecture has been successfully used in many overlapping cell segmentation tasks [51,54], especially in the medical community, because of its intrinsic capability to perform down sampling–up sampling. For example, research studies by [55,56] have demonstrated how to use the architecture to accurately segment overlapping cervical cells.

2.5. Our Contribution

In summary, the methods and techniques for overlapping objects in the literature have been applied to segmentation tasks and to counting touching/overlapping cells. Most of these solutions, however, have been validated on benchmark datasets that include object-centric instances and simple object shapes. Furthermore, although deep learning-based architectures have achieved promising results, they require post-processing to reach optimal performance. Therefore, semantic segmentation plays a vital role in splitting algorithms that distinguish the bodies of heterogeneous foreground cells from the background. To overcome the aforementioned limitations, we propose a machine learning pipeline that combines deep semantic segmentation models with a concave-point-based morphological post-processing step to improve the segmentation of overlapping or touching objects in complex overlapping scenarios in SEM images. Specifically, we utilize morphological skeletonization to enhance concave point detection and to post-process the semantic segmentation results for overlapping objects.

3. Methodology

This section presents the major steps of the proposed approach (called a morphological post-processing approach) to solve the overlapped cell segmentation task for bacterial images. We first introduce the dataset acquisition in Section 3.1 and describe the image pre-processing in Section 3.2, followed by U-Net segmentation in Section 3.3. We then present the circular template method for corner detection and the construction of graphs from skeletonization in Section 3.4, with their analysis in Section 3.5. Finally, three algorithms are developed for instance segmentation in Section 3.6, and the final results of the overlapped cell segmentation task are shown in Section 3.7. The flowchart is illustrated in Figure 1.

3.1. Data Acquisition

Geobacillus is a bacterial genus from the family Bacillaceae. The bacterial cells were grown at 60 °C in a rotary cell culture system to simulate microgravity. After 24 h of growth, the cell growth was arrested by treatment with glutaraldehyde, followed by three consecutive washes with 50%, 70%, and 100% alcohol, respectively. The diluted cell suspensions were mounted on an SEM sample mount and air dried before image acquisition. SEM images were generated with a Zeiss Supra 40 VP/Gemini Column field-emission scanning electron microscope using an SE2 (secondary electron) detector, with an electron high tension (EHT) voltage (called accelerating voltage from now on) of 1 kV [57]. A dataset of 72 grayscale SEM images was used to train and test the proposed bacterial cell segmentation approach. Each image in the dataset carries metadata, such as magnification, working distance (WD), EHT, noise reduction method, chamber status, date, and time. Figure 2 shows sample SEM images from the dataset used in our experiments. On average, 1 micrometer corresponds to 60 pixels in the image.

3.2. Image Pre-Processing

To improve image quality for successful analysis of the bacterial images, our image pre-processing pipeline combines image resizing, contrast adjustment, and removal of unwanted features. Every image is resized to 512 px × 512 px, and contrast adjustment is performed using the well-known contrast-limited adaptive histogram equalization (CLAHE) algorithm [58] to enhance the contrast between foreground and background features. The meta-information on the image (i.e., magnification, scale, noise reduction technique used, date and time of capture, etc.) is cropped out so that feature learning can achieve higher accuracy. The dataset samples were manually annotated/labeled at the pixel level for the instance segmentation task using the VGG Image Annotator (VIA) [59], which is available for download free of charge for any use. Through the tool, we annotated the surface areas of the bacterial cells to obtain semantic segmentation ground truth masks. The dataset samples contain ∼230 individual bacterial cells annotated through VIA; a data augmentation method [60] was then applied to increase the number of data points, with more details described in Section 4.3. These annotations/labels constitute the ground truth for training and testing our overlapped segmentation approach.
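The resizing and contrast-adjustment steps above can be sketched as follows. Note the hedges: nearest-neighbour resizing and global histogram equalization are simplified stand-ins for the interpolating resize and CLAHE (`cv2.createCLAHE` / `skimage.exposure.equalize_adapthist`) used in the actual pipeline:

```python
import numpy as np

def resize_nearest(img, size=(512, 512)):
    """Nearest-neighbour resize to the 512 px x 512 px working resolution."""
    rows = (np.arange(size[0]) * img.shape[0] / size[0]).astype(int)
    cols = (np.arange(size[1]) * img.shape[1] / size[1]).astype(int)
    return img[rows][:, cols]

def equalize(img):
    """Global histogram equalization -- a simplified stand-in for CLAHE,
    which applies the same idea per tile with a clip limit."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return (cdf[img] * 255).astype(np.uint8)

# A low-contrast synthetic strip (grey levels 100..149) stretched to full range
img = np.tile(np.arange(100, 150, dtype=np.uint8), (64, 1))
out = equalize(resize_nearest(img, (512, 512)))
```

CLAHE differs from this global version by operating on local tiles with a clip limit, which prevents over-amplifying noise in homogeneous background regions.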

3.3. U-Net Segmentation Architecture

To extract features from images and build increasingly abstract representations, replacing the traditional approach of hand-crafting features, our approach adopts a deep architecture called U-Net [53]. U-Net is a U-shaped convolutional network that extends fully convolutional networks (FCNs) by cascading successive layers in which pooling operators are replaced by upsampling operators. In U-Net, skip connections are concatenated to maintain the key features at different dimensions and achieve higher resolution. The complete architecture follows an encoder–decoder fashion, where the encoder is a conventional convolutional neural network and the decoder is an asymmetric deconvolutional neural network. The output of the U-Net model is a semantically segmented mask of the objects, in which all pixels belonging to a particular class are classified together. Therefore, to retrieve the final instance segmentation results from the semantic segmentation mask, a novel morphological post-processing step is proposed that applies nonlinear operations related to the shape features in our images.
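The downsampling–upsampling pattern with skip concatenation that U-Net relies on can be illustrated shape-wise in NumPy (convolutions and learned weights are omitted entirely; a real implementation would use a deep learning framework, and this sketch only shows how a skip connection restores spatial detail to the decoder):

```python
import numpy as np

def max_pool2(x):
    """2x2 max pooling on a (C, H, W) feature map (one encoder step)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def upsample2(x):
    """Nearest-neighbour 2x upsampling (one decoder step)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

# One encoder/decoder level with a skip connection (conv layers omitted)
feat = np.random.rand(8, 64, 64)                  # encoder feature map
down = max_pool2(feat)                            # (8, 32, 32): toward bottleneck
up = upsample2(down)                              # (8, 64, 64): decoder output
skip_concat = np.concatenate([feat, up], axis=0)  # (16, 64, 64): skip connection
```

The concatenation doubles the channel count, which is why each U-Net decoder level follows it with convolutions that fuse the high-resolution encoder detail with the upsampled context.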

3.4. Corner Detection and Skeletonization

Corner detection has been widely studied for over-segmentation (overlapped objects), as multiple objects with known shapes form distinctive corners when they overlap. Corner points refer to abrupt changes around a point on the region boundary. From the contours of the overlapped objects, the appearance of a corner point can be further divided into two categories: concave points and convex points. When a corner point is directed toward the interior of the region of interest, it is identified as a concave corner point rather than a convex corner point. In our images, two convex points generally occur, one at each end of the long axis of a cell object. When two or more cells overlap, concave points form at the intersection points of the objects. However, a single bacterial object has variations in its appearance, causing various formations of the object contours under overlap, such as hole boundaries, where a background area is surrounded by multiple objects, or parallel overlapping. Figure 3 illustrates examples of such varied formations of bacterial cells and their contours. More details are described in Section 3.5.
Numerous techniques for dealing with corner point identification and categorization have been introduced in the literature [10,43]. Of these methods, the circular template approach has drawn high interest because of its computational efficiency and simplicity. In this approach, a circular structural element (denoted as str) with a pre-defined radius is used. The structural element moves along the boundary of the objects, and the object area captured by the structural element (denoted as np) is calculated. Consequently, by thresholding the ratio (denoted as ap) between np and the area of str, concave points (namely, the local maximum points) and convex points (namely, the local minimum points) can be separated. In the default setting, points with ratios between 0.4 and 0.6 are ignored because they mostly represent a flat boundary region.
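A minimal version of the circular-template ratio can be written directly in NumPy. The radius and the 0.4–0.6 flat-boundary band are parameters that must be tuned to the dataset scale, and the L-shaped test object below is purely illustrative:

```python
import numpy as np

def circle_template_ratios(mask, boundary_pts, radius=5):
    """For each boundary point, the fraction of a circular template (str)
    covered by the object (np / area(str)): high ratios indicate concave
    points, low ratios convex points."""
    padded = np.pad(mask.astype(bool), radius)
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (yy * yy + xx * xx) <= radius * radius
    area = disk.sum()
    ratios = []
    for r, c in boundary_pts:
        # window centred on (r, c) in the padded image
        win = padded[r:r + 2 * radius + 1, c:c + 2 * radius + 1]
        ratios.append((win & disk).sum() / area)
    return np.array(ratios)

# L-shaped object: concave corner near (15, 15), convex corner at (24, 24)
mask = np.zeros((30, 30), dtype=bool)
mask[5:25, 5:25] = True
mask[5:15, 5:15] = False
ratios = circle_template_ratios(mask, [(15, 15), (24, 24)])
```

With ratios well above and below the 0.4–0.6 band, the first point would be kept as concave and the second as convex, matching the classification rule described above.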
Although the method is computationally simple, some concerns remain. One drawback is that the radius of the circular template should be scaled in accordance with the scale of the objects in the dataset. For example, consider a scenario where two bacterial cells appear very close to each other, so that both of their surfaces appear in the structural element. This scenario miscalculates the ap ratio such that more points (close to corner points) would be chosen during the corner extraction process. Similarly, this approach outputs groups of corner point pixels, and further processing is required to identify the peak corner pixel. To tackle these issues, we also use the Harris corner detector [42] to obtain features in parallel with the circular template approach and select the stronger corner points identified by both approaches within the cr radius of the branching point. Figure 4a illustrates an instance of corner point extraction using both the circular template approach and the Harris feature-based approach. Figure 4b illustrates the corresponding corner point extraction with the circular template approach.
On the other hand, skeletonization has been used in general image morphology to draw a thinned line from a binary image object that summarizes its shape, orientation, and connectivity [62]. Based on our U-Net segmentation models, the masked output is converted into a skeleton (denoted as sk) using the morphological skeletonization approach. In our application, unlike in other applications [63,64], the skeleton information of each segmented image is pivotal, but it can vary dramatically with the size, length, and shape (spiral) of the bacterial objects. Therefore, we construct a skeleton graph to ensure all the branches are followed through diagonal connections with 8-connected neighbors. Let G = (V, E) be a skeleton graph, where V is a set of vertices (branching points and endpoints of the skeleton) and E is a set of edges connecting those vertices. Next, we explain how the skeleton graph is used.
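Under the 8-connected convention above, the vertices of the skeleton graph can be extracted by counting neighbours per skeleton pixel. This is a small NumPy sketch of that step under the paper's definitions (the Y-shaped skeleton is a synthetic example):

```python
import numpy as np

def skeleton_vertices(sk):
    """Vertices of the skeleton graph G = (V, E): pixels with one 8-connected
    neighbour are endpoints; pixels with three or more are branching points."""
    sk = sk.astype(bool)
    padded = np.pad(sk, 1)
    h, w = sk.shape
    nbrs = np.zeros((h, w), dtype=int)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            nbrs += padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    endpoints = [tuple(p) for p in np.argwhere(sk & (nbrs == 1))]
    branching = [tuple(p) for p in np.argwhere(sk & (nbrs >= 3))]
    return endpoints, branching

# A Y-shaped skeleton: one branching point, three endpoints
sk = np.zeros((10, 10), dtype=bool)
sk[0:6, 5] = True                # vertical stroke (0,5)..(5,5)
sk[6, 4] = sk[7, 3] = True       # left diagonal branch
sk[6, 6] = sk[7, 7] = True       # right diagonal branch
eps, bps = skeleton_vertices(sk)
```

Edges of G then correspond to the skeleton runs connecting these vertices, which is the structure the following analysis operates on.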

3.5. Corner and Skeleton Analysis

The next task is to utilize the structural representation of the skeleton and the corner point distribution to examine over-segmented bacterial cells. For example, in Figure 5, the edges represent the skeleton connections, the blue vertices represent branching points (denoted as bp) of the skeleton, the orange points represent the endpoints (denoted as ep), and the cross points represent the extracted concave points (denoted as cp) using the method explained in Section 3.4.
The number of edges (namely, the degree) and the shared corner points at a branching point in the skeleton obtained from the segmented images can be used to describe a vertex type. Each vertex can then be broadly categorized as X-type, Y-type, I-type, or H-type (the skeleton resembles the letter X, Y, I, or H), which are the most common occurrences, as shown in Figure 3. I-type occurs when an edge in sk has two endpoints and no branching points, as shown in Figure 3c. X-type occurs when a branching point has a degree of four, as shown in Figure 3a. Y-type occurs when a cell is overlapped at one of its endpoints. H-type occurs when the midpoints of two cells are right next to each other. Assume that there exists one branching point bp1 ∈ bp, and that corner points can be found within a circle of radius cr around the branching point (cr is determined from the cp distribution statistics of the training set). Y-type occurs when only one bp1 has degree three in sk and this bp1 has two common corner points within the cr radius, as shown in Figure 3b. H-type occurs when two branching points (called bp1 and bp2 ∈ bp) share an edge and have the same set of corner points, and bp1 (respectively, bp2) belongs to Y-type, as shown in Figure 3d. Note that the corner points shared by a branch are determined relative to the slope line of the branch with respect to its connected branching point.
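The classification rules of this section can be summarised as a small decision function. Here the geometric matching of corner points within the cr radius is abstracted into pre-computed counts and flags, so this sketches only the decision logic, not the full analysis:

```python
def classify_vertex(degree, shared_corners, paired_branch=False):
    """Vertex typing per Section 3.5 (simplified): degree is the number of
    skeleton edges at the vertex; shared_corners is the number of concave
    points found within the cr radius; paired_branch marks a Y-type
    branching point that shares an edge and corner set with another
    Y-type branching point (the H configuration)."""
    if degree == 4:
        return "X"
    if degree == 3 and paired_branch:
        return "H"
    if degree == 3 and shared_corners == 2:
        return "Y"
    if degree <= 2:
        return "I"
    return "unclassified"
```

Vertices that fall outside these patterns (e.g., degree-five junctions from dense clusters) would need additional rules, which is consistent with the paper's focus on the four most common configurations.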

3.6. Instance Segmentation

Then, the proposed edge coloring-based technique is applied to determine the object instances such that each same-colored path of the skeleton graph represents a single cellular instance, as shown in Algorithm 1. A queue data structure (namely, Q) populated with the endpoints ep is used to start the process. If there exists at least one endpoint (i.e., at least one single bacterial cell), the queue is not empty and Algorithm 1 can be applied. Every vertex in the queue then undergoes edge coloring and local agreement, followed by global backpropagation. Initially, for a vertex, all the connected edges are assigned colors so that each edge receives a distinguished color. Here, CL is a set of colors, and when an edge (e.g., ix) is assigned a distinguished color, this color is added to the CL set. Once all the edges belonging to a vertex are assigned colors, the local agreement function (shown in Algorithm 2) is carried out to resolve over-segmentation by considering the vertex types. Based on the X-type, Y-type, and I-type, these edges are assigned colors: if X-type or Y-type occurs, which means that the cells are overlapped, the edges are assigned distinguished colors; if I-type, which indicates the same cell, the edges are assigned the same color; if H-type, which indicates that the two cells are not overlapped (just right next to each other), the edge ix between the two bp is removed from the skeleton, as shown in Figure 6. Because these color agreements are performed locally, global coloring contradictions are possible. Hence, for each local color assignment, global color assignment back-propagation is also performed, as described in Algorithm 3. Whenever a vertex is de-queued to perform the above tasks, its neighboring vertices that are not already in the queue are queued. This process is repeated until all the vertices in sk are processed.
Algorithm 1: Bacterial Cell Segmentor
(The pseudocode of Algorithm 1 is provided as a figure in the original article.)
Algorithm 2: Local Agreement
  Input: the skeleton graph G, the vertex v with the concave points cp, and the skeleton sk
  Output: distinguished colors assigned to the edges based on the different types of v
  Variables: radius cr, and a color set CL
  Let ix, iy, iz be the three edges incident to vertex v
  if v in X-type then
    Assign a distinguished color such that CL(ix) = CL(iy), where ix, iy ∈ edge(v)
    AND ix, iy are opposite adjacent edges            // as shown in Figure 6a
  else if v in Y-type then
    Assign a distinguished color such that CL(ix) = CL(iy), where ix, iy ∈ edge(v)
    AND ix, iy do not share two corner points
    Assign a distinguished color such that CL(ix) ≠ CL(iz), where ix, iz ∈ edge(v)
    AND ix, iy share two corner points                // as shown in Figure 6b
  else if v in H-type then
    Remove ix from sk
    AND remove ix from E, where ix ∈ edge(v)          // as shown in Figure 6d
  else
    Assign the same color ∀ edge(v)                   // I-type, as shown in Figure 6c
  return G
Algorithm 3: Global Backpropagation
(The pseudocode of Algorithm 3 is provided as a figure in the original article.)
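The queue-driven traversal of Algorithms 1–3 can be sketched as a breadth-first edge colouring. Note the simplifications: the geometric local-agreement rules are reduced to a pass-through rule at I-type vertices and fresh distinguished colours elsewhere, H-type edge removal and global backpropagation are omitted, so this illustrates only the control flow, not the full method:

```python
from collections import deque

def color_edges(graph, endpoints, vtype):
    """Queue-driven edge colouring: edges with the same colour belong to one
    cell instance. graph maps a vertex to its neighbours; vtype maps a
    vertex to 'I', 'X', 'Y', or 'H'."""
    color, next_color = {}, 0
    q, seen = deque(endpoints), set(endpoints)
    while q:
        v = q.popleft()
        for u in graph[v]:
            e = frozenset((v, u))
            if e not in color:
                prev = [frozenset((v, w)) for w in graph[v]
                        if frozenset((v, w)) in color]
                if vtype.get(v) == "I" and prev:
                    color[e] = color[prev[0]]   # continue the same cell
                else:
                    color[e] = next_color       # assign a distinguished colour
                    next_color += 1
            if u not in seen:
                seen.add(u)
                q.append(u)
    return color

# A single non-overlapped cell: path a--b--c, all I-type vertices
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
colors = color_edges(graph, ["a"], {"a": "I", "b": "I", "c": "I"})
```

In the single-cell case both edges end up with one colour, i.e., one instance, which is the invariant the full algorithm maintains across the whole skeleton graph.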

3.7. Area Calculation and Counting

As a final outcome, the paths assigned the same color are masked as instance segmentations on the input image. To achieve mask reconstruction, we use a circular structural element (denoted as maskstrel) to follow each path, keeping the center of maskstrel on the edges of sk. The radius of maskstrel is determined by the length of the perpendicular line from sk to its boundary. Finally, we calculate the number of cells and the area coverage by extracting the color components from every connected component in the final output.
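Mask reconstruction along one same-coloured path can be sketched as stamping disks along the skeleton. Hedges: the `paint_path` function and its constant radius are illustrative stand-ins for the paper's maskstrel with a per-pixel skeleton-to-boundary radius:

```python
import numpy as np

def paint_path(shape, path_pixels, radii):
    """Stamp a disk of the given radius at each skeleton pixel of one
    same-coloured path, producing a single instance mask."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape, dtype=bool)
    for (r, c), rad in zip(path_pixels, radii):
        mask |= (yy - r) ** 2 + (xx - c) ** 2 <= rad ** 2
    return mask

# A short horizontal skeleton path with a constant 3 px radius (illustrative)
path = [(10, c) for c in range(5, 16)]
inst = paint_path((30, 30), path, [3] * len(path))
area_px = int(inst.sum())        # area coverage of this reconstructed cell
```

Counting cells then reduces to counting distinct colours (one reconstructed mask per same-coloured path), and area coverage to summing each mask's pixels, as described above.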

4. Experimental Design

This section first describes the experimental setup in Section 4.1. Then, evaluation metrics for image analysis are presented in Section 4.2. The experiments for training U-Net models and the post-processing steps are conducted in this study to evaluate the performance of our solution in Section 4.3 and Section 4.4. Our source code is available in the GitHub repository A Morphological Post Processing Approach: https://github.com/dabeyrathna/A-Morphological-Post-Processing-Approach (accessed on 15 September 2022).

4.1. Experimental Setup

All the experimental work, including training and testing tasks, was run on a GPU-enabled LAMBDA QUAD deep learning workstation with an Intel(R) Core(TM) i9-9920X CPU (3.50 GHz), an Nvidia Quadro RTX 6000 GPU with 24 GB memory, and 128 GB RAM. The complete implementation, i.e., both the graphical user interface (GUI) and model construction, was done in MATLAB R2021a.

4.2. Evaluation Metrics

To evaluate this instance segmentation task, two different types of metrics were used. First, as the main measures of the accuracy of identifying the segmented instances, we employed the segmentation rate (SR), segmentation error rate (SERR), and segmentation efficiency rate (SEFR), following the work of [65], to compare candidate solutions and select the one that produces the most accurate predictions (see Equations (1)–(3)).
Segmentation Rate (SR) = N_pred / N_gt    (1)
Segmentation Error Rate (SERR) = (N_err / N_gt) × 100%    (2)
Segmentation Efficiency Rate (SEFR) = (N_correct / N_pred) × 100%    (3)
Here, N_pred, N_gt, N_err, and N_correct denote the number of predicted instances, the actual number of cells, the number of incorrectly predicted instance segmentations (including partial segmentations), and the number of correctly segmented cells, respectively. IOU(predict, truth) represents the intersection over union (IoU) between a predicted cell and the ground truth; it provides a per-image score of the accuracy of the predicted segmentation. The threshold tr is the decision boundary on IOU(predict, truth): a prediction is counted as a correctly predicted instance if its segmentation covers over 80% (tr = 0.8) of the ground truth; otherwise, it is recorded as an incorrect prediction.
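As a hedged illustration (in Python rather than the paper's Matlab), the three rates and the IoU-based correctness test can be computed as follows; the function names are ours, not the authors':

```python
def segmentation_metrics(n_pred, n_gt, n_err, n_correct):
    """SR, SERR (%), and SEFR (%) as defined in Equations (1)-(3)."""
    return (n_pred / n_gt,
            n_err / n_gt * 100.0,
            n_correct / n_pred * 100.0)

def iou(pred, truth):
    """Intersection over union of two binary masks given as flat 0/1 lists."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 0.0

def is_correct(pred, truth, tr=0.8):
    """Apply the tr = 0.8 decision boundary on IoU used in the paper."""
    return iou(pred, truth) > tr

# Example: 10 predicted instances against 9 ground-truth cells, 2 wrong,
# 8 correct: SR > 1 signals over-segmentation, SEFR is 80%.
sr, serr, sefr = segmentation_metrics(n_pred=10, n_gt=9, n_err=2, n_correct=8)
```

SR > 1 indicates over-segmentation and SR < 1 under-segmentation, which is how the SR values in Table 1 are read.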
Similarly, we used the Dice score [66] as the second metric type to evaluate pixel-wise segmentation performance. This score is calculated as twice the overlap between the ground truth label and the predicted segment, divided by the total number of pixels in the ground truth label and the predicted segment (see Equation (4)). Here, TP is the true-positive, FP the false-positive, and FN the false-negative pixel count.
Dice Score = 2TP / (2TP + FP + FN)    (4)
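A minimal Python sketch of Equation (4), assuming the masks are flat 0/1 lists (the paper's implementation is in Matlab):

```python
def dice_score(pred, truth):
    """Pixel-wise Dice score, Equation (4): 2*TP / (2*TP + FP + FN)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    denom = 2 * tp + fp + fn
    # Two empty masks agree perfectly by convention.
    return 2 * tp / denom if denom else 1.0

# dice_score([1, 1, 1, 0], [1, 1, 0, 0]) == 2*2 / (2*2 + 1 + 0) == 0.8
```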
To verify the performance of the proposed method, it was compared with three mature instance segmentation approaches: the method of [49], which uses region-based fitting of overlapping ellipses with a U-Net model; the concave-point extraction and ellipse fitting method proposed by [13]; and Mask R-CNN. The segmentation results are presented in Table 1.

4.3. Train U-Net Models

The dataset was divided into three non-overlapping subsets, namely the training, validation, and testing sets, comprising 70%, 10%, and 20% of the original dataset, respectively. A U-Net model was used to learn the semantic segmentation masks. The U-Net architecture used in this study consists of two 3 × 3 convolutions, 2 × 2 max pooling operations with a stride of 2, and rectified linear unit (ReLU) activations in the down-sampling path. In the up-sampling path, the feature maps undergo a 2 × 2 up-convolution followed by two 3 × 3 convolutions. All the down-sampling feature maps are connected to the up-sampling feature maps using skip connections. The network was trained on the training set using the Adam gradient descent algorithm [67] with a starting learning rate of 1 × 10⁻⁵. We used early stopping with a patience parameter of 5 to ensure that the model learns optimally with respect to the validation set. The input to the network was a 128 px × 128 px image, and the model was trained for 20 epochs. Because the dataset is small, the training data were extended with data augmentation methods [60], such as rotation, scaling, and cropping, to enhance robustness and prevent overfitting during training. The final outcome of the U-Net model is a binarized mask in which the pixels containing bacterial cells are set to white and the background pixels are set to black. The resultant (binary) semantic segmentation masks are then forwarded to the post-processing steps to retrieve the instance-level segmentation results.
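The early-stopping rule with a patience of 5 can be sketched as follows; this is a minimal Python illustration of the stated training policy, not the paper's Matlab training loop:

```python
class EarlyStopping:
    """Stop training when validation loss fails to improve for `patience` epochs."""

    def __init__(self, patience=5):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop training."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Hypothetical losses that plateau after epoch 2, so training stops
# 5 non-improving epochs later, at epoch index 7.
stopper = EarlyStopping(patience=5)
losses = [0.9, 0.7, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6]
stopped_at = next(i for i, loss in enumerate(losses) if stopper.step(loss))
# stopped_at == 7
```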

4.4. Post-Processing Step

As mentioned in Algorithm 1, we used the Matlab bwskel function to generate the skeletons of the segmented mask. The bwskel function reduces the 2D image to one-pixel-wide centerline curves while preserving the essential structure of the image. The ep and bp points are then collected in a queue data structure from the generated skeleton after it is refined to remove smaller branching edges. The pruning of smaller branches is carried out with the MinBranchLength property set to 6, which eliminates all branches shorter than 6 pixels using 8-connectivity. Similarly, cp points are collected using the corner detection mechanism described in Section 3. To determine the branching type (denoted as I-type, Y-type, and X-type), we empirically set a distance threshold parameter (denoted as cr) that restricts the minimal distance between two neighboring branching points, set to 25 pixels in our experiments. After assigning the colors to each edge according to Algorithm 1, a disk-shaped structuring element (denoted as mask_strel) was used to mark the object instances on the original SEM image.
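The role of the cr threshold can be illustrated with a small sketch. This is our own Python interpretation of the rule described above (not the authors' Matlab code): branch points closer than cr are treated as one junction, which is how a pair of nearby Y-branchings can be handled as a single X-type junction rather than two separate Y-type ones.

```python
import math

def merge_branch_points(branch_points, cr=25.0):
    """Group skeleton branch points that lie within `cr` pixels of each other.

    `branch_points` is a list of (x, y) pixel coordinates; cr = 25 px is the
    threshold used in the paper's experiments. Greedy single-link grouping
    is an assumption made for this sketch.
    """
    groups = []
    for bp in branch_points:
        for group in groups:
            if any(math.dist(bp, q) < cr for q in group):
                group.append(bp)
                break
        else:
            groups.append([bp])
    return groups

# Two branch points 10 px apart merge into one junction, while a third
# point 100 px away stays separate: len(groups) == 2.
groups = merge_branch_points([(0, 0), (10, 0), (100, 0)])
```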

5. Evaluation Results & Discussion

Compared to the other three cell-overlapping object segmentation approaches (see Table 1), the proposed method shows a higher Dice similarity score of 89.52%, a lower segmentation error rate of 14.71%, and a higher segmentation efficiency rate of 88.28% on bacterial segmentation, which is a significant performance improvement. The performance comparison is shown in Table 1. We used micro-averaged SR, SERR, and SEFR values because the number of bacterial cells that appear within an image can differ drastically. The SR value of the proposed method is significantly better than those of the other methods. However, the proposed method shows over-segmentation (SR > 1), whereas Mask R-CNN shows under-segmentation (SR < 1). One possible reason for the under-segmentation of Mask R-CNN is its prediction of closely overlapping objects as a single object instance. Additionally, the SERR value of the proposed method is nearly two times lower than that of the next closest method, Mask R-CNN.
To compare the computational efficiency of the proposed method with that of the other methods, we performed a runtime experiment (average inference time in seconds per image) under the same system configuration (described in Section 4.1). For Zou et al. [13], we measured the average time over the test set for U-Net segmentation mask prediction followed by two stages: (1) candidate ellipse generation and (2) optimal ellipse selection. Similarly, for the method proposed by Abeyrathna et al. [49], the average time covers U-Net segmentation mask prediction followed by two steps: (1) cell segmentation and (2) overlapping-cell identification. The inference time of the Mask R-CNN method mainly depends on the configuration of the region proposals; here, we used 1000 region proposals at the inference stage while keeping the rest of the configuration at the default implementation [68].
The overall performance of the method presented in [13] was considerably low due to its intrinsic assumption that the overlapped objects are elliptical. The algorithm continuously tries to achieve high coverage while optimizing the ellipse count; for that reason, its coverage of the segmentation area was still fair, with a Dice score of 79.1%. Similarly, the approach of [49] underperformed, as it tried to cover the object instances with single ellipses. These results provide empirical evidence that ellipse fitting approaches are not suitable for overlapping bacterial cell segmentation tasks.
Figure 7 illustrates a sample of the final results on testing images for our method and the other segmentation methods. Our method clearly outperformed the other participating methods in recognizing overlapping and touching object instances. However, we recognized two main scenarios where our approach performs poorly: (1) when two or more bacterial cells appear close and parallel to each other, and (2) when the U-Net model outputs partial semantic segmentation results. In the first scenario, the morphological skeletonization tends to produce a single edge for multiple closely appearing bacterial cells. The error in the second scenario can be mitigated by training the model optimally for better generalization. It is also important to note that the consistency of the annotated ground truth affects the accuracy of the U-Net prediction. For example, in this dataset, some of the cells are partially divided or about to divide, and the ground truth annotations are inconsistent throughout the dataset. This inconsistency in annotations leads to incorrect U-Net segmentation results. Although the proposed post-processing approach can handle both outcomes (with or without partially separated cell parts) from the U-Net model, this inconsistency can still negatively influence the overall instance segmentation performance. Figure 8a,b illustrates two such instances where our approach failed to predict correctly.
The dataset used in this study, with 72 images, illustrates the requirement for such a technique to achieve better instance segmentation performance. However, it is possible that the dataset does not capture all the different types of bacterial cell clusters needed to validate the proposed method. Therefore, we also tested the following cell arrangements: chains, stars, and partially occluded cells cropped by the image border. Figure 8c,d illustrates the results of our method for chain-shaped cell arrangements. The current implementation is incapable of recognizing all the individual cells when they are arranged in a tight chain structure (see Figure 8c). In Figure 8d, the cell clusters form a chain, but the ends of the cells can be captured by the skeletonization and corner detection processes. Other complex arrangements, such as stars, are mostly predicted incorrectly when the arrangement is close and tight.
The proposed method consists of several morphological image processing techniques, such as dilation, erosion, and skeletonization, so the parameters must be reconfigured according to the image metadata. In this study, the parameter settings were determined based on the fact that, on average, 1 micrometer corresponds to 60 pixels in the image. In particular, parameters such as cr, MinBranchLength, and mask_strel should be reconfigured to satisfy the conditions of a given dataset. One of the goals of the proposed post-processing technique is to extend semantic segmentation predictions to instance segmentation outcomes when the available data are insufficient to train an end-to-end instance segmentation framework (e.g., Mask R-CNN).
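The resolution-dependent reconfiguration can be made explicit with a hypothetical helper (ours, not from the paper): since the study tuned cr = 25 px and MinBranchLength = 6 px at roughly 60 px per micrometer, a dataset acquired at a different magnification could scale these pixel thresholds proportionally.

```python
def scale_parameters(px_per_um, base_px_per_um=60.0):
    """Rescale the pixel-based morphological parameters for a new resolution.

    Assumes the parameters scale linearly with pixels-per-micrometer; the
    base values (cr = 25 px, MinBranchLength = 6 px at 60 px/um) are the
    settings reported in this study.
    """
    s = px_per_um / base_px_per_um
    return {
        "cr": round(25 * s),                # branch-point distance threshold
        "min_branch_length": round(6 * s),  # skeleton pruning threshold
    }

# At 120 px/um (twice the magnification), both thresholds double.
params = scale_parameters(120)
# params == {"cr": 50, "min_branch_length": 12}
```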

6. Conclusions

In summary, this study presents an instance segmentation method for overlapping cells in SEM images that retrieves quantitative measures for an accurate decision-making process regarding bacterial cell accumulation on material surfaces. The dataset of SEM images used in this study has several challenging characteristics, such as low color contrast between the foreground and background, overlapping bacterial cells, and heterogeneous object shapes and sizes. Given these characteristics, traditional segmentation approaches such as color thresholding, ellipse fitting, or direct instance segmentation perform poorly.
To address these limitations, this paper proposed a deep semantic segmentation architecture followed by a morphological post-processing approach to obtain instance segmentation. The proposed solution significantly improved the object segmentation task in complex cell-overlapping scenarios in SEM images. Our approach is composed of image preprocessing, U-Net semantic segmentation, and morphological post-processing steps. In the morphological post-processing step, corner detection, skeletonization, and graph coloring-based cell instance segmentation methods were developed for the overlapped cell segmentation task on bacterial images. The experimental results demonstrated the significance and effectiveness of the morphological post-processing step for overlapping cell segmentation applications. Compared to the other cell-overlapping object segmentation approaches, such as Mask R-CNN and U-Net variants, the proposed approach demonstrates promising performance improvements in individual cell segmentation and quantitative measures on the dataset, even when the cells overlap or touch each other.
In the near future, we shall extend this method to create a generalized model for overlapping object segmentation that will be beneficial in other fields, e.g., medicine, engineering, and biology. Further, we will consider extending this approach to three-dimensional (3D) cell segmentation tasks, which are in high demand in current medical and engineering applications. Proper segmentation and tracking of cells would lead to a better understanding of cell viability, cell signaling, adhesion, etc. [69]. Given the flexibility of extending graph dimensions, e.g., from 2D to 3D, the proposed method offers many potential advantages for 3D image segmentation tasks. We shall also consider adaptive patch generation to isolate cell cluster types, thereby improving the efficiency of the proposed graph coloring stage and enhancing the overall performance of the approach.

Author Contributions

P.-C.H. and D.A. contributed to the conception and algorithm design of the study. S.R. and R.K.S. created and organized the database. P.-C.H. and D.A. wrote the first draft of the manuscript. D.A., P.-C.H., R.K.S. and S.R. wrote sections of the manuscript. Funding acquisition, P.-C.H.; experiments and development, P.-C.H. and D.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to acknowledge the funding support from the NSF EPSCoR RII T-2 FEC #1920954 and Nebraska Research Initiative (NRI) #41-3208-0440.

Data Availability Statement

Restrictions apply to the availability of these data. Data were obtained from the University and are available [from the authors/at URL] with the permission of [third party].

Acknowledgments

D.A. and P.-C.H. are partially supported by the funding support from the NSF EPSCoR RII Track 2 FEC #1920954. S.R. and R.K.S. acknowledge funding support from NSF #1736255 and #1849206.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sampaio, N.M.V.; Dunlop, M.J. Functional roles of microbial cell-to-cell heterogeneity and emerging technologies for analysis and control. Curr. Opin. Microbiol. 2020, 57, 87–94.
  2. Keegstra, J.M.; Kamino, K.; Anquez, F.; Lazova, M.D.; Emonet, T.; Shimizu, T.S. Phenotypic diversity and temporal variability in a bacterial signaling network revealed by single-cell FRET. eLife 2017, 6, e27455.
  3. Westfall, S.; Lomis, N.; Kahouli, I.; Dia, S.Y.; Singh, S.P.; Prakash, S. Microbiome, probiotics and neurodegenerative diseases: Deciphering the gut brain axis. Cell. Mol. Life Sci. 2017, 74, 3769–3787.
  4. Golding, C.G.; Lamboo, L.L.; Beniac, D.R.; Booth, T.F. The scanning electron microscope in microbiology and diagnosis of infectious disease. Sci. Rep. 2016, 6, 26516.
  5. Brahim Belhaouari, D.; Fontanini, A.; Baudoin, J.P.; Haddad, G.; Le Bideau, M.; Bou Khalil, J.Y.; Raoult, D.; La Scola, B. The strengths of scanning electron microscopy in deciphering SARS-CoV-2 infectious cycle. Front. Microbiol. 2020, 11, 2014.
  6. Guo, Y.; Liu, Y.; Georgiou, T.; Lew, M.S. A review of semantic segmentation using deep neural networks. Int. J. Multimed. Inf. Retr. 2018, 7, 87–93.
  7. Zhou, B.; Zhao, H.; Puig, X.; Xiao, T.; Fidler, S.; Barriuso, A.; Torralba, A. Semantic understanding of scenes through the ADE20K dataset. Int. J. Comput. Vis. 2019, 127, 302–321.
  8. Arteta, C.; Lempitsky, V.; Noble, J.A.; Zisserman, A. Learning to detect partially overlapping instances. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 3230–3237.
  9. Yan, L.; Park, C.W.; Lee, S.R.; Lee, C.Y. New separation algorithm for touching grain kernels based on contour segments and ellipse fitting. J. Zhejiang Univ. Sci. 2011, 12, 54–61.
  10. Zhang, W.; Li, H. Automated segmentation of overlapped nuclei using concave point detection and segment grouping. Pattern Recognit. 2017, 71, 349–360.
  11. Mosaliganti, K.R.; Noche, R.R.; Xiong, F.; Swinburne, I.A.; Megason, S.G. ACME: Automated cell morphology extractor for comprehensive reconstruction of cell membranes. PLoS Comput. Biol. 2012, 8, e1002780.
  12. Vyas, N.; Sammons, R.; Addison, O.; Dehghani, H.; Walmsley, A. A quantitative method to measure biofilm removal efficiency from complex biomaterial surfaces using SEM and image analysis. Sci. Rep. 2016, 6, 32694.
  13. Zou, T.; Pan, T.; Taylor, M.; Stern, H. Recognition of overlapping elliptical objects in a binary image. Pattern Anal. Appl. 2021, 24, 1193–1206.
  14. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv 2015, arXiv:1506.01497.
  15. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
  16. Nistico, L.; Hall-Stoodley, L.; Stoodley, P. Imaging bacteria and biofilms on hardware and periprosthetic tissue in orthopedic infections. In Microbial Biofilms; Springer: Berlin/Heidelberg, Germany, 2014; pp. 105–126.
  17. Li, J.; Hirota, K.; Goto, T.; Yumoto, H.; Miyake, Y.; Ichikawa, T. Biofilm formation of Candida albicans on implant overdenture materials and its removal. J. Dent. 2012, 40, 686–692.
  18. Hägi, T.T.; Klemensberger, S.; Bereiter, R.; Nietzsche, S.; Cosgarea, R.; Flury, S.; Lussi, A.; Sculean, A.; Eick, S. A biofilm pocket model to evaluate different non-surgical periodontal treatment modalities in terms of biofilm removal and reformation, surface alterations and attachment of periodontal ligament fibroblasts. PLoS ONE 2015, 10, e0131056.
  19. Vincent, L. Morphological grayscale reconstruction in image analysis: Applications and efficient algorithms. IEEE Trans. Image Process. 1993, 2, 176–201.
  20. Cooper, L.A.; Kong, J.; Gutman, D.A.; Wang, F.; Gao, J.; Appin, C.; Cholleti, S.; Pan, T.; Sharma, A.; Scarpace, L.; et al. Integrated morphologic analysis for the identification and characterization of disease subtypes. J. Am. Med. Inform. Assoc. 2012, 19, 317–323.
  21. Vincent, L.; Soille, P. Watersheds in digital spaces: An efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 583–598.
  22. Nath, S.K.; Palaniappan, K.; Bunyak, F. Cell Segmentation Using Coupled Level Sets and Graph-Vertex Coloring. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention 2006, Copenhagen, Denmark, 1–6 October 2006; Larsen, R., Nielsen, M., Sporring, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; pp. 101–108.
  23. Dzyubachyk, O.; Niessen, W.; Meijering, E. Advanced level-set based multiple-cell segmentation and tracking in time-lapse fluorescence microscopy images. In Proceedings of the 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Paris, France, 14–17 May 2008; pp. 185–188.
  24. Chang, H.; Han, J.; Borowsky, A.; Loss, L.; Gray, J.W.; Spellman, P.T.; Parvin, B. Invariant delineation of nuclear architecture in glioblastoma multiforme for clinical and molecular association. IEEE Trans. Med. Imaging 2012, 32, 670–682.
  25. Al-Kofahi, Y.; Lassoued, W.; Lee, W.; Roysam, B. Improved automatic detection and segmentation of cell nuclei in histopathology images. IEEE Trans. Biomed. Eng. 2009, 57, 841–852.
  26. Kumar, N.; Verma, R.; Sharma, S.; Bhargava, S.; Vahadane, A.; Sethi, A. A dataset and a technique for generalized nuclear segmentation for computational pathology. IEEE Trans. Med. Imaging 2017, 36, 1550–1560.
  27. Jeulin, D. Morphological Models of Random Structures; Springer: Berlin/Heidelberg, Germany, 2021.
  28. Yin, Z.; Bise, R.; Chen, M.; Kanade, T. Cell segmentation in microscopy imagery using a bag of local Bayesian classifiers. In Proceedings of the 2010 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Rotterdam, The Netherlands, 14–17 April 2010; pp. 125–128.
  29. Su, H.; Yin, Z.; Huh, S.; Kanade, T. Cell segmentation in phase contrast microscopy images via semi-supervised classification over optics-related features. Med. Image Anal. 2013, 17, 746–765.
  30. Wang, W.; Taft, D.A.; Chen, Y.J.; Zhang, J.; Wallace, C.T.; Xu, M.; Watkins, S.C.; Xing, J. Learn to segment single cells with deep distance estimator and deep cell detector. Comput. Biol. Med. 2019, 108, 133–141.
  31. Al-Rifaie, M.M.; Aber, A.; Hemanth, D.J. Deploying swarm intelligence in medical imaging identifying metastasis, micro-calcifications and brain image segmentation. IET Syst. Biol. 2015, 9, 234–244.
  32. Brezočnik, L.; Fister, I., Jr.; Podgorelec, V. Swarm intelligence algorithms for feature selection: A review. Appl. Sci. 2018, 8, 1521.
  33. Li, C.; Xu, C.; Gui, C.; Fox, M.D. Distance regularized level set evolution and its application to image segmentation. IEEE Trans. Image Process. 2010, 19, 3243–3254.
  34. Zhang, K.; Song, H.; Zhang, L. Active contours driven by local image fitting energy. Pattern Recognit. 2010, 43, 1199–1206.
  35. Niu, S.; Chen, Q.; De Sisternes, L.; Ji, Z.; Zhou, Z.; Rubin, D.L. Robust noise region-based active contour model via local similarity factor for image segmentation. Pattern Recognit. 2017, 61, 104–119.
  36. Zhang, J.; Lu, Z.; Li, M. Active contour-based method for finger-vein image segmentation. IEEE Trans. Instrum. Meas. 2020, 69, 8656–8665.
  37. Fernandez, G.; Kunt, M.; Zryd, J.P. A new plant cell image segmentation algorithm. In Proceedings of the International Conference on Image Analysis and Processing, Washington, DC, USA, 23–26 October 1995; Springer: Berlin/Heidelberg, Germany, 1995; pp. 229–234.
  38. He, Y.; Meng, Y.; Gong, H.; Chen, S.; Zhang, B.; Ding, W.; Luo, Q.; Li, A. An automated three-dimensional detection and segmentation method for touching cells by integrating concave points clustering and random walker algorithm. PLoS ONE 2014, 9, e104437.
  39. Wang, H.; Zhang, H.; Ray, N. Clump splitting via bottleneck detection and shape classification. Pattern Recognit. 2012, 45, 2780–2787.
  40. Xing, F.; Yang, L. Chapter 4—Machine learning and its application in microscopic image analysis. In Machine Learning and Medical Imaging; Wu, G., Shen, D., Sabuncu, M.R., Eds.; The Elsevier and MICCAI Society Book Series; Academic Press: Cambridge, MA, USA, 2016; pp. 97–127.
  41. Pavlidis, T. Algorithms for shape analysis of contours and waveforms. IEEE Trans. Pattern Anal. Mach. Intell. 1980, 4, 301–312.
  42. Harris, C.G.; Stephens, M. A combined corner and edge detector. In Proceedings of the Alvey Vision Conference, Manchester, UK, 9–10 September 1988; Citeseer: University Park, PA, USA, 1988; Volume 15, pp. 10–5244.
  43. Miró-Nicolau, M.; Moyà-Alcover, B.; González-Hidalgo, M.; Jaume-i Capó, A. Segmenting overlapped objects in images. A study to support the diagnosis of sickle cell disease. arXiv 2020, arXiv:2008.00997.
  44. Zafari, S.; Eerola, T.; Sampo, J.; Kälviäinen, H.; Haario, H. Segmentation of overlapping elliptical objects in silhouette images. IEEE Trans. Image Process. 2015, 24, 5942–5952.
  45. Zhang, G.; Jayas, D.S.; White, N.D. Separation of Touching Grain Kernels in an Image by Ellipse Fitting Algorithm. Biosyst. Eng. 2005, 92, 135–142.
  46. Panagiotakis, C.; Argyros, A.A. Cell Segmentation Via Region-Based Ellipse Fitting. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 2426–2430.
  47. Panagiotakis, C.; Argyros, A. Parameter-Free Modelling of 2D Shapes with Ellipses. Pattern Recogn. 2016, 53, 259–275.
  48. Panagiotakis, C.; Argyros, A. Region-based Fitting of Overlapping Ellipses and its application to cells segmentation. Image Vis. Comput. 2020, 93, 103810.
  49. Abeyrathna, D.; Life, T.; Rauniyar, S.; Ragi, S.; Sani, R.; Chundi, P. Segmentation of Bacterial Cells in Biofilms Using an Overlapped Ellipse Fitting Technique. In Proceedings of the 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Houston, TX, USA, 9–12 December 2021; pp. 3548–3554.
  50. Yang, L.; Zhang, Y.; Guldner, I.H.; Zhang, S.; Chen, D.Z. 3D segmentation of glial cells using fully convolutional networks and k-terminal cut. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 658–666.
  51. Saleh, H.M.; Saad, N.H.; Isa, N.A.M. Overlapping chromosome segmentation using U-net: Convolutional networks with test time augmentation. Procedia Comput. Sci. 2019, 159, 524–533.
  52. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R.B. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988.
  53. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; pp. 234–241.
  54. Hu, R.L.; Karnowski, J.; Fadely, R.; Pommier, J.P. Image segmentation to distinguish between overlapping human chromosomes. arXiv 2017, arXiv:1712.07639.
  55. Kurnianingsih; Allehaibi, K.H.S.; Nugroho, L.E.; Widyawan; Lazuardi, L.; Prabuwono, A.S.; Mantoro, T. Segmentation and Classification of Cervical Cells Using Deep Learning. IEEE Access 2019, 7, 116925–116941.
  56. Lu, Z.; Carneiro, G.; Bradley, A.P. An Improved Joint Optimization of Multiple Level Set Functions for the Segmentation of Overlapping Cervical Cells. IEEE Trans. Image Process. 2015, 24, 1261–1272.
  57. Carlson, C.; Singh, N.K.; Bibra, M.; Sani, R.K.; Venkateswaran, K. Pervasiveness of UVC254-resistant Geobacillus strains in extreme environments. Appl. Microbiol. Biotechnol. 2018, 102, 1869–1887.
  58. Lam, C.; Yi, D.; Guo, M. Automated Detection of Diabetic Retinopathy using Deep Learning. AMIA Jt. Summits Transl. Sci. Proc. 2018, 2018, 147–155.
  59. Dutta, A.; Zisserman, A. The VIA Annotation Software for Images, Audio and Video. In Proceedings of the 27th ACM International Conference on Multimedia, MM '19, Nice, France, 21–25 October 2019; ACM: New York, NY, USA, 2019.
  60. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 84–90.
  61. Tan, S.; Ma, X.; Mai, Z.; Qi, L.; Wang, Y. Segmentation and counting algorithm for touching hybrid rice grains. Comput. Electron. Agric. 2019, 162, 493–504.
  62. Maragos, P.; Schafer, R. Morphological skeleton representation and coding of binary images. IEEE Trans. Acoust. Speech Signal Process. 1986, 34, 1228–1244.
  63. Yao, Q.; Zhou, Y.; Wang, J. An automatic segmentation algorithm for touching rice grains images. In Proceedings of the 2010 International Conference on Audio, Language and Image Processing, Shanghai, China, 23–25 November 2010; pp. 802–805.
  64. Rana, D.S. Segmentation of Overlapping Wheat Grains for Quality Detection. 2018. Available online: https://www.semanticscholar.org/paper/Segmentation-of-Overlapping-Wheat-Grains-for-Rana/5fffb195cb6d3bf329310f40e0e9b71be7db6377 (accessed on 1 August 2022).
  65. Zhou, C.; Lin, K.; Xu, D.; Liu, J.; Zhang, S.; Sun, C.; Yang, X. Method for segmentation of overlapping fish images in aquaculture. Int. J. Agric. Biol. Eng. 2019, 12, 135–142.
  66. Dice, L.R. Measures of the Amount of Ecologic Association Between Species. Ecology 1945, 26, 297–302.
  67. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
  68. Abdulla, W. Mask R-CNN for Object Detection and Instance Segmentation on Keras and TensorFlow. 2017. Available online: https://github.com/matterport/Mask_RCNN (accessed on 1 August 2022).
  69. Foresti, R.; Rossi, S.; Pinelli, S.; Alinovi, R.; Barozzi, M.; Sciancalepore, C.; Galetti, M.; Caffarra, C.; Lagonegro, P.; Scavia, G.; et al. Highly-defined bioprinting of long-term vascularized scaffolds with Bio-Trap: Complex geometry functionalization and process parameters with computer aided tissue engineering. Materialia 2020, 9, 100560.
Figure 1. The flowchart of the proposed architecture.
Figure 1. The flowchart of the proposed architecture.
Make 04 00052 g001
Figure 2. Sample SEM images from the dataset. The first row shows the raw SEM image and the second row shows the corresponding contrast enhanced image.
Figure 2. Sample SEM images from the dataset. The first row shows the raw SEM image and the second row shows the corresponding contrast enhanced image.
Make 04 00052 g002
Figure 3. The common vertex types: (a) X-type, (b) Y-type, (c) I-type, and (d) H-type; the green markers represent the branching points (b_p), the blue lines represent the slope lines of edges starting from a vertex (branching point), the red markers represent the corner points within the c_r radius, and the blue markers (×) represent the endpoints of the branches.
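The vertex types above are distinguished by how many skeleton branches meet at a point. As an illustrative sketch (not the authors' implementation), branching points (b_p) and endpoints (e_p) can be located on a skeletonized mask by counting 8-connected neighbours; here a skeleton is assumed to be a set of (row, col) pixel coordinates:

```python
def classify_skeleton_points(skeleton):
    """Classify skeleton pixels by their 8-connected neighbour count:
    exactly 1 neighbour -> endpoint (e_p), 3 or more -> branching point (b_p).
    `skeleton` is a set of (row, col) coordinates of skeleton pixels."""
    def neighbours(p):
        return sum((p[0] + dy, p[1] + dx) in skeleton
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0))
    endpoints = {p for p in skeleton if neighbours(p) == 1}
    branch_pts = {p for p in skeleton if neighbours(p) >= 3}
    return endpoints, branch_pts

# A small Y-shaped skeleton: one junction at (2, 2), three free ends.
y_skel = {(0, 2), (1, 2), (2, 2), (3, 1), (3, 3), (4, 0), (4, 4)}
eps, bps = classify_skeleton_points(y_skel)  # bps == {(2, 2)}, 3 endpoints
```

A Y-type vertex then corresponds to a branching point with three incident branches, an X-type to four, and an I-type region has endpoints only.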
Figure 4. The detection of feature points of bacterial cells. (a) Corner detection sketch map with two overlapping cells: the red markers are corner points extracted with the circular template approach [61] and the green markers are corner points detected with the Harris corner detector [42]. The yellow dashed circle shows the c_r radius around the branching point (the circle's center). (b) Corner point detection with the circular template approach: the x-axis represents the a_p ratio and the y-axis represents the boundary pixels. The two red horizontal lines mark the specified intensity range; (b) shows the pixels that have exceeded the threshold values.
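The circular template idea can be sketched as follows: for each boundary point, compute the fraction a_p of a disc of radius c_r that falls inside the cell mask, and flag points whose a_p leaves a specified band as corner candidates. This is a hypothetical illustration, not the implementation of [61]; the mask, radius, and band values below are made up for the example:

```python
def ap_ratio(mask, point, radius):
    """Fraction a_p of the disc of the given radius around `point`
    that lies inside the object mask (mask: set of (row, col) pixels)."""
    py, px = point
    disc = [(py + dy, px + dx)
            for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)
            if dy * dy + dx * dx <= radius * radius]
    return sum(p in mask for p in disc) / len(disc)

def corner_candidates(mask, boundary, radius, lo, hi):
    """Boundary points whose a_p ratio falls outside the [lo, hi] band."""
    return [p for p in boundary if not lo <= ap_ratio(mask, p, radius) <= hi]

# On a filled 10x10 square, a convex corner covers markedly less of the
# disc than a straight edge, which in turn covers less than the interior.
square = {(y, x) for y in range(10) for x in range(10)}
a_corner = ap_ratio(square, (0, 0), 3)   # roughly 0.38
a_edge = ap_ratio(square, (0, 5), 3)     # roughly 0.62
```

Thresholding a_p against the band (the two red lines in Figure 4b) separates corner candidates from ordinary boundary points.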
Figure 5. An example of a graph representation of over-segmented bacterial images. The edges represent the skeleton connections; the blue vertices represent branching points (b_p) of the skeleton, the orange points represent the endpoints (e_p), and the cross points represent the extracted concave points (c_p). Here, “A” represents an instance of X-type, “B” of Y-type, “C” of I-type, and “D” of H-type.
Figure 6. Examples of graph representations of the different skeleton connection types. The edges represent the skeleton connections; edges of the same color belong to a single bacterial cell; the green vertices represent branching points (b_p) of the skeleton, the orange points represent the endpoints (e_p), and the cross points represent the extracted concave points (c_p). Here, (a–d) represent instances of X-type, Y-type, I-type, and H-type, respectively.
Figure 7. Segmentation results for two instances: column (a) shows the original SEM images, column (b) the ellipse-fitting segmentation of Zou et al. [13], column (c) the ellipse-fitting segmentation of Abeyrathna et al. [49], column (d) Mask R-CNN instance segmentation, and column (e) segmentation by the proposed approach.
Figure 8. Sample failure cases of the proposed method: (a) two or more bacterial cells appear close and parallel to each other; (b) the U-Net model outputs partial semantic segmentation results; (c,d) chain-shaped cell arrangements. The first row shows the original SEM images and the second row shows the corresponding instance segmentation outputs where the proposed method could not obtain accurate cellular segmentation.
Table 1. The performance comparison of the proposed method with the three other overlapping-object segmentation methods. The best results of micro-averaged SR, SERR, SEFR, and Dice-score (mean ± std) are highlighted.

| Method               | SR   | SERR (%) | SEFR (%) | Dice-Score (%) | Avg Time (s) |
|----------------------|------|----------|----------|----------------|--------------|
| U-Net + [13]         | 1.60 | 74.69    | 52.63    | 79.10 ± 0.04   | 15.2         |
| U-Net + [49]         | 0.79 | 23.11    | 77.02    | 80.60 ± 5.08   | 32.6         |
| Mask R-CNN           | 0.82 | 27.08    | 68.25    | 83.44 ± 3.62   | 20.1         |
| In the Present Study | 1.13 | 14.71    | 88.28    | 89.52 ± 1.31   | 22.9         |
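The Dice-score column reports the Dice similarity coefficient [66], defined as 2|A ∩ B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B. A minimal sketch, assuming masks are represented as sets of foreground (row, col) pixels (they could equally be binary arrays):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).
    pred, truth: sets of foreground (row, col) pixel coordinates."""
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Two 3-pixel masks sharing 2 pixels: Dice = 2*2 / (3+3) = 2/3.
pred = {(0, 0), (0, 1), (1, 0)}
truth = {(0, 1), (1, 0), (1, 1)}
score = dice_score(pred, truth)
```

A score of 1.0 indicates a perfect match; the table's values are percentages of this quantity micro-averaged over the test images.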
Abeyrathna, D.; Rauniyar, S.; Sani, R.K.; Huang, P.-C. A Morphological Post-Processing Approach for Overlapped Segmentation of Bacterial Cell Images. Mach. Learn. Knowl. Extr. 2022, 4, 1024-1041. https://doi.org/10.3390/make4040052
