Article

Regionally Adaptive Active Learning Framework for Nuclear Segmentation in Microscopy Image

Qian Wang, Jing Wei and Bo Quan
1 School of Communications and Information Engineering, Xi’an University of Posts and Telecommunications, Xi’an 710121, China
2 International Joint Research Center for Wireless Communication and Information Processing Technology of Shaanxi Province, Xi’an 710121, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(17), 3430; https://doi.org/10.3390/electronics13173430
Submission received: 16 July 2024 / Revised: 25 August 2024 / Accepted: 26 August 2024 / Published: 29 August 2024

Abstract

Recent innovations in tissue clearing and light-sheet microscopy allow the rapid acquisition of intact micron-resolution images of fluorescently labeled samples. Automated, accurate, and high-throughput nuclear segmentation methods are in high demand to quantify the number of cells and evaluate cell-type-specific marker co-labeling. Complete quantification of cellular-level differences in genetically manipulated animal models will allow the localization of organ structural differences well beyond what has previously been accomplished through slice histology or MRI. This paper proposes a nuclei identification tool for accurate nuclear segmentation of tissue-cleared microscopy images by regionally adaptive active learning. We gradually refine high-level nuclei-to-nuclei contextual heuristics to determine a non-linear mapping from local image appearance to the segmentation label at the center of each local neighborhood. In addition, we propose an adaptive fine-tuning (FT) strategy to tackle the complex task of separating nuclei in close proximity, allowing for the precise quantification of structures where nuclei are densely packed. Compared to current nuclei segmentation methods, we achieve more accurate and robust nuclear segmentation results in various complex scenarios.

1. Introduction

Cell biology occurs over an enormous range of scales, with critical events taking place at the atomic, molecular, and cellular levels [1]. Ideally, we would like to visualize all of the events that take place in a cell, since this would give the most complete picture of cell function and the complex networks of interactions that give rise to the observed phenotype and cellular behaviors. For example, our understanding of nervous system function and dysfunction is critically dependent on visualizing and quantifying the three-dimensional structure of the brain. Light microscopy together with tissue clearing can be applied to image intact brain samples at high throughput in order to quantify phenotypic differences [1,2,3]. The accurate detection, localization, and quantification of all cells across the entire brain will further our understanding of how genetic variation alters brain structure resulting in neuropsychiatric disorders [4]. Automatic nuclei or cell segmentation methods are in high demand in many neuroscience studies to calculate the number of cells of a given type [5].
However, none of the existing image segmentation algorithms address the complexity of nuclear segmentation within whole microscopy images (most target local image patches, where automatically counting cells is less pressing), especially with respect to the following challenges:
  • Poor image quality. Image quality is limited by the acquisition microscope hardware and imaging time. As shown in Figure 1, intensity inhomogeneity (displayed in the red box) is a common issue in microscopy. It is difficult to find an optimal threshold to segment nuclei that works well for the entire microscopy image. Other difficulties include image noise and low image contrast.
  • Densely packed nuclei. When nuclei are in close proximity, the boundaries between them are difficult to delineate. As shown in Figure 1, the cells in the blue box are densely packed, making accurate segmentation more difficult. This situation is very common, for example in the dentate gyrus of the hippocampus. Since the topology of densely packed nuclei is very complex in terms of the number of nuclei in close proximity, current state-of-the-art methods, which work well for segmenting a single nucleus, usually fail to separate the boundaries between multiple touching nuclei.
  • Large image dimensionality. In general, the computational challenges scale with data size and complexity. A typical mouse brain (volume of ∼1000 mm³) imaged at high resolution (e.g., 0.4853 μm × 0.4853 μm × 1 μm) results in approximately 8 TB of data (16-bit) for each acquired channel; a quick calculation below confirms this scale. Thus, it is computationally prohibitive to use sophisticated image processing methods that require matrix inversion or fit nuclei contours in an iterative manner.
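For a sanity check of this figure, the arithmetic can be scripted directly; the following is our own back-of-the-envelope sketch using only the voxel size and brain volume quoted above.

```python
# Back-of-the-envelope data-size check for a ~1000 mm^3 mouse brain imaged
# at 0.4853 um x 0.4853 um x 1 um with 16-bit (2-byte) voxels.
voxel_mm3 = 0.4853e-3 * 0.4853e-3 * 1e-3   # voxel volume in mm^3
n_voxels = 1000 / voxel_mm3                # ~4.2e12 voxels per channel
terabytes = n_voxels * 2 / 1e12            # 2 bytes per 16-bit voxel
print(f"~{terabytes:.1f} TB per channel")  # ~8.5 TB, i.e., roughly 8 TB
```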
For the goal of nuclear segmentation within microscopy images, we draw on several effective methods from computer vision. In the deep watershed transform [6], the complex learning task is broken down into several intermediate tasks and networks are trained step by step; our proposed method follows a similar approach, gradually improving the segmentation in a cascaded manner. Our regionally adaptive nuclear segmentation strategy is conceptually similar to the focal loss in RetinaNet [7], in that it treats the segmentation (or object detection) hierarchically with respect to the confidence in correct classification. Januszewski et al. presented a flood-filling network [8] to iteratively refine the tracing of neurons from electron microscopy data by adding a recurrent loop that checks the current predictions and fixes segmentation errors in the next networks. Hollandi et al. comprehensively reviewed the most recent single-cell segmentation tools [9].
To address these challenges, we propose a regionally adaptive active-learning solution for nuclei segmentation in tissue-cleared microscopy images. Specifically, a cascaded approach is used, where both low-level image appearance features and high-level nuclei-to-nuclei contextual information are used to gradually improve the nuclear segmentation result by refining the contextual features based upon a more and more accurate nuclei probability map. Meanwhile, we propose a cascading architecture with a cross-training scheme to resolve the error accumulation problem in the cascading architecture. In addition, we have further developed a regionally adaptive fine-tuning strategy to actively learn the most challenging situation where multiple touching nuclei are in close proximity with blurry boundaries.

2. Related Work on Nuclei Segmentation

Various computational methods have been proposed in the last decade to automatically segment nuclei from microscopy images. In brief, the existing nuclei/cell segmentation algorithms can be classified into the following categories, as described below:
  • Classic image processing techniques. The majority of current nucleus/cell segmentation methods are based on several classic image processing algorithms such as intensity thresholding [10,11,12,13,14], morphology operations [15,16,17], region accumulation [18,19,20,21], and deformable models [22,23]. Intensity thresholding might be the simplest and computationally cheapest method for cell segmentation. However, intensity thresholding, even a local technique such as local Otsu [24], does not lead to accurate segmentations within regions of densely packed nuclei, since the boundaries are usually blurry. Similarly, a multi-level segmentation method using Darwinian particle swarm optimization (DPSO) was proposed in [25], which works in the frequency domain and assumes that nuclei voxels have a high response to certain specifically designed band-pass filters. The region-growing technique starts from a set of selected seed points in the image and iteratively adds connected points to form the labeled regions [5]. A significant limitation of region growing and the watershed transform is their propensity to over-segment due to image noise. To address this limitation, marker-based watershed methods have been proposed [18,26] that improve segmentation accuracy by using detected nuclei as seeds to guide the segmentation (a generic sketch of this baseline follows this list). However, it is very challenging to determine the localization of markers in large-scale microscopy images [27]. Recently, a cell detection method called CUBIC was proposed in [3] to construct a single-cell-resolution whole-brain atlas. CUBIC is a hybrid 2D/3D method in which a 2D mean filter first detects low maximum intensity values at the center of the nuclei and a 3D filter then unifies multiply-detected cells. However, CUBIC is unable to provide morphological information such as the size and shape of nuclei, which might limit its application.
  • Nuclei segmentation using unsupervised learning. K-means clustering has been applied to obtain an initial segmentation in [28,29], followed by a set of post-processing steps to refine the segmentation results. Probability inference under the EM (expectation maximization) framework is also used in automated nuclei segmentation for various biomedical data [30,31,32]. In [33,34], spectral clustering is used to identify the boundary of a cell, where the spatial and color information are used to measure similarity between the neighboring points. Furthermore, Song et al. proposed an improved nuclei segmentation method using the adaptive hierarchical clustering algorithm [35]. To improve the robustness of nuclei segmentation results, Su et al. [36] proposed a semi-automatic segmentation method that uses super-pixels to determine the segmentation label for each super-pixel, instead of each point. These unsupervised approaches, which are globally optimized for all the points in the entire image, are often not robust against the presence of substantial intensity inhomogeneity.
  • Nuclei segmentation using supervised learning techniques. A coarse-to-fine nucleus segmentation framework was proposed in [37], where a multiscale convolutional network (MSCN) was integrated with a graph partition-based method for cytoplasm and nucleus segmentation in cervical images. Ronneberger et al. proposed the U-net architecture and applied it to a cell segmentation task in light microscopy images [38]. Attention U-net introduces attention and residual modules into the U-Net for the segmentation of live cells in microscopy images [39]. Cellpose [40] is a generalist, U-net-based algorithm for a wide range of cell images. Mask-RCNN uses object detection technologies to achieve instance segmentation of nuclei [41]. ClearMap 1.0 [2] is a widely used computational software package in the neuroscience community that provides functions such as cell detection, localization, and quantification of nuclei. ilastik [42], which uses random forest techniques [43], is the component of ClearMap used for nuclear segmentation.
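As an illustration of the classic marker-controlled watershed baseline discussed above, the following is a generic scikit-image sketch of our own; the threshold choice and the min_distance parameter are arbitrary illustrative settings, not those of [18,26].

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def watershed_baseline(image: np.ndarray) -> np.ndarray:
    """Generic marker-controlled watershed: Otsu mask, distance-transform
    peaks as nuclei markers, then watershed on the inverted distance map."""
    mask = image > threshold_otsu(image)          # global Otsu threshold
    distance = ndi.distance_transform_edt(mask)   # distance to background
    # Detected peaks act as markers to limit over-segmentation.
    peaks = peak_local_max(distance, min_distance=5, labels=mask)
    markers = np.zeros_like(image, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)
```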
For comparison, Table 1 shows the characteristics of the most representative methods and of our proposed method, named regionally adaptive network via cascading architecture (RANCA), in terms of the following: (1) ability to tackle touching nuclei in densely packed regions; (2) ability to quantify morphological information (such as nuclei size and shape) for each individual nucleus; (3) adaptation to local structure (local rather than global approach); and (4) no need for post-processing of touching nuclei. The symbols × and ✓ indicate the absence and presence of these capabilities, respectively.
Our RANCA method is designed for accurate nuclear segmentation in microscopy images. In a comprehensive evaluation on a variety of microscopy images, it achieves more accurate and robust segmentation results than currently available methods, indicating the promising potential of our proposed nuclei segmentation method for quantifying cell-level structural differences in tissue-cleared organ samples.

3. Methods

Inspired by the hierarchical information processing of the human visual cortex, an active-learning cascaded model is presented in this paper. There are two main components in our neural network architecture: (1) a cascading architecture with a cross-training scheme for nuclei segmentation that utilizes both low-level appearance features and high-level contextual features; and (2) fine-tuning networks that adaptively refine the segmentation results for nuclei well separated from their neighbors and for densely packed nuclei, respectively. We first present the cascading architecture with the cross-training scheme in Section 3.1, then present our adaptive fine-tuning strategy for nuclei in close proximity in Section 3.2, and finally summarize the complete nuclear segmentation method in Section 3.3.

3.1. Cascading Architecture with Cross-Training

Due to low image contrast, noise, and intensity inhomogeneity, low-level features derived from image intensity are often not sufficient to guide the training of neural networks, and other high-level features are necessary to improve segmentation in cases of poor image quality. The nuclei segmentation task is therefore broken down into several intermediate tasks, and networks are trained step by step. The output of the initial nuclei segmentation task is the probability of a nucleus at each pixel. Based on this tentative nuclei probability map, the network can extract patch-wise contextual features [44,45] that encode the spatial relationship between surrounding nuclei. Leveraging these high-level heuristics, we can enhance the reliability of the nuclei probability map by training another network using both appearance and contextual features on different training data, and then use the refined contextual features to initialize the next network, and so on, until the segmentation results converge. However, when a model generates incorrect predictions with high confidence, these incorrect predictions are passed on as input to train the next cascade network; it is difficult for the model itself to rectify its own false predictions, so the scheme becomes less accurate. To alleviate this problem, we use cross-training, which trains two streams of models (Model i∼a and Model i∼b) simultaneously, as shown in Figure 2. These two independently and concurrently trained models have the same structure but use different training data; details of the network architecture are given in the Supplementary Materials. The two models help each other to rectify false predictions. Specifically, Model i∼b is applied to training data a to produce a nuclei probability map P_i^b; P_i^b is then treated as another feature channel and combined with the image information I to jointly train Model i+1∼a. In Figure 2, the solid and dotted boxes represent the training and inference processes, respectively.
Each single model is the same as DeepCell [46]: it produces an initial nuclei segmentation and progressively improves it using recurrent connections. The output of the model is the probability that the underlying center point is a nucleus, nucleus edge, or background; the larger the probability value, the more likely the center of the input patch belongs to the corresponding category. In the cascading architecture, the input to the first model (Model 1∼a/b) consists only of image intensity patches. After applying the first model to all image points of the training samples, we obtain an initial probability map of nuclei that conveys high-level nuclei-to-nuclei information. The nuclei probability map produced by the previous model is treated as another feature channel, so, compared to a single model, the convolution in the first convolutional block becomes two-channel. The nuclei probability map P_i^{a/b} produced by the previous model is combined with the image information to check the current predictions and fix segmentation errors in the next network. The cascading architecture enhances the discriminative power of the model based on the improved nuclei probability map. Thus, by repeating the same cascading procedure, we progressively improve the segmentation result by leveraging increasingly discriminative contextual features derived from increasingly accurate nuclei probability maps.
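To make the scheme concrete, below is a minimal PyTorch sketch of our reading of the cross-training cascade in Figure 2. The tiny fully convolutional make_model stand-in, the toy data, and all names are illustrative assumptions; the actual per-patch network follows DeepCell [46] and is detailed in the Supplementary Materials, while the optimizer settings (SGD, learning rate 0.01, momentum 0.9) are those reported in Section 4.1.

```python
import torch
import torch.nn as nn

def make_model(in_channels: int) -> nn.Module:
    # Stand-in for the DeepCell-style classifier with 3 output classes
    # (nucleus, edge, background); not the authors' architecture.
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 1),
    )

def train(model, images, labels, extra=None, epochs=1):
    # Concatenate the probability map from the other stream, if available,
    # as an additional feature channel.
    x = images if extra is None else torch.cat([images, extra], dim=1)
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), labels)
        loss.backward()
        opt.step()

def predict_prob(model, images, extra=None):
    x = images if extra is None else torch.cat([images, extra], dim=1)
    with torch.no_grad():
        return torch.softmax(model(x), dim=1)[:, :1]  # nuclei channel

# Toy stand-ins for the two disjoint training splits a and b.
img_a, lab_a = torch.randn(8, 1, 64, 64), torch.randint(0, 3, (8, 64, 64))
img_b, lab_b = torch.randn(8, 1, 64, 64), torch.randint(0, 3, (8, 64, 64))

prob_a = prob_b = None          # maps over splits a/b from the other stream
for stage in range(3):          # three cascaded layers (Section 3.3)
    in_ch = 1 if stage == 0 else 2   # image (+ nuclei probability map)
    model_a, model_b = make_model(in_ch), make_model(in_ch)
    # Each stream trains on its own split, conditioned on the probability
    # map that the *other* stream produced for that split.
    train(model_a, img_a, lab_a, extra=prob_a)
    train(model_b, img_b, lab_b, extra=prob_b)
    # Cross over: stream b predicts on split a (-> P_i^b) and vice versa.
    prob_a, prob_b = (predict_prob(model_b, img_a, extra=prob_a),
                      predict_prob(model_a, img_b, extra=prob_b))
```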

3.2. Regionally Adaptive Fine-Tuning Strategy

The cascading architecture with the cross-training scheme presented in Section 3.1 does not consider the spatial distribution of nuclei. The density of nuclei is very high in some regions, such as the dentate gyrus of the hippocampus, where accurately segmenting densely packed nuclei becomes very difficult. Since boundaries of touching nuclei in a densely packed region often have relatively low contrast compared with well-separated nuclei, we treat these two nuclear segmentation scenarios differently in order to achieve active learning from selected samples. Specifically, we propose a regionally adaptive fine-tuning strategy that trains a specific model with the most relevant annotated training samples for identifying nuclei boundaries when nuclei are in close proximity and appear in the image as touching, as demonstrated in Figure 3. To do so, we need (1) a detection method to separate regions with unconnected nuclei from regions with touching nuclei, and (2) a specific model to identify boundaries of touching nuclei in densely packed regions.
(1) Detect touching nuclei. First, we apply the learned cascaded models to the existing training samples. Then, we binarize the nuclei probability map with a threshold (set to 0.5), obtaining an approximate mask of nuclei in the microscopy images. Next, we examine the shape of every connected component in the binary image to distinguish single nuclei from clumped nuclei. The ovality, concavity, and area of a connected region are used as the criteria to distinguish unconnected nuclei from touching nuclei (a code sketch of this step follows below).
  • Ovality is measured by the ratio of the major axis to the minor axis of the fitted ellipse (red ellipse in Figure 3a). For most individual nuclei, the contour is close to a circle, so we flag a region as a suspected nuclei clump if the ratio is greater than a threshold (here we use 3 for our image data).
  • Concavity is measured by the difference in size between convex hulls (red curve in Figure 3b) and connected regions (green curve in Figure 3b). If the concavity degree is greater than a certain threshold (here we use 30 for our image data), it is highly possible that the underlying region is a clump of nuclei.
  • Area of the connected region. As shown in Figure 3c, a region containing densely packed nuclei can be detected directly when the area of the connected region is larger than that of a common nucleus (here we use a threshold of 500 pixels for our image data).
An example of the detected touching nuclei is shown in Figure 4, where the regions with touching nuclei are marked in blue.
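A compact sketch of this detection step is given below, using scikit-image’s connected-component tools and the thresholds quoted above (ovality > 3, concavity > 30, area > 500 pixels); the function and variable names are our own illustration rather than the authors’ code.

```python
import numpy as np
from skimage.measure import label, regionprops

def find_touching_regions(prob_map: np.ndarray,
                          ovality_thr: float = 3.0,
                          concavity_thr: float = 30.0,
                          area_thr: int = 500):
    """Label connected components of the binarized nuclei probability map
    and flag those suspected to contain multiple touching nuclei."""
    labeled = label(prob_map > 0.5)        # binarize at 0.5, then label
    touching = []
    for region in regionprops(labeled):
        # Ovality: major-to-minor axis ratio of the fitted ellipse.
        ovality = region.major_axis_length / max(region.minor_axis_length, 1e-6)
        # Concavity: size difference between the convex hull and the region.
        concavity = region.convex_area - region.area
        # Union of the three criteria (cf. Section 4.2.2).
        if (ovality > ovality_thr or concavity > concavity_thr
                or region.area > area_thr):
            touching.append(region.label)
    return labeled, touching
```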
(2) Active learning from selected samples. Given a separated probability map and carefully delineated manual annotations of each nucleus, the network training procedure is identical to that of the previous networks. This specific model for touching nuclei focuses on the blurry boundaries between nuclei in close proximity. The advantage of adaptively treating single nuclei and multiple touching nuclei is shown in Figure 5b,c. As the intensity profile along the blue line (shown on the right) indicates, it is much easier to separate two touching cells (designated by red arrows) after applying the specific models (Figure 5c) than when only using the cascaded models (Figure 5b). It is possible that oblong endothelial nuclei could be mistakenly detected as touching nuclei because they match the first criterion in the detection module. However, it is unlikely that our specific models for densely packed nuclei would split a single oblong nucleus, since the intensity profile inside a single oblong nucleus (relatively bright in the middle) differs markedly from the appearance pattern of touching nuclei (relatively dark in the middle).

3.3. Complete Workflow of Regionally Adaptive Network via Cascading Architecture (RANCA)

Considering the computational cost, we cascade three layers of models for segmenting nuclei from the background. In our study, the fine-tuning scheme uses one specific model for well-separated nuclei and three specific models for densely packed nuclei. To segment nuclei in a tissue-cleared microscopy image, the following steps are performed sequentially.
1. Apply the cascaded models at each point of a microscopy image patch, where the output of each layer is the average of the outputs of the two concurrently trained models. The input is the intensity image patch (Figure 6a), with intensities normalized by the mean pixel value. The output is the probability of the center point being background, edge, or nucleus; from these, we obtain probability maps for background, nuclei, and nuclei edges, as shown in Figure 6b.
2. Apply the nuclei clump detection approach to partition the whole image into regions with multiple touching nuclei and regions with unconnected nuclei, respectively.
3. Apply the specific models to the separated probability maps of touching and single nuclei, respectively. The outputs are shown in Figure 6c and Figure 6d.
4. Combine (by adding) the segmentation results to obtain the final segmentation, as shown in Figure 6e.
The final segmentation includes nuclei boundaries and masks. Hereafter, we use CAC to denote the nuclear segmentation method using only the cascading architecture with cross-training, and RANCA to denote our complete nuclear segmentation method using the cascading architecture with cross-training together with our regionally adaptive fine-tuning strategy.
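At inference time, the four steps above can be summarized in a short orchestration sketch. Here run_cascade, detect_clumps, and the model arguments are hypothetical stand-ins for the components of Sections 3.1 and 3.2, and averaging the three dense-region models is our own assumption, since the text does not specify how their outputs are combined.

```python
import numpy as np

def ranca_segment(image, run_cascade, detect_clumps,
                  single_model, touching_models):
    image = image / image.mean()          # normalize by mean intensity
    prob = run_cascade(image)             # step 1: averaged cascade output
    touching = detect_clumps(prob)        # step 2: partition the image
    # Step 3: specific models on the separated probability maps.
    seg_single = single_model(np.where(touching, 0.0, prob))
    seg_touching = np.mean([m(np.where(touching, prob, 0.0))
                            for m in touching_models], axis=0)
    return seg_single + seg_touching      # step 4: combine by adding
```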

4. Experiments and Results Analysis

4.1. Experimental Setup

The data we used are mainly from image set BBBC038 of the Broad Bioimage Benchmark Collection [47]; a sample image is shown in Figure 1. We selected 42 patches from the dataset and manually labeled more than 9000 nuclei in 2D images. In the training stage, we selected training samples from 16 manually annotated patches (the remainder are for testing). The training data are augmented by rotation and reflection of the original image patches to a total of over 156,000 training patches (71 × 71 pixels), each with a known label (nucleus, edge, or background) at the patch center. The purpose of augmenting the training data by rotation and reflection is to obtain features that are robust to various nuclear morphologies. Max-pooling with a 2 × 2 window is used to combine the activation vectors from the convolutional filters. All model weights were learned using stochastic gradient descent; the learning rate is initialized to 0.01 and decayed by 0.005%, and the momentum is set to 0.9.
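The rotation-and-reflection augmentation amounts to enumerating the eight dihedral variants of each patch. A small sketch of our own follows; the 71 × 71 patch size comes from the text, and the class label at the patch center is unchanged by these transforms.

```python
import numpy as np

def augment_patch(patch: np.ndarray):
    """Yield the 8 dihedral variants (4 rotations x optional mirror) of a
    training patch; the class label at the patch center is preserved."""
    for k in range(4):
        rotated = np.rot90(patch, k)
        yield rotated
        yield np.fliplr(rotated)

# e.g., one 71x71 patch expands into 8 training patches
variants = list(augment_patch(np.zeros((71, 71))))
assert len(variants) == 8
```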
Besides visual inspection against the manual annotations, we further calculate the degree of spatial overlap between the manual annotations and the automatic nuclei segmentation results, using the Dice coefficient, the Jaccard coefficient, the Dice false positive (Dice FP), and the Dice false negative (Dice FN), as well as nuclei counting. The overlap degree is measured by the Dice and Jaccard coefficients, which are widely used to evaluate segmentation accuracy and are defined as:

$$\mathrm{Dice}(R, S) = \frac{2\,|R \cap S|}{|R| + |S|}$$

$$\mathrm{Jaccard}(R, S) = \frac{|R \cap S|}{|R \cup S|}$$

where R and S are the manual segmentation and the binary segmentation result of an automatic nuclei segmentation method, respectively. In addition, the Dice FP and Dice FN are used to evaluate the specificity of segmentation performance: Dice FP quantifies the error ratio of mis-segmenting background as nuclei, and Dice FN quantifies the error ratio of missed nuclei. They are defined as:

$$\mathrm{Dice}_{\mathrm{FP}}(R, S) = \frac{2\,|\bar{R} \cap S|}{|R| + |S|}$$

$$\mathrm{Dice}_{\mathrm{FN}}(R, S) = \frac{2\,|R \cap \bar{S}|}{|R| + |S|}$$

where $\bar{R}$ and $\bar{S}$ denote the complements of R and S, respectively.
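These four measures translate directly into NumPy; the following is a straightforward transcription of the definitions above, assuming R and S are boolean masks.

```python
import numpy as np

def dice(R, S):
    return 2 * np.logical_and(R, S).sum() / (R.sum() + S.sum())

def jaccard(R, S):
    return np.logical_and(R, S).sum() / np.logical_or(R, S).sum()

def dice_fp(R, S):
    # Background pixels mis-segmented as nuclei.
    return 2 * np.logical_and(~R, S).sum() / (R.sum() + S.sum())

def dice_fn(R, S):
    # Nuclei pixels missed by the automatic segmentation.
    return 2 * np.logical_and(R, ~S).sum() / (R.sum() + S.sum())
```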

4.2. Evaluation of Segmentation Performance

4.2.1. Comparison with Other Methods

We evaluate the segmentation results on various nuclei segmentation challenges by comparing with other methods. Besides local Otsu’s method with a region-adaptive threshold, the counterpart methods include (1) ilastik [42], (2) U-net [38], (3) Attention U-net [39], (4) Mask-RCNN [41], and (5) Cellpose [40], which are compared against (6) our complete nuclear segmentation method, RANCA.
For analyzing the effectiveness and robustness of methods, we evaluated the segmentation results for representative nuclei segmentation challenges. In the top of Figure 7, we show five representative segmentation challenge cases, including (a) low image contrast, (b) intensity inhomogeneity, (c) touching nuclei, (d) intensity inhomogeneity and low image contrast, and (e) touching nuclei with intensity inhomogeneity. The corresponding segmentation results by RANCA are shown in the bottom of Figure 7, where red contours denote automatically predicted nuclei boundaries.
Table 2 shows the statistical mean of these performance measures for six different segmentation approaches evaluated on 938 nuclei. Our RANCA method shows a significant improvement in overall segmentation accuracy, achieving the highest Dice and Jaccard scores. It is noteworthy that RANCA balances false positives and false negatives: its Dice FP is much lower than that of the U-net variants, while its Dice FN is much lower than that of ilastik, Mask-RCNN, and Cellpose, indicating consistently accurate nuclei detection together with improved recall.

4.2.2. Ablation Study

This section illustrates the advantage of our complete method over the partial method for the densely packed regions. Figure 8a shows the segmentation result if we stop the cascaded models right before the clump identification component. Since there is no fine-tuning step designed for the touching nuclei, the segmentation result is reasonable for single nuclei but does not work well for touching nuclei. Figure 8b shows the segmentation result from cascaded models without the nuclei clump identification scheme. The nuclei clump identification scheme is for regionally adaptive active learning. Without the scheme distinguishing single-nuclei and nuclei-clump regions, touching nuclei are not handled well (e.g., indicated by the orange and blue arrows), despite cascading more models.
We have seen that performance on dense cells depends significantly on the receptive field size: networks with a matched receptive field perform better than networks with unmatched ones (either bigger or smaller). We therefore examine segmentation performance with respect to the receptive field size for our proposed RANCA method and the partial method CAC. Specifically, we set the receptive field to a square region with side lengths of 30, 40, 50, 60, 70, and 80 pixels, respectively. For each setting, we trained the networks on the same training data and then evaluated the nuclear segmentation results in terms of the overlap ratio between the manually labeled nuclei and the automatic segmentation results. We repeated the same procedure for the two networks (RANCA and CAC) on 837 nuclei, with the Dice ratios reported in Table 3. RANCA consistently outperforms CAC across receptive field sizes from 30 to 80 pixels and achieves its best performance at 70 pixels on our data. In addition, we found that the significant differences come from very dense nuclei clumps and noisy areas, such as the regions displayed in the yellow box in Figure 9. Networks with a matched receptive field perform better in regions of close proximity and image noise.
For the parameter settings of the nuclei clump detection component, we inspect the three selection criteria (ovality, concavity, and size) separately, evaluating one while fixing the other two. By adjusting the parameters (detailed in Section 3.2; the identification result is the union of the three criteria), we plot the Dice ratio against the strictness of each touching-nuclei criterion. As shown in Figure 10, with the other two parameters fixed, the segmentation performance is insensitive to the parameter selection of the nuclei clump identification scheme. There are two reasons for this: (1) when the other two parameters are fixed, only a few nuclei are affected by varying one parameter; and (2) performance is determined not only by nuclei clump identification but also by the specific models that follow. Only in extreme cases does the Dice value decrease sharply: for example, when the ovality threshold (the ratio of the major to the minor axis of the fitted ellipse) is set to 1, all nuclei are identified as clumped nuclei; the cases of a concavity threshold of 0 and a size threshold of 0 are similar. Overall, the segmentation performance is robust to the parameter selection of the nuclei clump identification scheme, deteriorating only in extreme cases.

4.3. Nuclear Counting Application

Recall that, after voting across the probability maps, we classify each pixel as nucleus (as shown in Figure 11a), nuclei edge (as shown in Figure 11b), or background. The algorithm can then count nuclei in two steps: (1) Nuclei trimming procedure. We first subtract the edge probability from the nuclei probability map, which separates nuclei in cases where the underlying nuclei are connected. (2) Counting procedure. Counting the number of nuclei then reduces to counting connected regions (eliminating very small regions, e.g., those with fewer than 10 pixels) in the nuclear segmentation results. The result of nuclei counting is shown in Figure 11c, where every connected area is labeled with a unique index. In total, 256 nuclei were detected by our method, which is very close to the ground truth of 258 nuclei.
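The two-step counting procedure can be sketched with scikit-image’s connected-component tools. The names below are our own, and the 0.5 binarization threshold on the trimmed map is an assumption; the text specifies only the minimum region size of 10 pixels.

```python
import numpy as np
from skimage.measure import label, regionprops

def count_nuclei(nuclei_prob, edge_prob, min_area=10, thr=0.5):
    # Step 1 (trimming): subtracting the edge probability disconnects
    # touching nuclei in the trimmed map.
    trimmed = np.clip(nuclei_prob - edge_prob, 0.0, 1.0)
    # Step 2 (counting): count connected regions, eliminating very small
    # ones (fewer than `min_area` pixels).
    labeled = label(trimmed > thr)
    return sum(1 for r in regionprops(labeled) if r.area >= min_area)
```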
The quantitative evaluation of nuclei counting is shown in Figure 12. Specifically, the nuclei localization results for four example images are shown in Figure 12a, where we label the center of each nucleus with a green dot. The white numbers in Figure 12b denote the nuclei indices, where the largest index is the total number of nuclei in the cropped region. To inspect the accuracy of nuclei detection, the detected edges (red circles) and the ground truth (green dots) are overlaid. Furthermore, we repeat the same nuclei counting method on the nuclei segmentation results obtained by DeepCell and CAC; by including CAC (our method without the adaptive fine-tuning strategy), we specifically evaluate the advantage of our fine-tuning strategy in separating nuclei clumps. The total numbers of detected nuclei by these counting methods are displayed in Figure 12c. As the density of nuclei increases, the counting performance of DeepCell, CAC, and CUBIC becomes less accurate. It is worth noting that the nuclei counting results here are consistent with the segmentation results shown in the previous section, since DeepCell, CAC, and ClearMap have limited capability to separate nuclei in densely packed regions. By contrast, the nuclei counting by RANCA is much more accurate than the other methods, even at high nuclei density.

5. Conclusions

In this paper, we propose an active-learning-based nuclei segmentation method for microscopy images. We formulate nuclei identification as learning a non-linear mapping between local image appearance and the probability of a nucleus at the center of the underlying patch. To address low image contrast and strong intensity inhomogeneity, we developed a cascaded learning network that utilizes both low-level image appearance and high-level contextual features to segment nuclei from microscopy images. The complex learning task is broken down into several intermediate ones so that the networks can be trained step by step, progressively improving the nuclei segmentation results in a cascaded manner. We further developed an adaptive fine-tuning strategy to actively learn the most challenging situation, where multiple nuclei are in close proximity with blurry boundaries. We evaluated the segmentation results on nuclei images and compared them with other methods, achieving more accurate and robust segmentation results and indicating the promising potential of our proposed nuclei segmentation method for use in large microscopy images.
Future work includes (1) improving segmentation speed by screening out background points so that the cascaded model is applied only to difficult-to-segment image regions; (2) improving accuracy by extending the current method into a multi-resolution framework; and (3) further optimizing the current implementation to fully utilize both CPU and GPU computational resources.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/electronics13173430/s1.

Author Contributions

Conceptualization, Q.W.; methodology, Q.W. and J.W.; software, J.W.; validation, B.Q.; writing—original draft, Q.W. and J.W.; writing—review and editing, J.W. and B.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shaanxi International Science and Technology Cooperation Program (2021KW-06) and the National Natural Science Foundation of China (41874173).

Data Availability Statement

All data are available from the authors.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Fürth, D.; Vaissière, T.; Tzortzi, O.; Xuan, Y.; Märtin, A.; Lazaridis, I.; Spigolon, G.; Fisone, G.; Tomer, R.; Deisseroth, K.; et al. An interactive framework for whole-brain maps at cellular resolution. Nat. Neurosci. 2018, 21, 139–149.
  2. Renier, N.; Adams, E.L.; Kirst, C.; Wu, Z.; Azevedo, R.; Kohl, J.; Autry, A.E.; Kadiri, L.; Venkataraju, K.U.; Zhou, Y.; et al. Mapping of brain activity by automated volume analysis of immediate early genes. Cell 2016, 165, 1789–1802.
  3. Murakami, T.C.; Mano, T.; Saikawa, S.; Horiguchi, S.A.; Shigeta, D.; Baba, K.; Sekiya, H.; Shimizu, Y.; Tanaka, K.F.; Kiyonari, H.; et al. A three-dimensional single-cell-resolution whole-brain atlas using CUBIC-X expansion microscopy and tissue clearing. Nat. Neurosci. 2018, 21, 625–637.
  4. Iritani, S. Neuropathology of schizophrenia: A mini review. Neuropathology 2007, 27, 604–608.
  5. Meijering, E. Cell segmentation: 50 years down the road [life sciences]. IEEE Signal Process. Mag. 2012, 29, 140–145.
  6. Bai, M.; Urtasun, R. Deep watershed transform for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5221–5229.
  7. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
  8. Januszewski, M.; Kornfeld, J.; Li, P.H.; Pope, A.; Blakely, T.; Lindsey, L.; Maitin-Shepard, J.; Tyka, M.; Denk, W.; Jain, V. High-precision automated reconstruction of neurons with flood-filling networks. Nat. Methods 2018, 15, 605–610.
  9. Hollandi, R.; Moshkov, N.; Paavolainen, L.; Tasnadi, E.; Piccinini, F.; Horvath, P. Nucleus segmentation: Towards automated solutions. Trends Cell Biol. 2022, 32, 295–310.
  10. Callau, C.; Lejeune, M.; Korzynska, A.; García, M.; Bueno, G.; Bosch, R.; Jaén, J.; Orero, G.; Salvadó, T.; López, C. Evaluation of cytokeratin-19 in breast cancer tissue samples: A comparison of automatic and manual evaluations of scanned tissue microarray cylinders. BioMedical Eng. OnLine 2015, 14, 1–11.
  11. Kong, J.; Wang, F.; Teodoro, G.; Liang, Y.; Zhu, Y.; Tucker-Burden, C.; Brat, D.J. Automated cell segmentation with 3D fluorescence microscopy images. In Proceedings of the 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), Brooklyn, NY, USA, 16–19 April 2015; pp. 1212–1215.
  12. Forsberg, D.; Monsef, N. Evaluating cell nuclei segmentation for use on whole-slide images in lung cytology. In Proceedings of the 2014 22nd International Conference on Pattern Recognition, Stockholm, Sweden, 24–28 August 2014; pp. 3380–3385.
  13. Wienert, S.; Heim, D.; Saeger, K.; Stenzinger, A.; Beil, M.; Hufnagl, P.; Dietel, M.; Denkert, C.; Klauschen, F. Detection and segmentation of cell nuclei in virtual microscopy images: A minimum-model approach. Sci. Rep. 2012, 2, 503.
  14. Su, H.; Xing, F.; Lee, J.D.; Peterson, C.A.; Yang, L. Automatic myonuclear detection in isolated single muscle fibers using robust ellipse fitting and sparse representation. IEEE/ACM Trans. Comput. Biol. Bioinform. 2013, 11, 714–726.
  15. Schmitt, O.; Hasse, M. Morphological multiscale decomposition of connected regions with emphasis on cell clusters. Comput. Vis. Image Underst. 2009, 113, 188–201.
  16. Wang, Q.; Niemi, J.; Tan, C.M.; You, L.; West, M. Image segmentation and dynamic lineage analysis in single-cell fluorescence microscopy. Cytom. Part A J. Int. Soc. Adv. Cytom. 2010, 77, 101–110.
  17. Dorini, L.B.; Minetto, R.; Leite, N.J. Semiautomatic white blood cell segmentation based on multiscale analysis. IEEE J. Biomed. Health Inform. 2012, 17, 250–256.
  18. Koyuncu, C.F.; Akhan, E.; Ersahin, T.; Cetin-Atalay, R.; Gunduz-Demir, C. Iterative h-minima-based marker-controlled watershed for cell nucleus segmentation. Cytom. Part A 2016, 89, 338–349.
  19. Qu, L.; Long, F.; Liu, X.; Kim, S.; Myers, E.; Peng, H. Simultaneous recognition and segmentation of cells: Application in C. elegans. Bioinformatics 2011, 27, 2895–2902.
  20. Liu, F.; Xing, F.; Zhang, Z.; Mcgough, M.; Yang, L. Robust muscle cell quantification using structured edge detection and hierarchical segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Cham, Switzerland, 2015; pp. 324–331.
  21. Santamaria-Pang, A.; Huang, Y.; Rittscher, J. Cell segmentation and classification via unsupervised shape ranking. In Proceedings of the 2013 IEEE 10th International Symposium on Biomedical Imaging, San Francisco, CA, USA, 7–11 April 2013; pp. 406–409.
  22. Delgado-Gonzalo, R.; Uhlmann, V.; Schmitter, D.; Unser, M. Snakes on a plane: A perfect snap for bioimage analysis. IEEE Signal Process. Mag. 2014, 32, 41–48.
  23. Dufour, A.; Thibeaux, R.; Labruyere, E.; Guillen, N.; Olivo-Marin, J.C. 3-D active meshes: Fast discrete deformable models for cell tracking in 3-D time-lapse microscopy. IEEE Trans. Image Process. 2010, 20, 1925–1937.
  24. Baxi, A. A review on Otsu image segmentation algorithm. Int. J. Adv. Res. Comput. Eng. Technol. (IJARCET) 2013, 2, 387–389.
  25. Ghamisi, P.; Couceiro, M.S.; Martins, F.M.; Benediktsson, J.A. Multilevel image segmentation based on fractional-order Darwinian particle swarm optimization. IEEE Trans. Geosci. Remote Sens. 2013, 52, 2382–2394.
  26. Yang, L.; Qiu, Z.; Greenaway, A.H.; Lu, W. A new framework for particle detection in low-SNR fluorescence live-cell images and its application for improved particle tracking. IEEE Trans. Biomed. Eng. 2012, 59, 2040–2050.
  27. Xing, F.; Yang, L. Robust nucleus/cell detection and segmentation in digital pathology and microscopy images: A comprehensive review. IEEE Rev. Biomed. Eng. 2016, 9, 234–263.
  28. Chen, X.; Zhu, Y.; Li, F.; Zheng, Z.Y.; Chang, E.C.; Ma, J.; Wong, S.T. Accurate segmentation of touching cells in multi-channel microscopy images with geodesic distance based clustering. Neurocomputing 2015, 149, 39–47.
  29. Kothari, S.; Chaudry, Q.; Wang, M.D. Automated cell counting and cluster segmentation using concavity detection and ellipse fitting techniques. In Proceedings of the 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Boston, MA, USA, 28 June–1 July 2009; pp. 795–798.
  30. Fatakdawala, H.; Xu, J.; Basavanhally, A.; Bhanot, G.; Ganesan, S.; Feldman, M.; Tomaszewski, J.E.; Madabhushi, A. Expectation–maximization-driven geodesic active contour with overlap resolution (EMaGACOR): Application to lymphocyte segmentation on breast cancer histopathology. IEEE Trans. Biomed. Eng. 2010, 57, 1676–1689.
  31. Irshad, H.; Veillard, A.; Roux, L.; Racoceanu, D. Methods for nuclei detection, segmentation, and classification in digital histopathology: A review—Current status and future potential. IEEE Rev. Biomed. Eng. 2013, 7, 97–114.
  32. Jung, C.; Kim, C.; Chae, S.W.; Oh, S. Unsupervised segmentation of overlapped nuclei using Bayesian classification. IEEE Trans. Biomed. Eng. 2010, 57, 2825–2832.
  33. Kovacheva, V.N.; Khan, A.M.; Khan, M.; Epstein, D.B.; Rajpoot, N.M. DiSWOP: A novel measure for cell-level protein network analysis in localized proteomics image data. Bioinformatics 2014, 30, 420–427.
  34. Ge, J.; Gong, Z.; Chen, J.; Liu, J.; Nguyen, J.; Yang, Z.; Wang, C.; Sun, Y. A system for counting fetal and maternal red blood cells. IEEE Trans. Biomed. Eng. 2014, 61, 2823–2829.
  35. Song, Y.; Tan, E.L.; Jiang, X.; Cheng, J.Z.; Ni, D.; Chen, S.; Lei, B.; Wang, T. Accurate cervical cell segmentation from overlapping clumps in pap smear images. IEEE Trans. Med. Imaging 2016, 36, 288–300.
  36. Su, H.; Yin, Z.; Huh, S.; Kanade, T.; Zhu, J. Interactive cell segmentation based on active and semi-supervised learning. IEEE Trans. Med. Imaging 2015, 35, 762–777.
  37. Song, Y.; Zhang, L.; Chen, S.; Ni, D.; Lei, B.; Wang, T. Accurate segmentation of cervical cytoplasm and nuclei based on multiscale convolutional network and graph partitioning. IEEE Trans. Biomed. Eng. 2015, 62, 2421–2433.
  38. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Cham, Switzerland, 2015; pp. 234–241.
  39. Ghaznavi, A.; Rychtáriková, R.; Saberioon, M.; Štys, D. Cell segmentation from telecentric bright-field transmitted light microscopy images using a Residual Attention U-Net: A case study on HeLa line. Comput. Biol. Med. 2022, 147, 105805.
  40. Stringer, C.; Wang, T.; Michaelos, M.; Pachitariu, M. Cellpose: A generalist algorithm for cellular segmentation. Nat. Methods 2021, 18, 100–106.
  41. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969.
  42. Berg, S.; Kutra, D.; Kroeger, T.; Straehle, C.N.; Kausler, B.X.; Haubold, C.; Schiegg, M.; Ales, J.; Beier, T.; Rudy, M.; et al. ilastik: Interactive machine learning for (bio)image analysis. Nat. Methods 2019, 16, 1226–1232.
  43. Ho, T.K. The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 832–844.
  44. Kim, M.; Wu, G.; Guo, Y.; Shen, D. Joint labeling of multiple regions of interest (ROIs) by enhanced auto context models. In Proceedings of the 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), Brooklyn, NY, USA, 16–19 April 2015; pp. 1560–1563.
  45. Tu, Z.; Bai, X. Auto-context and its application to high-level vision tasks and 3D brain image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 32, 1744–1757.
  46. Van Valen, D.A.; Kudo, T.; Lane, K.M.; Macklin, D.N.; Quach, N.T.; DeFelice, M.M.; Maayan, I.; Tanouchi, Y.; Ashley, E.A.; Covert, M.W. Deep learning automates the quantitative analysis of individual cells in live-cell imaging experiments. PLoS Comput. Biol. 2016, 12, e1005177.
  47. Caicedo, J.C.; Goodman, A.; Karhohs, K.W.; Cimini, B.A.; Ackerman, J.; Haghighi, M.; Heng, C.; Becker, T.; Doan, M.; McQuin, C.; et al. Nucleus segmentation across imaging experiments: The 2018 Data Science Bowl. Nat. Methods 2019, 16, 1247–1253.
Figure 1. The challenges in segmenting nuclei from microscopy images, such as large dimensionality, low image contrast, inhomogeneous intensity, and densely packed nuclei.
Figure 2. The workflow of our proposed method, which can progressively improve the segmentation results by using more and more distinguishable high-level contextual features from a more and more accurate nuclei probability map.
Figure 3. Three cases of nuclei clumps.
Figure 4. Separated regions of single and touching nuclei. (a) The detected touching nuclei are labeled in blue. The zoomed-in views of the touching nuclei in the bounding boxes are shown in (b) and (c), respectively.
Figure 5. Scheme of the regionally adaptive fine-tuning strategy to address densely packed nuclei. (a) is the original image, (b) is the result from the cascaded models, and (c) is the result from the specific models. The intensity profiles along the blue line are shown on the right. As the red arrows indicate, the nuclei are gradually separated from (a) to (c).
Figure 6. Complete workflow of segmenting nuclei by our proposed method. The input is the intensity image patch (a). The probability maps of nuclei and nuclei edges after the cascading architecture are shown in (b). Next, we partition the whole image into regions with nuclei clumps and regions with single nuclei, and apply the specific models to the separated probability maps of touching and single nuclei, respectively, obtaining the segmentation results in (c,d). These results are combined to obtain the final segmentation, as shown in (e).
Figure 7. The nuclear segmentation results of RANCA (bottom row) for five representative segmentation challenges (raw images in the top row), from left to right: (a) low image contrast, (b) intensity inhomogeneity, (c) touching nuclei, (d) intensity inhomogeneity and low contrast, and (e) intensity inhomogeneity with touching nuclei. Especially in the nuclei clump region annotated by the yellow box, our method achieves better performance.
Figure 8. Performance comparison among (a) cascaded models without the fine-tuning scheme, (b) cascaded models without the nuclei clump identification scheme, and (c) RANCA (with the adaptive fine-tuning scheme). The zoomed-in views of (a–c) are displayed in (d–f), where the significant differences are labeled by the orange and blue arrows.
Figure 9. Segmentation results (edges) of nuclei in close proximity using RANCA with receptive field sizes from (a) 30 pixels to (f) 80 pixels. Performance differences are most visible where cells are densely packed (blue box) and where image noise is present (red box).
Figure 10. Effects of parameter selection in the nuclei clump identification scheme. The variation of segmentation performance with each parameter is shown as a blue line: (a) ovality (ratio of the major to the minor axis of the fitted ellipse), (b) concavity (ratio of size between the convex hull and the connected region), and (c) very large size (the area of the connected region).
Figure 11. Nuclei counting based on segmentation results: (a) a nuclei probability map and (b) edge probability map. The trimmed nuclei image is obtained by subtracting the segmented edge image from the nuclei probability image; (c) shows the result of nuclei counting in the hippocampus, where every connected area is labeled with a unique index.
Figure 12. The quantitative evaluation of nuclei counting. (a) Original image (light-sheet image) manually labeled with the center of nuclei (green dots); (b) Results of counting nuclei based on segmentation results by RANCA. Yellow boxes denote the incorrectly segmented cases; (c) Comparative results of nuclei count by ground truth (light blue), DeepCell (orange), CAC (gray), ilastik (yellow), CUBIC (dark blue), and our RANCA (green).
Table 1. Review of all nuclear segmentation methods under comparison.

Method | Touching | Morphology | Adaptation | w/o Post-Processing
ilastik ×
U-net ×
Attention U-net ×
Mask-RCNN
Cellpose ×
RANCA
Table 2. The segmentation results on nuclei images. The segmentation accuracy is compared with other nuclei segmentation methods in terms of Dice ratio, Dice false positive (Dice FP), Dice false negative (Dice FN), and Jaccard coefficient. The best performance is labeled in bold.

Method | Dice (%) | Dice FP (%) | Dice FN (%) | Jaccard (%)
ilastik | 90.24 | 8.72 | 10.80 | 82.22
U-net | 82.50 | 35.00 | 0.02 | 70.21
Attention U-net | 83.14 | 33.70 | 0.02 | 77.12
Mask-RCNN | 90.22 | 1.69 | 17.86 | 82.19
Cellpose | 90.69 | 2.64 | 16.14 | 82.31
RANCA | 92.16 | 9.47 | 6.21 | 85.46
Table 3. The segmentation accuracy (Dice, %) with respect to the receptive field size. Components: CA = cascading architecture with cross-training; RA = regionally adaptive fine-tuning. The ✓ indicates that the component is used, and the best performance is labeled in bold.

Method | CA | RA | Size = 30 | Size = 40 | Size = 50 | Size = 60 | Size = 70 | Size = 80
CAC | ✓ | | 81.65 ± 1.76 | 83.35 ± 3.26 | 85.79 ± 4.86 | 85.98 ± 2.10 | 88.04 ± 2.44 | 86.56 ± 3.01
RANCA | ✓ | ✓ | 82.85 ± 2.44 | 85.73 ± 1.73 | 86.14 ± 1.70 | 86.08 ± 1.77 | 89.24 ± 1.85 | 87.92 ± 1.09

