Communication

A Novel Method for Effective Cell Segmentation and Tracking in Phase Contrast Microscopic Images

1 Department of Biomedical Engineering, College of Software and Digital Healthcare Convergence, Yonsei University, Wonju 26493, Korea
2 Department of Biomedical Laboratory Science, College of Software and Digital Healthcare Convergence, Yonsei University, Wonju 26493, Korea
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2021, 21(10), 3516; https://doi.org/10.3390/s21103516
Submission received: 31 March 2021 / Revised: 12 May 2021 / Accepted: 14 May 2021 / Published: 18 May 2021
(This article belongs to the Section Intelligent Sensors)

Abstract

Cell migration plays an important role in the identification of various diseases and physiological phenomena in living organisms, such as cancer metastasis, nerve development, immune function, wound healing, and embryo formation and development. Studying cell migration with a live-cell microscope generally takes several hours and involves analyzing the movement characteristics by tracking the positions of cells at each time interval in the captured images. Morphological analysis considers the shapes of the cells, and a phase contrast microscope is used to observe those shapes clearly. Therefore, we developed a segmentation and tracking method to perform a kinetic analysis that considers the morphological transformation of cells. The main features of the algorithm are noise reduction using a block-matching 3D filtering method, k-means clustering to mitigate the halo signal that interferes with cell segmentation, and the detection of cell boundaries via active contours, which are highly effective for boundary detection. The reliability of the algorithm was verified through comparison with manual tracking results. In addition, the segmentation results of our method were compared with those of unsupervised state-of-the-art methods to verify the proposed segmentation process. As a result, the proposed method yielded errors approximately 40% lower than the conventional active contour method.

1. Introduction

Cellular dynamics are important to many biological processes that directly affect human health [1]. Cell migration is a basic cell function that is fundamental to cell life and plays a central role in various diseases [2] and physiological phenomena, such as cancer metastasis, nerve development, immune function, wound healing, and embryo formation [3]. Thus, analysis of the migration characteristics and motility of cells is essential for physiological and pathological research [4,5,6]. Analysis of the migration characteristics of metastatic cancer cells is essential for detecting the progression of cancer, and as a basic step, the individual motility of each cell is being analyzed [7]. In the case of immune cells, studies are being conducted on the effectiveness of antigen targeting and on the individual motility of cells as a way to enhance that effectiveness [8]. Eukaryotic cells have different modes and properties of movement, and although they are affected by different types of extracellular matrices, the protrusions the cells use to move, such as pseudopodia, characterize their basic motility [9]. The migration characteristics of cells are generally studied through probabilistic model analyses, such as the persistent random walk and the Lévy walk, by tracking the positions of cells at each interval in images captured over several hours with a live-cell microscope [7,10]. Although cell tracking is an effective method for the quantitative analysis of cell migration characteristics, it requires considerable effort and cost. Therefore, many studies have been conducted on automatic cell tracking methods [11,12,13].
Recently, research has been conducted on the analysis of migration characteristics based on the morphological transformation of cells [14]. When cells migrate, they move using pseudopodia and lamellipodia, so their shapes change in various ways, and studies are being conducted to predict motility through probabilistic analysis of the direction or shape in which these protrusions extend [7,15]. An important goal of such work is to modify the dynamic properties of cells, for example, by suppressing the motility of parasites. In addition, morphological changes in cells play an important role in the phagocytosis of antigens by immune cells and in the invasion of host cells by parasites at the “immunological synapse” [16]. To proceed with these studies, it is necessary to observe the shape of a cell precisely. These analyses also depend on accurate information about cell positions, requiring computational shape segmentation and tracking methods. Moreover, manual segmentation or tracking is avoided because these studies analyze a large number of cell images and require a high level of precision. Since manual segmentation is subjective, errors can occur, especially when analyzing cell movement characteristics at short time intervals, where they become relatively large.
To perform a kinetic analysis considering the morphological transformation of cells, all pseudopodia, podosomes, and similar structures must be detected to determine the shape of the entire cell. A microscope is needed to observe these structures, and phase contrast microscopy can visualize particular structures (such as filaments) [17]; thin, transparent areas, such as pseudopodia, can be observed more clearly [18]. Therefore, phase contrast microscopic images are utilized in many cell migration studies because they are easy to analyze [19].
However, in phase contrast microscopy, the favorable high contrast at the cell boundary also causes problems, such as halo patterns, which can complicate cell segmentation [20]. While segmentation techniques are becoming increasingly common in fluorescence microscopy, fewer accurate and robust methods have been developed for segmenting phase contrast images. Several methods have been developed for detecting, counting, or tracking cells in phase contrast images, but their focus has not been on detecting cell shapes [21,22,23,24,25,26,27,28,29]. In particular, few studies have conducted precise segmentation that accounts for halo effects, and it is rare to eliminate the halo effect itself [30,31,32,33]. State-of-the-art techniques include deep learning-based supervised methods trained with ground-truth labels and unsupervised methods using threshold values. Since our approach is unsupervised, we compare the proposed method with the Empirical Gradient Threshold (EGT) [29] method, which uses the image histogram, and the phase contrast microscopy segmentation toolbox (PHANTAST) [33], which uses local contrast thresholding and halo removal.
Detecting the shape of an object in an image requires obtaining its boundary. Several segmentation methods, including active contours, a precise and high-performance technique, have recently been employed. Beyond cell detection, the active contour method is widely used for detecting objects in various fields, such as skin disease detection, tumor detection, and heart detection [34,35]. In our study, the active contour model was adopted to precisely segment cells and detect their boundaries. However, since the active contour method requires an initial point for each object, it is difficult to apply when the number of cells is large; therefore, this study is limited to single-cell analysis.
When an active contour method is applied to phase contrast microscopic images, it is difficult to distinguish between the cell boundary and the halo. The contrast varies depending on the thickness or composition of the cell boundary, which makes the halo effect non-uniform [36]. As a result, non-uniform values are obtained when calculating the boundary energy required by the active contour. Therefore, this study proposes a novel method that eliminates the halo effect itself. Using K-means clustering, a machine learning (ML) method that is mathematically simple, fast to compute and converge, and easy to implement [37], only the halo signal is extracted and removed. By eliminating the halo effect, the active contour model, which guarantees excellent performance for precise segmentation but was difficult to use because of its dependence on initial values, becomes readily applicable, and precise segmentation is achieved. K-means clustering is widely used for classification in data statistics and analysis, and also for image segmentation purposes such as cell nucleus and white blood cell extraction [38]. It is mainly used to isolate a desired target, but our study shows a new use of it as a pre-processing step that removes a specific problematic phenomenon.
In summary, the contributions of our proposed work are as follows:
  • An active contour model with strong shape detection performance was used to acquire the shape of the cell in phase contrast microscopic images.
  • Since the halo effect that occurs in phase contrast microscopy interferes with precise segmentation of the cell shape, we propose a solution to remove it.
  • Conventional methods have segmented the cell boundary through various complex image processing techniques that distinguish between the halo effect and the cell boundary. In this work, we propose a novel method that uses the ML technique of K-means clustering to separate halo effects from the background signal and cell boundaries and correct them, eliminating the root cause of the problem itself after denoising.
  • The proposed method, which performs segmentation after removing the halo effect, was verified by comparison against two baselines: manual segmentation, a basic method used as the ground truth, and segmentation without halo effect removal.
  • The results demonstrate the novelty and reliability of the proposed method.
In the following sections, we describe the process through which the shapes of cells can be accurately detected. First, the noise in the microscope image is removed, and background correction is applied to mitigate the halo signal. Second, the shape of the cell is detected using the active contour method, and its center of mass is computed. Based on the manual tracking results, we compare the tracking results of images with and without halo signal removal and verify the reliability of the algorithm developed in this study. Furthermore, we verify the effectiveness of the segmentation through comparisons with other state-of-the-art techniques. Finally, the paper concludes with future work.

2. Materials and Methods

2.1. Sample Preparation and Experiment

The immortalized human T lymphocyte cell line, Jurkat, was selected for tracking. The Jurkat cells were cultured in Roswell Park Memorial Institute 1640 medium (Gibco, Grand Island, New York, USA), supplemented with 10% (v/v) fetal bovine serum (Gibco, Grand Island, New York, USA), 1% (v/v) penicillin–streptomycin (Invitrogen, Carlsbad, California, USA), 1% 1 M HEPES (Gibco, Grand Island, New York, USA), 1% MEM non-essential amino acid solution (Gibco, Grand Island, New York, USA), and 1% sodium pyruvate (Gibco, Grand Island, New York, USA). They were subsequently incubated at 37 °C in 5% CO2.
After attaching the Jurkat cells to a 60 mm dish coated with fibronectin from human plasma (Sigma, St. Louis, Missouri, USA) at 20 μg/mL, image capture was started after 30 min of stabilization. A confluency of 2% was used to ensure sufficient distance between cells. Before stabilizing the cells, a live-cell microscopy incubation device (OKOLAB, Ottaviano, NA, Italy) was sufficiently pre-heated. A Nikon TE1000 microscope with a 10× phase contrast objective (Nikon, Minato-ku, Tokyo, Japan) and a charge-coupled device camera was used to image cell migration. Images were taken every 2 min for 4 h, and the microscope's perfect focus system was used to avoid defocus during long-term capture.

2.2. Image Acquisition and Processing

Image processing algorithms were implemented using MATLAB and the Image Processing Toolbox (Release 2019b). Images containing multiple cells, taken with the phase contrast microscope, were cropped and saved per individual cell, and a tracking process was conducted for each cell. The entire pipeline, from the full image to the per-cell crops described above, is presented in Figure 1.

2.2.1. Denoising

In microscopic images, noise can be modeled mainly as Poisson and Gaussian distributions. Gaussian noise can be modeled as additive and image-independent, whereas Poisson noise is multiplicative and image-dependent, which makes it difficult to handle. To eliminate these noise components, three steps were performed.
The first step is noise estimation, which estimates the variances of the Gaussian and Poisson noise [39]. In this step, the image is divided into several patches, and the noise component and variance of each patch are calculated locally. By fitting these results to the estimated noise component of the original image, the total noise component can be estimated. The next step is the Anscombe transformation [40], a variance-stabilizing transformation. Through the Anscombe transformation, the variance of the Poisson noise is fixed at a constant value, like the variance of Gaussian noise, so the microscope images can be modeled as containing Gaussian noise only. The final step is block matching 3D (BM3D) filtering, an effective denoising method for Gaussian noise [41]. Figure 2 presents a flowchart of BM3D filtering, which consists of two main stages.
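The variance-stabilizing behavior of the Anscombe step can be illustrated with a short, self-contained sketch. This is not the authors' MATLAB code; the Poisson sampler and sample sizes are our own illustrative choices.

```python
import math
import random

def anscombe(x):
    """Anscombe variance-stabilizing transform for Poisson-distributed data."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def poisson_sample(rng, lam):
    """Knuth's method: count uniform draws until their product falls below exp(-lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# After the transform, Poisson counts of any mean have variance close to 1,
# so a single Gaussian denoiser (here, BM3D) can be applied uniformly.
rng = random.Random(0)
for lam in (5.0, 20.0, 80.0):
    vals = [anscombe(poisson_sample(rng, lam)) for _ in range(5000)]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    print(f"lambda={lam:5.1f}  raw variance={lam:5.1f}  stabilized variance={var:.2f}")
```

Whatever the Poisson rate, the stabilized variance stays near 1, which is what allows the subsequent BM3D stage to assume a single Gaussian noise level.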
The first stage organizes the image into blocks, groups similar blocks into a 3D formation, and collaboratively filters them to create the estimate used in the second stage. The distance between blocks is calculated, and blocks whose distance is smaller than a threshold are considered similar. The distance between two blocks is calculated as follows:
$$\mathrm{Dist}_{noisy}\left(B_{x_R}, B_x\right) = \frac{\left\| B_{x_R} - B_x \right\|_2^2}{\left(N_1^{ht}\right)^2}$$
where $\left\| \cdot \right\|_2$ is the $\ell^2$-norm, $B_{x_R}$ is the reference block for $x_R \in X$, $B_x$ is the block for $x \in X$, and $N_1^{ht}$ is the side length of a block. If $Z_{x_R}$ and $Z_x$ do not overlap, this distance can be expressed as a chi-squared random variable. The expected value and variance of the distance are as follows:
$$E\left[\mathrm{Dist}_{noisy}\left(B_{x_R}, B_x\right)\right] = \mathrm{Dist}\left(B_{x_R}, B_x\right) + 2\sigma^2$$
$$\mathrm{Var}\left[\mathrm{Dist}_{noisy}\left(B_{x_R}, B_x\right)\right] = \frac{8\sigma^4}{\left(N_1^{ht}\right)^2} + \frac{8\sigma^2\,\mathrm{Dist}\left(B_{x_R}, B_x\right)}{\left(N_1^{ht}\right)^2}$$
where $\mathrm{Dist}\left(B_{x_R}, B_x\right)$ denotes the distance between the underlying noise-free blocks.
However, for a relatively large $\sigma$ or a small $N_1^{ht}$, the probability densities of the different $\mathrm{Dist}_{noisy}\left(B_{x_R}, B_x\right)$ values are likely to overlap severely. To solve this problem, coarse prefiltering is applied when measuring the distance between two blocks: a normalized 2D linear transform is applied to each block, and the resulting coefficients are hard-thresholded. The distance then becomes:
$$\mathrm{Dist}\left(B_{x_R}, B_x\right) = \frac{\left\| \gamma\left(T_{2D}^{ht}\left(B_{x_R}\right)\right) - \gamma\left(T_{2D}^{ht}\left(B_x\right)\right) \right\|_2^2}{\left(N_1^{ht}\right)^2}$$
where $\gamma$ is a hard-thresholding operator and $T_{2D}^{ht}$ is a normalized 2D linear transform. The result of block matching is a set containing the blocks similar to $B_{x_R}$:
$$S_{x_R}^{ht} = \left\{ x \in X : \mathrm{Dist}\left(B_{x_R}, B_x\right) \le \tau_{match}^{ht} \right\}$$
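The grouping rule above can be sketched with a few lines of pure Python. This is a simplified illustration: it omits the 2D transform and hard thresholding of the prefiltered distance, and the blocks, threshold, and values are invented for the example.

```python
def block_distance(a, b):
    """Normalized squared l2 distance between two equal-sized n x n blocks."""
    n = len(a)
    return sum((a[i][j] - b[i][j]) ** 2 for i in range(n) for j in range(n)) / (n * n)

def group_similar(reference, candidates, tau):
    """Indices of candidate blocks within distance tau of the reference block."""
    return [k for k, blk in enumerate(candidates) if block_distance(reference, blk) <= tau]

# Hypothetical 2x2 blocks: two near-duplicates of the reference and one outlier.
ref = [[10, 10], [10, 10]]
cands = [[[10, 10], [10, 11]],   # near-duplicate
         [[50, 50], [50, 50]],   # very different
         [[9, 10], [10, 10]]]    # near-duplicate
print(group_similar(ref, cands, tau=2.0))  # -> [0, 2]
```

The matched blocks would then be stacked into the 3D group that the collaborative filter operates on.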
To construct the $N_1^{ht} \times N_1^{ht} \times \left| S_{x_R}^{ht} \right|$-sized 3D array, the group $\mathbf{B}_{S_{x_R}^{ht}}$ is obtained by stacking the matched blocks $B_x$, $x \in S_{x_R}^{ht}$. Collaborative filtering of the group $\mathbf{B}_{S_{x_R}^{ht}}$ is performed in the 3D transform domain:
$$\widehat{Y}_{x_R}^{ht} = T_{3D}^{ht,-1}\left( \gamma\left( T_{3D}^{ht}\left( \mathbf{B}_{S_{x_R}^{ht}} \right) \right) \right)$$
where $T_{3D}^{ht}$ is a 3D transform, $T_{3D}^{ht,-1}$ is its inverse, and $\widehat{Y}_{x_R}^{ht}$ is the collaboratively filtered group estimate.
In aggregation, the last step of the first stage, the basic estimate of the true image is computed as a weighted average of all the overlapping block-wise estimates. The weight for each group estimate is:
$$w_{x_R}^{ht} = \begin{cases} \dfrac{1}{\sigma^2 N_{ht}\left(x_R\right)}, & \text{if } N_{ht}\left(x_R\right) \ge 1 \\ 1, & \text{otherwise} \end{cases}$$
where $N_{ht}\left(x_R\right)$ is the number of non-zero coefficients retained after hard thresholding.
The overall basic estimate, $\widehat{y}^{basic}$, is obtained as the weighted average of the block-wise estimates:
$$\widehat{y}^{basic}(x) = \frac{\sum_{x_R \in X} \sum_{x_m \in S_{x_R}^{ht}} w_{x_R}^{ht}\, \widehat{Y}_{x_m}^{ht,x_R}(x)}{\sum_{x_R \in X} \sum_{x_m \in S_{x_R}^{ht}} w_{x_R}^{ht}\, \kappa_{x_m}(x)}$$
where $\kappa_{x_m} : X \to \left\{0, 1\right\}$ is the characteristic function of the square support of the block located at $x_m \in X$.
In the second stage, improved grouping and collaborative Wiener filtering are performed using the basic estimate obtained in the first stage. Because the noise in $\widehat{y}^{basic}$ is assumed to be significantly reduced, the block distance is replaced by the normalized squared $\ell^2$-norm computed within the basic estimate. The block set for collaborative Wiener filtering is:
$$S_{x_R}^{wien} = \left\{ x \in X : \frac{\left\| \widehat{Y}_{x_R}^{basic} - \widehat{Y}_{x}^{basic} \right\|_2^2}{\left(N_1^{wien}\right)^2} \le \tau_{match}^{wien} \right\}$$
where the set $S_{x_R}^{wien}$ is used to form the two groups $\widehat{\mathbf{Y}}_{S_{x_R}^{wien}}^{basic}$ and $\mathbf{B}_{S_{x_R}^{wien}}$.
The Wiener shrinkage coefficients can be obtained as follows:
$$\mathbf{W}_{S_{x_R}^{wien}} = \frac{\left| T_{3D}^{wien}\left( \widehat{\mathbf{Y}}_{S_{x_R}^{wien}}^{basic} \right) \right|^2}{\left| T_{3D}^{wien}\left( \widehat{\mathbf{Y}}_{S_{x_R}^{wien}}^{basic} \right) \right|^2 + \sigma^2}$$
The collaborative Wiener filtering of $\mathbf{B}_{S_{x_R}^{wien}}$ is implemented as an element-wise multiplication of the 3D transform coefficients $T_{3D}^{wien}\left( \mathbf{B}_{S_{x_R}^{wien}} \right)$ of the noisy data with the Wiener shrinkage coefficients $\mathbf{W}_{S_{x_R}^{wien}}$. The inverse transform then produces the group estimate:
$$\widehat{\mathbf{Y}}_{S_{x_R}^{wien}}^{wien} = T_{3D}^{wien,-1}\left( \mathbf{W}_{S_{x_R}^{wien}} \odot T_{3D}^{wien}\left( \mathbf{B}_{S_{x_R}^{wien}} \right) \right)$$
In the second stage, aggregation is performed as in the first stage, yielding the final estimate, $\widehat{y}^{final}$. The weight applied for each $x_R \in X$ is:
$$w_{x_R}^{wien} = \sigma^{-2} \left\| \mathbf{W}_{S_{x_R}^{wien}} \right\|_2^{-2}$$
Finally, $\widehat{y}^{final}$ is obtained using these weights:
$$\widehat{y}^{final}(x) = \frac{\sum_{x_R \in X} \sum_{x_m \in S_{x_R}^{wien}} w_{x_R}^{wien}\, \widehat{Y}_{x_m}^{wien,x_R}(x)}{\sum_{x_R \in X} \sum_{x_m \in S_{x_R}^{wien}} w_{x_R}^{wien}\, \kappa_{x_m}(x)}$$
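The core of the second stage, the element-wise Wiener shrinkage, can be sketched in isolation. This is an illustrative 1D example with an identity transform standing in for the 3D transform, and the coefficient values are invented.

```python
def wiener_shrinkage(basic_coeffs, noisy_coeffs, sigma):
    """Element-wise empirical Wiener shrinkage in the transform domain.

    basic_coeffs: transform coefficients of the (nearly noise-free) basic estimate
    noisy_coeffs: transform coefficients of the noisy data
    """
    out = []
    for y, b in zip(basic_coeffs, noisy_coeffs):
        w = y * y / (y * y + sigma * sigma)  # shrinkage weight in [0, 1)
        out.append(w * b)
    return out

# Large coefficients (signal) pass almost unchanged; small ones (noise) are suppressed.
print(wiener_shrinkage([100.0, 0.5], [90.0, 3.0], sigma=5.0))
```

The basic estimate thus acts as an oracle for which transform coefficients carry signal, which is why the second stage improves on hard thresholding alone.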

2.2.2. Halo Effect Elimination

Figure 1b depicts the process for removing the halo effect, which is the focus of this study. The image is divided into a grid of patches, and the signal characteristics of each patch are statistically classified as showing only background or as a mixture of signal and background pixels. The K-means clustering method [42], which partitions a given data set into a predetermined number of clusters, is then employed to estimate the background signal and to correct the halo signal toward the background level. The detailed process is illustrated in Figure 3.
The 8-bit image array, I(x,t), is divided into sub-image “tiles”, as shown in Figure 3b. The intensity distribution of a tile differs depending on whether it contains signal or only background. The tiles are statistically classified into two clusters using the K-means clustering method. As shown in Figure 3c, the background tiles gather in a low-variance cluster because their statistical distributions are almost identical. The mean intensity of the tiles classified as background is defined as the background image, B(x,t), and the original image, I(x,t), is divided by B(x,t). Steps (d) and (e) in Figure 3 are shown in Figure 4. Because the signal corresponding to the cell in I(x,t) is smaller than B(x,t), the ratio is less than 1 and closer to 0. The signal corresponding to the halo in I(x,t) is greater than B(x,t); hence, the ratio exceeds 1, and any value greater than 1 is treated as a background-like signal. When this result is rendered as an image, as shown in Figure 3e, the background and the cell morphology with the halo effect removed are obtained.
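A compact sketch of the tile-classification idea follows. This is not the paper's implementation: the tiles are toy 2×2 arrays, and the (mean, standard deviation) features and the minimal two-cluster k-means below are our assumptions about one reasonable realization.

```python
import random

def tile_features(tile):
    """(mean, standard deviation) of a tile's pixel intensities."""
    flat = [v for row in tile for v in row]
    mean = sum(flat) / len(flat)
    var = sum((v - mean) ** 2 for v in flat) / len(flat)
    return (mean, var ** 0.5)

def kmeans_two(points, iters=50, seed=0):
    """Minimal 2-cluster k-means (Lloyd's algorithm) on feature tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, 2)
    clusters = ([], [])
    for _ in range(iters):
        clusters = ([], [])
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        new = [tuple(sum(m[i] for m in members) / len(members) for i in range(2))
               if members else c
               for c, members in zip(centers, clusters)]
        if new == centers:
            break
        centers = new
    return clusters

# Hypothetical 2x2 tiles: two near-uniform background tiles, one high-contrast cell tile.
tiles = [[[100, 101], [99, 100]],
         [[101, 100], [100, 99]],
         [[40, 180], [30, 200]]]
clusters = kmeans_two([tile_features(t) for t in tiles])
# The cluster with the lower average spread is the background; averaging its tiles
# would give B(x,t), by which I(x,t) is then divided and clipped at 1.
background = min(clusters,
                 key=lambda c: sum(f[1] for f in c) / len(c) if c else float("inf"))
print(len(background), "background tiles")
```

Because background tiles have nearly identical low-variance statistics, they collapse into one tight cluster, leaving tiles that contain cell or halo signal in the other.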

2.2.3. Edge Detection of Cells

As shown in Figure 1c, an active contour method was used to detect the shape of a cell in the halo-removed image. A localizing region-based active contour [43], which extends the method devised by Chan and Vese [44], was applied. The governing formulation can be expressed as follows:
$$E\left(C_1, C_2, \phi\right) = \mu \int \delta\left(\phi\right) \left| \nabla \phi \right| dx + \nu \int H\left(\phi\right) dx + \lambda_1 \int \left| u_0 - C_1 \right|^2 H\left(\phi\right) dx + \lambda_2 \int \left| u_0 - C_2 \right|^2 \left(1 - H\left(\phi\right)\right) dx$$
$$C_1\left(\phi\right) = \frac{\int u_0\left(x\right) H\left(\phi\left(x\right)\right) dx}{\int H\left(\phi\left(x\right)\right) dx}, \qquad C_2\left(\phi\right) = \frac{\int u_0\left(x\right) \left(1 - H\left(\phi\left(x\right)\right)\right) dx}{\int \left(1 - H\left(\phi\left(x\right)\right)\right) dx}$$
$$\frac{\partial \phi}{\partial t} = \delta\left(\phi\right) \left[ \mu \, \nabla \cdot \frac{\nabla \phi}{\left| \nabla \phi \right|} - \nu - \lambda_1 \left( u_0 - C_1 \right)^2 + \lambda_2 \left( u_0 - C_2 \right)^2 \right] = 0$$
The Chan–Vese method simplifies the Mumford–Shah [45] functional to a piecewise-constant model and applies the level set method. $C_1$ and $C_2$ are the mean intensities inside and outside the contour, as defined above. The active contour evolves until the energy is minimized, at which point it coincides with the boundary of the object.
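The region means $C_1$ and $C_2$ are simple to compute once the contour's interior is known; a toy sketch follows, in which the image and mask values are invented for illustration (here $H(\phi)$ is represented by a binary mask).

```python
def region_means(image, mask):
    """C1/C2 of the Chan-Vese model: mean intensity inside and outside the contour."""
    inside, outside = [], []
    for row_i, row_m in zip(image, mask):
        for v, m in zip(row_i, row_m):
            (inside if m else outside).append(v)
    c1 = sum(inside) / len(inside)
    c2 = sum(outside) / len(outside)
    return c1, c2

# A bright 2x2 "cell" in the top-left corner of a dark 3x3 image.
img = [[9, 9, 1], [9, 9, 1], [1, 1, 1]]
msk = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
print(region_means(img, msk))  # -> (9.0, 1.0)
```

The energy is lowest when the contour separates the image into regions that are each well described by their mean, which is exactly the configuration where the mask follows the cell boundary.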
The localized region-based extension of the Chan–Vese method is expressed as follows:
$$C_1\left(\phi\right) = \frac{\int B\left(x, y\right) u_0\left(x\right) H\left(\phi\left(x\right)\right) dx}{\int B\left(x, y\right) H\left(\phi\left(x\right)\right) dx}, \qquad C_2\left(\phi\right) = \frac{\int B\left(x, y\right) u_0\left(x\right) \left(1 - H\left(\phi\left(x\right)\right)\right) dx}{\int B\left(x, y\right) \left(1 - H\left(\phi\left(x\right)\right)\right) dx}$$
$$B\left(x, y\right) = \begin{cases} 1, & \left\| x - y \right\| < r \\ 0, & \text{otherwise} \end{cases}$$
As the active contour converges, the energy is computed using B(x,y) so that only the region within radius r of each point on the evolving boundary, starting from the initial mask, is considered. In Figure 1d, the center of mass is calculated over all coordinates enclosed by the computed cell boundary and taken as the center point of the cell. Connecting the center points of the cell images over time yields the path of cell movement, as shown in Figure 1e. This process is performed automatically from a single parameter setting.
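The center-of-mass step can be sketched as follows, here using the shoelace formula for the centroid of the region enclosed by an ordered boundary. This is one reasonable realization, not the authors' exact computation.

```python
def polygon_centroid(points):
    """Centroid of the region enclosed by an ordered, closed boundary (shoelace formula)."""
    area2 = 0.0  # twice the signed area
    cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        area2 += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    return cx / (3.0 * area2), cy / (3.0 * area2)

# Unit square: centroid at (0.5, 0.5).
print(polygon_centroid([(0, 0), (1, 0), (1, 1), (0, 1)]))  # -> (0.5, 0.5)
```

Collecting these centroids frame by frame produces the trajectory that the tracking step analyzes.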

3. Results and Discussion

After obtaining the microscopic images, we performed denoising, halo effect removal, and segmentation for tracking the cells in MATLAB, following the algorithmic flow chart in Figure 1. First, we verified the effectiveness of the denoising and halo effect removal. Second, we validated how these preprocessing steps affect the segmentation. Finally, the proposed method, manual tracking, and the active contour method without halo effect removal were compared to analyze cell migration.
Figure 5 presents the denoising results obtained when Poisson–Gaussian noise is removed using the block matching 3D method, together with the halo effect removal results. Figure 5a shows an original phase contrast microscopic image for analyzing cell migration, and Figure 5b presents the denoising result. Whereas the original image contains visible noise, the denoised image exhibits clear noise reduction; however, the halo effect still remains, so additional halo effect removal was performed. Figure 5c,d present the halo effect removal results with and without denoising, validating the effectiveness of the denoising process.
After preprocessing, we performed segmentation to track the cells. Figure 6 compares the segmentation results of the active contour, EGT, and PHANTAST methods with those of the proposed method. All parameters were manually adjusted starting from their default values. The Figure 6a column shows original phase contrast microscopic images with long pseudopodia, noise, and halo effects. The segmentation results of the active contour method are shown in Figure 6b; it could not properly segment the cells with pseudopodia because of the halo effect. The Figure 6c,d columns present the results of the EGT and PHANTAST methods. The EGT method roughly segmented the pseudopodia, but due to the halo effect, the cell contours were not segmented properly. Although the PHANTAST method reduced the halo effect, some errors remained around the cell contour; moreover, PHANTAST involves several parameters, making its performance difficult to optimize. By contrast, the proposed method significantly reduced the halo effect and segmented the cell contours precisely, including the pseudopodia (see the Figure 6e column).
Figure 7 presents the trajectories obtained by tracking the center points of the cells calculated from the detection results. Manual tracking results (blue line) using ImageJ, obtained by drawing along the cell borders by hand and calculating the center of mass, were compared with the tracking results without halo effect removal (green line) and with the method proposed in this study (red line). The green line deviates from the blue line more than the red line does.
The comparison of the trajectory coordinates is presented in Table 1. Cell tracking was performed on five cells. A is the difference between the manual tracking results and the proposed method at each time step, and B is the difference between the manual tracking results and the active contour method alone. The average and maximum values of A and B are compared. The average error of A ranged from 0.25 to 0.45 pixels, i.e., below 0.5 pixels, whereas B was less precise, ranging from 0.32 to 0.83 pixels. The maximum difference of A ranged from 1.13 to 2.68 pixels, and that of B from 1.15 to 4.49 pixels. We also compared the variances of A and B: the variance of A ranged from 0.04 to 0.17, and that of B from 0.06 to 0.39. Using the active contour alone yields a greater deviation from the manual tracking results, confirming that the proposed method is more accurate.
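The error statistics reported in Table 1 (mean, maximum, and variance of the per-frame differences between two trajectories) can be computed as in this sketch; the trajectories shown are invented toy data, not values from the paper.

```python
def tracking_errors(reference, estimated):
    """Per-frame Euclidean distance between two trajectories, with summary stats."""
    dists = [((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(reference, estimated)]
    mean = sum(dists) / len(dists)
    var = sum((d - mean) ** 2 for d in dists) / len(dists)
    return {"mean": mean, "max": max(dists), "var": var}

# Toy trajectories: manual (reference) vs. automatic tracking, three frames.
manual = [(0.0, 0.0), (1.0, 1.0), (2.0, 1.5)]
auto = [(0.3, 0.0), (1.0, 1.4), (2.0, 1.5)]
print(tracking_errors(manual, auto))
```

Running this for the proposed method and for the active-contour-only baseline against the same manual reference reproduces the A/B comparison structure of Table 1.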

4. Conclusions

In this paper, we presented an effective cell segmentation and tracking method for phase contrast microscopic images. The segmentation and tracking are designed to alleviate the noise and halo effects that hinder active contour segmentation in phase contrast microscopic images. In these images, the noise was modeled as Poisson and Gaussian distributions; to reduce it, noise estimation, the Anscombe transform, and BM3D filtering were performed. Additionally, we performed background correction, solving the halo effect problem through k-means clustering, a machine learning technique with strong performance that is easy to implement and computationally light. By comparing the proposed method with EGT and PHANTAST, state-of-the-art unsupervised methods, we confirmed that our method segments cells effectively, including the pseudopodia. Furthermore, the comparison with manual tracking confirmed that more accurate boundaries are detected and verified the reliability of the algorithm developed in this study. The proposed method outperforms manual delineation of cell shapes in terms of segmentation time, and it is highly applicable because it runs automatically using simple algorithms. We expect that the cell shape can be accurately detected and used to analyze cell motility in terms of shape, rather than via simple location tracking alone. In addition, the halo effect removal algorithm presented in this study can be employed for cell counting or morphology analysis in phase contrast microscopic images.
However, this study has the limitation that it applies only to single cells, and because the initial point is located by a circular detection algorithm, other circular objects are difficult to segment correctly. In a future study, we will apply the algorithm developed here to cell images of various types and shapes and optimize it for more diverse and general use.

Author Contributions

Conceptualization, H.J., J.H., and S.Y.; Data curation, H.J.; Formal analysis, H.J.; Funding acquisition, S.Y.; Investigation, H.J. and J.H.; Methodology, H.J. and J.H.; Project administration, S.Y.; Resources, Y.S.K. and Y.L.; Software, H.J. and J.H.; Supervision, S.Y.; Validation, J.H.; Visualization, H.J. and J.H.; Writing—original draft, H.J. and J.H.; Writing—review & editing, Y.S.K., Y.L., and S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1F1A1058971).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lauffenburger, D.A.; Horwitz, A.F. Cell migration: A physically integrated molecular process. Cell 1996, 84, 359–369.
  2. Franz, C.M.; Jones, G.E.; Ridley, A.J. Cell migration in development and disease. Dev. Cell 2002, 2, 153–158.
  3. Masuzzo, P.; Van Troys, M.; Ampe, C.; Martens, L. Taking aim at moving targets in computational cell migration. Trends Cell Biol. 2016, 26, 88–110.
  4. Theveneau, E.; Mayor, R. Neural crest delamination and migration: From epithelium-to-mesenchyme transition to collective cell migration. Dev. Biol. 2012, 366, 34–54.
  5. Huda, S.; Weigelin, B.; Wolf, K.; Tretiakov, K.V.; Polev, K.; Wilk, G.; Iwasa, M.; Emami, F.S.; Narojczyk, J.W.; Banaszak, M. Lévy-like movement patterns of metastatic cancer cells revealed in microfabricated systems and implicated in vivo. Nat. Commun. 2018, 9, 1–11.
  6. Debeir, O.; Adanja, I.; Kiss, R.; Decaestecker, C. Models of cancer cell migration and cellular imaging and analysis. Motile Actin Syst. Health Dis. 2008, 123–156.
  7. Li, L.; Nørrelykke, S.F.; Cox, E.C. Persistent cell motion in the absence of external signals: A search strategy for eukaryotic cells. PLoS ONE 2008, 3, e2093.
  8. Krummel, M.F.; Bartumeus, F.; Gérard, A. T cell migration, search strategies and mechanisms. Nat. Rev. Immunol. 2016, 16, 193.
  9. Mogilner, A.; Keren, K. The shape of motile cells. Curr. Biol. 2009, 19, R762–R771.
  10. Li, H.; Qi, S.; Jin, H.; Qi, Z.; Zhang, Z.; Fu, L.; Luo, Q. Zigzag generalized Lévy walk: The in vivo search strategy of immunocytes. Theranostics 2015, 5, 1275.
  11. Bise, R.; Kanade, T.; Yin, Z.; Huh, S.-i. Automatic cell tracking applied to analysis of cell migration in wound healing assay. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 6174–6179.
  12. Acton, S.T.; Wethmar, K.; Ley, K. Automatic tracking of rolling leukocytes in vivo. Microvasc. Res. 2002, 63, 139–148.
  13. Jiang, R.M.; Crookes, D.; Luo, N.; Davidson, M.W. Live-cell tracking using SIFT features in DIC microscopic videos. IEEE Trans. Biomed. Eng. 2010, 57, 2219–2228.
  14. Ebata, H.; Yamamoto, A.; Tsuji, Y.; Sasaki, S.; Moriyama, K.; Kuboki, T.; Kidoaki, S. Persistent random deformation model of cells crawling on a gel surface. Sci. Rep. 2018, 8, 1–12.
  15. Ruprecht, V.; Wieser, S.; Callan-Jones, A.; Smutny, M.; Morita, H.; Sako, K.; Barone, V.; Ritsch-Marte, M.; Sixt, M.; Voituriez, R. Cortical contractility triggers a stochastic switch to fast amoeboid cell motility. Cell 2015, 160, 673–685.
  16. Roumier, A.; Olivo-Marin, J.C.; Arpin, M.; Michel, F.; Martin, M.; Mangeat, P.; Acuto, O.; Dautry-Varsat, A.; Alcover, A. The membrane-microfilament linker ezrin is involved in the formation of the immunological synapse and in T cell activation. Immunity 2001, 15, 715–728.
  17. Mesquita, D.; Dias, O.; Amaral, A.; Ferreira, E. A comparison between bright field and phase-contrast image analysis techniques in activated sludge morphological characterization. Microsc. Microanal. 2010, 16, 166–174.
  18. Zernike, F. Phase contrast, a new method for the microscopic observation of transparent objects part II. Physica 1942, 9, 974–986.
  19. Liang, C.-C.; Park, A.Y.; Guan, J.-L. In vitro scratch assay: A convenient and inexpensive method for analysis of cell migration in vitro. Nat. Protoc. 2007, 2, 329.
  20. Ersoy, I.; Bunyak, F.; Mackey, M.A.; Palaniappan, K. Cell segmentation using Hessian-based detection and contour evolution with directional derivatives. In Proceedings of the 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008; pp. 1804–1807.
  21. Zhi, X.-H.; Meng, S.; Shen, H.-B. High density cell tracking with accurate centroid detections and active area-based tracklet clustering. Neurocomputing 2018, 295, 86–97.
  22. Essa, E.; Xie, X. Phase contrast cell detection using multilevel classification. Int. J. Numer. Methods Biomed. Eng. 2018, 34, e2916. [Google Scholar] [CrossRef] [Green Version]
  23. Wang, W.; Taft, D.A.; Chen, Y.-J.; Zhang, J.; Wallace, C.T.; Xu, M.; Watkins, S.C.; Xing, J. Learn to segment single cells with deep distance estimator and deep cell detector. Comput. Biol. Med. 2019, 108, 133–141. [Google Scholar] [CrossRef] [Green Version]
  24. Ambriz-Colin, F.; Torres-Cisneros, M.; Avina-Cervantes, J.; Saavedra-Martinez, J.; Debeir, O.; Sanchez-Mondragon, J. Detection of Biological Cells in Phase-Contrast Microscopy Images. In Proceedings of the 2006 Fifth Mexican International Conference on Artificial Intelligence, Apizaco, Mexico, 13–17 October 2006; pp. 68–77. [Google Scholar]
  25. Huh, S.; Bise, R.; Chen, M.; Kanade, T. Automated mitosis detection of stem cell populations in phase-contrast microscopy images. IEEE Trans. Med. Imaging 2010, 30, 586–596. [Google Scholar]
  26. Thirusittampalam, K.; Hossain, M.J.; Ghita, O.; Whelan, P.F. A novel framework for cellular tracking and mitosis detection in dense phase contrast microscopy images. IEEE J. Biomed. Health Inform. 2013, 17, 642–653. [Google Scholar] [CrossRef]
  27. Wang, Y.; Zhang, Z.; Wang, H.; Bi, S. Segmentation of the clustered cells with optimized boundary detection in negative phase contrast images. PLoS ONE 2015, 10, e0130178. [Google Scholar] [CrossRef] [Green Version]
  28. Debeir, O.; Van Ham, P.; Kiss, R.; Decaestecker, C. Tracking of migrating cells under phase-contrast video microscopy with combined mean-shift processes. IEEE Trans. Med. Imaging 2005, 24, 697–711. [Google Scholar] [CrossRef] [Green Version]
  29. Chalfoun, J.; Majurski, M.; Peskin, A.; Breen, C.; Bajcsy, P.; Brady, M. Empirical gradient threshold technique for automated segmentation across image modalities and cell lines. J. Microsc. 2015, 260, 86–99. [Google Scholar] [CrossRef]
  30. Binici, R.C.; Şahin, U.; Ayanzadeh, A.; Töreyin, B.U.; Önal, S.; Okvur, D.P.; Özuysal, Ö.Y.; Ünay, D. Automated segmentation of cells in phase contrast optical microscopy time series images. In Proceedings of the 2019 Medical Technologies Congress (TIPTEKNO), Izmir, Turkey, 3–5 October 2019; pp. 1–4. [Google Scholar]
  31. Tsai, H.-F.; Gajda, J.; Sloan, T.F.; Rares, A.; Shen, A.Q. Usiigaci: Instance-aware cell tracking in stain-free phase contrast microscopy enabled by machine learning. SoftwareX 2019, 9, 230–237. [Google Scholar] [CrossRef]
  32. Jaccard, N.; Szita, N.; Griffin, L.D. Segmentation of phase contrast microscopy images based on multi-scale local basic image features histograms. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2017, 5, 359–367. [Google Scholar] [CrossRef] [Green Version]
  33. Jaccard, N.; Griffin, L.D.; Keser, A.; Macown, R.J.; Super, A.; Veraitch, F.S.; Szita, N. Automated method for the rapid and precise estimation of adherent cell culture characteristics from phase contrast microscopy images. Biotechnol. Bioeng. 2014, 111, 504–517. [Google Scholar] [CrossRef] [Green Version]
  34. Ramapraba, P.; Chitra, M.; Prem Kumar, M. Effective lesion detection of colposcopic images using active contour method. Biomed. Res. 2017, 28, S255–S264. [Google Scholar]
  35. Chen, X.; Williams, B.M.; Vallabhaneni, S.R.; Czanner, G.; Williams, R.; Zheng, Y. Learning active contour models for medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 11632–11640. [Google Scholar]
  36. Bensch, R.; Ronneberger, O. Cell segmentation and tracking in phase contrast images using graph cut with asymmetric boundary costs. In Proceedings of the 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), New York, NY, USA, 16–19 April 2015; pp. 1220–1223. [Google Scholar]
  37. Yuan, C.; Yang, H. Research on K-value selection method of K-means clustering algorithm. J. Multidiscip. Res. 2019, 2, 226–235. [Google Scholar] [CrossRef] [Green Version]
  38. Ghane, N.; Vard, A.; Talebi, A.; Nematollahy, P. Segmentation of white blood cells from microscopic images using a novel combination of K-means clustering and modified watershed algorithm. J. Med. Signals Sens. 2017, 7, 92. [Google Scholar] [PubMed]
  39. Foi, A.; Trimeche, M.; Katkovnik, V.; Egiazarian, K. Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data. IEEE Trans. Image Process. 2008, 17, 1737–1754. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Anscombe, F.J. The transformation of Poisson, binomial and negative-binomial data. Biometrika 1948, 35, 246–254. [Google Scholar] [CrossRef]
  41. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising with block-matching and 3D filtering. In Proceedings of the Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning, San Jose, CA, USA, 17 February 2006; p. 606414. [Google Scholar]
  42. Xu, R.; Wunsch, D. Survey of clustering algorithms. IEEE Trans. Neural Netw. 2005, 16, 645–678. [Google Scholar] [CrossRef] [Green Version]
  43. Lankton, S.; Tannenbaum, A. Localizing region-based active contours. IEEE Trans. Image Process. 2008, 17, 2029–2039. [Google Scholar] [CrossRef] [Green Version]
  44. Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277. [Google Scholar] [CrossRef] [Green Version]
  45. Mumford, D.B.; Shah, J. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 1989, 42, 577–685. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Entire pipeline of the cell segmentation and tracking algorithm. (a) Denoised image, obtained using the method described in the next section. (b) Halo-effect elimination: the halo remains after denoising, so a separate elimination step is applied. (c) Edge detection process. (d) Center-point detection for cell tracking. (e) Cell tracking result using the proposed method.
Figure 2. BM3D flowchart. Adapted from ref. [41].
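The BM3D pipeline of Figure 2 is derived for additive white Gaussian noise, whereas raw microscope intensities follow a Poisson–Gaussian model (refs. [39,40]); a common practice is therefore to apply the variance-stabilizing Anscombe transform before denoising and invert it afterward. A minimal sketch of that transform pair (the function names are illustrative; the exact unbiased inverse of ref. [39] is slightly more involved than the algebraic inverse shown here):

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform for Poisson-distributed counts:
    maps variance proportional to x into approximately constant variance."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the forward transform."""
    return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0

# Round-trip check on synthetic intensities
img = np.array([0.0, 10.0, 100.0])
restored = inverse_anscombe(anscombe(img))
```

After the transform, a Gaussian-noise denoiser such as BM3D can be applied in the stabilized domain, and the result mapped back to the intensity domain.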
Figure 3. Background correction algorithm. (a) Original image; (b) sub-image “tiles”; (c) separation into two clusters using K-means clustering; (d) background signal only; (e) division of the original image by the background image.
Figure 4. Cell signal after background correction.
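The tile-wise background correction of Figures 3 and 4 can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: it assumes square tiles, a plain 1-D two-means clustering of tile intensities with the darker cluster taken as background, and division of the image by the estimated background (steps (b)–(e) of Figure 3):

```python
import numpy as np

def background_level(values, iters=20):
    """1-D k-means with k = 2; returns the mean of the darker cluster,
    which approximates the background level of a tile."""
    c0, c1 = float(values.min()), float(values.max())
    if c0 == c1:
        return c0  # flat tile: everything is background
    for _ in range(iters):
        near_c0 = np.abs(values - c0) <= np.abs(values - c1)
        if near_c0.all() or not near_c0.any():
            break
        c0, c1 = values[near_c0].mean(), values[~near_c0].mean()
    return min(c0, c1)

def correct_background(img, tile=64):
    """Estimate a per-tile background level and divide it out."""
    h, w = img.shape
    bg = np.empty((h, w), dtype=float)
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            patch = img[i:i + tile, j:j + tile].astype(float)
            bg[i:i + tile, j:j + tile] = background_level(patch.ravel())
    return img / np.maximum(bg, 1e-6)

# Synthetic frame: flat background of 10 with a bright "cell" of 50
frame = np.full((128, 128), 10.0)
frame[20:30, 20:30] = 50.0
corrected = correct_background(frame, tile=64)
```

After division, the background is close to 1 while cell regions stand out; the real tile size and cluster assignment would need tuning to the imaging setup.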
Figure 5. Noise and halo effect removal results: (a) Original phase contrast microscopic image. (b) Denoised image. (c) Halo effect removal without denoising. (d) Halo effect removal with denoising.
Figure 6. Comparison of segmentation results for different methods; each column shows the results of one method. (a) Original phase contrast microscopic image. (b) Segmentation result with the active contour method. (c) EGT method. (d) PHANTAST method. (e) Proposed method.
Figure 7. Trajectories of four cells. Each coordinate is normalized with respect to the cell's starting point. The red line represents the proposed method, the green line the method using the active contour only, and the blue line the manually tracked result.
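The trajectory normalization used in Figure 7 — plotting each centroid relative to the cell's starting point — amounts to a simple shift. A small sketch with made-up coordinates:

```python
import numpy as np

def normalize_trajectory(points):
    """Shift a sequence of (x, y) centroids so the track starts at the origin."""
    pts = np.asarray(points, dtype=float)
    return pts - pts[0]

# Hypothetical centroid track over three frames
track = [(120.0, 85.0), (123.5, 88.0), (121.0, 92.5)]
rel = normalize_trajectory(track)  # first point becomes (0, 0)
```

This makes tracks from different cells directly comparable regardless of where each cell started in the field of view.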
Table 1. Comparison of the proposed method (A = |Manual − Proposed Method|) with the active contour method alone, without background correction (B = |Manual − Active Contour Only|).

| Cell Number | A: Average (Pixel) | A: Max Diff. (Pixel) | A: Variance (Pixel) | B: Average (Pixel) | B: Max Diff. (Pixel) | B: Variance (Pixel) |
|---|---|---|---|---|---|---|
| 1 | 0.435 | 2.185 | 0.164 | 0.831 | 2.974 | 0.331 |
| 2 | 0.332 | 1.607 | 0.088 | 0.560 | 1.847 | 0.150 |
| 3 | 0.372 | 1.893 | 0.128 | 0.634 | 2.090 | 0.120 |
| 4 | 0.257 | 1.139 | 0.043 | 0.322 | 1.159 | 0.067 |
| 5 | 0.428 | 2.682 | 0.169 | 0.771 | 4.495 | 0.398 |
| Average | 0.364 | 1.901 | 0.118 | 0.623 | 2.513 | 0.213 |
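The summary statistics in Table 1 can be computed from paired centroid tracks. The sketch below assumes the per-frame error is the Euclidean distance between the manual and automatic centroids (the table itself does not spell out the exact distance definition):

```python
import numpy as np

def tracking_error(manual, automatic):
    """Per-frame Euclidean distance between two centroid tracks,
    summarized as average, maximum difference, and variance (in pixels)."""
    m = np.asarray(manual, dtype=float)
    a = np.asarray(automatic, dtype=float)
    d = np.linalg.norm(m - a, axis=1)
    return d.mean(), d.max(), d.var()

# Hypothetical three-frame tracks for one cell
manual_track = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
auto_track = [(0.0, 1.0), (1.0, 1.0), (2.0, 2.0)]
avg, max_diff, var = tracking_error(manual_track, auto_track)
```

Applying this to each of the five cells against both methods would reproduce the six columns of Table 1.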
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Jo, H.; Han, J.; Kim, Y.S.; Lee, Y.; Yang, S. A Novel Method for Effective Cell Segmentation and Tracking in Phase Contrast Microscopic Images. Sensors 2021, 21, 3516. https://doi.org/10.3390/s21103516