Article

A Combined Deep-Learning and Lattice Boltzmann Model for Segmentation of the Hippocampus in MRI

1 School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
2 School of Electrical Engineering, Binzhou University, Binzhou 256600, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(13), 3628; https://doi.org/10.3390/s20133628
Submission received: 24 May 2020 / Revised: 23 June 2020 / Accepted: 24 June 2020 / Published: 28 June 2020
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors)

Abstract

Segmentation of the hippocampus (HC) in magnetic resonance imaging (MRI) is an essential step for the diagnosis and monitoring of several clinical conditions such as Alzheimer’s disease (AD), schizophrenia and epilepsy. Automatic segmentation of HC structures is challenging due to their small volume, complex shape, low contrast and discontinuous boundaries. The active contour model (ACM) with a statistical shape prior is robust; however, it is difficult to build a shape prior that is general enough to cover all possible shapes of the HC, and such priors suffer from the complicated registration between the shape prior and the target object and from low efficiency. In this paper, we propose a semi-automatic model that combines a deep belief network (DBN) and the lattice Boltzmann (LB) method for the segmentation of the HC. The training process of the DBN consists of unsupervised bottom-up training and supervised training of a top restricted Boltzmann machine (RBM). Given an input image, the trained DBN is utilized to infer the patient-specific shape prior of the HC. This specific shape prior is not only used to determine the initial contour, but is also introduced into the LB model as part of the external force to refine the segmentation. We used a subset of OASIS-1 as the training set and the preliminary release of EADC-ADNI as the testing set. The segmentation results of our method have good correlation and consistency with the manual segmentation results.

1. Introduction

The shape and volume of the hippocampus (HC) are altered in Alzheimer’s disease (AD), schizophrenia and epilepsy, among other conditions [1]. Atrophy of the HC has been shown to be one of the first observable characteristics for the detection of AD or mild cognitive impairment (MCI) [2]. To diagnose these diseases, the HC volume should be easy to obtain and reliably and consistently measurable from MRI. However, the intensity distributions of different brain structures overlap considerably [3]. Not all edges of the HC are visible: the white matter inferior to the HC is not always well resolved, and a large part of its border with the amygdala is usually invisible.
Hippocampal segmentation methods include, but are not limited to, image-based methods, active contour models (ACM) [4], active appearance and shape models [5], atlas models [6] and deep learning methods [7]. Image-based and ACM methods suffer from low robustness and accuracy and require extensive user interaction. Model-based methods such as active appearance and shape models can overcome these problems and reduce user interaction, at the expense of a large training set needed to build a general model. However, it is difficult to build a model that is general enough to cover all possible shapes of the HC. Atlas methods, especially multi-atlas methods, have the advantage of enabling segmentation in individuals with great anatomical variability; their disadvantage is that they require many registration operations, which increases the computational cost. Recently, deep learning methods have been applied to medical image segmentation, but the lack of gold standards for medical image segmentation limits their development. To address this problem, the authors of [8] used the segmentation results of FreeSurfer [9] as training labels for deep convolutional neural networks.
The active contour model with prior information (usually shape and appearance) embedded into the external term [10,11] is robust and can solve the problem of boundary leakage. However, these shape and appearance priors are usually generated from linear combinations of training shapes. The selection of training labels must account for the shape variability of the object to be segmented, and label alignment is a necessary step in building the shape prior. During segmentation, the target in the image may differ from the shape prior in size and orientation, so the shape prior must also be aligned with the target; that is, the target is assumed to be a similarity transformation of the shape prior. Such priors are less effective in handling nonlinear transformations like partial stretching and bending. Using ACMs with these priors, it is hard to model all variations present in the visual object of interest, and the alignment process is time consuming. Therefore, we seek a patient-specific method that models the shape of the object to be segmented with a small number of training samples. The restricted Boltzmann machine (RBM) [12] is a graphical model with a layer of visible units and a layer of hidden units, where connections exist between the two layers but not between the units within each layer; this restriction facilitates inference with the model. As a generative model, the RBM has been used to model brain tumor [13] and lung shape [14]. Considering that the RBM lacks the ability to capture the global properties of complex shapes, some researchers have used deep belief networks (DBN) to capture the global shape prior of the left ventricle endocardium [15,16]. DBNs are composed of several RBMs and can effectively model shape using limited training samples [17,18]. An ACM with prior information from a DBN can reduce the need for a highly complex network structure.
However, a common drawback of traditional solution methods for ACMs is poor efficiency, which makes them problematic for time-critical applications. Recently, the lattice Boltzmann (LB) method has attracted the attention of researchers for its natural parallelism and clear physical meaning [19]. The partial differential equation (PDE) for image processing can be constructed from the LB evolution equation; from this perspective, the LB method can be regarded as an alternative, more efficient solution method for PDEs. On the other hand, the application of the LB method in image processing can be regarded as a diffusion process; we explain this viewpoint in Section 2.3. While LB methods have been successfully applied to image denoising [20], inpainting [21] and segmentation [22,23], there is little research on medical image segmentation. As far as we know, these studies include: segmentation of tumors in 3D ultrasound images [24]; using the LB algorithm to solve the distance regularized level set (DRLS) for the segmentation of giant intracranial aneurysm thrombus [25,26]; embedding local statistical information into the LB external force term to segment brain white matter [27]; and using principal component analysis (PCA) to extract the main components of HC labels as the shape prior of an LB segmentation model [28]. However, using PCA to obtain the shape prior is limited by the assumption that the shape prior follows a Gaussian distribution. This approach also needs alignment and a similarity transformation between the shape prior and the object to be segmented. Thus, the method proposed in [28] yields good results when the HC to be segmented differs little from the training samples, but the error is large when the HC shows serious atrophy. In this work, we propose a DBN-driven LB model for HC segmentation.
Using limited training samples, a DBN can model the shapes of the HC. The combination of DBN and LB is patient specific and can achieve accurate segmentation without a complex network structure. The shape prior inferred from the DBN solves the problem of boundary leakage caused by ambiguous boundaries, while the inferred initial contour reduces the number of iterations needed for segmentation. Compared with a statistical shape prior, our method needs neither label alignment nor registration of the shape prior with the object to be segmented. The segmentation accuracy of DBN-LB_joint is higher than that of the DBN and LB methods used independently, higher than that of PCA-LB_joint and cRBM [29]-LB_joint, and comparable with state-of-the-art methods.

2. Method

2.1. Method Overview

The block diagram of the proposed method is depicted in Figure 1. The method is carried out in two stages: (i) the shape of the HC is inferred using the trained DBN; (ii) the inferred shape is used for initialization and is also incorporated into the LB model for segmentation. The DBN is trained offline to obtain the optimum values of its parameters; after training, we deploy the system to perform the segmentation task. The two stages and the LB method are elaborated below.

2.2. Shape Inferring

We utilize and train a DBN as depicted in Figure 2 to infer the shape of the HC. We exploit the model with the following joint probability:
$P(\mathbf{v}, \mathbf{h}^1, \mathbf{h}^2, L) = P(\mathbf{h}^2, \mathbf{h}^1, L)\, P(\mathbf{h}^1 \mid \mathbf{v}),$
where $\mathbf{v}$ is a vector representation of the input image, $L \in \{0, 1\}$ represents the label of $\mathbf{v}$, $\mathbf{h}$ denotes the hidden variables, and $\log P(\mathbf{h}^2, \mathbf{h}^1, L) \propto -\varepsilon_{\mathrm{RBM}}(\mathbf{h}^2, \mathbf{h}^1, L)$ with
$\varepsilon_{\mathrm{RBM}}(\mathbf{h}^2, \mathbf{h}^1, L) = -(\mathbf{b}^2)^{\mathsf{T}} \mathbf{h}^2 - (\mathbf{a}^1)^{\mathsf{T}} \mathbf{h}^1 - (\mathbf{a}^L)^{\mathsf{T}} L - (\mathbf{h}^2)^{\mathsf{T}} \mathbf{W}^2 \mathbf{h}^1 - (\mathbf{h}^2)^{\mathsf{T}} \mathbf{W}^L L,$
representing the energy function of the RBM. $\mathbf{b}^2$, $\mathbf{a}^1$, $\mathbf{a}^L$ are the bias vectors; $\mathbf{W}^2$ and $\mathbf{W}^L$ are weight matrices. Moreover, we have
$P(\mathbf{h}^2 \mid \mathbf{h}^1) = \prod_j P\big(\mathbf{h}^2(j) = 1 \mid \mathbf{h}^1\big), \quad P\big(\mathbf{h}^2(j) = 1 \mid \mathbf{h}^1\big) = \sigma\big(\mathbf{b}^2(j) + (\mathbf{h}^1)^{\mathsf{T}} \mathbf{W}^2(:, j)\big),$
with $P\big(\mathbf{h}^1(j) = 1 \mid \mathbf{v}\big) = \sigma\big(\mathbf{b}^1(j) + \mathbf{v}^{\mathsf{T}} \mathbf{W}^1(:, j)\big)$, where $\sigma(x) = \frac{1}{1 + e^{-x}}$, the operator $(j)$ returns the $j$-th vector element and $(:, j)$ returns the $j$-th matrix column.
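As a concrete illustration of these conditionals, the following Python/NumPy sketch (our own illustration, not the authors' code) samples one binary layer given the layer below it using the logistic conditional above; the array shapes and toy sizes at the bottom are assumptions made only for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_layer(prev, W, bias, rng):
    """Sample one binary layer given the layer below it.

    prev : (n_prev,) binary state of the lower layer
    W    : (n_prev, n_next) weights, so W[:, j] is the column W(:, j)
    bias : (n_next,) biases b(j)
    """
    # P(unit_j = 1 | prev) = sigma(b(j) + prev^T W(:, j)), evaluated for all j at once
    p = sigmoid(bias + prev @ W)
    sample = (rng.random(p.shape) < p).astype(float)
    return sample, p

# Toy usage with hypothetical sizes (the paper uses a 100x100 input and 1000-unit hidden layers)
rng = np.random.default_rng(0)
v = rng.integers(0, 2, size=64).astype(float)
W1 = 0.01 * rng.standard_normal((64, 32))
b1 = np.zeros(32)
h1, p_h1 = sample_layer(v, W1, b1, rng)
```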
This DBN is trained layer by layer in an unsupervised way by stacking RBMs; the error minimized during this unsupervised training is the reconstruction error of the visible input, and the output of each RBM is used as the "visible" input of the next. Supervised training is applied only to the top RBM, where the segmentation label L is provided as an additional visible input. Inference consists of taking the input image, performing bottom-up inference until reaching the top two layers, then initializing the layer L = 0 and performing Gibbs sampling on the layers h², h¹ and L until convergence.
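The sketch below outlines this inference procedure under the same assumptions: a bottom-up pass through W¹ followed by alternating Gibbs sampling between h² and (h¹, L) at the top RBM. All weight shapes and the fixed number of Gibbs sweeps are illustrative choices, not values from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def infer_shape(v, W1, b1, W2, b2, WL, aL, a1, n_gibbs=200, seed=0):
    """Infer the HC shape layer L from an input image vector v.

    Hypothetical shapes: W1 (n_v, n_h1), W2 (n_h1, n_h2), WL (n_L, n_h2);
    b1, a1 (n_h1,), b2 (n_h2,), aL (n_L,).
    """
    rng = np.random.default_rng(seed)
    # Bottom-up pass: activate h1 from the input image
    h1 = sigmoid(b1 + v @ W1)
    # Initialize the label/shape layer to zeros, as described in the text
    L = np.zeros(WL.shape[0])
    for _ in range(n_gibbs):
        # Sample h2 given h1 and L (both are visible to the top RBM)
        p_h2 = sigmoid(b2 + h1 @ W2 + L @ WL)
        h2 = (rng.random(p_h2.shape) < p_h2).astype(float)
        # Sample h1 and update L given h2
        p_h1 = sigmoid(a1 + h2 @ W2.T)
        h1 = (rng.random(p_h1.shape) < p_h1).astype(float)
        L = sigmoid(aL + h2 @ WL.T)   # keep L as probabilities (mean-field estimate)
    return L
```

In practice, the returned probabilities would be thresholded and reshaped to a 100 × 100 mask, giving the shape mask s used in Section 2.4.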

2.3. Explanation of Lattice Boltzmann Method

LB methods are usually composed of two steps: a collision step and a streaming step. The collision step is shown in Figure 3a: particles collide and their velocities change; as a result, the particle density functions are redistributed in each grid cell. As shown in Figure 3b, the streaming step is the movement of particles after the collision step. The LB evolution equation for image processing is redefined in [19]:
$I_\alpha(\mathbf{r} + \mathbf{e}_\alpha \Delta t,\, t + \Delta t) - I_\alpha(\mathbf{r}, t) = \frac{1}{\tau(I)}\left[I_\alpha^{eq}(\mathbf{r}, t) - I_\alpha(\mathbf{r}, t)\right] + \Delta t\, F_\alpha,$
where $I(\mathbf{r}, t)$ denotes the gray level of a pixel $\mathbf{r} = (x, y)$ at time $t$; it is treated as the mass of sub-pixels. In Figure 3d, the square in the solid box represents a pixel, and the square in the dotted box represents a sub-pixel. $I_\alpha(\mathbf{r}, t)$ is the gray distribution function on the sub-pixel in direction $\mathbf{e}_\alpha$. $I_\alpha^{eq}(\mathbf{r}, t)$ represents the equilibrium distribution function describing the predicted value of the redistribution. $\Delta t$ is the time step, $\tau$ is the relaxation time and $F_\alpha$ is the external force along direction $\mathbf{e}_\alpha$. Each pixel $\mathbf{r}$ has nine nearest neighbors (including itself). The lattice vector $\mathbf{e}_\alpha$ is the location of each sub-pixel, defined as:
$\mathbf{e}_\alpha = \begin{cases} (0, 0), & \alpha = 0 \\ c\left(\cos\left[\frac{(\alpha - 1)\pi}{2}\right], \sin\left[\frac{(\alpha - 1)\pi}{2}\right]\right), & \alpha = 1, 2, 3, 4 \\ \sqrt{2}\, c\left(\cos\left[\frac{(2\alpha - 1)\pi}{4}\right], \sin\left[\frac{(2\alpha - 1)\pi}{4}\right]\right), & \alpha = 5, 6, 7, 8 \end{cases}$
During the collision step, $I_\alpha$ tends toward $I_\alpha^{eq}$. In this step the gray level on each sub-pixel is divided into two parts: one part is redistributed, and the other part stays at the original sub-pixel. The collision equation can be expressed as:
$I_\alpha(\mathbf{r}, t + \Delta t) - I_\alpha(\mathbf{r}, t) = \frac{1}{\tau(I)}\left[I_\alpha^{eq}(\mathbf{r}, t) - I_\alpha(\mathbf{r}, t)\right].$
In the streaming step, the gray level on each sub-pixel is updated by the gray level from neighboring sub-pixels. Because $I = \sum_\alpha I_\alpha$, the gray level of the pixels changes. The streaming equation is:
$I_\alpha(\mathbf{r} + \mathbf{e}_\alpha \Delta t,\, t + \Delta t) = I_\alpha(\mathbf{r}, t + \Delta t).$
In order to improve computational efficiency, we choose the D2Q5 lattice shown in Figure 3e and apply the Chapman–Enskog expansion to Equation (4), which gives:
$\frac{\partial I}{\partial t} = \operatorname{div}\left(\frac{2}{5}\left(\tau(I) - \frac{1}{2}\right) \nabla I\right) + 5 F_\alpha.$
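For readers who want to see how the collision and streaming steps translate into code, the following NumPy sketch implements one step of a D2Q5 scheme with a uniform 1/5 equilibrium (as used later in Algorithm 1). Periodic boundaries via np.roll and the equal splitting of the force over the five directions are simplifying assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

# D2Q5 lattice directions: rest, +x, +y, -x, -y
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1]])

def lb_step(f, tau, F):
    """One collision + streaming step on a D2Q5 lattice.

    f   : (5, H, W) per-direction distribution functions I_alpha
    tau : (H, W) relaxation-time field tau(I)
    F   : (H, W) external force, split equally over the five directions
    """
    rho = f.sum(axis=0)                        # macroscopic field, I = sum_alpha I_alpha
    feq = rho[None, :, :] / 5.0                # uniform equilibrium, I_alpha^eq = I / 5
    f_post = f + (feq - f) / tau[None, :, :] + F[None, :, :] / 5.0   # collision + forcing
    for a, (ex, ey) in enumerate(E):           # streaming: shift along each lattice direction
        f_post[a] = np.roll(f_post[a], shift=(int(ey), int(ex)), axis=(0, 1))
    return f_post
```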

2.4. Lattice Boltzmann Model Driven by DBN

The flowchart of the LB segmentation model is depicted in Figure 4. The shape inferred from the DBN is used not only as part of the external force term of the LB segmentation model, but also to determine the position of the initial contour. We construct an energy function that includes a weighted gradient term, a weighted region term and a shape energy term, defined as:
$L_g(\phi) = \int_\Omega g\, \delta(\phi)\, |\nabla \phi|\, dx,$
$A_g(\phi) = \int_\Omega g\, H(\phi)\, dx,$
$S(\phi, \psi) = \int_\Omega \left(H(\phi) - H(\psi)\right)^2 dx,$
where $\nabla$ is the gradient operator, $g(x, y) = 1 / \left(1 + |\nabla G_\sigma * I(x, y)|^2\right)$ is an edge indicator with $G_\sigma$ a Gaussian kernel and $I$ the image gray level. $\psi$ is the shape prior inferred from the trained DBN, defined as:
$\psi(x, y) = \begin{cases} c, & s(x, y) = 1 \\ -c, & s(x, y) = 0 \end{cases}$
where $s$ represents the shape mask inferred from the DBN. $H$ is the Heaviside function and $\delta$ is the Dirac delta function:
$H_\varepsilon(x) = \begin{cases} \frac{1}{2}\left[1 + \frac{x}{\varepsilon} + \frac{1}{\pi} \sin\left(\frac{\pi x}{\varepsilon}\right)\right], & |x| \le \varepsilon \\ 1, & x > \varepsilon \\ 0, & x < -\varepsilon \end{cases}$
$\delta_\varepsilon(x) = \begin{cases} \frac{1}{2\varepsilon}\left[1 + \cos\left(\frac{\pi x}{\varepsilon}\right)\right], & |x| \le \varepsilon \\ 0, & |x| > \varepsilon \end{cases}$
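As a small helper sketch (ours, with an illustrative ε and Gaussian width, not values from the paper), the edge indicator g and the regularized Heaviside and Dirac functions above can be computed as follows.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_indicator(image, sigma=1.5):
    """g = 1 / (1 + |grad(G_sigma * I)|^2): close to 0 on edges, close to 1 in flat regions."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    return 1.0 / (1.0 + gx ** 2 + gy ** 2)

def heaviside_eps(x, eps=1.5):
    """Regularized Heaviside H_eps."""
    h = 0.5 * (1.0 + x / eps + np.sin(np.pi * x / eps) / np.pi)
    return np.where(x > eps, 1.0, np.where(x < -eps, 0.0, h))

def dirac_eps(x, eps=1.5):
    """Regularized Dirac delta, the derivative of H_eps."""
    d = (1.0 + np.cos(np.pi * x / eps)) / (2.0 * eps)
    return np.where(np.abs(x) <= eps, d, 0.0)
```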
The energy function is:
$E = \lambda L_g(\phi) + \alpha A_g(\phi) + \mu S(\phi, \psi),$
where $\lambda > 0$, $\mu > 0$ and $\alpha \in \mathbb{R}$. Using the gradient descent flow method to minimize the energy function, we get the following PDE:
$\frac{\partial \phi}{\partial t} = \lambda\, \delta_\varepsilon(\phi)\, \operatorname{div}\!\left(g \frac{\nabla \phi}{|\nabla \phi|}\right) + \alpha\, g\, \delta_\varepsilon(\phi) + \mu\, \delta_\varepsilon(\phi)\left(H_\varepsilon(\phi) - H_\varepsilon(\psi)\right),$
Since the distance function satisfies $|\nabla \phi| = 1$, Equation (16) becomes:
$\frac{\partial \phi}{\partial t} = \lambda\, \delta_\varepsilon(\phi)\, \operatorname{div}(g \nabla \phi) + F,$
where:
$F = \alpha\, g\, \delta_\varepsilon(\phi) + \mu\, \delta_\varepsilon(\phi)\left(H_\varepsilon(\phi) - H_\varepsilon(\psi)\right).$
Comparing Equations (8) and (16), we get:
$\tau = 5 \lambda\, g\, \delta_\varepsilon(\phi) + \frac{1}{2},$
$F_\alpha = \frac{\alpha\, g\, \delta_\varepsilon(\phi) + \mu\, \delta_\varepsilon(\phi)\left(H_\varepsilon(\phi) - H_\varepsilon(\psi)\right)}{5}.$
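Putting Equations (19) and (20) together, a hedged sketch of the relaxation-time and force fields, written exactly as reconstructed above and with the parameter values reported in Section 3.1 as defaults, could look like this; the helper definitions are repeated so the snippet is self-contained, and ε is an illustrative assumption.

```python
import numpy as np

def _H(x, eps):      # regularized Heaviside from Section 2.4
    h = 0.5 * (1.0 + x / eps + np.sin(np.pi * x / eps) / np.pi)
    return np.where(x > eps, 1.0, np.where(x < -eps, 0.0, h))

def _delta(x, eps):  # regularized Dirac delta from Section 2.4
    return np.where(np.abs(x) <= eps, (1.0 + np.cos(np.pi * x / eps)) / (2.0 * eps), 0.0)

def relaxation_and_force(phi, psi, g, lam=10.0, alpha=0.5, mu=50.0, eps=1.5):
    """Relaxation time tau (Equation (19)) and per-direction force F_alpha (Equation (20)).

    phi : current level-set field; psi : shape-prior level set from the DBN;
    g   : edge indicator. Default weights follow Section 3.1.
    """
    d = _delta(phi, eps)
    tau = 5.0 * lam * g * d + 0.5
    F_alpha = (alpha * g * d + mu * d * (_H(phi, eps) - _H(psi, eps))) / 5.0
    return tau, F_alpha
```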

2.5. Algorithm

The lattice Boltzmann method in image segmentation can be interpreted as the evolution of the iso-density line (i.e., the initial contour) under the action of the internal force (i.e., diffusion) and the external force (i.e., gradient, region and shape prior). The procedure of the DBN-driven LB method for image segmentation is given in Algorithm 1 below.
Algorithm 1. DBN-Driven LB Method for Image Segmentation
1: Set the initial position of the evolving curve $C$ and define the level set function $\phi$ as a signed distance function, such as:
$\phi(\mathbf{r}, 0) = \begin{cases} c, & \mathbf{r} \in C_{in} \\ 0, & \mathbf{r} \in C \\ -c, & \mathbf{r} \in C_{out} \end{cases}$
where $\mathbf{r}$ is the position of a pixel in the image, $c > 0$ is a constant, and $C_{in}$ and $C_{out}$ denote the regions inside and outside the evolving curve $C$, respectively
2: Initialize the local equilibrium distribution function $\phi_\alpha^{eq}(\mathbf{r}, 0) = \frac{1}{5}\phi(\mathbf{r}, 0)$ and compute the relaxation parameter $\tau$ with Equation (19)
3: Compute the external force term and discretize it with Equation (20)
4: Update the evolving curve and $\phi(\mathbf{r}, t) = \sum_\alpha \phi_\alpha(\mathbf{r}, t)$ after the collision and streaming steps described in Equations (6) and (7), respectively
5: If the segmentation is not done, jump back to step (2)
6: Output the segmentation result
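A compact NumPy sketch of Algorithm 1 is given below. It follows the steps literally, but the handling of boundaries (periodic via np.roll), the fixed iteration count used as the stopping rule and the sign conventions for ϕ and ψ are our reading of the algorithm, not the authors' implementation.

```python
import numpy as np

def segment_lb_dbn(g, psi, init_mask, c=2.0, eps=1.5,
                   lam=10.0, alpha=0.5, mu=50.0, n_iter=6):
    """Sketch of Algorithm 1: DBN-driven LB segmentation.

    g         : edge indicator computed from the input image
    psi       : shape-prior level set inferred by the DBN (+c inside, -c outside)
    init_mask : binary mask giving the initial contour (e.g., thresholded DBN shape)
    """
    H = lambda x: np.where(x > eps, 1.0, np.where(x < -eps, 0.0,
                  0.5 * (1.0 + x / eps + np.sin(np.pi * x / eps) / np.pi)))
    delta = lambda x: np.where(np.abs(x) <= eps,
                  (1.0 + np.cos(np.pi * x / eps)) / (2.0 * eps), 0.0)
    E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1]])     # D2Q5 directions

    # Step 1: piecewise-constant signed initialization of phi
    phi = np.where(init_mask > 0, c, -c).astype(float)
    # Step 2: equilibrium initialization, phi_alpha = phi / 5
    f = np.repeat(phi[None, :, :] / 5.0, 5, axis=0)

    for _ in range(n_iter):                                      # Step 5: repeat until done
        d = delta(phi)
        tau = 5.0 * lam * g * d + 0.5                            # Step 2: relaxation time, Eq. (19)
        Fa = (alpha * g * d + mu * d * (H(phi) - H(psi))) / 5.0  # Step 3: force term, Eq. (20)
        feq = phi[None, :, :] / 5.0
        f = f + (feq - f) / tau[None, :, :] + Fa[None, :, :]     # Step 4: collision
        for a, (ex, ey) in enumerate(E):                         # Step 4: streaming
            f[a] = np.roll(f[a], shift=(int(ey), int(ex)), axis=(0, 1))
        phi = f.sum(axis=0)                                      # recover phi = sum_alpha phi_alpha
    return phi > 0                                               # Step 6: segmented region
```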

3. Experiments

3.1. Data Preparation and Experimental Setup

We use a subset of OASIS-1 (http://www.oasis-brains.org/) as the training set of the DBN. This subset was chosen to cover the entire age span of the subjects and to include subjects with different degrees of dementia. A professional radiologist provided manual segmentations of this subset, which are publicly available [30]. The selected subset consists of 23 right-handed subjects. We sliced the 3D images and their labels along the sagittal direction; in this direction, each subject has about 20 2D slices containing the HC. Each 2D slice and its corresponding label was rotated by up to 60 degrees in steps of 10 degrees, giving seven orientations per slice and a total of 3220 training images. The images are rotated so that the model learns the HC at all possible orientations during training. In order to improve computational efficiency, we manually cropped the training images to a size of 100 × 100 pixels. The dataset used for testing is the preliminary release of EADC-ADNI (http://www.hippocampal-protocol.net/), which has manual segmentations for a subset of N = 100 subjects: 34 patients with MCI, 29 normal control (NC) subjects and 37 patients with AD.
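The rotation-based augmentation described above can be sketched as follows; the use of scipy.ndimage.rotate, the interpolation orders and the centered crop are our assumptions, since the authors cropped the images manually.

```python
import numpy as np
from scipy.ndimage import rotate

def augment_slice(image, label, angles=range(0, 70, 10), crop=100):
    """Rotate one sagittal slice and its label in 10-degree steps (0-60 degrees)
    and center-crop to crop x crop pixels."""
    pairs = []
    for angle in angles:
        img_r = rotate(image, angle, reshape=False, order=1)
        lab_r = rotate(label, angle, reshape=False, order=0)   # nearest-neighbour keeps labels binary
        cy, cx = img_r.shape[0] // 2, img_r.shape[1] // 2
        win = np.s_[cy - crop // 2: cy + crop // 2, cx - crop // 2: cx + crop // 2]
        pairs.append((img_r[win], lab_r[win]))
    return pairs   # 7 pairs per slice; 23 subjects x ~20 slices x 7 orientations = 3220 images
```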
The experimental environment is Matlab R2014b running on a PC with an Intel(R) Core(TM) i5-3230M processor (2.6 GHz) and 4 GB of RAM. The parameters in Equation (16) are λ = 10, α = 0.5 and μ = 50, chosen on the training data by considering both the stability of the curve evolution and the effectiveness of the shape-prior guidance. The DBN has two hidden layers of 1000 nodes each, and the input and segmentation layers have size 100 × 100.
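For reference, the configuration described in this subsection can be summarized as a small hypothetical settings block; the names below are ours, not identifiers from the authors' code.

```python
# Hypothetical summary of the experimental configuration in Section 3.1
DBN_LAYERS = [100 * 100, 1000, 1000]      # visible (100x100 slice) -> h1 -> h2
LABEL_UNITS = 100 * 100                   # segmentation/label layer attached to the top RBM
LB_WEIGHTS = {"lambda": 10.0, "alpha": 0.5, "mu": 50.0}   # weights in Equation (16)
```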

3.2. Validation Framework and Evaluation Measures

Firstly, to show the positive effect of the shape prior on the segmentation results, we subjectively compare the previous LB model and the DRLS method with our method. Secondly, to verify that the segmentation method is sensitive to changes in the shape and size of the object, we carry out experiments on synthetic fish images and HC images; the synthetic fish images have different shapes and sizes, and the HC images include AD, MCI and NC groups with different HC volumes. To verify that our method can segment objects with occlusion or missing parts, we carry out experiments on synthetic ellipse images, which include a partially occluded ellipse and a partially missing ellipse. We also compare the segmentation results with the ground truth and show the segmentation results of 20 subjects. Thirdly, we use the Dice coefficient as a measure and compare our method with several other methods. Finally, we study the correlation and consistency between several different methods and the ground truth.

4. Results

4.1. Positive Effect of Shape Prior

To demonstrate the positive effect of the shape prior, we randomly choose three samples from EADC-ADNI and subjectively compare the segmentation results of the proposed method with the previous LB method proposed in [31] and the DRLS method proposed in [32]. The first is an alternative to the Chan–Vese segmentation model and contains no shape term. The second is a development of the level set method with an additional regularization term in the energy function to avoid shape irregularities and instability during the level set evolution while eliminating the need for re-initialization. Figure 5 shows the segmentation results of these two methods and of our method on three randomly selected images: (a) the segmentation results of [31]; (b) the segmentation results of [32], which show serious boundary leakage; and (c) the segmentation results of the proposed method, where the green line is the contour drawn by the expert and the red line is the contour of our method. From Figure 5, we can see that the LB and DRLS methods without prior knowledge cannot segment the HC correctly, while the results of our method are close to the ground truth. It is noteworthy that the segmentation time for one slice is less than two seconds for the LB-based methods, while it is about 20 s for the DRLS-based method.

4.2. Sensitivity to Shape and Size Change

4.2.1. Experiments on Synthetic Images

The segmentation results for the ellipses are shown in Figure 6; the experiments show that our method can correctly segment objects with partial occlusion and missing parts. The segmentation results for the fishes are shown in Figure 7; these experiments show that our method is sensitive to shape and scale variation. Note that the initial contour needs only six iterations to reach the edge of the object.

4.2.2. Experiments on Hippocampus Images

To verify the sensitivity of our method to HC structural changes, we segmented images of the AD, MCI and NC groups of EADC-ADNI. We chose the largest HC slice on the sagittal plane and calculated the average area for each of the three groups separately. Figure 8 shows boxplots of the manual and automated segmentations of the three groups. Although the mean areas and area ranges of the three groups after segmentation differ from those of the gold standard, the mean values of the three groups are significantly different from one another, which suggests that our method is sensitive to volume changes of the HC. Figure 9 shows the segmentation results of some samples; the green line is the ground truth and the red line is the result of our method. From Figure 9 we can see that the results of our method are satisfactory. The segmentation results of samples 2, 8, 14, 15 and 16 are a little worse; all five of these samples have severe brain atrophy.

4.3. Comparison with Other Methods

In Section 4.1 we showed that the LB model without a shape prior cannot segment the HC correctly. In this part, we compare our method (DBN-LB_joint) with DBN, PCA-LB_joint [28] and cRBM [29]-LB_joint. We then compare our method with state-of-the-art methods, including classifier-based [33], atlas-based [34] and deep-learning-based [8] methods. The test set used in [8] is the final release of EADC-ADNI with 135 samples; the results of [33] in Table 1 were obtained on 50 samples of the preliminary release. The Dice similarity coefficient is used as the evaluation standard and is defined as:
$D = \frac{2 |A \cap B|}{|A| + |B|},$
where $A$ is the estimated volume and $B$ is the actual volume.
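A minimal implementation of this measure on binary masks might look as follows (our own sketch).

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    """Dice similarity coefficient between a predicted and a manual binary mask."""
    a = np.asarray(pred_mask, dtype=bool)
    b = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```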
Table 1 shows the means and standard deviations of the Dice coefficient for the proposed method and the other six methods on EADC-ADNI. The proposed method achieved an average Dice coefficient of 0.87, higher than that of DBN alone, and its standard deviation of ±0.05 is lower than that of DBN; this means the combination of DBN and the LB method is effective. Compared with the PCA-LB_joint and cRBM-LB_joint methods, our method has a higher Dice coefficient and a lower standard deviation, which means the shape prior inferred from the DBN is superior to those from PCA and cRBM and that our method is sensitive to changes in HC structure between samples. The Dice coefficient of our method is higher than that of [8], because that method used FreeSurfer's outputs as training labels. The multiple random forest classifier method and the multi-atlas method achieve better results than our method; as the most popular method for brain image segmentation, the multi-atlas method has the lowest standard deviation of the Dice coefficient.
Execution time is an important factor in the performance of a segmentation method. PCA-LB_joint, cRBM-LB_joint, DBN-LB_joint and DRLS can all be regarded as an active contour converging to the target boundary, so their execution times are compared directly in this paper. In our method, the initial contour is very close to the HC edge to be segmented and the shape prior guides the contour evolution, so few iterations are needed for convergence. The shape prior generated from cRBM is not as close to the HC to be segmented, so the cRBM-LB_joint method needs more iterations to converge. The PCA-LB_joint method requires registration between the shape prior and the HC to be segmented, and this registration accounts for a large proportion of the total time. DBN-based segmentation, used for rough segmentation, is only a part of our proposed method; it does not need an iterative process and therefore takes the shortest time. A comparison of the computational cost of the above methods is shown in Table 2. Compared with traditional methods for solving the PDE, the LB method is naturally parallel and faster. The high computational time of the multi-atlas method is due to the need to register the target image with several different atlases; for the method proposed in [35], the reported computational time for one volume can be up to eight hours. The time complexity of a random forest is related to the depth and width of its decision trees. For convolutional neural networks, the time complexity of each convolutional layer is determined by the area of the output feature map, the convolution kernel size and the numbers of input and output channels. We will compare the execution times of these three methods in future work.

4.4. Correlation and Consistency

Figure 10 shows scatter plots of the volumes estimated by DBN-LB_joint, cRBM-LB_joint, PCA-LB_joint and DBN versus the manually measured volumes. We observe a clear correlation between the four methods and the ground truth, together with a number of outliers. The volumetric intraclass correlation coefficients (ICCs) of the four methods are 0.97, 0.94, 0.93 and 0.92, respectively, and all are statistically significant. Compared with the other three methods, our method has the highest ICC and the fewest outliers.
Furthermore, the agreement between the automatically and manually segmented volumes was studied using Bland–Altman analysis (Figure 11). The cRBM-LB_joint, PCA-LB_joint and DBN methods all present an overestimation bias. In addition, PCA-LB_joint and DBN show a slight tendency to underestimate small volumes, while the cRBM-LB_joint method shows a slight tendency to overestimate large volumes. The DBN-LB_joint method has a much lower bias than the other three methods, which indicates that the volumes calculated by DBN-LB_joint are closer to the manually segmented ones.
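For completeness, the sketch below computes the Bland–Altman bias and 95% limits of agreement and a Pearson correlation between automated and manual volumes; the Pearson coefficient is a simpler stand-in for the volumetric ICC reported above, not the same statistic.

```python
import numpy as np

def bland_altman(auto_vol, manual_vol):
    """Bland-Altman bias and 95% limits of agreement between automated and manual volumes."""
    auto_vol = np.asarray(auto_vol, dtype=float)
    manual_vol = np.asarray(manual_vol, dtype=float)
    diff = auto_vol - manual_vol                 # positive mean difference = overestimation bias
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, bias - spread, bias + spread

def volume_correlation(auto_vol, manual_vol):
    """Pearson correlation between automated and manual volumes (a simple stand-in
    for the intraclass correlation coefficient reported in the paper)."""
    return np.corrcoef(auto_vol, manual_vol)[0, 1]
```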

5. Discussion

The DRLS and LB models without a shape prior cannot segment the HC correctly because its boundary contains regions with rich, weak and missing gradients. Our model uses both the global constraint of the shape and the local information of the target image. The global constraint ensures that our model can handle different types of images, including partial occlusion, missing parts and weak boundaries, while the local information gives our model the ability to handle objects with shape and scale variation.
Experiments on the synthetic and HC images show that our model is sensitive to shape and scale changes and can segment objects with partial occlusion and missing parts. Although the segmentation of the HC in patients with severe brain atrophy is slightly worse, the statistical results are satisfactory. We are confident that this problem can be addressed by increasing the number of hidden layers and selecting the training samples appropriately.
The fact that the segmentation accuracy of DBN-LB_joint is higher than that of DBN and the LB method used independently means that this combination is effective. The fact that the accuracy of our method is higher than that of PCA-LB_joint and cRBM-LB_joint means the shape prior generated from the DBN is superior to those generated from PCA and cRBM. The reason is that PCA extracts several principal components from a set of training images as the shape prior, which inevitably loses useful information, while the cRBM lacks the ability to receive feedback from a higher layer to a lower layer and may generate a sub-optimal shape model. Compared with cRBM, the DBN can capture more global information about the HC shape. The segmentation results of our method are worse than those of the atlas-based method because the latter relies on mature templates.

6. Conclusions

We have proposed a DBN-driven LB model for HC segmentation. Compared with a statistical shape prior, the shape prior inferred from the DBN is patient specific; our method needs neither training label alignment nor registration between the shape prior and the target object. Our method is sensitive to changes of the HC structure and has good correlation and consistency with the results of manual segmentation. While satisfactory segmentation results were achieved in this paper, several directions deserve further study. (1) We will study a method that can automatically locate the HC to generate the training set of the DBN. (2) The number of layers and the number of units per hidden layer of the DBN are currently determined manually; we will study an optimization method that can determine these parameters automatically. (3) Considering the natural parallelism of the LB method, we plan to apply it directly to 3D HC segmentation. (4) Furthermore, we plan to use generative models such as the RBM, DBN or DBM to obtain occlusion-sensitive maps of organs, which play an important role in medical image analysis.

Author Contributions

Idea design, performance of the experiments, analysis of the data and writing of the manuscript, Y.L. Review and editing, Z.Y. All authors have read and approved the final manuscript.

Funding

This research was funded by Shandong Provincial Natural Science Foundation, grant number ZR2015FL004; A Project of Shandong Province Higher Educational Science and Technology Program, grant number J18KA394; Binzhou University Scientific Research Fund Project, grant number BZXYG1312.

Acknowledgments

The authors would like to thank all colleagues involved in this study. In particular, we would like to thank Jiehui Jiang, Ting Shen and Yaowen Zhang.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dill, V.; Franco, A.R.; Pinho, M.S. Automated methods for hippocampus segmentation: The evolution and a review of the state of the art. Neuroinformatics 2015, 13, 133–150. [Google Scholar] [CrossRef] [PubMed]
  2. Bron, E.E.; Smits, M.; van der Flier, W.M.; Vrenken, H.; Barkhof, F.; Scheltens, P.; Papma, J.M.; Steketee, R.M.E.; Orellana, C.M.; Meijboom, R.; et al. Standardized evaluation of algorithms for computer-aided diagnosis of dementia based on structural MRI: The CADDementia challenge. NeuroImage 2015, 111, 562–579. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Frost, R.; Wighton, P.; Karahanoğlu, F.I.; Robertson, R.L.; Grant, P.E.; Fischl, B.; Tisdall, M.D.; van der Kouwe, A. Markerless high-frequency prospective motion correction for neuroanatomical MRI. Magn. Reson. Med. 2019, 82, 126–144. [Google Scholar] [CrossRef] [PubMed]
  4. Zarpalas, D.; Gkontra, P.; Daras, P.; Maglaveras, N. Gradient-based reliability maps for ACM-based segmentation of hippocampus. IEEE Trans. Bio-Med. Eng. 2014, 61, 1015–1026. [Google Scholar] [CrossRef] [PubMed]
  5. Hu, S.Y.; Coupé, P.; Pruessner, J.C.; Collins, D.L. Appearance-based modeling for segmentation of Hippocampus and Amygdala using multi-contrast MR imaging. Neuroimage 2011, 58, 549–559. [Google Scholar] [CrossRef] [Green Version]
  6. Zhu, H.C.; Cheng, H.W.; Yang, X.S.; Fan, Y. Alzheimer’s Disease Neuroimaging Initiative. Metric learning for multi-atlas based segmentation of hippocampus. Neuroinformatics 2017, 15, 41–50. [Google Scholar] [CrossRef] [Green Version]
  7. Chen, Y.N.; Shi, B.B.; Wang, Z.W.; Zhang, P.; Smith, C.D.; Liu, J.D. Hippocampus segmentation through multi-view ensemble ConvNets. In Proceedings of the 14th International Symposium on Biomedical Imaging, Melbourne, Australia, 18–21 April 2017; IEEE: New York, NY, USA, 2017; pp. 192–196. [Google Scholar]
  8. Thyreau, B.; Sato, K.; Fukuda, H.; Taki, Y. Segmentation of the hippocampus by transferring algorithmic knowledge for large cohort processing. Med. Image Anal. 2018, 43, 214–228. [Google Scholar] [CrossRef]
  9. Fischl, B. FreeSurfer. Neuroimage 2012, 62, 774–781. [Google Scholar] [CrossRef] [Green Version]
  10. Eltanboly, A.; Ghazal, M.; Hajjdiab, H.; Shalaby, A.; Switala, A.; Mahmoud, A.; Sahoo, P.; EL-Azab, M.; El-Baz, A. Level sets-based image segmentation approach using statistical shape priors. Appl. Math. Comput. 2019, 340, 164–179. [Google Scholar] [CrossRef]
  11. Zarpalas, D.; Gkontra, P.; Daras, P.; Maglaveras, N. Accurate and fully automatic hippocampus segmentation using subject-specific 3D optimal local maps into a hybrid active contour model. IEEE J. Transl. Eng. Health Med.-JTEHM. 2014, 2, 1800116. [Google Scholar] [CrossRef]
  12. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [Green Version]
  13. Agn, M.; Puonti, O.; Rosenschöld, P.M.; Law, I.; Van Leemput, K. Brain tumor segmentation using a generative model with an RBM prior on tumor shape. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries; Crimi, A., Menze, B., Maier, O., Reyes, M., Handels, H., Eds.; Springer: Cham, Switzerland, 2016; Volume 9556, pp. 168–180. [Google Scholar]
  14. Zhang, H.; Zhang, S.T.; Li, K.; Metaxas, D.N. Robust shape prior modeling based on Gaussian-Bernoulli restricted Boltzmann Machine. In Proceedings of the 11th IEEE International Symposium on Biomedical Imaging (ISBI), Beijing, China, 29 April–2 May 2014; IEEE: New York, NY, USA, 2010; pp. 270–273. [Google Scholar]
  15. Ngo, T.A.; Lu, Z.; Carneiro, G. Combining deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance. Med. Image Anal. 2017, 35, 159–171. [Google Scholar] [CrossRef]
  16. Avendi, M.R.; Kheradvar, A.; Jafarkhani, H. A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI. Med. Image Anal. 2016, 30, 108–119. [Google Scholar] [CrossRef] [Green Version]
  17. Carneiro, G.; Nascimento, J.C. Combining multiple dynamic models and deep learning architectures for tracking the left ventricle endocardium in ultrasound data. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 2592–2607. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Fasel, I.; Berry, J. Deep belief networks for real-time extraction of tongue contours from ultrasound during speech. In Proceedings of the 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; IEEE: New York, NY, USA, 2010; pp. 1493–1496. [Google Scholar]
  19. Yan, Z.Z.; Sun, Y.B.; Jiang, J.H.; Wen, J.L.; Lin, X. Novel explanation, modeling and realization of lattice Boltzmann methods for image processing. Multidimens. Syst. Signal Process. 2015, 26, 645–663. [Google Scholar] [CrossRef]
  20. Chen, J.H.; Chai, Z.H.; Shi, B.C.; Zhang, W.H. Lattice Boltzmann method for filtering and contour detection of the natural images. Comput. Math. Appl. 2014, 68, 257–268. [Google Scholar] [CrossRef]
  21. Liji, R.F.; Sasikumar, M.; Sreejaya, P.; Seelan, K.J. A comparative study and analysis of lattice Boltzmann method and exemplar method for still color image inpainting technique. In Proceedings of the 2nd International Conference on Intelligent Computing, Instrumentation and Control Technologies, Kannur, India, 5–6 July 2019; IEEE: New York, NY, USA, 2019; pp. 45–48. [Google Scholar]
  22. Li, C.; Balla-Arabé, S.; Ginhac, D.; Yang, F. Embedded implementation of VHR satellite image segmentation. Sensors 2016, 16, 771. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Wang, D.W. A fast hybrid level set model for image segmentation using lattice Boltzmann method and sparse field constraint. Int. J. Pattern Recognit. Artif. Intell. 2018, 32, 1854015. [Google Scholar] [CrossRef]
  24. Nguyen, K.L.; Tekitek, M.M.; Delachartre, P.; Berthier, M. Multiple relaxation time lattice Boltzmann models for multigrid phase-field segmentation of tumors in 3D ultrasound images. SIAM J. Imaging Sci. 2019, 12, 1324–1346. [Google Scholar] [CrossRef]
  25. Chen, Y.; Navarro, L.; Wang, Y.; Courbebaisse, G. Segmentation of the thrombus of giant intracranial aneurysms from CT angiography scans with lattice Boltzmann method. Med. Image Anal. 2014, 18, 1–8. [Google Scholar] [CrossRef]
  26. Wang, Y.; Navarro, L.; Zhang, Y.; Kao, E.; Zhu, Y.M.; Courbebaisse, G. Intracranial aneurysm phantom segmentation using a 4D lattice Boltzmann method. Comput. Sci. Eng. 2017, 19, 56–67. [Google Scholar] [CrossRef]
  27. Wen, J.L.; Jiang, J.H.; Yan, Z.Z. A new lattice Boltzmann algorithm for assembling local statistical information with MR brain imaging segmentation applications. Multidimens. Syst. Signal Process. 2017, 28, 1611–1627. [Google Scholar] [CrossRef]
  28. Wen, J.L. Hippocampus MRI Segmentation: A Method Based on Lattice Boltzmann Model. Ph.D. Thesis, Shanghai University, Shanghai, China, 2016. [Google Scholar]
  29. Agn, M.; Law, I.; Af Rosenschöld, P.M.; Van Leemput, K. A generative model for segmentation of tumor and organs-at-risk for radiation therapy planning of glioblastoma patients. In Medical Imaging 2016: Image Processing, Proceedings of the Conference on Medical Imaging-Image Processing, San Diego, CA, USA, 1–3 Mar 2016; SPIE International Society for Optical Engineering: Bellingham, WA, USA, 2016; p. 97841D. [Google Scholar]
  30. Hippocampus Segmentation Masks from Brain MRIs, Segmentation Masks of the Hippocampus from 23 Randomly Selected Images from the OASIS Dataset. Available online: http://vcl.iti.gr/hippocampus-segmentation/ (accessed on 27 December 2019).
  31. Wang, Z.Q.; Yan, Z.Z.; Chen, G. Lattice Boltzmann method of active contour for image segmentation. In Proceedings of the 6th International Conference on Image and Graphics, Hefei, China, 12–15 August 2011; IEEE: New York, NY, USA, 2011; pp. 338–343. [Google Scholar]
  32. Thivya Roopini, I.; Vasanthi, M.; Rajinikanth, V.; Rekha, M.; Sangeetha, M. Segmentation of tumor from brain MRI using fuzzy entropy and distance regularised level set. In Computational Signal Processing and Analysis, Proceedings of the International Conference on NextGen Electronic Technologies: Silicon to Software, VIT University, Chennai, India, 23–25 March 2017; Nandi, A., Sujatha, N., Menaka, R., Alex, J., Eds.; Springer: Singapore, 2018; pp. 297–304. [Google Scholar]
  33. Inglese, P.; Amoroso, N.; Boccardi, M.; Bocchetta, M.; Bruno, S.; Chincarini, A.; Errico, R.; Frisoni, G.B.; Maglietta, R.; Redolfi, A.; et al. Multiple RF classifier for the hippocampus segmentation: Method and validation on EADC-ADNI Harmonized Hippocampal Protocol. Phys. Medica 2015, 31, 1085–1091. [Google Scholar] [CrossRef] [Green Version]
  34. Zheng, Q.; Fan, Y. Integrating semi-supervised label propagation and random forests for multi-atlas based hippocampus segmentation. In Proceedings of the 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; IEEE: New York, NY, USA, 2018; pp. 154–157. [Google Scholar]
  35. van der Lijn, F.; den Heijer, T.; Breteler, M.M.; Niessen, W.J. Hippocampus segmentation in MR images using atlas registration, voxel classification, and graph cuts. NeuroImage 2008, 43, 708–720. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Block diagram of the developed algorithm.
Figure 2. The deep belief network (DBN) used for shape inferring.
Figure 3. Explanation of the lattice Boltzmann (LB) method for image processing. (a) Collision process; (b) streaming process; (c) D2Q9 structure; (d) sub-pixel denoted by dotted box and pixel denoted by solid box; (e) D2Q5 structure.
Figure 4. The flowchart of the LB segmentation model.
Figure 5. The positive effect of the shape prior. (a) Results of the method in [31]; (b) results of the method in [32]; (c) results of our method.
Figure 6. Experiments on synthetic ellipse images. The left column is the original image, the middle column the initial contour and the right column the segmentation result. The first row is the experiment on an ellipse with occlusion, the second row the experiment on an ellipse with a missing part.
Figure 7. Experiments on three synthetic fishes that differ in size and shape. The left column is the original image, the middle column the initial contour and the right column the segmentation result.
Figure 8. Box plot of our method’s results and ground truths of EADC-ADNI for the three groups.
Figure 9. Parts of segmentation results; the green line is the ground truth; the red line is the segmentation results of our method.
Figure 10. Manual and automated volumes for the DBN-LB_joint, cRBM-LB_joint, PCA-LB_joint and DBN methods.
Figure 11. Bland–Altman plots for the EADC-ADNI dataset showing graphically the agreement between the manually segmented volumes and the volumes segmented by means of DBN-LB_joint, cRBM-LB_joint, PCA-LB_joint and DBN method.
Table 1. Comparison results using mean Dice’s coefficient and standard deviations of the Dice coefficient.
Method                                   Dice (mean ± SD)   Method Description
DBN                                      0.84 ± 0.07        DBN separately
PCA-LB_joint                             0.84 ± 0.06        PCA-driven LB
cRBM-LB_joint                            0.85 ± 0.06        cRBM-driven LB
DBN-LB_joint                             0.87 ± 0.05        DBN-driven LB
Multiple random forest classifier [33]   0.87 ± 0.03        Multiple random forest classifier
Multi-atlas [34]                         0.88 ± 0.02        Integrating label propagation and random forests
Deep learning [8]                        0.85               Deep convolutional neural networks
Table 2. Average convergence time for DBN, PCA-LB_joint, cRBM-LB_joint, DBN-LB_jonit and DRLS methods on testing datasets.
Method          Time (s/slice)   Number of Iterations
DBN             0.878            N/A
PCA-LB_joint    6.035            11
cRBM-LB_joint   3.587            9
DBN-LB_joint    2.220            6
DRLS            21.984           210
