Article

Nailfold Microhemorrhage Segmentation with Modified U-Shape Convolutional Neural Network

1 Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, 333 Nanchen Road, Shanghai 200444, China
2 School of Electron and Computer, Southeast University Chengxian College, Nanjing 210088, China
3 School of Life Sciences, Shanghai University, 333 Nanchen Road, Shanghai 200444, China
4 Zhejiang Lab, Institute of Artificial Intelligence, Kechuang Avenue, Hangzhou 311121, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2022, 12(10), 5068; https://doi.org/10.3390/app12105068
Submission received: 23 April 2022 / Revised: 11 May 2022 / Accepted: 13 May 2022 / Published: 18 May 2022

Abstract

Nailfold capillaroscopy is a reliable way to detect and analyze microvascular abnormalities. It is safe, simple, noninvasive, and inexpensive. Among all capillaroscopic abnormalities, nailfold microhemorrhages are closely associated with early vascular damage and may be present in numerous diseases such as glaucoma, diabetes mellitus, and systemic sclerosis. Segmentation of nailfold microhemorrhages provides valuable pathological information that may lead to further investigations. In this study, a novel deep learning architecture named DAFM-Net is proposed for accurate segmentation. The network mainly consists of a U-shape backbone, a dual attention fusion module, and group normalization layers. The U-shape backbone generates rich hierarchical representations, while the dual attention fusion module utilizes the captured features for fine adjustment. Group normalization is introduced as an effective normalization method to improve the convergence of our deep neural network. The effectiveness of the proposed model is validated through ablation studies and segmentation experiments; DAFM-Net achieves competitive performance for nailfold microhemorrhage segmentation, with an IOU score of 78.03% and a Dice score of 87.34% against the ground truth.

1. Introduction

Nailfold capillaroscopy (NC) provides easy access to microcirculation, which is of significant importance in analyzing several cardiovascular and rheumatic diseases. It is safe, inexpensive, noninvasive, and used as a standard clinical practice for the detection of nailfold abnormalities [1]. Common nailfold abnormalities include enlarged (giant) capillaries, microhemorrhages, loss of capillaries, disorganization of the vascular array, and bushy capillaries [2,3,4].
Nailfold microhemorrhages are defined as extravasation of red blood cells into the perivascular tissue, appearing as red or brown aggregations [5]. They are associated with damage to the vessel wall and usually reflect capillary injury. In particular, such vascular damage may first manifest as nailfold microhemorrhages [6]. As illustrated in Figure 1, red cells leave the damaged capillaries and form punctate hemorrhages and areas of confluent hemorrhage [7].
To observe the evolution of microhemorrhages and analyze vascular damage, it is crucial to establish an accurate nailfold microhemorrhage segmentation system. Solutions for a precise assessment of microhemorrhages can be valuable in various situations, such as the diagnosis of diabetic retinopathy or of the optic disc hemorrhage present in glaucoma.
Diabetic retinopathy (DR) is the most common microvascular complication of diabetes and remains a major cause of preventable blindness [8]. Most patients show no symptoms at the early stage of DR, which makes clinical screening difficult. However, an association has been reported between diabetes and several capillaroscopic abnormalities, including microhemorrhages [7,8,9,10]. Beyond DR, nailfold microhemorrhages are also closely related to the optic disc hemorrhages found in both normal-tension glaucoma (NTG) and primary open-angle glaucoma (POAG) patients [6,10,11] and thus provide valuable pathological information. These findings show the prognostic importance of nailfold microhemorrhages for diagnosing optic disc hemorrhage, which is a clinical risk factor for glaucoma progression [12,13].
Recently, deep learning has achieved great success in numerous visual recognition tasks, including medical image segmentation [14]. Successful methods such as FCN (fully convolutional networks) [15], U-Net [16], and DSN (deeply-supervised nets) [17] have reached a high quality in terms of common evaluation metrics. Therefore, we address this task as a pixelwise classification of the input NC images and solve it with a novel U-shape convolutional neural network. The main contributions of our work include:
(1) An end-to-end deep learning model is proposed for nailfold microhemorrhage segmentation using a U-shape network named DAFM-Net. DAFM-Net inherits the advantages of U-Net and generates accurate probability maps for NC images.
(2) A dual attention fusion module is designed to emphasize the informative features while suppressing the trivial ones. Spatial and channel information are captured and then thoroughly aggregated by a feature fusion module to further improve feature representation, which contributes to more precise pixelwise classification.
(3) Instead of batch normalization, we employ group normalization for regularization and obtain satisfactory optimization performance.
(4) Finally, we verify the effectiveness of DAFM-Net on a newly collected dataset and achieve remarkable results. Furthermore, ablation experiments are conducted to discuss the necessity of group normalization and to evaluate the representation power of our dual attention fusion module.

2. Related Work

2.1. U-Net in Medical Image Segmentation

U-shape convolutional network (U-Net) is a widely used architecture for biomedical image segmentation across various tasks [16]. Since both semantic and spatial information are critical in medical image segmentation, the spatial information lost during downsampling must be recovered for good segmentation results. To tackle this issue, U-Net adopts an encoder–decoder architecture that combines high-resolution, low-level features from the encoder path with low-resolution, high-level features in the decoder path. Variants of U-Net are among the state-of-the-art models for image segmentation [17].
Three-dimensional U-Net, a simple extension of U-Net [16], plays an important role in 3D medical image segmentation. Compared to the original U-Net, 3D U-Net [18] employs the corresponding 3D operations and uses batch normalization [19] for faster convergence. Li et al. propose H-DenseUNet, which exploits intraslice features and 3D contexts efficiently and produces precise liver and liver tumor segmentation [20]. Attention U-Net applies the attention gate (AG) within a standard U-shape architecture, which enables the model to focus on key regions [21]. For nonlocal U-Nets, significant improvements are obtained from global aggregation blocks [22]. nnU-Net focuses on the medical data itself and achieves decent segmentation results by using appropriate pre-processing and training methods [23]. Zhou et al. built an interactive 3D nnU-Net by adapting nnU-Net into an interactive version and compared it with a novel quality-aware memory network [24].

2.2. Nailfold Microhemorrhage Segmentation

Abnormal capillaroscopic findings are quite common in a wide range of diseases. Therefore, several methods have been proposed to extract nailfold capillaries automatically [25,26,27,28,29]. Kim et al. describe the theoretical foundation of U-Net and compare white blood cell counting variability between manual, convolutional, and traditional segmentation using capillary frames [28]. Liu et al. combine ResNet and U-Net for the segmentation of poor-quality capillary images [29]; the morphological characteristics of images segmented with Res-Unet are more suitable for qualitative measurement. However, no prior studies have evaluated automatic systems that generate high-quality prediction maps for nailfold microhemorrhages. Therefore, the purpose of our study is to develop an advanced method for nailfold microhemorrhage segmentation to support the analysis and evaluation of potential illnesses.

2.3. Attention Mechanism

The attention mechanism has brought great improvements to CNNs and has recently been incorporated into semantic segmentation. Hu et al. focus on the interchannel relationship and propose the squeeze-and-excitation (SE) block [30]. The convolutional block attention module (CBAM), which consists of a channel attention module (CAM) and a spatial attention module (SAM), infers high-quality attention maps and benefits the predictions of various deep learning models [31]. For instance, MATNet adopts a modified CBAM module named the scale-sensitive attention (SSA) module, instead of skip connections, to select and transform encoder features [32]. BiSeNet designs a specific module to guide the feature learning of its different semantic paths [33]. The dual attention network (DANet) takes advantage of both channel and position attention to further improve feature representation for better segmentation performance [34].

3. Materials and Methods

3.1. Data Collection and Preprocessing

Thirty NC images of nailfold microhemorrhages were collected from internet searches or references and then processed with Python 3.7. We manually cropped the region of interest in each image and set the resolution to 256 × 256 pixels. The whole dataset contained 30 NC images of nailfold microhemorrhages and the corresponding ground truths marked by a professional. We also employed contrast limited adaptive histogram equalization (CLAHE) as an additional enhancement technique and then passed the processed images through data augmentation, which included random rotations, zooms, shifts, and flips.
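A minimal sketch of this preprocessing step is shown below, assuming OpenCV for CLAHE and Keras' ImageDataGenerator for the augmentation; the clip limit, tile size, and augmentation ranges are illustrative values, not the ones used in the study.

```python
import cv2
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def preprocess_nc_image(path, size=(256, 256)):
    """Load an NC image, resize the (already cropped) ROI to 256x256, and apply CLAHE."""
    img = cv2.imread(path)                                   # BGR image
    img = cv2.resize(img, size)
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)               # apply CLAHE on the lightness channel
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[..., 0] = clahe.apply(lab[..., 0])
    img = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    return img.astype(np.float32) / 255.0

# Illustrative augmentation: random rotations, zooms, shifts, and flips.
augmenter = ImageDataGenerator(rotation_range=30, zoom_range=0.2,
                               width_shift_range=0.1, height_shift_range=0.1,
                               horizontal_flip=True, vertical_flip=True)
```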

3.2. Method Description

To address the need for accurate segmentation of nailfold microhemorrhages, we present DAFM-Net, a U-shape convolutional neural network with an attention mechanism and group normalization. The proposed model fuses features of different representation levels via the following two schemes: (1) skip connections inherited from U-Net and (2) a dual attention fusion module (DAFM) specially designed for feature refinement. As seen in Figure 2, DAFM-Net adopts the U-shape structure of U-Net and inserts DAFM blocks in the decoder path to enhance representation power. Furthermore, to further improve the performance of DAFM-Net, group normalization (GN) is introduced as an effective regularization method.

3.2.1. Backbone

Although nailfold microhemorrhage segmentation has shown great potential, the collection of appropriate images remains a problem. External factors, including air bubbles in the immersion oil, dust on lenses, and movement of patients' fingers, all complicate image collection [31], causing additional difficulties in the learning process. Additionally, the morphological characteristics of microhemorrhages vary significantly from case to case. To tackle this issue, we employ U-Net as the main body of our DAFM-Net. As mentioned above, the various variants and modifications of U-Net are commonly used for biomedical image segmentation; DAFM-Net inherits the key ingredient of their success, the skip connection, which aggregates features of different semantic levels. The encoder path performs convolutional operations with ReLU to generate a set of low-level feature maps, followed by a max pooling layer; afterward, the spatial dimensions of the feature maps are halved while the number of channels is doubled. Correspondingly, the decoder path also uses convolution layers and ReLU to produce high-level feature maps while gradually restoring the width and height by upsampling. Compared to the original U-Net, DAFM-Net reduces the number of filters per layer to match the relatively small dataset.
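For illustration, the following sketch shows one encoder stage and one decoder stage of such a U-shape backbone in Keras; the filter counts and kernel sizes are assumptions, since the paper only states that the number of filters per layer is reduced relative to the original U-Net, and normalization layers (discussed in Section 3.2.3) are omitted for brevity.

```python
from tensorflow.keras import layers

def encoder_block(x, filters):
    """Two 3x3 conv + ReLU layers, then 2x2 max pooling that halves H and W."""
    f = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    f = layers.Conv2D(filters, 3, padding="same", activation="relu")(f)
    p = layers.MaxPooling2D(2)(f)      # the next stage doubles the channel count
    return f, p                        # f is kept for the skip connection

def decoder_block(x, skip, filters):
    """Upsample, concatenate the encoder skip features, then conv + ReLU."""
    x = layers.UpSampling2D(2)(x)
    x = layers.Concatenate()([x, skip])
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x
```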

3.2.2. Dual Attention Fusion Module

Automatic segmentation requires a solid way to generate high-quality predictions. Despite its great success, the classic U-shape architecture is worth further investigation [35]; for instance, skip connections can pass on redundant low-level expressions, since feature representation is rather poor in the initial layers [21]. To address this, we propose a dual attention fusion module (DAFM) inspired by CBAM [31] and FFM [33]. Our module utilizes both channel and spatial attention to guide the feature learning process in the network, followed by a thorough feature fusion.
CBAM is a simple yet effective attention module for adaptive feature refinement. It captures dependencies in the spatial and channel dimensions, respectively, and arranges the two submodules sequentially to obtain the final result. By improving the representation power of CNNs, CBAM has been incorporated into several medical image segmentation tasks, including nasopharyngeal carcinoma [36], oral leukoplakia [37], etc. Therefore, we apply CBAM to U-Net and make the following observations. (1) The attention mechanism provides a certain performance gain compared to the original U-Net. (2) Both CAM and SAM deliver good performance when evaluated separately; however, with CBAM, a performance drop is observed. This may imply that CBAM is unable to take full advantage of the captured channel and spatial attention.
The DAFM block we propose serves as an effective solution to these limitations. It enables the network to focus on the important features and further lifts the performance of nailfold microhemorrhage segmentation.
DAFM generates two types of attention maps and adopts a parallel arrangement. For channelwise attention, CAM infers a 1D attention map $M_C \in \mathbb{R}^{1 \times 1 \times C}$. Meanwhile, SAM generates a 2D spatial attention map $M_S \in \mathbb{R}^{H \times W \times 1}$, as shown in Figure 3. These attention matrices model the interdependencies of the feature maps along their separate dimensions. The input features and the attention matrices are then multiplied elementwise to obtain refined feature maps. CBAM imposes a channel-first sequential arrangement upon the outputs of these two attention modules, which can be less adequate because of the distinction in semantic levels [36]. Thus, we employ a feature fusion module (FFM) for better integration.
As illustrated in Figure 3, given an intermediate feature map $X \in \mathbb{R}^{H \times W \times C}$, we feed it into two parallel attention modules, CAM and SAM. For CAM, both average pooling and max pooling are utilized to generate the spatial descriptors we need, which are then fed into a multilayer perceptron (MLP) with one hidden layer. An elementwise summation is performed on the outputs of the MLP, and a sigmoid function normalizes the final channel attention map $M_C \in \mathbb{R}^{1 \times 1 \times C}$, which can be written as:
$$M_C(X) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(X)) + \mathrm{MLP}(\mathrm{MaxPool}(X))\big)$$
For SAM, we also use both average pooling and max pooling to explore the spatial attention. However, unlike in CAM, the output descriptors are concatenated and then fed to a convolution layer to generate the spatial attention map $M_S \in \mathbb{R}^{H \times W \times 1}$:
$$M_S(X) = \sigma\big(\mathrm{conv}([\mathrm{AvgPool}(X); \mathrm{MaxPool}(X)])\big)$$
The refined feature maps $F_C$ and $F_S$ can then be described as:
$$F_C = M_C(X) \otimes X$$
$$F_S = M_S(X) \otimes X$$
$F_C$ and $F_S$ differ in their level of feature representation: $F_C$ explores the interchannel relationships among the input features, while $F_S$ mainly focuses on spatial information. Therefore, we use FFM to take full advantage of these feature maps. FFM concatenates the two distinct inputs and adjusts the scales of the different features through a normalization operation. The resulting map is then reweighted for better feature selection and combination; this is achieved by a weight vector generated from global average pooling and 1 × 1 convolution layers. As shown in Figure 3, the normalized feature map $F$ is passed to a subnetwork for refinement:
$$F_f = F + F \otimes \sigma\big(\mathrm{conv}(\mathrm{GlobalPool}(F))\big)$$
In general, DAFM produces a refined feature map with higher sensitivity to informative features. Through a thorough combination of features from different representation levels, DAFM is capable of highlighting significant features while suppressing trivial information.
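To make the data flow concrete, the sketch below assembles CAM, SAM, and the FFM-style fusion described by the equations above, using TensorFlow/Keras operations on NHWC tensors; the reduction ratio, kernel sizes, and the normalization used inside the fusion step are assumptions rather than values reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def channel_attention(x, channels, ratio=8):
    """CAM: a shared MLP over Avg- and Max-pooled descriptors, sigmoid-normalized."""
    mlp1 = layers.Dense(channels // ratio, activation="relu")
    mlp2 = layers.Dense(channels)
    avg = mlp2(mlp1(layers.GlobalAveragePooling2D()(x)))
    mx = mlp2(mlp1(layers.GlobalMaxPooling2D()(x)))
    mc = tf.nn.sigmoid(avg + mx)[:, None, None, :]           # (B, 1, 1, C)
    return x * mc                                             # F_C = M_C(X) * X

def spatial_attention(x):
    """SAM: concatenate channelwise Avg/Max maps, convolve, sigmoid."""
    avg = tf.reduce_mean(x, axis=-1, keepdims=True)
    mx = tf.reduce_max(x, axis=-1, keepdims=True)
    ms = layers.Conv2D(1, 7, padding="same", activation="sigmoid")(
        tf.concat([avg, mx], axis=-1))                        # (B, H, W, 1)
    return x * ms                                             # F_S = M_S(X) * X

def dafm(x, channels):
    """Dual attention fusion: parallel CAM/SAM, then FFM-style reweighting."""
    f = tf.concat([channel_attention(x, channels), spatial_attention(x)], axis=-1)
    f = layers.Conv2D(channels, 1, padding="same")(f)         # fuse the two branches
    f = layers.BatchNormalization()(f)                        # scale adjustment (the paper later swaps BN for GN)
    w = layers.GlobalAveragePooling2D()(f)[:, None, None, :]
    w = layers.Conv2D(channels // 8, 1, activation="relu")(w)
    w = layers.Conv2D(channels, 1, activation="sigmoid")(w)
    return f + f * w                                          # F_f = F + F * sigma(conv(GlobalPool(F)))

# Example: refine a 64-channel, 32x32 feature map.
# y = dafm(tf.random.normal([2, 32, 32, 64]), channels=64)
```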

3.2.3. Group Normalization

Batch normalization (BN) plays an important role in deep learning [19]. By normalizing the features within a minibatch, BN improves the performance of CNNs on numerous computer vision tasks. However, due to memory limitations, segmentation research often has to use a batch size of 1 or 2, and such a small batch size leads to inaccurate estimation of the batch statistics, which lowers the performance of deep networks. For effective segmentation, group normalization (GN) [38] is therefore employed as the normalization layer.
GN organizes channels into different groups and calculates the mean and standard deviation across a group of channels. The formulations of group normalization follow:
$$S_i = \left\{ k \;\middle|\; k_N = i_N,\ \left\lfloor \frac{k_C}{C/G} \right\rfloor = \left\lfloor \frac{i_C}{C/G} \right\rfloor \right\}$$
1. Calculate the mean value:
$$\mu_i = \frac{1}{m} \sum_{k \in S_i} x_k$$
2. Calculate the standard deviation:
$$\sigma_i = \sqrt{\frac{1}{m} \sum_{k \in S_i} (x_k - \mu_i)^2 + \varepsilon}$$
3. Normalize:
$$\hat{x}_i = \frac{1}{\sigma_i}(x_i - \mu_i)$$
4. Scale and shift:
$$y_i = \gamma \hat{x}_i + \beta$$
In this case, GN takes a 4D tensor x as input and outputs a channelwise normalized tensor y. The index i can be described as:
$$i = (i_N, i_H, i_W, i_C)$$
where N denotes the batch axis, (H, W) are the height and width of the input feature, and C is the channel axis. $S_i$ represents the set of coefficients in the same group, $m$ is the size of $S_i$, and $\lfloor \cdot \rfloor$ denotes the floor operation. $G$ is a predefined hyperparameter that specifies the number of groups, so each group contains C/G channels. Similarly to BN, GN utilizes the trainable parameters $\gamma$ and $\beta$ to obtain better approximation ability.
The underlying hypothesis behind GN is that visual representation channels are not entirely independent; namely, each group of channels may follow the same distribution with a shared mean and standard deviation. Thus, computing separate statistics for grouped channels offers more flexibility and better expressiveness for the deep model.
In general, GN can be thought of as a powerful normalization method that does not rely on the batch dimension. It helps the proposed network converge and gain better stability.
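The four steps above can be written compactly; the following NumPy sketch normalizes an NHWC feature map with G groups and is intended only to illustrate the computation, not to reproduce the exact layer used in training.

```python
import numpy as np

def group_norm(x, gamma, beta, groups=8, eps=1e-5):
    """Group normalization of an NHWC tensor following the equations above."""
    n, h, w, c = x.shape
    x = x.reshape(n, h, w, groups, c // groups)        # split channels into G groups
    mean = x.mean(axis=(1, 2, 4), keepdims=True)        # per-sample, per-group mean
    var = x.var(axis=(1, 2, 4), keepdims=True)          # per-sample, per-group variance
    x = (x - mean) / np.sqrt(var + eps)                  # normalize
    x = x.reshape(n, h, w, c)
    return gamma * x + beta                              # scale and shift (per-channel parameters)

# Example: gamma and beta are per-channel trainable parameters.
feat = np.random.randn(2, 64, 64, 32).astype(np.float32)
out = group_norm(feat, gamma=np.ones(32), beta=np.zeros(32))
```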

4. Experiments and Results

Segmenting abnormalities in medical images demands a high level of accuracy, conditioned on the relevant clinical requirements. As such, a segmentation model with higher fusion efficiency and better segmentation accuracy is required. We first explore the effectiveness of DAFM blocks through a comparison of different attention methods, including SAM, CAM, CBAM, SSA, and DAFM. Then, the necessity of group normalization is verified. The details of each experiment are explained below.

4.1. Training Strategies

The network is trained with a combined loss function that consists of the Dice coefficient and binary cross-entropy (BCE), defined as follows:
$$l(A, B) = 0.5\,\mathrm{BCE}(A, B) - \mathrm{Dice}(A, B)$$
$$\mathrm{BCE}(A, B) = -\sum_{i,j} \left[ a_{ij} \log b_{ij} + (1 - a_{ij}) \log (1 - b_{ij}) \right]$$
$$\mathrm{Dice}(A, B) = \frac{2 \sum_{i,j} a_{ij} b_{ij}}{\sum_{i,j} a_{ij}^2 + \sum_{i,j} b_{ij}^2}$$
where A and B denote the prediction map and the ground truth, respectively.
The proposed DAFM-Net is implemented in Python with Keras on a TensorFlow backend. During training, we use stochastic gradient descent (SGD) to optimize the deep model, with a base learning rate of 10⁻³, a momentum of 0.95, and a minibatch size of 2 due to memory restrictions. All experiments are performed on a single Nvidia Tesla P100 GPU.
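A hedged sketch of this training configuration in Keras is given below; `build_dafm_net` is a hypothetical constructor standing in for the network defined above, and the smoothing constant in the Dice term is an assumption added for numerical stability.

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def dice_coef(y_true, y_pred, smooth=1e-6):
    """Dice(A, B) = 2*sum(a*b) / (sum(a^2) + sum(b^2)), with a small smoothing term."""
    inter = K.sum(y_true * y_pred)
    return (2.0 * inter + smooth) / (K.sum(K.square(y_true)) + K.sum(K.square(y_pred)) + smooth)

def combined_loss(y_true, y_pred):
    """l(A, B) = 0.5 * BCE(A, B) - Dice(A, B), mirroring the combined loss above."""
    bce = K.mean(K.binary_crossentropy(y_true, y_pred))
    return 0.5 * bce - dice_coef(y_true, y_pred)

# Assumed training setup: SGD, learning rate 1e-3, momentum 0.95, batch size 2.
# model = build_dafm_net()   # hypothetical constructor for DAFM-Net
# model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.95),
#               loss=combined_loss, metrics=[dice_coef])
# model.fit(train_images, train_masks, batch_size=2)
```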

4.2. Evaluation

To verify the effectiveness of DAFM-Net, we use the intersection-over-union score (IOU, $\frac{|A \cap B|}{|A \cup B|}$) and the Dice score ($\frac{2|A \cap B|}{|A| + |B|}$) for evaluation, which are widely used assessments for semantic segmentation [39]. Both scores are reported as percentages.
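For concreteness, both scores can be computed on binarized masks as in the short sketch below; the 0.5 threshold is an assumption.

```python
import numpy as np

def iou_and_dice(pred, gt, thr=0.5):
    """Return IOU = |A∩B| / |A∪B| and Dice = 2|A∩B| / (|A| + |B|), both in percent."""
    a = pred > thr
    b = gt > thr
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    iou = 100.0 * inter / max(union, 1)
    dice = 100.0 * 2 * inter / max(a.sum() + b.sum(), 1)
    return iou, dice
```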

4.3. Ablation Study

We conduct the ablation study under the same training strategy and settings. The final results are summarized below.

4.3.1. Dual Attention Fusion Module

We first evaluate the necessity of the proposed dual attention fusion module by comparing U-Net with different attention mechanisms. The outputs of these deep networks are two-channel probability maps for nailfold microhemorrhage detection. All ablation experiments reported in Table 1 use BN as the normalization layer.
As shown in Table 1, models with attention mechanisms significantly outperform the original U-Net on both IOU and Dice scores, demonstrating that attention offers a clear gain for this segmentation task by highlighting informative features while suppressing unnecessary ones. Note that SAM provides a gain of 1.85% on IOU and 1.38% on Dice compared with CAM, indicating that spatial attention may be superior to channel attention for the segmentation of nailfold microhemorrhages. Row 5 of Table 1 shows that U-Net with CBAM is worse than U-Net with SAM by 3.16% on IOU and 2.14% on Dice; hence, integrating spatial and channel attention in a sequential manner is not a suitable solution for our segmentation task. To avoid the potential loss caused by the simple combination in CBAM, we design DAFM to thoroughly combine feature maps of different representation levels through feature reweighting. For comparison, we insert another attention module extended from CBAM with a recalibration strategy, named SSA, into each pair of encoder and decoder layers (refer to [32] for more details). Both SSA and DAFM produce better results than CBAM, further confirming the effectiveness of feature integration. Additionally, DAFM outperforms all other methods, achieving an IOU score of 73.03% and a Dice score of 82.92%. The feature fusion structure of DAFM fully utilizes the significant information extracted by both the channel and spatial attention modules, which improves the representation power of U-Net for accurate image segmentation.

4.3.2. Group Normalization

Group normalization eases the optimization of deep neural networks and maintains their expressive power during training, further improving the overall segmentation performance. The effectiveness of GN is evaluated on both U-Net and DAFM-Net, with the predefined hyperparameter G set to 8.
As shown in Table 2, GN improves segmentation quality. For U-Net, it brings a performance gain of 7.36% on IOU and 9.09% on Dice compared to BN. GN is insensitive to the batch size and offers stable performance for segmentation. As shown in rows 4 and 5 of Table 2, with a batch size of 2, DAFM-Net with GN outperforms DAFM-Net with BN, which suffers from inaccurate estimation of batchwise statistics. In particular, DAFM-Net with GN obtains an IOU score of 78.03% and a Dice score of 87.34%. The proposed model outperforms the other methods and achieves very good segmentation results, which confirms its superiority.

4.4. Qualitative Comparison

We also visualize some prediction results in Figure 4 for qualitative evaluation. The predictions generated by U-Net lack detail, especially in the segmentation of punctate hemorrhages, and the edges of microhemorrhages are coarse and inaccurate. From columns 3 to 8 of Figure 4, we observe that the response to specific details is noticeably stronger after the enhancement of attention modules, and DAFM provides more accurate prediction maps than the other attention mechanisms. For instance, DAFM is more sensitive to the punctate bleeds around the capillaries (see rows 3 and 5 in Figure 4) and captures more detail than the other models. Row 3 in Figure 4 demonstrates a challenging case for segmentation, in which the morphological characteristics of the microhemorrhages are rather irregular; DAFM-Net significantly outperforms the other methods and provides a relatively accurate result.

5. Conclusions and Discussion

Accurate nailfold microhemorrhage segmentation is highly valuable for the evaluation of potential illnesses. In this paper, we propose DAFM-Net, a U-shape convolutional neural network that effectively addresses nailfold microhemorrhage segmentation. The proposed network exploits hierarchical features through a specially designed dual attention fusion module, which emphasizes informative features while suppressing trivial ones to further improve segmentation performance. Instead of batch normalization, a competitive alternative, group normalization, is employed to stabilize the training process. The model is tested on a newly collected dataset for nailfold microhemorrhage segmentation, and its segmentation performance reaches an IOU score of 78.03% and a Dice score of 87.34%.

Author Contributions

R.L., S.L., Y.L. and J.T. performed the experiments; R.L., S.L., T.L., N.C. and J.Y. analyzed the data; N.C. and J.Y. contributed reagents/materials/apparatus; R.L., T.L., N.C. and S.L. edited the manuscript; R.L., Y.L., J.T., N.C. and S.L. conceived and designed the experiments and wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC) (31972901, 31571430, 62175142, and 61875118). The authors also wish to express their thanks for the support of the 111 Project (D20031).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by Ethics Committee of Shanghai University (ECSHU 2021-106, 5 March 2021).

Informed Consent Statement

Informed consent was not applicable for the data acquired from the internet; for the experimental data acquired from subjects involved in the study, informed consent was obtained from all subjects.

Data Availability Statement

Data are available on request due to privacy restrictions.

Acknowledgments

Thanks are given to Jun Bao from Zhejiang Laboratory for language improvement and revision.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nivedha, R.; Brinda, M.; Suma, K.V.; Rao, B. Classification of nailfold capillary images in patients with hypertension using non-linear SVM. In Proceedings of the 2016 International Conference on Circuits Controls, Communications and Computing (I4C), Bangalore, India, 4–6 October 2016. [Google Scholar] [CrossRef]
  2. Pancar, G.S.; Kaynar, T. Nailfold capillaroscopic changes in patients with chronic viral hepatitis. Microvasc. Res. 2020, 129, 103970. [Google Scholar] [CrossRef] [PubMed]
  3. Ruaro, B.; Confalonieri, M.; Salton, F.; Wade, B.; Baratella, E.; Geri, P.; Confalonieri, P.; Kodric, M.; Biolo, M.; Bruni, C. The Relationship between Pulmonary Damage and Peripheral Vascular Manifestations in Systemic Sclerosis Patients. Pharmaceuticals 2021, 14, 403. [Google Scholar] [CrossRef] [PubMed]
  4. Ruaro, B.; Nallino, M.G.; Casabella, A.; Salton, F.; Confalonieri, P.; De Tanti, A.; Bruni, C. Monitoring the microcirculation in the diagnosis and follow-up of systemic sclerosis patients: Focus on pulmonary and peripheral vascular manifestations. Microcirculation 2020, 27, e12647. [Google Scholar] [CrossRef]
  5. Barbach, Y.; Chaouche, M.; Cherif, A.D.; Elloudi, S.; Baybay, H.; Mernissi, F.Z. Dermoscopy of Nail Fold Capillaries in Connective Tissue Diseases. Madr. J. Case Rep. Stud. 2019, 3, 130–131. [Google Scholar] [CrossRef]
  6. Park, H.L.; Park, S.; Oh, Y.; Park, C.K. Nail Bed Hemorrhage: A Clinical Marker of Optic Disc Hemorrhage in Patients with Glaucoma. Arch. Ophthalmol. 2011, 129, 1299–1304. [Google Scholar] [CrossRef] [Green Version]
  7. Kayser, C.; Bredemeier, M.; Caleiro, M.T.; Capobianco, K.; Fernandes, T.M.; de Araújo Fontenele, S.M.; Freire, E.; Lonzetti, L.; Miossi, R.; Sekiyama, J.; et al. Position article and guidelines 2018 recommendations of the Brazilian Society of Rheumatology for the indication, interpretation and performance of nailfold capillaroscopy. Adv. Rheumatol. 2019, 59, 1–13. [Google Scholar] [CrossRef] [Green Version]
  8. Al-Shabrawey, M.; Zhang, W.; McDonald, D. Diabetic retinopathy: Mechanism, diagnosis, prevention, and treatment. Biomed. Res. Int. 2015, 2015, 854593. [Google Scholar] [CrossRef]
  9. Uyar, S.; Balkarlı, A.; Erol, M.K.; Yeşil, B.; Tokuç, A.; Durmaz, D.; Görar, S.; Çekin, A.H. Assessment of the Relationship between Diabetic Retinopathy and Nailfold Capillaries in Type 2 Diabetics with a Noninvasive Method: Nailfold Videocapillaroscopy. J. Diabetes Res. 2016, 2016, 7592402. [Google Scholar] [CrossRef]
  10. Ciaffi, J.; Ajasllari, N.; Mancarella, L.; Brusi, V.; Meliconi, R.; Ursini, F. Nailfold capillaroscopy in common non-rheumatic conditions: A systematic review and applications for clinical practice. Microvasc. Res. Sep. 2020, 131, 104036. [Google Scholar] [CrossRef]
  11. Pasquale, L.R.; Hanyuda, A.; Ren, A.; Giovingo, M.; Greenstein, S.H.; Cousins, C.; Patrianakos, T.; Tanna, A.P.; Wanderling, C.; Norkett, W.; et al. Nailfold Capillary Abnormalities in Primary Open-Angle Glaucoma: A Multisite Study. Investig. Ophthalmol. Vis. Sci. 2015, 56, 7021–7028. [Google Scholar] [CrossRef] [Green Version]
  12. Chung, H.S.; Harris, A.; Evans, D.W.; Kagemann, L.; Garzozi, H.J.; Martin, B. Vascular aspects in the pathophysiology of glaucomatous optic neuropathy. Surv. Ophthalmol. 1999, 43, S43–S50. [Google Scholar] [CrossRef]
  13. Grunwald, J.E.; Piltz, J.; Hariprasad, S.M.; DuPont, J. Optic nerve and choroidal circulation in glaucoma. Investig. Ophthalmol. Vis. Sci. 1998, 39, 2329–2336. [Google Scholar]
  14. Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet Classification with Deep Convolutional Neural Networks. Adv. Neural Inf. Process. Syst. 2012, 25, 84–90. [Google Scholar] [CrossRef]
  15. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651. [Google Scholar] [CrossRef]
  16. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  17. Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Cham, Switzerland, 2018; pp. 3–11. [Google Scholar]
  18. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In Proceedings of the 19th International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; pp. 424–432. [Google Scholar]
  19. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
  20. Li, X.; Chen, H.; Qi, X.; Dou, Q.; Fu, C.W.; Heng, P.A. H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation from CT Volumes. IEEE Trans. Med. Imaging 2018, 37, 2663–2674. [Google Scholar] [CrossRef] [Green Version]
  21. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention u-net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar]
  22. Wang, Z.; Zou, N.; Shen, D.; Ji, S. Non-Local U-Nets for Biomedical Image Segmentation. arXiv 2020, arXiv:1812.04103. [Google Scholar] [CrossRef]
  23. Isensee, F.; Petersen, J.; Klein, A.; Zimmerer, D.; Jaeger, P.F.; Kohl, S.; Wasserthal, J.; Koehler, G.; Norajitra, T.; Wirkert, S.; et al. nnu-net: Self-adapting framework for u-net-based medical image segmentation. arXiv 2018, arXiv:1809.10486. [Google Scholar]
  24. Zhou, T.; Li, L.; Bredell, G.; Li, J.; Konukoglu, E. Quality-aware memory network for interactive volumetric image segmentation. In Proceedings of the 24th International Conference on Medical Image Computing and Computer-Assisted Intervention, Strasbourg, France, 27 September–1 October 2021; pp. 560–570. [Google Scholar]
  25. Paradowski, M.; Kwasnicka, H.; Borysewicz, K. Avascular area detection in nailfold capillary images. In Proceedings of the International Multiconference on Computer Science and Information Technology, Mragowo, Poland, 12–14 October 2009; pp. 419–424. [Google Scholar]
  26. Tama, A.; Mengko, T.R.; Zakaria, H. Nailfold capillaroscopy image processing for morphological parameters measurement. In Proceedings of the 4th International Conference on Instrumentation Communications, Information Technology, and Biomedical Engineering (ICICI–BME), Bandung, Indonesia, 2–3 November 2015; pp. 175–179. [Google Scholar]
  27. Cutolo, M.; Trombetta, A.C.; Melsens, K.; Pizzorni, C.; Sulli, A.; Ruaro, B.; Paolino, S.; Deschepper, E.; Smith, V. Automated assessment of absolute nailfold capillary number on video capillaroscopic images: Proof of principle and validation in systemic sclerosis. Microcirculation 2018, 25, e12447. [Google Scholar] [CrossRef]
  28. Kim, B.; Hariyani, Y.S.; Cho, Y.H.; Park, C. Automated White Blood Cell Counting in Nailfold Capillary Using Deep Learning Segmentation and Video Stabilization. Sensors 2020, 20, 7101. [Google Scholar] [CrossRef] [PubMed]
  29. Liu, S.; Li, Y.; Zhou, J.; Hu, J.; Chen, N.; Shang, Y.; Chen, Z.; Li, T. Segmenting nailfold capillaries using an improved U-net network. Microvasc. Res. 2020, 130, 104011. [Google Scholar] [CrossRef] [PubMed]
  30. Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 42, 2011–2023. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the 15th European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  32. Zhou, T.; Wang, S.; Zhou, Y.; Yao, Y.; Li, J.; Shao, L. Motion-attentive transition for zero-shot video object segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 13066–13073. [Google Scholar]
  33. Yu, C.; Wang, J.; Peng, C.; Gao, C.; Yu, G.; Sang, N. Bisenet: Bilateral segmentation network for real-time semantic segmentation. In Proceedings of the 15th European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 325–341. [Google Scholar]
  34. Fu, J.; Liu, J.; Tian, H.; Bao, Y.; Fang, Z.; Lu, H. Dual Attention Network for Scene Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3141–3149. [Google Scholar] [CrossRef] [Green Version]
  35. Zhang, Z.; Zhang, X.; Peng, C.; Xue, X.; Sun, J. ExFuse: Enhancing Feature Fusion for Semantic Segmentation. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018. [Google Scholar]
  36. Chen, H.; Qi, Y.; Yin, Y.; Li, T.; Liu, X.; Li, X.; Gong, G.; Wang, L. MMFNet: A multi-modality MRI fusion network for segmentation of nasopharyngeal carcinoma. Neurocomputing 2020, 394, 27–40. [Google Scholar] [CrossRef] [Green Version]
  37. Xie, F.; Mu, Y.; Guan, Z.; Shen, X.; Xu, P.; Wang, H. Oral leukoplakia (OLK) segmentation based on Mask R-CNN with spatial attention mechanism. J. Northwest Univ. 2020, 50, 9–15. [Google Scholar]
  38. Wu, Y.; He, K. Group normalization. In Proceedings of the 15th European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  39. Crum, W.R.; Camara, O.; Hill, D.L.G. Generalized overlap measures for evaluation and validation in medical image analysis. IEEE Trans. Med. Imaging 2006, 25, 1451–1461. [Google Scholar] [CrossRef]
Figure 1. An example of nailfold microhemorrhages [7]. Nailfold capillaroscopy (a) and the nailfold microhemorrhages (b) in the figure.
Figure 2. The overall architecture of the proposed DAFM-Net. Our model takes as input the images of nailfold microhemorrhages and performs precise pixelwise classification.
Figure 3. The dual attention fusion module. ⊗ denotes elementwise multiplication, ⊕ denotes elementwise summation, and σ denotes the sigmoid function.
Figure 4. Qualitative comparison of nailfold microhemorrhage segmentation. From left to right: the original images, ground truth (GT), and segmentation masks produced by U-Net, U-Net+CAM, U-Net+SAM, U-Net+CBAM, U-Net+SSA, and our DAFM-Net.
Table 1. Results of U-Net using different attention mechanisms.

Method        IOU     Dice
U-Net         54.37   63.89
U-Net+CAM     71.06   81.20
U-Net+SAM     72.91   82.58
U-Net+CBAM    69.75   80.44
U-Net+SSA     71.82   81.73
U-Net+DAFM    73.03   82.92
Table 2. Results of U-Net and the proposed DAFM-Net using different normalization methods.

Method      Normalizer   IOU     Dice
U-Net       BN           54.37   63.89
U-Net       GN           61.73   72.98
DAFM-Net    BN           73.03   84.67
DAFM-Net    GN           78.03   87.34
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Liu, R.; Tian, J.; Li, Y.; Chen, N.; Yan, J.; Li, T.; Liu, S. Nailfold Microhemorrhage Segmentation with Modified U-Shape Convolutional Neural Network. Appl. Sci. 2022, 12, 5068. https://doi.org/10.3390/app12105068

