Article

Novel Biased Normalized Cuts Approach for the Automatic Segmentation of the Conjunctiva

Giovanni Dimauro 1,* and Lorenzo Simone 2
1 Department of Computer Science, University of Bari, 70125 Bari, Italy
2 Department of Computer Science, University of Pisa, 56127 Pisa, Italy
* Author to whom correspondence should be addressed.
Electronics 2020, 9(6), 997; https://doi.org/10.3390/electronics9060997
Submission received: 18 May 2020 / Revised: 5 June 2020 / Accepted: 9 June 2020 / Published: 14 June 2020
(This article belongs to the Special Issue Biomedical Image Processing and Classification)

Abstract: Anemia is a common public health disease, widespread worldwide. In many cases it affects the daily lives of patients, who need medical assistance and continuous monitoring. The medical literature reports empirical evidence of a correlation between conjunctival pallor observed during physical examination and a diagnosis of anemia. Although humans exhibit natural expertise in pattern recognition and in associative skills based on hue properties, the variance of their estimates is high, so blood sampling remains necessary even for monitoring. To design automatic systems for the objective evaluation of pallor using digital images of the conjunctiva, reliable automatic segmentation of the eyelid conjunctiva must first be obtained. In this study, we propose a graph-partitioning segmentation approach: a semantic segmentation procedure that identifies a diagnostically meaningful region of interest by exploiting normalized cuts for perceptual grouping, introducing a bias towards the spectrophotometric features of hemoglobin. The reliability of the identification of the region of interest is demonstrated both with standard metrics and by measuring the correlation between the color of the ROI and the hemoglobin level, based on 94 samples distributed with respect to age, sex and hemoglobin concentration. The automatically segmented region of interest is suitable for diagnostic procedures based on quantitative hemoglobin estimation from the exposed tissues of the conjunctiva.

1. Introduction

1.1. Background

Anemia is a blood disorder in which the number of red blood cells is inadequate to carry oxygen to the body's tissues and organs. It affects about a third of the global population, making it the most common blood disorder according to epidemiological results [1,2,3]. Each form of this condition has its own specific underlying causes. The production of erythrocytes in the blood involves the bone marrow and erythropoietin, a hormone produced by the kidneys, which regulates erythropoiesis and keeps the number of erythrocytes in the blood approximately constant [4]. Adequate production of red blood cells prevents conditions such as anemia and tissue hypoxia. To promote normal erythropoiesis, correct hemoglobin synthesis is required. Hemoglobin, an iron-containing protein, is the predominant protein in erythrocytes and is responsible for transporting oxygen from the lungs to the other tissues. Anemia caused by deficiencies of the aforementioned factors results in the production of abnormal and atypical erythrocytes [5]. Diagnosing anemia requires in most cases a complete blood count (CBC) to check several properties, including hemoglobin and hematocrit levels. Physiological requirements depend on several factors, such as sex, age, stage of pregnancy and altitude. The thresholds presented in Table 1 are used to diagnose anemia in individuals in a screening or clinical setting according to World Health Organization diagnostic guidelines [6].
There has long been worldwide interest in simple, cheap and robust procedures to measure hemoglobin without requiring specialized primary health-care workers or medical laboratories [7]. In response to this need, the WHO developed the hemoglobin color scale (HCS) in 2001. It consists of a small card with six shades of red, from lighter to darker, representing hemoglobin concentrations from 4 to 14 g/dL in steps of 2 g/dL. The specificity of this method has been disputed in the literature; for instance, a 2005 review of 14 studies reported mostly high sensitivity for detecting anemia (75–97%) [8]. Nevertheless, what is crucial about HCS is its potential for opening the way to different approaches requiring a mixture of expertise from different disciplines, such as computer science. Like other diagnostic-clinical and analytical-laboratory medical disciplines that are beginning to make extensive use of image, sound or signal analysis and of machine and deep learning techniques [9,10,11,12,13,14,15,16,17,18], it is worthwhile to invest in the research and development of technologies such as those presented in this paper, with the dual purpose of significantly reducing the costs borne by national health systems and relieving healthcare and medical services of a considerable amount of practically useless activity. Since the importance of the objective evaluation of conjunctival pallor was recognized, much has been done: numerous researchers have worked to develop methods, techniques and devices to make the estimation of the hemoglobin level, or the detection of severe anemia, as reliable as possible in a non-invasive way. We summarize this line of work in Section 1.3, "Related Works".

1.2. Haemoglobin Spectrophotometry

HCS and the physical examination of exposed tissues, such as the palpebral conjunctiva or nail beds, both rely on how humans perceive colors in the optical spectrum [19]. To better analyze and handle this phenomenon from a computer vision point of view, some chemical insight is required. Spectrophotometry in chemistry is the quantitative measurement of the reflective or absorptive properties of a material as a function of wavelength. The spectra of the hemoglobin molecule vary based on whether it is bound to oxygen, carbon monoxide or nothing; the latter is also called deoxygenated Hb [20].
We relied on experimental literature data [21] for the absorption spectra of hemoglobin used for both plots in Figure 1. The absorption coefficient $\mu_a^{Hb}$ for HbO2 and Hb is calculated as follows:

$$\mu_a^{Hb}(\lambda) = \frac{2.303 \cdot e_{Hb}(\lambda)\,[\mathrm{L/(cm \cdot mol)}] \cdot 150\,[\mathrm{g/L}]}{M_{Hb}\,[\mathrm{g/mol}]},$$

where $e_{Hb}(\lambda)$ [L/(cm·mol)] is the Hb molar extinction coefficient and $M_{Hb}$ [g/mol] is the Hb gram molecular weight, assuming a concentration of 150 grams per liter.
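As a minimal numerical illustration of this conversion (assuming the commonly cited Hb gram molecular weight of about 64,500 g/mol; the extinction value below is an illustrative placeholder, not a datum from [21]):

```python
# Sketch: absorption coefficient from the molar extinction coefficient,
# following the equation above. The example extinction value is an
# illustrative placeholder, not tabulated data from reference [21].
M_HB = 64500.0   # assumed Hb gram molecular weight [g/mol]
C_HB = 150.0     # assumed Hb concentration [g/L]

def mu_a_hb(e_hb: float) -> float:
    """Absorption coefficient [1/cm] from molar extinction e_hb [L/(cm*mol)]."""
    return 2.303 * e_hb * C_HB / M_HB

print(mu_a_hb(50000.0))  # ~267.8 cm^-1 for e_Hb = 50,000 L/(cm*mol)
```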
Over the years, the palpebral conjunctiva has proven a good site for diagnosing anemia, being a highly vascularized area characterized by numerous capillaries. In [22] a multi-layered tissue model is proposed and investigated to approximate the lower eyelid with seven layers: conjunctival epithelium, tarsal plate, orbicularis oculi, subcutaneous tissue, dermis, epidermis and stratum corneum on the outside of the eyelid tissue. The conjunctiva is perfused by the ascending branch of the posterior conjunctival artery. The presence of interweaving capillary networks penetrating several layers of the model, with the mucous membrane being highly transparent, allows the model to be approximated in the digital image domain. As shown in Figure 1, Hb and HbO2 both absorb wavelengths from 275 to about 550 nm, corresponding to the visible spectrum from purple to light green. Wavelengths above 600 nm are highly reflected, matching colors from orange to dark red. A typical human eye perceives wavelengths in a range from 380 to 740 nm. The cytoplasm of the red blood cell is rich in hemoglobin, which is responsible for the reddish appearance of exposed tissues and of blood in general. The laboratory-based experiments conducted in [23,24] inspired us to build on those results for segmentation and digital image analysis related to hemoglobin.

1.3. Related Works

Over the years many researchers have worked to develop non-invasive methods for anemia detection through hemoglobin estimation. The relevance of conjunctival hue in the clinical evaluation of anemia was tested in [25] on 219 healthy ambulatory subjects. Three appropriately trained non-clinicians agreed on conjunctival hue with kappa coefficients between 0.27 and 0.34; hue assessment thus depends strongly on the objective of the assessment and the training of field personnel. Compared with earlier results obtained by physical examination, the latest digital photography minimizes variance and optimizes specificity and sensitivity by using machine learning and automatic segmentation procedures. Even once the most successful technology is established, the question remains of which region is best to analyze for the color properties associated with the best results. The study in [26], from an ophthalmological point of view, opened a debate on the correlation of anemia with the bulbar conjunctival blood column versus palpebral conjunctival hue (PCH). From its results, it seems that the bulbar conjunctiva can be successfully included in the set of interesting features, achieving slightly lower specificity than PCH but higher sensitivity. Paradigms of non-invasive, on-demand diagnostics based on smartphones and digital images are spreading, driven by advances in remote diagnosis and by affordability [27,28,29]. A smartphone camera-based application monitoring blood hemoglobin concentration was developed in [30]. Using a light source pointed at the patient's finger, the authors performed a chromatic analysis on 31 samples, achieving a sensitivity of 85.7% and a precision of 76.5%, and received Food and Drug Administration agreement. Another smartphone-based self-screening tool is depicted in [31], using digital images of fingernail beds. Patients select the regions of interest corresponding to the nail beds by themselves, and the result is then displayed on the smartphone screen; camera flash reflections and white spots that may affect Hgb level measurements are removed with a quality control algorithm. The authors reported an accuracy of ±0.92 g/dL of the CBC hemoglobin level with personalized calibration, suggesting the relevance of such systems as a monitoring utility. In our study, we analyzed assumptions from related past works and the clinical correlation between conjunctival pallor and anemia [32], proposing a fully automated segmentation algorithm. Throughout this process, color features derived from the hemoglobin reflectance spectrum play a key role in biasing the region-of-interest proposal.
In the literature, few works deal with the automatic segmentation of the conjunctiva. In particular, reference [33] proposes a method for the automatic segmentation of the palpebral conjunctiva based on an image-processing pipeline of RGB equalization, unsharp-mask filtering and red-channel masking. In [34] the authors developed an algorithm that automatically segments the image by finding a "distinctly red" region bounded by two parallel long-running edges at the top and the bottom; this is achieved by combining Canny edge detection with morphological operations in the CIELAB color space. However, with the aim of estimating anemia, they stated that their segmentation method was less reliable than manual conjunctiva segmentation by an expert physician. In [35] the authors use triangle thresholding for binary differentiation between the palpebral conjunctiva and the background.

1.4. Image Capturing Methodology

The technique adopted to capture digital images of a patient's conjunctiva follows the most recent approach of the research conducted in [36,37,38]. To recap, the main requirements for an effective tool for estimating the condition of anemia through digital images of the palpebral conjunctiva are:
  • Provide an easy-to-use device with affordable hardware components;
  • Its usage should not require trained medical personnel;
  • It should provide remote diagnosis and telemedicine conveniences.
The acquisition system is shown in Figure 2. It consists of a macro-lens assembled into a specially designed, 3D-printed illuminated spacer (Figure 2a) and a typical smartphone, shown in the real-life application (Figure 2b). Attached to a smartphone, the lens can take high-resolution images (we used the Aukey PL-M1 25 mm 10x macro lens). The LED lights can be powered directly from the smartphone or by a battery applied to the smartphone cover. The lens is fixed on the plastic cover of the smartphone: this device makes it possible to obtain high-resolution images close to the eye, insensitive to ambient lighting conditions.
The dataset used in the present study, which will be described later, has been created with a Samsung S6 smartphone.

2. Proposed Method

Each digital image from the dataset is converted into an RGB color space matrix representation. The segmentation process can be summarized in three phases: dimensionality reduction by clustering, grouping as graph partitioning, and a final ROI extraction. The introduction of a preliminary clustering step speeds up N-Cuts considerably, in line with the space and time complexity analysis given in the original N-Cuts paper. The algorithm that constructs the region adjacency graph (RAG) no longer considers each pixel at the original resolution, but groups of pixels that preserve the spatial and color differences amongst them. Finally, we aim at capturing a non-linear relation between the brightness intensities of the red and green channels, based on the spectrophotometric reflectance assumptions discussed above.

2.1. K-Means Dimensionality Reduction

The objective of a clustering task is to group data instances into subsets maximizing a similarity measure, while dissimilar instances should belong to different groups [39,40,41]. We applied the principles of k-means clustering to the image segmentation task. The main goal of this phase is to produce a feature space, similar to a Voronoi diagram in the plane, that reduces the complexity of the graph representing the original image. Each pixel from now on will be treated as a vector in a five-dimensional space: the x and y coordinates from the matrix, and the R, G and B channel intensities from the color representation.
$$f(x,y) = \mathbf{p} = (\alpha_x p_x,\ \alpha_y p_y,\ \alpha_r p_r,\ \alpha_g p_g,\ \alpha_b p_b)$$
This approach allows us to iteratively minimize the sum of distances from each pixel to its cluster centroid. We briefly summarize the steps of the algorithm as follows:
  • Initialize centroid vectors.
  • Pixels retain spatial as well as color features, allowing us to define a weighted Euclidean distance as the measure of similarity between them. For each pixel of the image, calculate the distance d to each centroid, defined as:
    $$d(u,v) = \lVert u - v \rVert = \sqrt{(u_x - v_x)^2 + (u_y - v_y)^2 + (u_r - v_r)^2 + (u_g - v_g)^2 + (u_b - v_b)^2}$$
  • Each pixel is assigned to the centroid minimizing d.
  • Recalculate the position of each centroid $c_k$, where $p_{k_i}$ is the $i$-th pixel assigned to the $k$-th centroid, using the relation:
    $$c_k = \frac{1}{n} \sum_{i=1}^{n} p_{k_i}$$
This approach, which belongs to the broader field of unsupervised learning, consists of initial batch updates, in which at each step points are reassigned to their nearest cluster centroid, followed by recalculation of the centroids. In online updates, points are reassigned only if doing so reduces the sum of intra-cluster distances. These updates converge towards a local minimum in short order.
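As a minimal sketch of this clustering step (assuming scikit-learn; the weighting factors alpha_xy, alpha_rgb and the number of clusters k are illustrative, not the values used in this paper):

```python
# Sketch: five-dimensional k-means over (x, y, R, G, B) pixel vectors.
# The weights alpha_xy / alpha_rgb and k are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_superpixels(image, k=20, alpha_xy=1.0, alpha_rgb=1.0):
    """Cluster pixels of an RGB image (H, W, 3); returns an (H, W) label map."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    features = np.column_stack([
        alpha_xy * xs.ravel(),               # spatial features
        alpha_xy * ys.ravel(),
        alpha_rgb * image[..., 0].ravel(),   # color features
        alpha_rgb * image[..., 1].ravel(),
        alpha_rgb * image[..., 2].ravel(),
    ]).astype(float)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    return labels.reshape(h, w)
```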
In Figure 3, the original image is processed with a three-dimensional (R, G and B) color space and, in the last picture, with the five-dimensional model including both color and spatial features. The latter does not increase computational complexity, since the only calculation affected is the distance function. Moreover, in each digital image analyzed, the intra-cluster variance is minimized efficiently, with properly outlined boundaries between each group of pixels. The instances classified near the mucocutaneous junction are noisy in the first approach, while in the second each semantic class (iris, pupil, sclera, eyelid and conjunctiva) appears as a compact union of clusters.

2.2. Normalized Cuts Segmentation

K-means as a clustering algorithm is a valuable approach for exploiting local impressions of a scene, but it lacks a global or hierarchical perspective. For this reason, we take advantage of a grouping algorithm, such as NCuts, that treats the segmentation task as a graph partitioning problem and generalizes better across different scenarios. Conventionally, the normalized cut is an unbiased measure of dissimilarity between graph subgroups [42]. We converted the set of superpixels from the five-dimensional feature space into a weighted undirected graph G = (V, E). Each point is included in the set of nodes, with one edge for each pair of vertices.
The region adjacency graph is constructed from the areas precomputed by the k-means segmentation algorithm. The connections amongst them are depicted in Figure 4b and representable as a weight matrix W. The edge weight $w_{ij}$ from node i to node j is defined, as in the standard normalized-cuts approach, as the product of a feature-similarity term and a spatial term. $X(i)$ is the coordinate vector of the centroid pixel and $F(i)$ is a feature vector based on the averaged R, G and B intensities of the pixels in the area. The value r acts as a proximity threshold on the Euclidean distances amongst the precomputed centroids. In our specific application we tried configurations ranging from 3 to 100, which regulate the sparsity of the weight matrix but did not affect the segmentation outcome. Weights and features are described by the following equations:
$$w_{ij} = \begin{cases} e^{-\frac{\lVert F(i) - F(j) \rVert_2^2}{\sigma_I}} \cdot e^{-\frac{\lVert X(i) - X(j) \rVert_2^2}{\sigma_X}}, & \text{if } \lVert X(i) - X(j) \rVert_2 < r \\ 0, & \text{otherwise} \end{cases}$$

$$F(i) = \left[ \frac{1}{n} \sum_{j=1}^{n} p_j^{r},\quad \frac{1}{n} \sum_{j=1}^{n} p_j^{g},\quad \frac{1}{n} \sum_{j=1}^{n} p_j^{b} \right]$$
The algorithm is capable of extracting significant components from each sample from the dataset, avoiding intra-cluster variations.
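A minimal sketch of this stage using scikit-image (assuming version 0.19 or later) is shown below; SLIC superpixels stand in for the k-means regions of Section 2.1, and the parameter values are illustrative assumptions rather than the configuration used in this paper:

```python
# Sketch: region adjacency graph + recursive normalized cuts via scikit-image.
# SLIC stands in for the paper's five-dimensional k-means step; n_segments,
# compactness and thresh are illustrative values, not the authors' settings.
from skimage import graph, io, segmentation

image = io.imread("conjunctiva_sample.png")  # hypothetical input file
labels = segmentation.slic(image, n_segments=200, compactness=10, start_label=1)
rag = graph.rag_mean_color(image, labels, mode="similarity")  # Gaussian edge weights
ncut_labels = graph.cut_normalized(labels, rag, thresh=0.001, num_cuts=10)
```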
In Figure 5, we added a visual semantic description of the resulting cuts. With this phase, we raise the level of abstraction of the segmentation, starting from the clusters of Figure 3; we end up with features closer to an anatomical perspective. The small gap in colors between the conjunctival area and mucocutaneous junction is perfectly delineated in each sample from the dataset, paving the way for a machine-learning-based anemia estimator.
In the proposed segmentation output of Figure 5, a recursive approach could be run to further decompose regions of interest within the conjunctival area. As an example, this could lead to a better separation of the two conjunctivae, palpebral and forniceal, contributing to the open debate about which of the two is the better estimator of anemia [43]. In fact, the palpebral conjunctiva highlights the vascularization of the underlying area better than the forniceal and probably allows minimal variations of blood color to be highlighted. This assumption seems confirmed by the scientific literature. However, some authors take into consideration the whole conjunctiva, including both the palpebral and forniceal parts, to construct and validate their models; it is still an open problem. Furthermore, in [43] the authors state that it would be interesting to establish whether investigations carried out on a small portion of the conjunctiva can be sufficient and position independent. In fact, the sparsity and density of the blood micro-vessels can change in different parts of the eyelid. Therefore, the recursive identification of further clusters can help to answer the above questions.

2.3. Hemoglobin Heatmap Coefficients

In medical image and radar signal processing, contrast enhancement is a widely used technique, with applications ranging from improving the quality of photographs acquired in poor conditions [44] to emphasizing regions of interest [45,46]. Histogram equalization is one of the most common approaches due to its simple mechanism and effectiveness, but as a drawback, image brightness usually changes after the procedure because of its flattening behavior. In our study the objective is to approximate the multi-layered spectrophotometric reflectance model investigated in Section 1.2, deriving a mathematical description suited to digital images. Several studies in the literature apply spectral-domain scanning, which entails a time-consuming acquisition process and expensive equipment; this approach does not fit our goal of developing a cheap, non-invasive diagnostic tool. The ill-posed problem known as spectral reconstruction from an RGB scene has been tackled with deep learning techniques in [47,48]. Recent research strongly supports the validity of these approaches; nevertheless, our application domain allows us to further simplify the required solution. Our method, interpreting the image as a signal, performs a pointwise non-linear transformation of the red and green color values, returning a coefficient that highlights vascularized regions. In the literature, the ratio between the R and G channels has often been used as a guide to spot those areas, the highest values being found in forniceal and palpebral conjunctival tissue. We propose a generalized logistic filtering technique offering more flexibility than a standard sigmoid. Considering an image I as a vector of three channel functions over grid coordinates, we obtain the following σ transformation:
$$I(x,y) = \begin{bmatrix} r(x,y) \\ g(x,y) \\ b(x,y) \end{bmatrix}, \qquad \sigma(I,x,y) = \frac{1}{1 + e^{-\alpha \left( \frac{I_r(x,y)}{I_g(x,y)} - \beta \right)}}$$
The parameter α determines the slope of the function, emphasizing the discrepancy in terms of ratio between color channels; β acts as a minimum ratio threshold for the activation of each pixel.
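A minimal sketch of this pointwise filter (assuming NumPy; the small epsilon guarding against division by zero is our addition, not part of the equation above):

```python
# Sketch: generalized logistic filter on the R/G brightness ratio.
# alpha and beta follow the parameterization discussed in the text.
import numpy as np

def hemoglobin_heatmap(image, alpha=4.0, beta=2.0):
    """Pointwise sigma transform of an RGB image (H, W, 3); scores in (0, 1)."""
    rgb = image.astype(float)
    eps = 1e-6                                 # guard against division by zero
    ratio = rgb[..., 0] / (rgb[..., 1] + eps)  # R/G brightness ratio
    return 1.0 / (1.0 + np.exp(-alpha * (ratio - beta)))
```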
A comparison of the behavior of the standard and generalized logistic functions with parameterization α = 4 and β = 2 is depicted in Figure 6. This parameterization yielded results with a remarkable ability to generalize across diagnostic imaging, from conjunctival tissue to endoscopic domains. Increasing values of α, which control the steepness, tend towards the trivial case of a binarizing step function, losing information about the relationship underlying the variety of brightness ratios. An application of this model to digital images of the conjunctival region is illustrated in Figure 7.
The real values range from 0 to 1, according to the definition of the σ function. The filtering process produces a scoring matrix that assigns lower values to the background, including the sclera, pupil, iris, eyelid and the white support platform of the device. The palpebral and forniceal conjunctivae are primarily perfused by both the internal and external carotid arteries; this is reflected in high values in the scoring matrix, ranging from 0.7 to 1, with the respective blood vessels significantly highlighted, as shown in Figure 7b.
Since we are interested in obtaining a semantic interpretation of the regions proposed by NCut, the matrix of coefficients acts as an effective bias when calculating the probability distribution of each class. Edge weights crossing aggregated pixels are strengthened or weakened according to σ, yielding a region proposal based on the magnitude of the connection. In Figure 8, we provide a subset of 10 digital images from the dataset, showing the qualitative difference between the proposed semantic segmentation (top row) and the manually segmented ground truth (second row). In Figure 9 we provide two samples of erroneous acquisitions to show the robustness of the proposed segmentation in unusual conditions; in fact, only images with excellent characteristics can provide useful information for a correct estimation of anemia.
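The text does not spell out the exact weight-update rule, so the following is only one plausible reading: each RAG edge weight is scaled by the mean σ score of the two regions it connects, so that strongly hemoglobin-like regions remain grouped together.

```python
# Hypothetical sketch of the biasing step (the exact update rule is not given
# in the text): scale each RAG edge weight by the mean sigma score of the two
# regions it connects.
import numpy as np

def bias_rag_weights(rag, labels, scores):
    """Modulate edge weights of a scikit-image RAG in place with a score map."""
    # Average sigma score inside each labeled region.
    region_score = {r: scores[labels == r].mean() for r in np.unique(labels)}
    for u, v, data in rag.edges(data=True):
        bias = 0.5 * (region_score[u] + region_score[v])
        data["weight"] *= bias   # strengthen or weaken the connection
```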

3. Results

The digital images of the patients' eyes were captured with the device shown in Figure 2, assembled on a Samsung S6 smartphone; 94 patients were involved, aged 19–75 (average 34), 46 female and 48 male, with Hb concentrations in the range 7.6–17.1 g/dL (average 11.45 g/dL).
Each picture underwent a manual selection process, isolating and cropping regions of the palpebral and forniceal conjunctiva, as shown in Figure 10. This step is needed to compare the manually segmented images, taken as the ground truth, with the automatic segmentation output of the proposed model. We evaluated both the spatial and the color properties of the regions of interest, choosing the metrics best suited to this specific medical image segmentation problem [49]. F1 (FMS1), also known as the Sørensen–Dice coefficient, is the harmonic mean of precision and recall, defined as follows for binary segmentation:

$$F_1 = 2 \cdot \frac{Precision \cdot Recall}{Precision + Recall} = \frac{2 \cdot TP}{2 \cdot TP + FP + FN}$$
The Dice coefficient, being an overlap measure ranging from 0 to 1, gives a useful perspective on the quality of the segmentation. We are also interested in a measure involving the number of pixels classified as non-relevant (false positives), which is taken into account neither by the Dice coefficient nor by the Jaccard similarity. The accuracy metric is helpful here, as it expresses the rate of correctly classified pixels over the full image.
$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$$
With the aim of averaging the overlap metrics, we computed a binary confusion matrix for each image. The values of this matrix are the numbers of pixels in the set intersections and set differences between the ground-truth image and the proposed segmentation, visually described in Figure 10c.
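A minimal sketch of this per-image computation (assuming NumPy and boolean masks of equal shape):

```python
# Sketch: per-image confusion matrix and the metrics defined above,
# computed from binary ground-truth and predicted segmentation masks.
import numpy as np

def segmentation_metrics(gt, pred):
    """F1 (Dice), accuracy, sensitivity and specificity for boolean masks."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    tp = np.sum(gt & pred)
    tn = np.sum(~gt & ~pred)
    fp = np.sum(~gt & pred)
    fn = np.sum(gt & ~pred)
    return {
        "f1": 2 * tp / (2 * tp + fp + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```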
The averaged sums of the confusion matrices are summarized in Table 2. To give the reader the opportunity to inspect the indicators for each sample in the dataset, Table A1 reports the values of the above metrics in full. The high specificity achieved on this segmentation task highlights the ability to disregard non-conjunctival regions with proper confidence. On the other hand, sensitivity and F1, being overlap measures, can reasonably fluctuate with higher variance, meaning in most cases that a finer, still meaningful subset of the conjunctival region has been selected.
The results indicated by the above metrics are sufficient to establish the effectiveness of our segmentation algorithm. Nevertheless, since we are dealing with a rigorous diagnostic procedure, even if the precision of the overlap between the proposed and ground-truth ROIs is acceptable, we think a further investigation of the color properties of left-out or added regions is worthwhile.
CIELAB is one of the most useful color spaces for erythema analysis and diagnostic computer vision, consisting of an approximately uniform three-dimensional space: L*, a*, b*. The a* dimension has a well-known correlation with hemoglobin values in this domain [36,37,38]. Our purpose is to examine the strength of the linear correlation between the mean a* values extracted from the digital images of the conjunctivae and the corresponding Hb concentrations (g/dL) from blood samples taken at almost the same time as the image capture (Figure 11). Generalizing the idea of the Pearson correlation coefficient (PCC) from two random variables to two standardized vectors, we can estimate the weight of their linear correlation, ranging from −1 to 1 and defined by the following equation:
$$\rho(a,b) = \frac{1}{N-1} \sum_{i=1}^{N} \left( \frac{a_i - \mu_a}{\sigma_a} \right) \cdot \left( \frac{b_i - \mu_b}{\sigma_b} \right)$$
We computed the PCC between the mean a* values of both the manually and the automatically segmented images and Hb g/dL over the entire dataset of 94 samples, obtaining 0.59 and 0.53 respectively. The results confirm not only the moderate linear correlation between those values, but also the close agreement between human-based manual segmentation and the proposed fully automated approach.
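A minimal sketch of this analysis (assuming scikit-image and SciPy; array and file names are placeholders):

```python
# Sketch: mean CIELAB a* inside each segmented ROI, then Pearson correlation
# against measured Hb concentrations. Inputs below are placeholders.
import numpy as np
from scipy.stats import pearsonr
from skimage.color import rgb2lab

def mean_a_star(image, mask):
    """Mean a* component over the ROI pixels of an RGB image."""
    lab = rgb2lab(image)                 # L*, a*, b* channels
    return lab[..., 1][mask.astype(bool)].mean()

# a_values = np.array([mean_a_star(img, roi) for img, roi in dataset])
# hb_values = np.array([...])           # measured Hb concentrations [g/dL]
# rho, p = pearsonr(a_values, hb_values)
```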

4. Conclusions

We developed a fully automated segmentation procedure, based on graph partitioning, that exposes conjunctival regions while maximizing the correlation between their color properties and the hemoglobin concentration in the blood, in accordance with the multi-layered anatomical structure of these tissues. The ROIs extracted by the model underwent an in-depth quantitative comparison with the ground truth, using state-of-the-art similarity metrics and the PCC between the a* component of the CIELAB space and the hemoglobin values. The results attest to the reliability of the method and its ability to generalize across patients belonging to heterogeneous classes: the accuracy of the overlap between the manual and automatic ROI selections, measured with classic metrics, is very good, and the correlations obtained between the Hb level measured in vivo and that estimated through the color of the manual and automatic ROIs are comparable. The proposed method paves the way for further studies involving deep learning techniques, both for classifying an estimated anemia risk category and for regression to predict real Hb values. With this study we contribute to the broader diagnostic research field of image processing and analysis of conjunctival pallor in support of anemia diagnosis. The advancement of this non-invasive image capturing procedure will make it possible to embed the model in a wearable device screening the Hb risk category in real time, without the need for physician support.

Author Contributions

The authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Results computed from the confusion matrices of the comparison between manually and automatically segmented images of the conjunctiva for the entire dataset of 94 samples.
Image ID    F1-Measure    Accuracy    Sensitivity (TPR)    Specificity (TNR)
164733      0.7547        0.9097      0.6060               1.0000
918410      0.7647        0.9287      0.6369               0.9567
094523      0.9123        0.9674      0.8696               0.9595
103722      0.6429        0.9687      0.6103               0.6792
190841      0.7011        0.8625      0.586                0.8724
154215      0.6494        0.9558      0.5505               0.7915
160737      0.7844        0.9327      0.6470               0.9957
155221      0.7179        0.9813      0.7044               0.7319
122613      0.7616        0.9176      0.8953               0.6627
132714      0.6641        0.8779      0.4971               1.0000
140525      0.7250        0.9316      0.9255               0.5959
154320      0.5296        0.8965      0.3602               1.0000
143315      0.7563        0.8955      0.6081               1.0000
145200      0.7834        0.9837      0.7677               0.7997
150240      0.6542        0.9170      0.4861               1.0000
155237      0.7672        0.9549      0.9374               0.6493
801000      0.7595        0.9460      0.9613               0.6277
121216      0.7848        0.9521      0.6534               0.9823
120556      0.6804        0.9080      0.5207               0.9815
134128      0.7827        0.9675      0.6715               0.938
150536      0.8229        0.9769      0.8237               0.8221
151234      0.7343        0.9285      0.6025               0.9400
155418      0.8351        0.9757      0.8186               0.8523
152136      0.7407        0.9264      0.8862               0.6362
152924      0.6875        0.9653      0.5282               0.9846
153536      0.6818        0.8958      0.5174               0.9995
154129      0.8665        0.9596      0.9719               0.7817
154759      0.8436        0.9559      0.7770               0.9226
155456      0.8463        0.9539      0.8111               0.8846
160045      0.6242        0.9333      0.4544               0.9965
123002      0.7943        0.9244      0.6703               0.9745
122915      0.7728        0.9664      0.7984               0.7488
232040      0.6222        0.9300      0.5065               0.8064
160522      0.8019        0.9790      0.8133               0.7909
121836      0.5998        0.8646      0.5157               0.7166
134745      0.7401        0.8944      0.7800               0.7040
211040      0.4881        0.9146      0.3258               0.9724
210631      0.9184        0.9838      0.9235               0.9134
223744      0.7676        0.9013      0.6468               0.9440
224452      0.655         0.8827      0.4872               0.9991
231923      0.7167        0.9513      0.5585               0.9999
232931      0.8046        0.9636      0.7029               0.9406
141804      0.7793        0.9310      0.7248               0.8428
152107      0.6693        0.9144      0.5063               0.9871
161452      0.7892        0.8955      0.7651               0.9673
154641      0.8193        0.9806      0.8627               0.7801
210419      0.8587        0.9675      0.8427               0.8753
221400      0.8056        0.9256      0.6767               0.9952
222325      0.8093        0.9298      0.6913               0.9758
140311      0.6237        0.9594      0.4608               0.9645
180148      0.8293        0.9154      0.7085               0.9998
183506      0.7559        0.9214      0.6226               0.9617
195511      0.7103        0.9149      0.5554               0.9849
201501      0.7197        0.9031      0.5662               0.9874
184029      0.7589        0.9715      0.6305               0.9531
184734      0.8508        0.9636      0.8814               0.8221
185602      0.8863        0.9722      0.8574               0.9172
190638      0.8229        0.9267      0.7120               0.9747
191233      0.8163        0.9388      0.8559               0.7801
191620      0.6685        0.8737      0.7922               0.5782
194457      0.7283        0.9508      0.5858               0.9624
114700      0.6357        0.9133      0.5007               0.8705
115146      0.6255        0.8800      0.5202               0.7842
115853      0.8018        0.9526      0.7490               0.8626
120426      0.6434        0.9588      0.5084               0.8762
202058      0.6903        0.8737      0.5271               1.0000
123714      0.7709        0.9415      0.8038               0.7406
133633      0.6015        0.9539      0.4604               0.8673
143301      0.8145        0.9803      0.7065               0.9614
144551      0.7174        0.9540      0.8865               0.6025
145301      0.6573        0.9124      0.4972               0.9693
150804      0.6424        0.9447      0.4849               0.9515
150539      0.8357        0.9547      0.7311               0.9750
151450      0.7388        0.9020      0.5886               0.9917
153146      0.7744        0.9295      0.6382               0.9844
162916      0.7940        0.9369      0.6713               0.9716
202947      0.9040        0.9641      0.8552               0.9587
180925      0.7136        0.9124      0.6152               0.8494
190130      0.8209        0.9776      0.7666               0.8834
190334      0.6594        0.9354      0.7855               0.5682
121621      0.8401        0.9549      0.7570               0.9436
154729      0.4816        0.9293      0.3244               0.9343
205012      0.8539        0.9651      0.9005               0.8120
205445      0.8337        0.9887      0.8632               0.8063
222551      0.7993        0.9394      0.7278               0.8863
223503      0.8563        0.9834      0.8353               0.8783
224240      0.7352        0.9379      0.6334               0.8760
205917      0.7118        0.9691      0.6264               0.8242
225922      0.7938        0.9492      0.8498               0.7447
231050      0.7480        0.9386      0.7003               0.8027
183626      0.5987        0.9463      0.4453               0.9133
161347      0.7855        0.9466      0.7371               0.8406
130148      0.6814        0.9690      0.5243               0.9728
130225      0.7896        0.9383      0.6632               0.9757

References

  1. World Health Organization. Worldwide Prevalence of Anaemia 1993–2005: WHO Global Database on Anaemia; de Benoist, B., McLean, E., Egli, I., Cogswell, M., Eds.; WHO: Geneva, Switzerland, 2008. [Google Scholar]
  2. World Health Organization. The World Health Report 2002; World Health Organization: Geneva, Switzerland, 2002. [Google Scholar]
  3. McLean, E.; Cogswell, M.; Egli, I.; Wojdyla, D.; Benoist, B. Worldwide prevalence of anaemia, WHO Vitamin and Mineral Nutrition Information System, 1993–2005. Public Health Nutr. 2008, 12, 444–454. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Koury, M.J. Red blood cell production and kinetics. In Rossi’s Principles of Transfusion Medicine; Wiley: Hoboken, NJ, USA, 2016; pp. 85–96. [Google Scholar] [CrossRef]
  5. White, J.; Porwit, A.M. Blood and Bone Marrow Pathology; Elsevier: Amsterdam, The Netherlands, 2011. [Google Scholar]
  6. World Health Organization; Centers for Disease Control and Prevention. Assessing the Iron Status of Populations; World Health Organization, Department of Nutrition for Health and Development: Geneva, Switzerland, 2005.
  7. Marn, H.; Critchley, J.A. Accuracy of the WHO Haemoglobin Colour Scale for the diagnosis of anaemia in primary health care settings in low-income countries: A systematic review and meta-analysis. Lancet Glob. Health 2016, 4, e251–e265. [Google Scholar] [CrossRef] [Green Version]
  8. Critchley, J.; Bates, I. Haemoglobin colour scale for anaemia diagnosis where there is no laboratory: A systematic review. Int. J. Epidemiol. 2005, 34, 1425–1434. [Google Scholar] [CrossRef] [PubMed]
  9. Dimauro, G.; Girardi, F.; Gelardi, M.; Bevilacqua, V.; Caivano, D. Rhino-Cyt: A System for Supporting the Rhinologist in the Analysis of Nasal Cytology. Intell. Comput. Theor. Appl. Lect. Notes Comput. Sci. 2018, 619–630. [Google Scholar] [CrossRef]
  10. Dimauro, G.; Ciprandi, G.; Deperte, F.; Girardi, F.; Ladisa, E.; Latrofa, S.; Gelardi, M. Nasal cytology with deep learning techniques. Int. J. Med Informatics 2019, 122, 13–19. [Google Scholar] [CrossRef] [PubMed]
  11. Triggiani, A.; Bevilacqua, V.; Brunetti, A.; Lizio, R.; Tattoli, G.; Cassano, F.; Soricelli, A.; Ferri, R.; Nobili, F.; Gesualdo, L.; et al. Classification of healthy subjects and Alzheimer’s disease patients with dementia from cortical sources of resting state EEG rhythms: A study using artificial neural networks. Front. Neurosci. 2017, 10. [Google Scholar] [CrossRef] [Green Version]
  12. Bevilacqua, V.; Pannarale, P.; Abbrescia, M.; Cava, C.; Paradiso, A.; Tommasi, S. Comparison of data-merging methods with SVM attribute selection and classification in breast cancer gene expression. BMC Bioinform. 2012, 13. [Google Scholar] [CrossRef] [Green Version]
  13. Bevilacqua, V.; Cariello, L.; Columbo, D.; Daleno, D.; Fabiano, M.D.; Giannini, M.; Mastronardi, G.; Castellano, M. Retinal fundus biometric analysis for personal identifications. In Proceedings of the International Conference on Intelligent Computing, Shanghai, China, 5–18 September 2008; Springer: Berlin, Germany, 2008; pp. 1229–1237. [Google Scholar]
  14. Bevilacqua, V.; D’Ambruoso, D.; Mandolino, G.; Suma, M. A new tool to support diagnosis of neurological disorders by means of facial expressions. In Proceedings of the IEEE International Symposium on Medical Measurements and Applications, Bari, Italy, 30–31 May 2011; pp. 544–549. [Google Scholar]
  15. Dimauro, G.; Caivano, D.; Bevilacqua, V.; Girardi, F.; Napoletano, V. VoxTester, software for digital evaluation of speech changes in Parkinson disease. In Proceedings of the IEEE International Symposium on Medical Measurements and Applications (MeMeA), Benevento, Italy, 15–18 May 2016. [Google Scholar] [CrossRef]
  16. Bevilacqua, V.; Brunetti, A.; Trotta, G.F.; Dimauro, G.; Elez, K.; Alberotanza, V.; Scardapane, A. A novel approach for Hepatocellular Carcinoma detection and classification based on triphasic CT Protocol. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), San Sebastian, Spain, 5–8 June 2017. [Google Scholar] [CrossRef]
  17. Dimauro, G.; Nicola, V.D.; Bevilacqua, V.; Caivano, D.; Girardi, F. Assessment of Speech Intelligibility in Parkinson’s Disease Using a Speech-To-Text System. IEEE Access 2017, 5, 22199–22208. [Google Scholar] [CrossRef]
  18. Dimauro, G.; Caivano, D.; Girardi, F.; Ciccone, M.M. The patient centered Electronic Multimedia Health Fascicle-EMHF. In Proceedings of the IEEE Workshop on Biometric Measurements and Systems for Security and Medical Applications (BIOMS), Rome, Italy, 17 October 2014. [Google Scholar] [CrossRef]
  19. Collings, S.; Thompson, O.; Hirst, E.; Goossens, L.; George, A.; Weinkove, R. Non-Invasive Detection of Anaemia Using Digital Photographs of the Conjunctiva. PLoS ONE 2016, 11, e0153286. [Google Scholar] [CrossRef] [Green Version]
  20. Townsend, D.; D’Aiuto, F.; Deanfield, J. Super actinic 420 nm light-emitting diodes for estimating relative microvascular hemoglobin oxygen saturation. J. Med. Biol. Eng. 2014, 34, 172–177. [Google Scholar] [CrossRef]
  21. Zhao, Y.; Qiu, L.; Sun, Y.; Huang, C.; Li, T. Optimal hemoglobin extinction coefficient data set for near-infrared spectroscopy. Biomed. Opt. Express 2017, 8, 5151. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Kim, O.; McMurdy, J.; Jay, G.; Lines, C.; Crawford, G.; Alber, M. Combined reflectance spectroscopy and stochastic modeling approach for noninvasive hemoglobin determination via palpebral conjunctiva. Physiol. Rep. 2014, 2, e00192. [Google Scholar] [CrossRef] [PubMed]
  23. Sengupta, B. Biophysical Characterization of Genistein in Its Natural Carrier Human Hemoglobin Using Spectroscopic and Computational Approaches. Food Nutr. 2013, 4, 83–92. [Google Scholar]
  24. Horecker, B. The absorption spectra of hemoglobin and its derivatives in the visible and near infra-red regions. J. Biol. Chem. 1943, 148, 173–183. [Google Scholar]
  25. Sanchez-Carrillo, C. Bias due to conjunctiva hue and the clinical assessment of anemia. J. Clin. Epidemiol. 1989, 42, 751–754. [Google Scholar] [CrossRef]
  26. Kent, A.; Elsing, S.; Hebert, R. Conjunctival vasculature in the assessment of anemia. Ophthalmology 2000, 107, 274–277. [Google Scholar] [CrossRef]
  27. Kanchi, S.; Sabela, M.I.; Mdluli, P.S.; Bisetty, K. Smartphone based bioanalytical and diagnosis applications: A review. Biosens. Bioelectron. 2018, 102, 136–149. [Google Scholar] [CrossRef]
  28. Escobedo, P.; Palma, A.J.; Erenas, M.M.; Olmos, A.M.; Carvajal, M.A.; Chavez, M.T.; Gonzalez, M.A.L.; Diaz-Mochon, J.J.; Pernagallo, S.; Capitan-Vallvey, L.F.; et al. Smartphone-Based Diagnosis of Parasitic Infections With Colorimetric Assays in Centrifuge Tubes. IEEE Access 2019, 7, 185677–185686. [Google Scholar] [CrossRef]
  29. Ogirala, T.; Eapen, A.; Salvante, K.G.; Rapaport, T.; Nepomnaschy, P.A.; Parameswaran, A.M. Smartphone-based colorimetric ELISA implementation for determination of women’s reproductive steroid hormone profiles. Med Biol. Eng. Comput. 2017, 55, 1735–1741. [Google Scholar] [CrossRef] [PubMed]
  30. Wang, E.; Li, W.; Hawkins, D.; Gernsheimer, T.; Norby-Slycord, C.; Patel, S. HemaApp: Noninvasive Blood Screening of Hemoglobin Using Smartphone Cameras. Getmobile: Mob. Comput. Commun. 2017, 21, 26–30. [Google Scholar] [CrossRef]
  31. Mannino, R.; Myers, D.; Tyburski, E.; Caruso, C.; Boudreaux, J.; Leong, T.; Clifford, G.; Lam, W. Smartphone app for non-invasive detection of anemia using only patient-sourced photos. Nat. Commun. 2018, 9. [Google Scholar] [CrossRef] [Green Version]
  32. Sheth, T.; Choudhry, N.; Bowes, M.; Detsky, A. The Relation of Conjunctival Pallor to the Presence of Anemia. J. Gen. Intern. Med. 1997, 12, 102–106. [Google Scholar] [CrossRef] [PubMed]
  33. Delgado-Rivera, G.; Roman-Gonzalez, A.; Alva-Mantari, A.; Saldivar-Espinoza, B.; Zimic, M.; Barrientos-Porras, F.; Salguedo-Bohorquez, M. Method for the Automatic Segmentation of the Palpebral Conjunctiva using Image Processing. In Proceedings of the IEEE International Conference on Automation/XXIII Congress of the Chilean Association of Automatic Control (ICA-ACCA), Concepcion, Chile, 17–19 October 2018; pp. 1–4. [Google Scholar]
  34. Bevilacqua, V.; Dimauro, G.; Marino, F.; Brunetti, A.; Cassano, F.; Maio, A.D.; Nasca, E.; Trotta, G.F.; Girardi, F.; Ostuni, A.; et al. A novel approach to evaluate blood parameters using computer vision techniques. In Proceedings of the IEEE International Symposium on Medical Measurements and Applications (MeMeA), Benevento, Italy, 12–14 May 2016. [Google Scholar] [CrossRef]
  35. Bauskar, S.; Jain, P.; Gyanchandani, M. A Noninvasive Computerized Technique to Detect Anemia Using Images of Eye Conjunctiva. Pattern Recognit. Image Anal. 2019, 29, 438–446. [Google Scholar] [CrossRef]
  36. Dimauro, G.; Caivano, D.; Girardi, F. A new method and a non-invasive device to estimate anaemia based on digital images of the conjunctiva. IEEE Access 2018, 1. [Google Scholar] [CrossRef]
  37. Dimauro, G.; Guarini, A.; Caivano, D.; Girardi, F.; Pasciolla, C.; Iacobazzi, A. Detecting clinical signs of anaemia from digital images of the palpebral conjunctiva. IEEE Access 2019, 1. [Google Scholar] [CrossRef]
  38. Dimauro, G.; Baldari, L.; Caivano, D.; Colucci, G.; Girardi, F. Automatic Segmentation of Relevant Sections of the Conjunctiva for Non-Invasive Anemia Detection. In Proceedings of the 3rd International Conference on Smart and Sustainable Technologies (SpliTech), Split, Croatia, 26–29 June 2018; pp. 1–5. [Google Scholar]
  39. Dhanachandra, N.; Manglem, K.; Chanu, Y.J. Image Segmentation Using K-means Clustering Algorithm and Subtractive Clustering Algorithm. Procedia Comput. Sci. 2015, 54, 764–771. [Google Scholar] [CrossRef] [Green Version]
  40. Wu, M.N.; Lin, C.C.; Chang, C.C. Brain Tumor Detection Using Color-Based K-Means Clustering Segmentation. In Proceedings of the Third International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP 2007), Kaohsiung, Taiwan, 26–28 November 2007. [Google Scholar] [CrossRef]
  41. Chitade, A.; Katiyar, S. Color based image segmentation using K-means clustering. Int. J. Eng. Sci. Technol. 2010, 2, 5319–5325. [Google Scholar]
  42. Shi, J.; Malik, J. Normalized Cuts and Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 22. [Google Scholar] [CrossRef] [Green Version]
  43. Dimauro, G.; De Ruvo, S.; Di Terlizzi, F.; Ruggieri, A.; Volpe, V.; Colizzi, L.; Girardi, F. Estimate of Anemia with New Non-Invasive Systems—A Moment of Reflection. Electronics 2020, 9, 780. [Google Scholar] [CrossRef]
  44. Tan, K.; Oakley, J. Enhancement Of Color Images In Poor Visibility Conditions. In Proceedings of the ICIP International Conference on Image Processing, Vancouver, BC, Canada, 10–13 September 2000. [Google Scholar] [CrossRef] [Green Version]
  45. Arce, G.R.; Bacca, J.; Paredes, J.L. Nonlinear Filtering for Image Analysis and Enhancement. Essent. Guide Image Process. 2009, 263–291. [Google Scholar] [CrossRef]
  46. Graif, M.; Bydder, G.M.; Steiner, R.E.; Niendorf, P.; Thomas, D.; Young, I.R. Contrast-enhanced MR imaging of malignant brain tumors. Am. J. Neuroradiol. 1985, 6, 855–862. [Google Scholar] [PubMed]
  47. Laine, A.; Fan, J.; Yang, W. Wavelets for Contrast Enhancement of Digital Mammography. IEEE Eng. Med. Biol. Mag. 1999, 14. [Google Scholar] [CrossRef]
  48. Kaya, B.; Can, Y.B.; Timofte, R. Towards Spectral Estimation from a Single RGB Image in the Wild. arXiv 2018, arXiv:1812.00805. [Google Scholar]
  49. Taha, A.A.; Hanbury, A. Metrics for evaluating 3D medical image segmentation: Analysis, selection, and tool. BMC Med. Imaging 2015, 15. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Plots visualizing the optical absorption and reflectance of Hb and HbO2; vertical dashed lines relate to human perception of the colors associated with (λ). (a) Molar extinction coefficient (ε) related to absorbance over wavelength (λ), considering 15 g/dL of hemoglobin concentration and a 1 cm cuvette. (b) Reflectance plot derived from the absorbance under the same constants.
Figure 2. (a) The acquisition device consists of a special spacer and a macro lens to acquire images with a high-resolution smartphone at close range; (b) the moment of the acquisition of an image of the conjunctiva.
Figure 3. (a) Original digital image acquired; (b) k-means clustering procedure using only the three (R, G and B) channels from the color space; (c) proposed k-means procedure with a five-dimensional model retaining both spatial and color properties.
Figure 4. (a) Acquired sample; (b) region adjacency graph (RAG) displaying a measure of similarity between each region. The center of each node is considered a vertex. For each connection between two regions, there is an associated colored line according to the measure of similarity.
Figure 5. Segmentation output result with semantic class description of eye anatomy.
Figure 6. (a) Standard logistic function plot. (b) Generalized logistic function plot using parameters α = 4 and β = 2.
Figure 7. (a) Acquired sample. (b) Heatmap plot of the scoring matrix displaying the magnitudes of the coefficients computed by applying the generalized sigmoid function on the acquired sample.
Figure 8. The top row represents a subset of samples automatically segmented with the proposed approach. The ordered second row depicts the mapping with the manual segmentation ground truth of the conjunctival region.
Figure 9. Examples of two images that would normally be discarded: the first because the eyelid overlaps the edge of the white spacer and is not perfectly in focus; the second because it is not in focus and the finger appears to lower the eyelid. In both cases, automatic segmentation would still provide an acceptable result.
Figure 10. (a) Manually segmented conjunctiva used as ground truth. (b) Automatically segmented conjunctiva obtained by the proposed approach. (c) Visualization of the overlap between the green ground-truth image and the white automatically segmented image (F1 = 0.904, accuracy = 96.41%).
Figure 11. (a) Linear regression and strength of correlation between a* from manual segmentation and Hb g/dL standardized vectors. (b) Linear regression and strength of correlation between a* from automatic segmentation and Hb g/dL standardized vectors.
Table 1. Hemoglobin (Hb) thresholds used to define anemia in individuals living at sea level, according to the World Health Organization guidelines [6].
Age Group              No Anemia    Mild Anemia    Moderate Anemia    Severe Anemia
Children 5–11 years    ≥11.5 g/dL   11–11.4 g/dL   8–10.9 g/dL        <8 g/dL
Children 12–14 years   ≥12 g/dL     11–11.9 g/dL   8–10.9 g/dL        <8 g/dL
Non-pregnant women     ≥12 g/dL     11–11.9 g/dL   8–10.9 g/dL        <8 g/dL
Pregnant women         ≥11 g/dL     10–10.9 g/dL   7–9.9 g/dL         <7 g/dL
Men                    ≥13 g/dL     11–12.9 g/dL   8–10.9 g/dL        <8 g/dL
Table 2. Metrics of averaged results of the comparison between manually and automatically segmented images of the conjunctiva.
                 F1-Measure    Accuracy    Sensitivity (TPR)    Specificity (TNR)
Predicted ROIs   0.7363        93.79%      86.73%               94.63%
