Article

Ultra-Widefield Fluorescein Angiography Image Brightness Compensation Based on Geometrical Features

by Wojciech Więcławek 1,*, Marta Danch-Wierzchowska 1, Marcin Rudzki 1, Bogumiła Sędziak-Marcinek 2 and Slawomir Jan Teper 2

1 Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta St. 40, 41-800 Zabrze, Poland
2 Clinical Department of Ophthalmology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Panewnicka St. 65, 40-760 Katowice, Poland
* Author to whom correspondence should be addressed.
Sensors 2022, 22(1), 12; https://doi.org/10.3390/s22010012
Submission received: 26 October 2021 / Revised: 8 December 2021 / Accepted: 16 December 2021 / Published: 21 December 2021
(This article belongs to the Special Issue Innovations in Biomedical Imaging)

Abstract
Ultra-widefield fluorescein angiography (UWFA) is an emerging imaging modality used to characterise pathologies in the retinal vasculature, such as microaneurysms (MAs) and vascular leakages. Despite its potential value for diagnosis and disease screening, objective quantitative assessment of retinal pathologies by UWFA is currently limited because laborious manual processing is required. In this report, we describe a geometrical method for compensating the uneven brightness inherent to the UWFA imaging technique. The correction function is based on the geometrical shape of the eyeball; it is therefore fully automated and depends only on the pixel distance from the center of the imaged retina. The method's performance was assessed on a database of 256 UWFA images using several image quality measures, which show that the correction improves image quality. The method was also compared with the commonly used CLAHE approach and employed in a pilot study of vascular segmentation, giving a noticeable improvement in segmentation results. It can therefore be used as an image preprocessing step in retinal UWFA image analysis.

1. Introduction

Fluorescein angiography was introduced as a diagnostic tool over 50 years ago. Thanks to its constant development, it remains a reliable diagnostic tool for many retinal diseases, e.g., diabetic retinopathy, retinal vein occlusion, age-related macular degeneration, uveitis, or retinal/choroidal dystrophy [1,2]. The literature indicates that the first pathological changes occur in the peripheral area of the retina and, as the disease progresses, extend over an increasingly larger area towards the center. Retinal imaging methods used so far have covered about 30–50° of the retina. This depicted area turns out to be too narrow to recognize the early symptoms of retinal degeneration [3]. In recent years, three approaches have been introduced to expand the visible retinal area: (1) assembly of traditional angiograms, i.e., several images and at least two acquisition protocols; (2) adding an additional lens to the optical path of the classic camera; (3) introduction of new devices enabling observation of a wider area of the retina [3,4]. Ultra-widefield fluorescein angiography (UWFA) is an extension of the third group of methods and enables imaging of a significantly larger retinal area (with an angle of over 200°) [1]. Comparing UWFA with a classic fundus camera, the 200° obtained from the center of the eye corresponds to a conventional angle measure of approximately 125° [2].
The optical system, as well as the software processing UWFA images after acquisition, are still sources of issues that need to be addressed. The associated deformations of the peripheral area may disrupt the quantitative analysis of those areas by artificially increasing the obtained values. Mapping and optimization of the three-dimensional spherical surface onto a flat image is one of the key transformations performed during acquisition [5,6,7]. Another issue is the wavelength of the excitation light and the barrier filter used, which are specific to each device. Images obtained by different devices may therefore contain inherent differences, especially when evaluating the fovea. However, previous studies indicate that these differences have a minor impact when assessing the peripheral area of the eye [2].
A vital issue that needs to be addressed during UWFA image processing is the uneven brightness of the image: the further away from the central area, the darker the image and the lower the contrast. This resembles the problem of bias correction known from other imaging modalities [8]. Compensation of uneven brightness during image preprocessing therefore seems crucial in qualitative and quantitative retinal image analysis. One of the most prominent tasks in retinal image analysis is retinal vessel segmentation. Image brightness correction is sometimes omitted in processing methods that utilize deep learning, for example, convolutional neural networks [9,10,11,12]. There are also reports where this step is skipped even in “traditional” image processing algorithms [13,14]. However, only one of the aforementioned works is based on UWFA images, while the others are based on standard retinal images. Therefore, abandoning this preprocessing may not always be possible for UWFA images.
The inhomogeneous brightness of retinal angiographic images stems from the geometry of the imaged structure; therefore, the transformations most often applied at the preprocessing stage should be local in nature, depending on regional image properties. One such transformation, very often applied to retinal angiographic images, is Contrast Limited Adaptive Histogram Equalization (CLAHE) [15], which differs from ordinary histogram equalization by local parameter selection. Sopharak et al. [16] use this method as a preprocessing step preceding microaneurysm detection. The CLAHE algorithm plays a similar role in [17]. A more comprehensive approach is introduced in [18], where the CLAHE algorithm is also used because the authors claim that it provides the utmost improvement in prediction accuracy. The CLAHE algorithm is also used during quantification of optical coherence tomography angiography images [19], and as one of the processing steps in de-noising and contrast enhancement of retinal fundus images [20,21]. Several other brightness compensation methods have also been proposed to improve segmentation accuracy in retinal imaging, e.g., homomorphic filtering with a Gaussian high-pass filter [22] or N4 bias field correction [23]. Those works, however, focus on narrow-field retinal angiograms, not UWFA, where the effect of low brightness in the periphery is not that significant.
The main goal of this work is to develop a brightness correction method dedicated to UWFA images that enhances the peripheral retinal areas without causing disruptions in the central regions. To achieve this goal, we propose an automated gamma correction procedure, in which the gamma value is computed locally based on the geometrical conditions resulting from the image acquisition technique. Our method is validated on 256 images, taking into consideration the image acquisition phases (early, mid, late) and exhibiting physiological and pathological features (leakages, ischaemia, microaneurysms) in patients diagnosed with diabetic retinopathy. Several image quality measures are calculated for the central and peripheral image regions, showing that the method introduces negligible image distortion in central areas while enhancing the peripheral image regions. The presented procedure may provide new opportunities for detecting, analyzing, and evaluating underlying image features that have important clinical and research applications.

2. Materials and Methods

Ultra-widefield fluorescein angiography images require brightness compensation, especially in the peripheral areas. This need is induced by the spherical shape of the retina and the properties of the image acquisition technique. We propose a method inspired by the retina shape to overcome these inconveniences. Our proposal is presented in the two following steps. The first is a simplified 2D case, which results in a 1D correction function; the second is generalized to the spatial 3D case, which results in a 2D correction function. Both are based on the assumption that the eyeball can be approximated as a sphere, which is often encountered in retinal image analysis [1,2,5,6,24].

2.1. 2D Case Projected onto 1D Space

Consider a central cross-section of an eyeball presented in Figure 1. We assume (similarly to other authors [2,5,7]) that an eyeball can be represented as a sphere. A red circle illustrates the great circle (equator), and a blue semi-circle located on the back side of the eyeball represents the retina. We assume that it spans exactly a half-circle (it subtends a straight angle, $\pi$ or 180°). Outside the eyeball, opposite the retina, there is an acquisition device, which reconstructs the image. This is indicated by a single image profile (cyan line). It is composed of $2N$ pixels $p_x$ (where $2N$ is the horizontal resolution of the image) having spatial size $p$ (a length in 1D, a square side in 2D). To simplify further mathematical considerations, this image profile is duplicated and placed inside the eyeball (dotted cyan line in Figure 1).
Based on the observation that in UWFA images the brightness and contrast fade with the distance from the center, the centrally located pixel does not require brightness correction. Thus, it is processed without changes (i.e., its correction coefficient is 1). Other pixels, expressing retina points situated farther away from the center, require a correction that depends on the pixel location (represented by the distance from the center). The farther the pixel is from the center, the more substantial the required correction. The direction of increasing demand for correction is indicated in Figure 1 by a magenta curved arrow.
The proposed principle of correction factor determination stems indirectly from the geometric distance from the center, measured by the arc length:
$$ l_A = r \cdot \alpha, \tag{1} $$
where $r$ denotes the radius of the eyeball and $\alpha$ the central angle (in radians) on which the arc $l_A$ is spanned. This angle can be determined from geometrical relations, in particular from the right triangle $OAA'$:
$$ \sin\alpha = \frac{x \cdot p}{r}, \tag{2} $$
where $x$ is the integer number of pixels between $A'$, the projection of the currently analysed point $A$ onto the image profile, and the central axis (green horizontal line) of the eyeball passing through point $O$. This value is within the range $1 \ldots N$. The approach is symmetrical about the mentioned central axis. In Figure 1, an exemplary right triangle is shown, with two sides of lengths $r$ and $3p$ (i.e., $x = 3$), respectively.
Moreover, the retina radius $r$ can be expressed using the number of pixels as $r = N \cdot p$, hence:
$$ \sin\alpha = \frac{x}{N}. \tag{3} $$
Finally, using the inverse trigonometric function $\arcsin(\cdot)$, the arc length can be defined as:
$$ l_A = r \cdot \arcsin\left(\frac{x}{N}\right). \tag{4} $$
This distance function is proportional to the angle $\arcsin(x/N)$, expressed by the integer number of pixels $x$ measured relative to the central axis. For small angles, $\alpha \approx 0$, i.e., pixels located centrally ($x \approx 0$), it gives a value of 0; for the farthest retina pixels ($\alpha \to \pi/2$, or 90°, i.e., pixels located peripherally, $x \to N$), it gives a value close to $r \cdot \pi/2$. Moreover, the radius of the eyeball $r$ represents a constant scaling factor in this equation. Because we focus solely on the proportions within the eyeball, we assume the eyeball to be a unit sphere, therefore $r = 1$.
In order to adjust these mathematical conditions to the properties of the retinal acquisition technique, we propose the correction function for the 2D case projected onto 1D space to be defined as
$$ \gamma(x) = \cos\left(\arcsin\left(\frac{x}{N}\right)\right). \tag{5} $$
This $\gamma(x)$ function, as shown in Figure 1, is decreasing. Its values (in the range $0 \ldots 1$) for a particular $x$ are then used as the exponent in the gamma image correction stage [25] according to:
$$ J(\mathcal{N}(p_x)) = I(\mathcal{N}(p_x))^{\gamma(x)}, \tag{6} $$
which is performed locally for a small neighbourhood $\mathcal{N}(\cdot)$ of a pixel $p$ located at $x$. For $\gamma < 1$ the resulting intensity rescaling is non-linear and enhances the contrast of hypointensive image regions. For $\gamma = 1$, the rescaling function is linear, so the image intensities remain unchanged.
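To make Equations (5) and (6) concrete, the following sketch (a minimal illustration rather than the authors' implementation; the profile values, their normalization to [0, 1], and the per-pixel application are assumptions) computes $\gamma(x)$ for a 1D intensity profile and applies the correction:

```python
import numpy as np

def gamma_1d(x, N):
    """Correction exponent of Equation (5): gamma(x) = cos(arcsin(x / N))."""
    return np.cos(np.arcsin(x / N))

# Assumed example: a profile of 2N pixels with intensities in [0, 1].
N = 512
profile = np.linspace(0.05, 0.9, 2 * N)   # placeholder intensity profile
x = np.abs(np.arange(2 * N) - N)          # pixel distance from the central axis

# Equation (6) applied per pixel: gamma < 1 brightens the darker, peripheral
# samples, while the central sample (x ~ 0, gamma ~ 1) is left unchanged.
corrected = profile ** gamma_1d(x, N)
```

At $x = 0$ the exponent is 1 (no change); at $x = N$ it reaches 0, which maps any intensity to 1, mirroring the strong brightening required at the retinal periphery.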

2.2. 3D Case Projected onto 2D Space

The method presented above needs to be generalized to spatial conditions projected onto the 2D image case, as shown in Figure 2. Similarly, the basis for defining the correction function $\gamma(\cdot)$ is the distance between the central retina point $C$ and any other retina element, e.g., point $A$. This distance is an arc $l_A$ spanned on a sphere.
This arc is the shortest path between the two points $C$ and $A$ on the surface of a sphere, measured along the surface. To compute distances in curved spaces, straight lines are replaced with geodesics. Geodesics on a sphere are circles whose centers coincide with the center of the sphere; they are called great circles or orthodromes. The determination of the great-circle (or orthodromic) distance is part of the more general problem of great-circle navigation, which also computes the azimuths at the end-points and intermediate way-points.
Through any two points on a sphere ($C$ and $A$ in Figure 2b) that are not directly opposite each other (i.e., not on a diameter), a unique great circle can be determined. The two points divide the great circle into two arcs. The length of the shorter arc ($l_A$ in Figure 2b) is then the great-circle distance between these points.
Consider two points $C$ and $A$ with geographical longitudes $\lambda_C, \lambda_A$ and latitudes $\phi_C, \phi_A$ in radians, respectively. Their absolute differences are denoted as $\Delta\lambda$ and $\Delta\phi$. Then the central angle $\Delta\sigma$ between them is given by the spherical law of cosines:
$$ \Delta\sigma = \arccos\left(\sin\phi_A \sin\phi_C + \cos\phi_A \cos\phi_C \cos\Delta\lambda\right) \tag{7} $$
if one of the poles is used as an auxiliary third point on the sphere ([26], pp. 323–326). Given this angle in radians, the actual arc length $l_A$ on a sphere of radius $r$ can be trivially computed as:
$$ l_A = r \, \Delta\sigma. \tag{8} $$
In the case illustrated in Figure 2a, Equation (7) can be simplified, because point $C$ is a reference point located centrally with $\lambda_C = 0$ and $\phi_C = 0$. Hence, $\sin\phi_C = 0$ and $\cos\phi_C = 1$. Moreover, since $\cos\Delta\lambda = \cos\lambda_A$, the arc length in 3D space is given by:
$$ l_A = r \arccos\left(\cos\phi_A \cos\lambda_A\right). \tag{9} $$
Both angles $\phi_A$ and $\lambda_A$ can be computed by analogy to the 2D case presented in Section 2.1. Thus, if the image resolution is $2N \times 2M$ (where $2N$ is the width and $2M$ is the height of the image), then:
$$ \lambda_A = \arcsin\left(\frac{x}{N}\right), \tag{10} $$
$$ \phi_A = \arcsin\left(\frac{y}{M}\right), \tag{11} $$
where $x$ and $y$ are the horizontal and vertical integer numbers of pixels between the projections of the currently analysed point $A$ (denoted as $A_x$ and $A_y$) and the central point $C$.
Considering the above conditions, the distance function defined as:
$$ l_A = r \arccos\left(\cos\left(\arcsin\frac{y}{M}\right) \cos\left(\arcsin\frac{x}{N}\right)\right) \tag{12} $$
is proportional to the angles defining the longitude $\lambda_A$ and latitude $\phi_A$ of the point $A$, expressed by the horizontal $x$ and vertical $y$ integer numbers of pixels calculated from the image center, respectively.
As previously, we assume the sphere to be a unit sphere, therefore $r = 1$. Further, since we need the $\cos(\cdot)$ function values rather than the angles themselves, the $\arccos(\cdot)$ function is also left out.
Finally, the spatial version of the correction function is defined as:
$$ \gamma(x, y) = \cos\left(\arcsin\frac{y}{M}\right) \cos\left(\arcsin\frac{x}{N}\right). \tag{13} $$
The function $\gamma(x, y)$ defined above is then used to compute the $\gamma$ value for the gamma image correction procedure [25]. Usually, gamma correction is performed globally for the whole image with a fixed gamma value.
Gamma correction is a nonlinear operation that allows the visibility of dark ($\gamma < 1$) or bright ($\gamma > 1$) image details to be enhanced, depending on the $\gamma$ value. In the current study, the gamma correction procedure is performed not for the whole image but independently in small, square, non-overlapping windows of size $w \times w$. The window size $w$ is the greatest common divisor (gcd) of the image dimensions, i.e., $w = \gcd(2N, 2M)$, which yields the smallest square tiling of the image that is larger than a single pixel. The value of $\gamma$ in each window is the mean value of the $\gamma(x, y)$ function over that window. The principle of the proposed brightness correction procedure is presented in Figure 3.
The centrally located window (green window in Figure 3, with $x \approx y \approx 0$) corresponds to a gamma function value near 1, i.e., $\gamma(x, y) \approx 1$; thus, in the resulting image, these pixel intensities remain unchanged compared to the original image. The farther the window is from the image center, the smaller the value of the $\gamma(\cdot)$ function (red and gray windows in Figure 3), and the more significantly the corresponding output pixels are brightened. To recapitulate, visibility as well as contrast in the originally dark peripheral areas of the analysed image are improved.
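The complete windowed procedure can be sketched as follows (our reading of Equation (13) and the windowing rule, not the authors' released code; the float input normalized to [0, 1] and the half-pixel centering are assumptions):

```python
import math
import numpy as np

def brightness_correction(img):
    """Windowed gamma correction driven by the geometric map of Equation (13).

    img: 2D float array with intensities in [0, 1], of size 2M x 2N
         (rows x columns).
    """
    rows, cols = img.shape                 # 2M, 2N
    M, N = rows / 2.0, cols / 2.0
    w = math.gcd(rows, cols)               # window side: gcd of the dimensions

    # Normalized pixel offsets from the image center (half-pixel centering is
    # an assumption), clipped to the valid arcsin domain.
    y = np.clip((np.arange(rows) - M + 0.5) / M, -1, 1)
    x = np.clip((np.arange(cols) - N + 0.5) / N, -1, 1)
    gamma_map = (np.cos(np.arcsin(y))[:, None] *
                 np.cos(np.arcsin(x))[None, :])

    out = np.empty_like(img)
    for r in range(0, rows, w):
        for c in range(0, cols, w):
            g = gamma_map[r:r + w, c:c + w].mean()   # mean gamma per window
            out[r:r + w, c:c + w] = img[r:r + w, c:c + w] ** g
    return out
```

For the 3900 × 3072 images of this study, $w = \gcd(3900, 3072) = 12$, so the image is tiled into 12 × 12 pixel windows, each corrected with its own mean $\gamma$.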
As an initial image processing procedure, contrast and brightness enhancement is required by many operations in UWFA image processing and analysis. The presented approach is automatic, local, and parameterless. It is therefore advantageous over traditional image enhancement methods, especially manual and global ones, which require a proper choice of parameters.

2.3. Image Database

Medical examinations were carried out at the Department of Ophthalmology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland. The study was performed in adherence to the tenets of the Declaration of Helsinki and approved by the Ethics Committee of the Medical University of Silesia (decision KNW/0022/KB1/125/I/18/19). Written informed consent was obtained from the participants. The patient inclusion criteria were: age 18 or greater, diabetes mellitus type 1 or 2, and nonproliferative diabetic retinopathy. The exclusion criteria were: retinal photocoagulation, history of pars plana vitrectomy or cataract surgery with posterior capsule rupture, media opacity preventing assessment of the fundus of the eye, proliferative diabetic retinopathy, vitreoretinal traction in the macula, nonperfusion of the foveal area, and any concurrent nondiabetic retinal disease. Patient preparation for image acquisition involved pupil dilation by administration of 1% tropicamide (Polpharma, Starogard Gdański, Poland) and 2.5% phenylephrine to the conjunctival sac, followed by intravenous administration of sodium fluorescein (250 mg; SERB, Paris, France). A more detailed description of the patient cohort and the medical aspects is presented in [27].
Fluorescein angiography retinal images were taken using an Optos California P200DTx device (Optos, Dunfermline, UK) during three phases [28]: early (E, up to 60 s), mid (M, from the 1st up to the 5th min), and late (L, 5th min and above). The testing image database contains 256 UWFA images and is summarized in Table 1. Each image has a spatial resolution of 3900 × 3072 pixels and 8-bit grayscale depth, and is stored in the .tif file format.

2.4. Quality Measures

In order to assess the method, we have used several image quality measures known from the literature. Among them are: the Edge-Based Contrast Measure (EBCM) [29,30]; indices showing edge preservation properties—Noise Suppression ($\rho$) and the Measure of Edge Preservation ($\beta$) [31,32]; measures based on the structural information of the image, i.e., the measure of structural similarity (SSIM) [33] and the Quality Index Q [34]; and the most straightforward parameters based on pixel-to-pixel error measurement, i.e., Mean Absolute Error (MAE), Mean Squared Error (MSE), and Root Mean Square (RMS) [35]. Definitions of all the used measures are presented in Appendix A. Here, only the expected values of the measures are summarized for clarity: (1) EBCM = 1—no change, EBCM > 1—contrast improvement; (2) $\rho$, $\beta$, SSIM, Q—the closer to 1 the better, the value of 1 being achievable only for identical images; (3) MAE, MSE, RMS—the closer to 0 the better, the value of 0 being achievable only for identical images.

3. Results

Verification of the method was performed using the aforementioned image database (Section 2.3), considering the image acquisition phases (E, M, L) and globally (G) for all of the 256 available UWFA images. Images were divided between the acquisition phases since images with different timings may exhibit differences in overall brightness and contrast. The assessment was performed separately for the central and peripheral parts of the images. The image area is thus divided into 9 subregions, with borders at 25% and 75% of the width and height of the image. The measures are then calculated separately for the central (Figure 4) and combined peripheral (Figure 5) regions. The results are presented using boxplots grouped in the four categories of image quality measures. Boxplot colours correspond to the image acquisition phase categories from Table 1.
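The region split described above can be expressed directly (a sketch; the integer division of the borders is an assumption):

```python
import numpy as np

def central_peripheral_masks(shape):
    """Boolean masks for the central region (between 25% and 75% of each
    image dimension) and the combined peripheral region (the remaining
    8 subregions)."""
    rows, cols = shape
    central = np.zeros(shape, dtype=bool)
    central[rows // 4: 3 * rows // 4, cols // 4: 3 * cols // 4] = True
    return central, ~central
```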
The qualitative performance of the proposed methodology is presented on exemplary images taken from the database. The images before and after the correction procedure are shown in Figure 6. Selected regions are enlarged for clarity to present the method’s performance. The quality measures for the selected regions from Figure 6 are given in Table 2 as well.
Because the CLAHE (Contrast Limited Adaptive Histogram Equalization [15,25]) is one of the most frequently used algorithms for image preprocessing in retinal angiography [16,17,18,19,20,21], it was also included for comparison. The assessment was performed using the same approach and quality measures as for the proposed method. For both methods, the same window size was used (the image was divided into the same number of rows and columns—’Number of tiles’—to get a window of 12 × 12 pixels). All remaining parameters of the CLAHE method were left with default values [15].
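For reference, such a CLAHE baseline can be reproduced with OpenCV (a sketch under our assumptions: the file name is a placeholder, and the clip limit is an illustrative value, since the defaults referred to in [15] depend on the implementation):

```python
import cv2

# Load an 8-bit grayscale UWFA image (placeholder file name).
img = cv2.imread("uwfa_frame.tif", cv2.IMREAD_GRAYSCALE)

# Choose the tile grid so that each tile is 12 x 12 pixels, matching the
# window size used by the proposed method.
tiles = (img.shape[1] // 12, img.shape[0] // 12)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=tiles)
enhanced = clahe.apply(img)
```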
The quality measure values obtained for CLAHE are presented graphically in Figure 7 for the central image regions and in Figure 8 for the peripheral regions.
Average values of the quality measures for the central and peripheral regions for both proposed and CLAHE methods are presented in Table 3.
Visual inspection and comparison of the resulting images obtained using both approaches are shown in Figure 9 on three exemplary images from the early, mid, and late acquisition phases.

4. Discussion and Conclusions

For the proposed method, as can be seen from Figure 4 and Table 3, the quality measures obtained for the central image region are close to 1 (EBCM, SSIM, Q, $\rho$, and $\beta$). This indicates that the proposed processing method does not affect the central image region much. The pixel-to-pixel measures are clustered around 0, indicating that the method does not introduce errors (noise) to the images in the central regions. The average values of all the measures are very similar within each of the four categories.
From Figure 5 and Table 3, it can be noticed that the quality measures for the peripheral region are much more dispersed. EBCM is on average greater than 1, indicating that the resulting image can be considered better in contrast and edge visibility. SSIM and Q are about 0.47 and 0.38, respectively. Those measures, however, take into account the average change in luminance between the images [34], so their lower values are expected, since the average luminance in the peripheral regions is increased. It is worth noting that the median values of the structural information measures are very similar across all four image groups. Moreover, the distributions of the EBCM and SSIM measures are not symmetrical, with more cases above the median, which is preferable. The $\rho$ coefficient appears to be the most dispersed. As a measure aimed at assessing noise suppression [32], it indicates that noise may be introduced in the peripheral regions. The metric takes into consideration the average image intensity level, therefore its value is expected to be lower than for the central regions. The $\rho$ and $\beta$ measures for the central areas take values close to 1, indicating a negligible difference between the compared images. On the other hand, the values are lower for the peripheral areas, which reflects the difference between the compared images. These differences indicate edge enhancement in the resultant image.
The pixel-to-pixel measure values are higher than the corresponding values in the central image regions, yet are still close to 0, which shows that the method does not introduce significant errors (noise) to the images in the peripheral regions. The distribution of one of the pixel-to-pixel measures—the Absolute Error (AE, that is, MAE before averaging)—is presented in Figure 10, which shows an exemplary image before (Figure 10a) and after (Figure 10b) the contrast enhancement procedure, and the AE values for each image pixel (Figure 10c). The AE values are small for pixels located in the image center. The further away a pixel is from the image center, the larger the coefficient value. The highest values are achieved in the corners, where no diagnostically important information is expected.
The CLAHE method does not consider spatial relations in the image; therefore, the processing is performed in the same manner in all image regions, in contrast with the presented approach. CLAHE influences the central image regions more, which is indicated by worse values of all the quality measures compared to the proposed method (Figure 7, Table 3). Those regions exhibit high brightness variability (the blind spot being a region of high intensity surrounded by a relatively dark background). In extreme cases, e.g., for underexposed images, there is brightness saturation after CLAHE application (Figure 9e). The proposed method does not influence the central image regions that much, which is shown by its EBCM value of about unity and is visible in Figure 9f.
In the peripheral image regions, characterized by much lower contrast and overall brightness, the influence of the CLAHE approach is lower than that of the proposed method. This is shown by the pixel-to-pixel measures (MAE, MSE, RMS), which for CLAHE are lower than for the presented approach (Figure 8, Table 3). The edge preservation measures ($\rho$ and $\beta$) seem better for the CLAHE approach, as they are close to unity. Those measures are lower for the proposed algorithm because the brightness is enhanced more than the contrast. This can also be seen in Figure 9b vs. Figure 9c and Figure 9e vs. Figure 9f. The EBCM values obtained for both methods also support this observation. The structural information indices (SSIM and Q) are comparable for both processing algorithms, with SSIM being slightly higher and Q slightly lower for the CLAHE-processed images compared to the proposed method.
For underexposed images (like Figure 9d), the CLAHE approach causes saturation of central image regions and border artifacts along the vasculature (Figure 9e). The presented method does not introduce such issues, as can be seen in Figure 9f. This is advantageous for incorrectly acquired images, because the screening does not have to be repeated for the patient. Moreover, in some cases (especially visible in Figure 9h), the CLAHE algorithm causes over-sharpening of the image, exposing local disturbances and influencing the background perception. The image can be perceived as more granular than the original; therefore, the diagnosis of leakages and microaneurysms may be affected. The proposed method does not sharpen the image and thus does not introduce high-frequency noise. At the same time, the details in the peripheral regions are comparable to those visible after application of the CLAHE algorithm.
A summarised interpretation of the obtained quality measure values is presented in Table 4. The expected behaviour in the central image regions is minimal image distortion (EBCM ≈ 1; MAE, MSE, RMS ≈ 0; SSIM, Q, $\rho$, $\beta$ ≈ 1 indicate minimal image modification). Correspondingly, the expected behaviour in the peripheral regions is image enhancement (EBCM > 1; MAE, MSE, RMS > 0; SSIM, Q, $\rho$, $\beta$ < 1 indicate that the output image differs from the input image; the larger the distance from 1, the more the image is affected).
To conclude, the advantage of the proposed method over the CLAHE approach lies in brightness enhancement that results in a more even intensity distribution over the whole retinal image without extra sharpening. The technique might be particularly beneficial for underexposed images. Last but not least, the proposed approach does not require setting any controlling parameters; therefore, it is easy to apply, especially by inexperienced technicians. At the same time, it does not exclude the application of more advanced processing methods if required in clinical practice for specific cases.

5. Summary and Future Work

The paper presents a straightforward, geometry-based and parameterless method for UWFA image preprocessing. The method assumes that the eyeball is spherical, and the correction depends on the distance from the centre. The contrast enhancement is done locally in non-overlapping blocks with the gamma correction method, in which the gamma value is calculated automatically. The method was assessed and compared to the commonly used CLAHE algorithm using several image quality measures that show its performance and usefulness in UWFA image processing.
The proposed UWFA image preprocessing was also applied in a pilot study dedicated to retinal blood vessel segmentation. The experiment consisted of vascular segmentation from the original images and from the brightness-enhanced images obtained by the presented approach. To detect the blood vessels, an algorithm optimized for UWFA images was adopted [36]. The database consisted of 12 UWFA images supplemented with ground-truth vasculature segmentations. Exemplary segmentation outcomes for three medical cases are presented in Figure 11: results obtained without the brightness compensation are shown in green, and those obtained with the presented enhancement method in red. As can be seen, the presented preprocessing method enables the segmentation algorithm to detect more peripheral vasculature and small vessels over the whole image area. These outcomes are promising for future works.

Author Contributions

W.W.: conceptualization, processing methodology, software, validation, writing and editing; M.D.-W.: results verification, writing and editing; M.R.: conceptualization, analysis and verification, writing and editing; B.S.-M.: image database preparation, medical consultation; and S.J.T.: image database preparation, medical consultation. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the “Excellence Initiative—Research University” (IDUB) program at the Silesian University of Technology, 2019/2020, no. 07/010/RGJ20/0043 (W.W.); the “Excellence Initiative—Research University” (IDUB) program at the Silesian University of Technology, 2020/2021, no. 07/010/SDU/10-22-01 (M.D.-W.); a National Centre for Research and Development, Poland grant, no. STRATEGMED1/234261/2/NCBR/2014 (B.S.-M. & S.J.T.); and the Polish Ministry of Science and the Silesian University of Technology statutory financial support, no. 07/010/BK_21/1006, BK-296/RIB1/2021 (M.R.).

Institutional Review Board Statement

Medical examinations were carried out at the Department of Ophthalmology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland. The study was performed in adherence to the tenets of the Declaration of Helsinki and it was approved by the Ethics Committee of the Medical University of Silesia in Katowice (decision KNW/0022/KB1/125/I/18/19).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The image data can be provided on request by contacting B.S.-M.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Quality Measures

The proposed methodology is primarily dedicated to improving the peripheral retina areas in UWFA images. However, this improvement must not worsen the sharpness of the image, degrade its structural information, increase the noise level, or, last but not least, adversely affect the contrast and location of edges. Considering these conditions, the evaluation of the method is performed using metrics classified into the several groups characterized below. Among them are measures assessing contrast enhancement [29,30] as well as indices verifying edge preservation properties. Additionally, the most straightforward parameters based on pixel-to-pixel error measurement (i.e., MAE, MSE, RMS) as well as measures based on the structural information of the image (i.e., SSIM, the Quality Index) are also evaluated. Definitions of all the used measures are presented in the following sections.

Appendix A.1. Edge-Based Contrast Measure

The Edge-Based Contrast Measure (EBCM) stems from the observation that human perception mechanisms are very sensitive to contours (edges) [29]. The gray level corresponding to object frontiers is obtained by computing the average of the pixel gray levels weighted by their edge values. The contrast $C(p_i)$ for a pixel $p_i$ of image $I$ is thus defined as:
$$ C(p_i) = \frac{|I(p_i) - E(p_i)|}{|I(p_i) + E(p_i)|}, \tag{A1} $$
where $E(\cdot)$ is the mean edge gray level given by
$$ E(p_i) = \frac{\sum_{p_j \in \mathcal{N}(p_i)} g(p_j) \cdot I(p_j)}{\sum_{p_j \in \mathcal{N}(p_i)} g(p_j)}, \tag{A2} $$
$\mathcal{N}(p_i)$ is the set of all neighboring pixels $p_j$ of pixel $p_i$, and $g(p_j)$ is the edge value at pixel $p_j$. Without loss of generality, we employ a $3 \times 3$ neighborhood, and $g(p_j)$ is the magnitude of the image gradient estimated using the Sobel operators [25]. The EBCM for image $I$ is thus computed as the average contrast value
$$ EBCM(I) = \frac{1}{|D|} \sum_{p_i \in D} C(p_i), \tag{A3} $$
where $D$ is the image domain (the set of all pixels $p_i$) and $|D|$ is the number of pixels in the image.
For an output image $J$ obtained from an input image $I$, the contrast is expected to be improved when the condition $EBCM(J) \geq EBCM(I)$ is satisfied [30]. In this work, we use the relative value of this measure, defined as:
$$ EBCM(I, J) = \frac{EBCM(J)}{EBCM(I)}, \tag{A4} $$
therefore values larger than 1 indicate image contrast improvement.
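A direct transcription of Equations (A1)–(A4) could look as follows (a sketch; computing the Sobel gradient magnitude and the 3 × 3 averages with scipy.ndimage is an implementation choice, and eps guards against division by zero):

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def ebcm(img, eps=1e-12):
    """Edge-Based Contrast Measure for an image with intensities in [0, 1]."""
    g = np.hypot(sobel(img, axis=0), sobel(img, axis=1))  # edge values g(p)
    # Mean edge gray level E(p) of Equation (A2): edge-weighted 3x3 average.
    e = uniform_filter(g * img, size=3) / (uniform_filter(g, size=3) + eps)
    c = np.abs(img - e) / (np.abs(img + e) + eps)         # C(p), Equation (A1)
    return c.mean()                                       # Equation (A3)

def relative_ebcm(i, j):
    """Relative EBCM of Equation (A4); values > 1 indicate improvement."""
    return ebcm(j) / ebcm(i)
```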

Appendix A.2. Pixel-to-Pixel Error Measures

In order to evaluate the noise reduction level, conventional, simple, and widely used quality metrics are employed.
The first metric is the Mean Absolute Error (MAE):
$$ MAE(I, J) = \frac{1}{|D|} \sum_{p_i \in D} \left| I(p_i) - J(p_i) \right|, \tag{A5} $$
which is a measure of the differences (errors) between paired observations expressing the same phenomenon. In the current approach, it is the difference between the reference image $I(\cdot)$ and the resulting image $J(\cdot)$, where $D$ is the image domain (the set of all pixels $p_i$) and $|D|$ is the cardinality of the image domain (the number of pixels in the image).
This category of measures also includes the Root Mean Square (RMS) error, defined as the root mean square of the pixel intensity differences [35]:
$$ RMS(I, J) = \sqrt{\frac{1}{|D|} \sum_{p_i \in D} \left( I(p_i) - J(p_i) \right)^2}. \tag{A6} $$
Similar in nature to MAE is the distortion index, called Mean Squared Error (MSE), which is expressed as:
$$ MSE(I, J) = \frac{1}{|D|} \sum_{p_i \in D} \left( I(p_i) - J(p_i) \right)^2. \tag{A7} $$
Small values of the above measures indicate a small difference between the images being compared; therefore, values close to zero are expected.
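All three error measures reduce to a few lines (a sketch assuming both images are float arrays of equal shape):

```python
import numpy as np

def mae(i, j):
    """Mean Absolute Error, Equation (A5)."""
    return np.mean(np.abs(i - j))

def mse(i, j):
    """Mean Squared Error, Equation (A7)."""
    return np.mean((i - j) ** 2)

def rms(i, j):
    """Root Mean Square error, Equation (A6)."""
    return np.sqrt(mse(i, j))
```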

Appendix A.3. Structural Information Measures

Due to strong spatial dependencies of image pixels, Wang et al. [33] introduced a measure of structural similarity (SSIM) as an alternative framework for quality assessment based on the degradation of structural information under the assumption that human visual perception is highly adapted for extracting structural information from a scene [37].
The SSIM is defined as:
$$ SSIM(I, J) = \frac{(2\mu_I \mu_J + C_1)(2\sigma_{IJ} + C_2)}{(\mu_I^2 + \mu_J^2 + C_1)(\sigma_I^2 + \sigma_J^2 + C_2)}, \tag{A8} $$
where $C_1 = ((L-1) \cdot k_1)^2$, $C_2 = ((L-1) \cdot k_2)^2$, $L = 256$ is the number of gray levels, $k_1, k_2 \ll 1$,
$$ \mu_I = \frac{1}{|D|} \sum_{p_i \in D} I(p_i), \qquad \mu_J = \frac{1}{|D|} \sum_{p_i \in D} J(p_i), \tag{A9} $$
$$ \sigma_I^2 = \frac{1}{|D|-1} \sum_{p_i \in D} \left( I(p_i) - \mu_I \right)^2, \qquad \sigma_J^2 = \frac{1}{|D|-1} \sum_{p_i \in D} \left( J(p_i) - \mu_J \right)^2, \tag{A10} $$
and
$$ \sigma_{IJ} = \frac{1}{|D|-1} \sum_{p_i \in D} \left( I(p_i) - \mu_I \right)\left( J(p_i) - \mu_J \right). \tag{A11} $$
The resultant SSIM index is a value between −1 and 1, where the value of 1 is only reachable for two identical sets of data (images).
Another structural information measure called the Quality Index Q was defined by Wang et al. [34] as:
$$ Q = \frac{4 \sigma_{IJ} \mu_I \mu_J}{(\sigma_I^2 + \sigma_J^2)(\mu_I^2 + \mu_J^2)}. \tag{A12} $$
The dynamic range of the Q index is $[-1; 1]$; the closer the value is to 1, the better.
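Both structural measures, computed globally over whole images, can be sketched as below (the constants $k_1 = 0.01$ and $k_2 = 0.03$ are conventional assumptions; the text only requires $k_1, k_2 \ll 1$):

```python
import numpy as np

def _stats(i, j):
    """Means, unbiased variances, and covariance of Equations (A9)-(A11)."""
    mu_i, mu_j = i.mean(), j.mean()
    var_i, var_j = i.var(ddof=1), j.var(ddof=1)
    cov = ((i - mu_i) * (j - mu_j)).sum() / (i.size - 1)
    return mu_i, mu_j, var_i, var_j, cov

def ssim_global(i, j, L=256, k1=0.01, k2=0.03):
    """Global SSIM of Equation (A8)."""
    c1, c2 = ((L - 1) * k1) ** 2, ((L - 1) * k2) ** 2
    mu_i, mu_j, var_i, var_j, cov = _stats(i, j)
    return ((2 * mu_i * mu_j + c1) * (2 * cov + c2) /
            ((mu_i ** 2 + mu_j ** 2 + c1) * (var_i + var_j + c2)))

def quality_index(i, j):
    """Universal Quality Index Q of Equation (A12)."""
    mu_i, mu_j, var_i, var_j, cov = _stats(i, j)
    return 4 * cov * mu_i * mu_j / ((var_i + var_j) * (mu_i ** 2 + mu_j ** 2))
```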

Appendix A.4. Edge Preservation Measures

The aforementioned indices do not reflect edge preservation, so further measures are introduced [32]. The first is the Noise Suppression measure ($\rho$). It is based on correlation and is defined as:
$$ \rho = \frac{\Gamma(I - \bar{I},\, J - \bar{J})}{\sqrt{\Gamma(I - \bar{I},\, I - \bar{I}) \cdot \Gamma(J - \bar{J},\, J - \bar{J})}}, \tag{A13} $$
where $\bar{I}$ and $\bar{J}$ are the mean values of $I$ and $J$, respectively, and
$$ \Gamma(s_1, s_2) = \sum_{p_i \in D} s_1(p_i) \cdot s_2(p_i). \tag{A14} $$
The second is the Measure of Edge Preservation ($\beta$), defined as:
$$ \beta = \frac{\Gamma(\Delta I - \overline{\Delta I},\, \Delta J - \overline{\Delta J})}{\sqrt{\Gamma(\Delta I - \overline{\Delta I},\, \Delta I - \overline{\Delta I}) \cdot \Gamma(\Delta J - \overline{\Delta J},\, \Delta J - \overline{\Delta J})}}, \tag{A15} $$
where $\Delta I$ and $\Delta J$ are highpass-filtered versions of $I$ and $J$, respectively, obtained with a $3 \times 3$ standard approximation of the Laplacian operator, while $\overline{\Delta I}$ and $\overline{\Delta J}$ are the mean values of $\Delta I$ and $\Delta J$, respectively [31,32]. The correlation measures defined by Equations (A13) and (A15) should be high, close to 1, when the estimated image is similar to the reference image.
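Both correlation-based measures can be sketched as follows (using scipy.ndimage.laplace as the assumed 3 × 3 Laplacian approximation):

```python
import numpy as np
from scipy.ndimage import laplace

def _corr(a, b):
    """Normalized correlation built from Gamma of Equation (A14)."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def rho(i, j):
    """Noise Suppression measure of Equation (A13)."""
    return _corr(i, j)

def beta(i, j):
    """Measure of Edge Preservation of Equation (A15): the correlation of
    Laplacian-highpass versions of the two images."""
    return _corr(laplace(i), laplace(j))
```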

References

  1. Kato, Y.; Inoue, M.; Hirakata, A. Quantitative Comparisons of Ultra-Widefield Images of Model Eye Obtained with Optos® 200Tx and Optos® California. BMC Ophthalmol. 2019, 19, 115.
  2. Oishi, A.; Miyata, M.; Numa, S.; Otsuka, Y.; Oishi, M.; Tsujikawa, A. Wide-Field Fundus Autofluorescence Imaging in Patients with Hereditary Retinal Degeneration: A Literature Review. Int. J. Retin. Vitr. 2019, 5, 23.
  3. Rabiolo, A.; Parravano, M.; Querques, L.; Cicinelli, M.V.; Carnevali, A.; Sacconi, R.; Centoducati, T.; Vujosevic, S.; Bandello, F.; Querques, G. Ultra-Wide-Field Fluorescein Angiography in Diabetic Retinopathy: A Narrative Review. Clin. Ophthalmol. 2017, 11, 803–807.
  4. Shoughy, S.; Arevalo, J.F.; Kozak, I. Update on Wide- and Ultra-Widefield Retinal Imaging. Indian J. Ophthalmol. 2015, 63, 575–581.
  5. Oishi, A.; Hidaka, J.; Yoshimura, N. Quantification of the Image Obtained with a Wide-Field Scanning Ophthalmoscope. Investig. Ophthalmol. Vis. Sci. 2014, 55, 2424–2431.
  6. Croft, D.E.; Hemert, J.V.; Wykoff, C.C.; Clifton, D.; Verhoek, M.; Fleming, A.; Brown, D.M. Precise Montaging and Metric Quantification of Retinal Surface Area from Ultra-Wide-Field Fundus Photography and Fluorescein Angiography. Ophthalmic Surg. Lasers Imaging Retin. 2014, 45, 312–317.
  7. Tan, C.S.; Chew, M.C.; van Hemert, J.; Singer, M.A.; Bell, D.; Sadda, S.R. Measuring the Precise Area of Peripheral Retinal Non-Perfusion using Ultra-Wide-Field Imaging and Its Correlation with the Ischaemic Index. Br. J. Ophthalmol. 2016, 100, 235–239.
  8. Song, S.; Zheng, Y.; He, Y. A Review of Methods for Bias Correction in Medical Images. Biomed. Eng. Rev. 2017, 1, 1–10.
  9. Soomro, T.A.; Afifi, A.J.; Gao, J.; Hellwich, O.; Zheng, L.; Manoranjan, P. Strided Fully Convolutional Neural Network for Boosting the Sensitivity of Retinal Blood Vessels Segmentation. Expert Syst. Appl. 2019, 134, 36–52.
  10. Jin, Q.; Meng, Z.; Pham, T.D.; Chen, Q.; Wei, L.; Su, R. DUNet: A Deformable Network for Retinal Vessel Segmentation. Knowl.-Based Syst. 2019, 178, 149–162.
  11. Laha, S.; LaLonde, R.; Carmack, A.E.; Foroosh, H.; Olson, J.C.; Shaikh, S.; Bagci, U. Analysis of Video Retinal Angiography With Deep Learning and Eulerian Magnification. Front. Comput. Sci. 2020, 2, 24.
  12. Ding, L.; Ajay, E.K.; Rajeev, S.R.; Wykoff, C.C.; Sharma, G.; Kuriyan, A.E. Weakly-Supervised Vessel Detection in Ultra-Widefield Fundus Photography Via Iterative Multi-Modal Registration and Learning. IEEE Trans. Med. Imaging 2020, 40, 2748–2758.
  13. Zhao, Y.; MacCormick, I.J.; Parry, D.G.; Leach, S.; Beare, N.A.; Harding, S.P.; Zheng, Y. Automated Detection of Leakage in Fluorescein Angiography Images with Application to Malarial Retinopathy. Sci. Rep. 2015, 5, 10425.
  14. Ganjee, R.; Moghaddam, M.E.; Nourinia, R. Automatic Segmentation of Abnormal Capillary Nonperfusion Regions in Optical Coherence Tomography Angiography Images using Marker-Controlled Watershed Algorithm. J. Biomed. Opt. 2018, 23, 096006.
  15. Zuiderveld, K. Contrast Limited Adaptive Histogram Equalization. Graph. Gems 1994, IV, 474–485.
  16. Sopharak, A.; Uyyanonvara, B.; Barman, S. Simple Hybrid Method for Fine Microaneurysm Detection from Non-Dilated Diabetic Retinopathy Retinal Images. Comput. Med. Imaging Graph. 2013, 37, 394–402.
  17. Intaramanee, T.; Khoeun, R.; Chinnasarn, K. Automatic Microaneurysm Detection using Multi-Level Threshold based on ISODATA. In Proceedings of the 2017 14th International Joint Conference on Computer Science and Software Engineering (JCSSE), Nakhon Si Thammarat, Thailand, 12–14 July 2017; pp. 1–6.
  18. Sheet, S.S.M.; Tan, T.S.; As'ari, M.; Hitam, W.H.W.; Sia, J.S. Retinal Disease Identification using Upgraded CLAHE Filter and Transfer Convolution Neural Network. ICT Express 2021, in press.
  19. Mehta, N.; Braun, P.X.; Gendelman, I.; Alibhai, A.Y.; Arya, M.; Duker, J.S.; Waheed, N.K. Repeatability of Binarization Thresholding Methods for Optical Coherence Tomography Angiography Image Quantification. Sci. Rep. 2020, 10, 15368.
  20. Sonali; Sahu, S.; Singh, A.K.; Ghrera, S.P.; Elhoseny, M. An Approach for De-Noising and Contrast Enhancement of Retinal Fundus Image Using CLAHE. Opt. Laser Technol. 2019, 110, 87–98.
  21. Alwazzan, M.J.; Ismael, M.A.; Ahmed, A.N. A Hybrid Algorithm to Enhance Colour Retinal Fundus Images Using a Wiener Filter and CLAHE. J. Digit. Imaging 2021, 34, 750–759.
  22. Rasta, S.H.; Nikfarjam, S.; Javadzadeh, A. Detection of Retinal Capillary Nonperfusion in Fundus Fluorescein Angiogram of Diabetic Retinopathy. BioImpacts 2015, 5, 183–190.
  23. Kaba, D.; Salazar-Gonzalez, A.G.; Li, Y.; Liu, X.; Serag, A. Segmentation of Retinal Blood Vessels Using Gaussian Mixture Models and Expectation Maximisation. In Health Information Science; Huang, G., Liu, X., He, J., Klawonn, F., Yao, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 105–112.
  24. Zheng, Y.; Vanderbeek, B.; Xiao, R.; Daniel, E.; Stambolian, D.; Maguire, M.; O'Brien, J.; Gee, J. Retrospective Illumination Correction of Retinal Fundus Images From Gradient Distribution Sparsity. In Proceedings of the International Symposium on Biomedical Imaging, Barcelona, Spain, 2–5 May 2012; pp. 972–975.
  25. Gonzalez, R.C.; Woods, R.E. Digital Image Processing; Prentice Hall: Upper Saddle River, NJ, USA, 2008.
  26. Kells, L.M.; Kern, W.F.; Bland, J.R. Plane and Spherical Trigonometry; Creative Media Partners, LLC: Sacramento, CA, USA, 2018.
  27. Sędziak-Marcinek, B.; Teper, S.; Chełmecka, E.; Wylęgała, A.; Marcinek, M.; Bas, M.; Wylęgała, E. Diabetic Macular Edema Treatment with Bevacizumab does not Depend on the Retinal Nonperfusion Presence. J. Diabetes Res. 2021, 2021, 6620122.
  28. Shen, H.; Wang, J.; Niu, T.; Chen, J.; Xu, X. Dynamic Versus Static Ultra-Widefield Fluorescein Angiography in Eyes with Diabetic Retinopathy: A Pilot Prospective Cross-sectional Study. Int. J. Ophthalmol. 2020, 14, 409–415.
  29. Beghdadi, A.; Le Negrate, A. Contrast Enhancement Technique Based on Local Detection of Edges. Comput. Vis. Graph. Image Process. 1989, 46, 162–174.
  30. Celik, T.; Tjahjadi, T. Automatic Image Equalization and Contrast Enhancement Using Gaussian Mixture Modeling. IEEE Trans. Image Process. 2012, 21, 145–156.
  31. Michel-González, E.; Cho, M.H.; Lee, S.Y. Geometric Nonlinear Diffusion Filter and its Application to X-ray Imaging. Biomed. Eng. Online 2011, 10, 47.
  32. Sattar, F.; Floreby, L.; Salomonsson, G.; Lövström, B. Image Enhancement Based on a Nonlinear Multiscale Method. IEEE Trans. Image Process. 1997, 6, 888–895.
  33. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  34. Wang, Z.; Bovik, A.C. A Universal Image Quality Index. IEEE Signal Process. Lett. 2002, 9, 81–84.
  35. Peli, E. Contrast in Complex Images. J. Opt. Soc. Am. A 1990, 7, 2032–2040.
  36. Siwek, S.; Więcławek, W. Automatic Blood Vessel Segmentation Algorithm in Ultrawide-Field Fluorescein Angiography Images. In Recent Advances in Computational Oncology and Personalized Medicine; Silesian University of Technology: Gliwice, Poland, 2021; in press.
  37. Jaya, V.L.; Gopikakumari, R. IEM: A New Image Enhancement Metric for Contrast and Sharpness Measurements. Int. J. Comput. Appl. 2013, 79, 1–9.
Figure 1. The principle for luminance weights determination in 2D space projected onto 1D.
Figure 2. The principle for contrast compensation determination in 3D space: (a) spherical mapping of the retina to the 2D image domain with correction function $\gamma(x, y)$, (b) great-circle distance.
Figure 3. Overview of the proposed image enhancement depending on window spatial location.
Figure 4. Quality measures for images processed by the proposed method. Measure values calculated from the central image regions only.
Figure 5. Quality measures for images processed by the proposed method. Measure values calculated for the peripheral image regions only.
Figure 6. Comparison of exemplary UWFA images: before (a,c,e) and after (b,d,f) the correction procedure; images taken from the: (a,b) early, (c,d) mid and (e,f) late acquisition phases.
Figure 7. Quality measures for images processed by the CLAHE method. Measure values calculated for the central image regions only.
Figure 8. Quality measures for images processed by the CLAHE method. Measure values calculated for the peripheral image regions only.
Figure 9. Visual comparison of the results obtained for several input images (a,d,g) by the CLAHE method (b,e,h) and the proposed method (c,f,i) for three phases: early (first row), mid (middle row) and late (last row).
Figure 10. Error distribution for pixel-to-pixel measures: (a) exemplary original image, (b) contrast enhanced image, (c) AE values for each image pixel.
Figure 11. Exemplary vasculature segmentation results on images without the preprocessing by the proposed method (green) and after being processed by the proposed method (red).
Table 1. Summary of the image test database.

           Eye          Images          Phase
Subjects   L     R      L      R        Early   Mid   Late
34         19    22     124    132      74      146   36
Total      41           256             256
Table 2. Quality measure values for selected windows from Figure 6.

Phase   EBCM   MAE    MSE    RMS    SSIM   Q      ρ      β
Early   0.97   0.1    0.02   0.13   0.67   0.61   0.04   0.66
Mid     0.97   0.03   0.00   0.03   0.94   0.92   0.99   0.99
Late    0.99   0.05   0.00   0.05   0.84   0.80   0.83   0.95
Table 3. Average values of quality measures obtained from the central and peripheral image regions for images processed by the proposed and CLAHE methods.

                      Central Region              Peripheral Region
Measure   Method      E     M     L     G         E     M     L     G
EBCM      Proposed    1.00  0.99  0.99  0.99      1.51  1.21  1.24  1.30
          CLAHE       0.62  0.62  0.59  0.62      1.88  1.46  1.99  1.57
MAE       Proposed    0.02  0.02  0.02  0.02      0.12  0.13  0.14  0.13
          CLAHE       0.08  0.08  0.08  0.08      0.05  0.06  0.06  0.05
MSE       Proposed    0.00  0.00  0.00  0.00      0.03  0.04  0.04  0.04
          CLAHE       0.01  0.01  0.01  0.01      0.00  0.00  0.00  0.00
RMS       Proposed    0.03  0.03  0.03  0.03      0.17  0.18  0.19  0.18
          CLAHE       0.09  0.10  0.09  0.09      0.06  0.06  0.06  0.06
SSIM      Proposed    0.97  0.97  0.97  0.97      0.44  0.49  0.48  0.47
          CLAHE       0.71  0.72  0.70  0.71      0.53  0.56  0.54  0.55
Q         Proposed    0.96  0.97  0.97  0.97      0.35  0.41  0.41  0.39
          CLAHE       0.53  0.54  0.51  0.53      0.31  0.36  0.34  0.34
ρ         Proposed    0.99  0.99  0.98  0.99      0.33  0.37  0.31  0.35
          CLAHE       0.93  0.92  0.91  0.92      0.97  0.96  0.96  0.96
β         Proposed    0.99  0.99  0.99  0.99      0.41  0.44  0.44  0.43
          CLAHE       0.92  0.95  0.95  0.94      0.96  0.96  0.97  0.96
Table 4. Interpretation of quality measure values for UWFA image brightness enhancement: “+” in favor of the presented method, “−” in favor of the reference method, “∘” if the methods are comparable. Fields within parentheses are additionally discussed in the text.

Image Region   EBCM   MAE   MSE   RMS   SSIM, Q   ρ     β
Central        +      +     +     +     +         +     +
Peripheral     (−)    +     +     +     (∘/+)     (+)   (+)