Article

Luminosity and Contrast Adjustment of Fundus Images with Reflectance

by Mofleh Hannuf AlRowaily, Hamzah Arof * and Imanurfatiehah Ibrahim
Department of Electrical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(5), 3312; https://doi.org/10.3390/app13053312
Submission received: 3 January 2023 / Revised: 28 February 2023 / Accepted: 3 March 2023 / Published: 5 March 2023

Abstract
This paper presents an automatic method for correcting luminosity and contrast variation in fundus images. Sixty retina (fundus) images with different levels of reflectance were selected from online databases and used to assess the effectiveness of the proposed method. The approach consists of five stages: image input, filtering, luminosity correction, histogram stretching and post-processing. First, a color fundus image is read as input, and its three color components, red (R), green (G) and blue (B), are separated into different channels or arrays. Next, the eye region, or region of interest (ROI), is identified along with its border via thresholding. After that, the original red-to-green and blue-to-green ratios of every pixel in the ROI are computed and kept together with copies of the three channels. Then, the ROI of each channel is subjected to lowpass filtering, row-wise in the horizontal direction and column-wise in the vertical direction, to create a smooth background luminosity surface. This surface excludes foreground objects such as blood vessels, optic discs, lesions and microaneurysms. Three lowpass filters are tested for this purpose, and their efficacy is compared. The outcome is a smooth luminosity surface that estimates the background illumination of the entire ROI. Once the background illumination is established, the luminosity is equalized for all pixels in the ROI, such that every pixel has the same background brightness. Afterward, the histogram of the ROI is stretched to enhance the contrast between the foreground objects and the background. Next, the green channel is further improved by adding details from the blue and red channels. Finally, in the post-processing stage, the intensities of the blue and red channels are adjusted according to their original ratios to the green channel.
When all three channels are recombined, the resulting color image looks similar to the original but shows improved luminosity and contrast. The method was tested on 60 test images; it reduces luminosity variation and increases the contrast of all of them. On average, it achieves a 30% reduction in luminosity variation and a 90% increase in contrast. Executed on an AMD 5900HS CPU using MATLAB R2021b, the mean execution time was nearly 2 s.

1. Introduction

When evaluating fundus images, ophthalmologists look for irregularities in the background and foreground areas. Usually, the presence of peculiar structures or conditions foretells present or impending complications [1]. The examination is painstakingly slow, and oversights may occur when there are many patients to diagnose. In this case, automatic computer diagnosis can be used as a tool to support a diagnosis. The quality of fundus images is important in ensuring the effectiveness of an evaluation. In many instances, fundus images contain contrast and luminosity variations caused by hardware and software limitations [2]. Blurring, low contrast and uneven illumination in fundus images can degrade the performance of an automated system. There are many enhancement techniques for fundus images affected by contrast and luminosity variation. They fall into three categories: spatial, frequency and deep learning approaches [3,4]. Spatial domain methods manipulate the pixels or histogram of an image directly [5,6]. Meanwhile, frequency domain approaches use the Fourier transform (FT), the discrete wavelet transform (DWT) and other transforms to map an image into another domain before working on it, then transform the result back to the original image space [7,8]. Deep learning approaches work like black boxes in which many samples are required to train the networks, without any specific techniques to follow [9,10].
Many researchers utilize the green channel of RGB color images for enhancement and further processing. To enhance or restore images, several adjustment methods, such as contrast-limited adaptive histogram equalization (CLAHE) and contrast normalization, have been utilized. Cao, Li and Zhang improved the contrast of retina images based on their grayscale versions [11]. However, the visual color of the images changed significantly after the contrast was adjusted, so, to restore the original color, each channel underwent adjustment followed by a refinement step. Most papers report works that solely utilize the green channel to detect exudates, microaneurysms or hemorrhages in fundus images [12,13].
Zhou et al. used the R, G, and B color channels of retina images to improve their luminosity while keeping the color information. The R, G, and B channels contain both color and luminosity information, which are correlated via their ratio. They should be adjusted to improve luminosity whilst their ratios are maintained [14]. Rao et al. changed the RGB of an image into HSV space, which separated the luminance (V) from the hue (H) and saturation (S) [15]. Only the luminance of fundus images was improved using an adaptive gamma correction method.
Dissopa et al. used CIE L*a*b color space to enhance the contrast of fundus images [16]. Vonghirandecha et al. also presented a similar approach utilizing L*a*b color space and adopting Hubbard’s specification [17]. Meanwhile, Qureshi et al. converted the RGB color channel into CIECAM02 color space. They asserted that their technique showed better performance than that of histogram-based approaches [18]. Alwazzan et al. introduced a new method that utilized the three RGB channels to enhance retinal images. The green channel was Wiener-filtered and adjusted using CLAHE before being recombined with the original red and blue channels [19]. Cao et al. proposed an intensity transfer strategy for the three channels [20]. Kumar and Bhandari presented another approach using two color models in HSV and L*a*b with weighted average histogram equalization (WAHE) for contrast improvement [21].
Common color spaces such as RGB, HSI, HSV and L*a*b have been used to improve the luminosity and contrast of fundus images in all of the works mentioned earlier. In this study, we propose a new spatial domain approach that is designed to reduce illumination variation and enhance the contrast of fundus images affected by boundary reflectance. The method estimates the background luminosity of a fundus image via lowpass filtering. The remainder of this paper is divided into the methodology, results and discussion, and conclusion in successive sections.

2. Methodology

Regular inspection of fundus images is important for patients suffering from diabetic retinopathy. Early signs of lesions, such as hemorrhages, exudates, microaneurysms and blood vessel dilation, can be detected in fundus images. The detection process becomes harder if the fundus images are affected by non-uniform illumination and low contrast. Some fundus images suffer from boundary reflectance due to over-exposure. Boundary reflectance can obscure abnormalities and other symptoms in its vicinity. In this paper, an approach to correct uneven contrast and luminosity in fundus images suffering from boundary reflectance is presented. The method is implemented in five stages: image input, filtering, luminosity correction, histogram stretching and post-processing. Figure 1 shows a fundus image with boundary reflectance and its red, green and blue channels.
The gist of the method is to construct a smooth luminosity surface (LS) of the ROI via one-dimensional lowpass filtering (1DLF). This luminosity surface (LS) provides an estimate of the background brightness for each pixel in the ROI. It is assumed that the ROI is small enough that every pixel in it should have the same luminosity (or background brightness). Using the LS, the background brightness of every pixel in the ROI is equalized to 128 so that all pixels experience the same luminosity. Then, the contrast of the ROI is enhanced via histogram stretching. The flow of the stages in the process is shown in Figure 2.
In the image input stage, the red, green and blue channels of a fundus image are read and stored in separate arrays. Then, the eye area along with its border is identified via thresholding; the threshold value is obtained via trial and error. Next, the R-to-G and B-to-G ratios of each pixel in the ROI are calculated and kept along with copies of the B, G and R channels. In the 1DLF stage, the ROI of the image is filtered for all three channels, one by one. In an image affected by reflectance, the epicenter of the reflectance always falls near the border of the ROI. After the position of the epicenter is found, the intensities of pixels on the boundary of the ROI are filtered to reduce the effect of the reflectance. First, the intensities of the boundary pixels are placed in a 1D array. Then, circular 1DLF is performed on the array four times in succession. The length of the filter starts at 11 and increases by 10 in each iteration; as the length grows to 41, the filtered data become smoother. Then, the circular 1DLF is repeated on the layer of pixels neighboring the boundary pixels, one pixel inward towards the center of the ROI. This step is repeated a few more times, so that a few contiguous layers of pixels starting from the boundary are subjected to 1DLF.
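The repeated circular filtering of the boundary pixels can be sketched as follows. The paper's implementation is in MATLAB; this is a hypothetical Python/NumPy equivalent using a simple mean filter as the lowpass filter, with the window lengths 11, 21, 31 and 41 described above (function names are illustrative, not from the paper).

```python
import numpy as np

def circular_mean_filter(values, length):
    """Circular 1D mean filter: each element becomes the mean of a window
    of `length` samples centred on it, wrapping around the array ends."""
    half = length // 2
    padded = np.concatenate([values[-half:], values, values[:half]])
    kernel = np.ones(length) / length
    return np.convolve(padded, kernel, mode="valid")

def smooth_boundary(boundary_intensities):
    """Apply circular 1DLF four times with window lengths 11, 21, 31, 41,
    as done for each layer of boundary pixels."""
    out = np.asarray(boundary_intensities, dtype=float)
    for length in (11, 21, 31, 41):
        out = circular_mean_filter(out, length)
    return out
```

Because the window wraps around, the first and last boundary pixels are smoothed with the same number of neighbors as interior ones, which matches the circular treatment described in the text.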
The next step is to perform 1DLF row by row and then column by column throughout the ROI. One-dimensional lowpass filtering is performed on the row or column array repeatedly 4 times over, but, unlike before, it is non-circular. Again, the length of the filter starts at 11 and increases by 10 in each iteration. However, for elements at the beginning and at the end of the array, the filter is shortened since there are not enough neighbors for them. The result is a smooth luminosity surface (LS) of the background brightness. Three types of lowpass filters are used, and they are the median filter, mean filter and inverse distance filter. Figure 3 shows the LSs of the green channel obtained using the median, mean and inverse distance filters.
For every element in the array, there should be an equal number of elements before and after it, called neighbors. For instance, if the length of the filter is 11, there should be 5 neighbors on each side. The exception is elements near the beginning and end of the array, where there are insufficient neighbors before and after them, respectively. For circular 1DLF, this is not a problem, as the beginning and end of the array are considered connected; for noncircular 1DLF, the filter is truncated. An element and its neighbors form a set called the neighborhood. The mean (µ) of the members of the neighborhood is computed first, and then the distance between each member and the mean. The reciprocal of this distance is the inverse distance (INVD) of the member; it is capped at one, so any value bigger than one is set to one. The sum of all inverse distances is called the denominator sum (DENSUM). Next, each member of the neighborhood is multiplied by its inverse distance, and the sum of these products is divided by DENSUM and assigned to the element being filtered. This creates a smooth array (SA) for the row or column. The operation is summarized by Equation (1).
SA(x) = [ Σ_{m ∈ N} I(m) · INVD(m) ] / DENSUM    (1)
where
N is the neighborhood of element x;
m is a member of N, with x at the center;
I(m) is the intensity of m;
µ is the mean of N;
INVD(m) is the inverse distance of m to the mean, INVD(m) = 1/|I(m) − µ|, capped at 1;
DENSUM is the sum of the inverse distances of all members of N;
SA(x) is the value of the smooth array at position x after filtering.
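A minimal sketch of the inverse distance filter of Equation (1), written in Python/NumPy rather than the authors' MATLAB. The window truncation at the array ends follows the noncircular case described above; the small epsilon guarding against division by zero when a member equals the mean is an implementation assumption.

```python
import numpy as np

def inverse_distance_filter_1d(arr, length=11):
    """Non-circular inverse distance filter of Equation (1).
    Near the array ends the window is truncated."""
    arr = np.asarray(arr, dtype=float)
    half = length // 2
    out = np.empty_like(arr)
    for x in range(arr.size):
        lo, hi = max(0, x - half), min(arr.size, x + half + 1)
        members = arr[lo:hi]                      # the neighborhood N
        mu = members.mean()                       # mean of N
        dist = np.maximum(np.abs(members - mu), 1e-12)
        invd = np.minimum(1.0, 1.0 / dist)        # INVD, capped at 1
        densum = invd.sum()                       # DENSUM
        out[x] = (members * invd).sum() / densum  # SA(x)
    return out
```

Members far from the local mean receive small weights, so outliers such as bright reflectance pixels contribute little to the smoothed value.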
The smooth arrays obtained from row-wise 1DLF form a smooth surface that is subjected to column-wise 1DLF. The result of this operation is the luminosity surface (LS) that approximates the background brightness of the ROI.
Since the eye area is small, the background brightness of every pixel in the ROI can be equalized. In the luminosity correction stage, the background luminosity of all pixels is levelled as follows. The intensity of each pixel in the B(x,y), G(x,y) and R(x,y) channels can be lower or higher than the respective luminosity surface LS(x,y). Thus, at any position (x,y) in the ROI, the gap between a channel, say G(x,y), and its background LS(x,y) is recorded, as this gap will be preserved after the background LS(x,y) is corrected for each pixel in the ROI. To equalize LS(x,y), it is shifted to 128 at every location (x,y), since 128 is the middle intensity level. For example, at position (x = m, y = n), suppose LS(m,n) needs to move up or down by a certain amount to become 128; then G(m,n) has to move up or down by the same amount to maintain its gap to LS(m,n). The results of the luminosity correction are the equalized channels EB(x,y), ER(x,y) and EG(x,y).
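The luminosity correction amounts to shifting each pixel so that its estimated background LS(x,y) lands on 128 while the pixel's gap to the background is preserved. A sketch under the assumption that the channel and LS are NumPy arrays and `mask` marks the ROI (names are illustrative, not from the paper):

```python
import numpy as np

def equalize_luminosity(channel, ls, mask):
    """Shift the background to 128 at every ROI pixel while preserving
    the pixel-to-background gap: E = C - LS + 128."""
    eq = channel.astype(float).copy()
    eq[mask] = channel[mask].astype(float) - ls[mask] + 128.0
    return np.clip(eq, 0, 255)
```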
Then, the equalized channels undergo histogram stretching to improve the contrast of the ROI. This step generates contrast-adjusted blue, red and green channels denoted as CB(x,y), CR(x,y) and CG(x,y). Figure 4 shows the CG(x,y) of the image in Figure 1 obtained from the three filters. In most instances, histogram stretching does not affect the equalized channels much since they already have good contrast.
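Histogram stretching of the equalized ROI can be sketched as a linear mapping of the ROI intensities onto the full [0, 255] range. The percentile clipping used here to resist outliers is an assumption, as the paper does not specify the exact stretching rule.

```python
import numpy as np

def stretch_histogram(channel, mask, low_pct=1.0, high_pct=99.0):
    """Linearly stretch the ROI intensities so that the chosen low/high
    percentiles map to 0 and 255 (values beyond them are clipped)."""
    vals = channel[mask].astype(float)
    lo, hi = np.percentile(vals, [low_pct, high_pct])
    out = channel.astype(float).copy()
    out[mask] = (vals - lo) / max(hi - lo, 1e-12) * 255.0
    return np.clip(out, 0, 255)
```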
The B, G and R channels of a fundus image can be inspected separately in greyscale. Nonetheless, it is desirable to recombine the three channels to form an improved color image. However, if the channels are combined without adjustment, the resulting color image will look different from the original. To preserve the original color shades, the stored ratios of the old R and B channels to the old G channel are used. The new R and B channels are obtained by multiplying the respective color ratio by CG(x,y), as stated in Equation (2). The green channel is the anchor for this conversion since it contains the most information; for the G channel, CG(x,y) is used directly. Figure 5 shows corrected color images obtained from the three filters.
C_NEW(x,y) = [ C_OLD(x,y) / G_OLD(x,y) ] · CG(x,y)    (2)
where
C_OLD(x,y) is either the old R or B channel before processing;
G_OLD(x,y) is the old G channel before processing;
CG(x,y) is the contrast-adjusted (histogram-stretched) G channel.
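Equation (2) translates directly to code. This hypothetical Python/NumPy version adds a small epsilon where the old green channel is zero, which is an implementation assumption:

```python
import numpy as np

def restore_channel(c_old, g_old, cg):
    """Equation (2): rebuild the R or B channel from its original ratio
    to green, applied to the contrast-adjusted green channel CG."""
    ratio = c_old.astype(float) / np.maximum(g_old.astype(float), 1e-12)
    return np.clip(ratio * cg, 0, 255)
```

For a pixel whose old red value was half its old green value, the new red value is half of CG at that pixel, so the hue is preserved while the luminosity follows the corrected green channel.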

3. Results and Discussion

In our experiment, 60 fundus images taken mainly from the online STARE and DIARETDB databases were used as test images for the proposed technique. All images showed improvements in luminosity and contrast throughout their ROIs. The improvements can be verified visually and quantitatively, even in areas occluded by reflectance. Figure 6 shows seven random samples improved with the proposed method using the three filters. As observed, the effect of reflectance around the boundary of the ROI is almost eliminated by the process. Visually, the performances of the inverse distance filter (IDF) and the average (mean) filter (AF) appear identical. The median filter (MF) is good at removing boundary reflectance, but it also removes many foreground objects.
The means and standard deviations (stds) of the green channels of the images were measured before and after the filtering process. Table 1 lists the means and stds of the seven samples before and after filtering. The means of the images approach 128 after filtering, by design. On average, the std decreases after filtering, implying that there is less luminosity variation in the filtered images once the reflectance is removed. This is because, in the absence of reflectance, there are fewer high-intensity pixels near the ROI boundary; consequently, the std of the ROI drops. Note that contrast is not always related to the std of an image. In the presence of strong boundary reflectance, the std increases but contrast decreases, since the reflectance obscures the foreground and background of the image. Basically, std measures the spread of intensities about the mean, while contrast relates to how well foreground objects are distinguished from the background. The objective of this work was to reduce the std of the ROI by removing the boundary reflectance while preserving its contrast. The average std of the MF images is a bit lower than that of the IDF and AF images, since median filtering also removes part of the exudates if they are big. This is evident in sample 6.
The luminosity of images can be calculated by transforming them from RGB to L*a*b space, where L represents luminance or luminosity. The average luminosity of the images was calculated before and after filtering to estimate the improvement introduced with the proposed method. We used Equation (3) to estimate the average (avg) luminosity gain of the images. This is simply the average luminance of the filtered images minus the average luminance of the unfiltered images divided by the average luminance of the unfiltered images.
Luminosity Gain = (Avg Filtered Luminance − Avg Unfiltered Luminance) / (Avg Unfiltered Luminance) × 100%    (3)
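Both gains are simple relative changes. As a quick sketch (the helper name is ours, not the paper's), plugging the whole-row averages from Table 2 into the formula reproduces the reported magnitudes:

```python
def percentage_gain(after, before):
    """Relative gain of Equations (3) and (4), in percent."""
    return (after - before) / before * 100.0

# Averages over the seven samples from Table 2 (IDF column):
lum_gain = percentage_gain(59.33, 37.97)  # luminosity gain, roughly 56%
con_gain = percentage_gain(1.14, 0.42)    # contrast gain, roughly 171%
```

These values are consistent with the "more than 50%" luminosity gain and "more than 100%" contrast gain discussed for Table 2.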
For measuring the contrast of the images, the metric introduced by Matkovic et al. was utilized [22]. It is implemented in perceptual luminance space at different resolutions and is based on the average of the absolute differences between a pixel and its nearest neighbors. The procedure starts by converting a greyscale image into the perceptual luminance space and calculating the first local contrast. Then, the image is halved in size by down-sampling before the second local contrast is computed. This step is repeated a few times to generate more local contrasts as the image becomes smaller and smaller. The overall or global contrast factor (GCF) of the image is the sum of all local contrasts multiplied by their respective weightings.
This contrast metric is similar to the gradient of an image as it aggregates absolute differences in four or eight directions. In our experiments, it was calculated in eight directions at only three resolutions. The GCF is the sum of the 3 local contrasts at the 3 resolutions, multiplied by 3 weightings of 0.12, 0.142 and 0.154, respectively. The GCF, referred to simply as the contrast, is calculated before and after filtering. The contrast gain for all the images was calculated using Equation (4).
Contrast Gain = (Avg Filtered Contrast − Avg Unfiltered Contrast) / (Avg Unfiltered Contrast) × 100%    (4)
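The multi-resolution contrast measure described above can be sketched as follows: mean absolute difference to the 8 nearest neighbors at three resolutions, combined with the weightings 0.12, 0.142 and 0.154. This Python/NumPy sketch omits the perceptual-luminance conversion of Matkovic et al., so it is an approximation of the GCF, not the exact metric.

```python
import numpy as np

def local_contrast(img):
    """Mean absolute difference between each pixel and its 8 nearest
    neighbors (edges replicate the border pixels)."""
    img = img.astype(float)
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    total = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            total += np.abs(img - padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w])
    return total.mean() / 8.0

def global_contrast_factor(img, weights=(0.12, 0.142, 0.154)):
    """Weighted sum of local contrasts at three resolutions; the image is
    halved by 2x2 averaging between resolutions."""
    img = np.asarray(img, dtype=float)
    gcf = 0.0
    for wt in weights:
        gcf += wt * local_contrast(img)
        h2, w2 = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        img = img[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2).mean(axis=(1, 3))
    return gcf
```

A flat image has zero contrast at every scale, while a high-frequency pattern scores high at the finest scale and lower after down-sampling, which is the behavior the weighted sum is meant to capture.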
The average luminosity and contrast gains for the seven samples are given in Table 2. It is observed that the average luminosity gain for the 3 filters is nearly identical at more than 50%, while the average contrast gain is more than 100%, according to the Matkovic metric [22].
The same findings apply to the remaining 53 samples. For the whole test suite, the average luminance before and after filtering is approximately 36 and 58, respectively, while the average contrast before and after filtering is about 0.4 and 1.13. However, since the Matkovic contrast is derived from absolute differences, and its weightings were derived subjectively from the feedback of participants, it is fair to conclude that the contrast and luminosity of the images in our experiments were enhanced, without placing too much weight on the exact numeric values. The approach was executed on an AMD 5900HS processor on the MATLAB R2021b platform, and the mean execution time was nearly 2 s for all filters.

4. Conclusions

A new approach to correct contrast and luminosity variations in fundus images is presented. The method effectively alleviates uneven luminosity and enhances the contrast of fundus images affected by reflectance. The performance of the method was evaluated using 60 test images, and all of them showed marked improvement in contrast and luminosity, in greyscale and color. The whole process was executed in RGB space. The resulting color images resembled the original ones in their color shades and distributions but showed noticeable improvements in luminosity variation and contrast gain. This technique can help ophthalmologists in the diagnosis of microaneurysms, exudates and other lesions.

Author Contributions

Methodology, H.A.; Software, I.I.; Formal analysis, M.H.A.; Investigation, M.H.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Patient consent was waived due to public domain data.

Data Availability Statement

https://cecas.clemson.edu/~ahoover/stare/ (accessed on 2 January 2023) and https://www.it.lut.fi/project/imageret/diaretdb1/ (accessed on 2 January 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kanimozhi, J.; Vasuki, P.; Roomi, S.M. Fundus image lesion detection algorithm for diabetic retinopathy screening. J. Ambient Intell. Humaniz. Comput. 2021, 12, 7407–7416.
  2. Kadomoto, S.; Nanegrungsunk, O.; Nittala, M.G.; Karamat, A.; Sadda, S.R. Enhanced Detection of Reticular Pseudodrusen on Color Fundus Photos by Image Embossing. Curr. Eye Res. 2022, 47, 1547–1552.
  3. Mathews, M.R.; Anzar, S.M. A comprehensive review on automated systems for severity grading of diabetic retinopathy and macular edema. Int. J. Imaging Syst. Technol. 2021, 31, 2093–2122.
  4. Kang, Y.; Fang, Y.; Lai, X. Automatic detection of diabetic retinopathy with statistical method and Bayesian classifier. J. Med. Imaging Health Inform. 2020, 10, 1225–1233.
  5. Sahu, S.; Singh, A.K.; Ghrera, S.P.; Elhoseny, M. An approach for de-noising and contrast enhancement of retinal fundus image using CLAHE. Opt. Laser Technol. 2019, 110, 87–98.
  6. Sarhan, A.; Rokne, J.; Alhajj, R. Glaucoma detection using image processing techniques: A literature review. Comput. Med. Imaging Graph. 2019, 78, 101657.
  7. Xiao, D.; Bhuiyan, A.; Frost, S.; Vignarajan, J.; Tay-Kearney, M.L.; Kanagasingam, Y. Major automatic diabetic retinopathy screening systems and related core algorithms: A review. Mach. Vis. Appl. 2019, 30, 423–446.
  8. Palanisamy, G.; Ponnusamy, P.; Gopi, V.P. An improved luminosity and contrast enhancement framework for feature preservation in color fundus images. Signal Image Video Process. 2019, 13, 719–726.
  9. Vives-Boix, V.; Ruiz-Fernández, D. Diabetic retinopathy detection through convolutional neural networks with synaptic metaplasticity. Comput. Methods Programs Biomed. 2021, 206, 106094.
  10. Tavakoli, M.; Jazani, S.; Nazar, M. Automated detection of microaneurysms in color fundus images using deep learning with different preprocessing approaches. In Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications; SPIE: Bellingham, WA, USA, 2020; Volume 11318, pp. 110–120.
  11. Cao, L.; Li, H.; Zhang, Y. Retinal image enhancement using low-pass filtering and α-rooting. Signal Process. 2020, 170, 107445.
  12. Mayya, V.; Kamath, S.; Kulkarni, U. Automated microaneurysms detection for early diagnosis of diabetic retinopathy: A comprehensive review. Comput. Methods Programs Biomed. Update 2021, 1, 100013.
  13. Chudzik, P.; Majumdar, S.; Calivá, F.; Al-Diri, B.; Hunter, A. Microaneurysm detection using fully convolutional neural networks. Comput. Methods Programs Biomed. 2018, 158, 185–192.
  14. Zhou, M.; Jin, K.; Wang, S.; Ye, J.; Qian, D. Color retinal image enhancement based on luminosity and contrast adjustment. IEEE Trans. Biomed. Eng. 2017, 65, 521–527.
  15. Rao, K.; Bansal, M.; Kaur, G. A hybrid method for improving the luminosity and contrast of color retinal images using the JND model and multiple layers of CLAHE. Signal Image Video Process. 2022, 17, 207–217.
  16. Dissopa, J.; Kansomkeat, S.; Intajag, S. Enhance Contrast and Balance Color of Retinal Image. Symmetry 2021, 13, 2089.
  17. Vonghirandecha, P.; Karnjanadecha, M.; Intajag, S. Contrast and color balance enhancement for non-uniform illumination retinal images. Teh. Glas. 2019, 13, 291–296.
  18. Qureshi, I.; Ma, J.; Shaheed, K. A hybrid proposed fundus image enhancement framework for diabetic retinopathy. Algorithms 2019, 12, 14.
  19. Alwazzan, M.J.; Ismael, M.A.; Ahmed, A.N. A hybrid algorithm to enhance colour retinal fundus images using a Wiener filter and CLAHE. J. Digit. Imaging 2021, 34, 750–759.
  20. Cao, L.; Li, H. Enhancement of blurry retinal image based on non-uniform contrast stretching and intensity transfer. Med. Biol. Eng. Comput. 2020, 58, 483–496.
  21. Kumar, R.; Bhandari, A.K. Luminosity and contrast enhancement of retinal vessel images using the weighted average histogram. Biomed. Signal Process. Control 2022, 71, 103089.
  22. Matkovic, K.; Neumann, L.; Neumann, A.; Psik, T.; Purgathofer, W. Global contrast factor—A new approach to image contrast. In Computational Aesthetics in Graphics, Visualization and Imaging; The Eurographics Association: Dublin, Ireland, 2005; pp. 159–167.
Figure 1. A fundus image with strong boundary reflectance at its top left border with its red, green and blue channels.
Figure 2. Flow of stages in the process.
Figure 3. The LSs of the green channel in Figure 1 obtained using the three different filters.
Figure 4. The contrast-adjusted green (CG) channels obtained from the three filters.
Figure 5. The corrected color images of the sample in Figure 1 generated using the three filters.
Figure 6. Seven samples of fundus images that were selected randomly and improved using the three filters.
Table 1. The means and standard deviations of some images before and after filtering.

| Sample  | Before Mean | Before Std | IDF Mean | IDF Std | AF Mean | AF Std | MF Mean | MF Std |
|---------|-------------|------------|----------|---------|---------|--------|---------|--------|
| 1       | 69.1        | 23.2       | 128.4    | 15.1    | 128.3   | 15.8   | 128.4   | 13.45  |
| 2       | 74.3        | 23.0       | 128.6    | 14.8    | 128.3   | 14.9   | 128.7   | 14.75  |
| 3       | 109.6       | 15.2       | 128.6    | 16.4    | 128.5   | 16.8   | 128.7   | 16.21  |
| 4       | 92.8        | 20.9       | 128.5    | 22.5    | 128.5   | 22.8   | 128.3   | 21.64  |
| 5       | 70.9        | 18.3       | 128.4    | 17.9    | 128.2   | 18.3   | 128.7   | 17.07  |
| 6       | 93.4        | 22.4       | 128.6    | 19.9    | 128.2   | 20.5   | 128.9   | 17.5   |
| 7       | 68.9        | 35.1       | 128.7    | 15.6    | 128.4   | 15.7   | 128.6   | 15.5   |
| Average | 82.71       | 22.6       | 128.54   | 17.46   | 128.34  | 17.83  | 128.61  | 16.59  |
Table 2. Luminance and contrast of seven sample images before and after filtering.

| Sample                      | 1     | 2     | 3     | 4     | 5     | 6     | 7     | Average |
|-----------------------------|-------|-------|-------|-------|-------|-------|-------|---------|
| Before filtering: Luminance | 30.5  | 33.02 | 49.8  | 45.18 | 33.48 | 43    | 30.8  | 37.97   |
| Before filtering: Contrast  | 0.38  | 0.37  | 0.38  | 0.58  | 0.38  | 0.50  | 0.38  | 0.42    |
| After IDF: Luminance        | 57.2  | 57.8  | 58.8  | 62.6  | 60.4  | 59.3  | 59.2  | 59.33   |
| After IDF: Contrast         | 1.01  | 1.04  | 0.98  | 1.69  | 1.02  | 1.26  | 0.97  | 1.14    |
| After AF: Luminance         | 57.18 | 57.67 | 58.7  | 62.6  | 60.3  | 59.16 | 59.17 | 59.25   |
| After AF: Contrast          | 1.02  | 1.04  | 0.98  | 1.7   | 1.03  | 1.27  | 0.98  | 1.15    |
| After MF: Luminance         | 57.5  | 57.3  | 58.26 | 62.92 | 61.15 | 60    | 59.6  | 59.53   |
| After MF: Contrast          | 0.98  | 1.03  | 0.98  | 1.67  | 1.0   | 1.21  | 0.97  | 1.12    |
