Specular Reflections Detection and Removal for Endoscopic Images Based on Brightness Classification
Abstract
1. Introduction
- We design an image brightness classification method and a brightness enhancement algorithm for low-light images. This helps the subsequent highlight detection algorithm achieve better results and improves the quality of low-light images;
- We develop an adaptive thresholding algorithm based on image brightness. It overcomes the limitations of color-space-based methods;
- We propose an exemplar-based inpainting algorithm with an adaptive search range. It not only reduces mismatches but also avoids the sharp drop in efficiency as image resolution increases, making it suitable for different high-definition endoscopic MIS procedures.
2. Related Works
2.1. SR Detection
2.2. SR Removal
2.3. Image Inpainting
3. Proposed Method
3.1. Image Classification
3.2. Brightness Component Enhancement
3.3. SR Detection
3.4. SR Removal
3.4.1. Exemplar-Based Algorithm
- Calculate boundary priority: The priority of the highlight boundary pixels determines the repair order of the sample blocks. The priority function is defined as follows:
- In Equation (7), C(p) is the confidence term and D(p) is the data term. C(p) reflects the ratio of known to unknown information in the sample block: the larger the value of C(p), the more known information the block contains. D(p) measures the structural information at pixel point p: the larger the value of D(p), the clearer the structure of the sample block.
- Global search: After finding the pixel point p with the highest priority on the highlight boundary, the target block to be repaired is determined with p as its center. The sample blocks of the whole image are then searched to find the matching block most similar to the target block. The similarity matching criterion is the SSD (sum of squared differences) distance:
- In Equation (11), R, G, and B represent the intensity values of each color channel. The smaller the SSD value, the higher the similarity between the target and matching blocks.
- Update confidence: After the corresponding area of the matching block is copied into the target block to fill its unknown region, the highlight boundary must be updated, and the confidence term of the newly filled pixels must also be updated. These steps are repeated until all the specular areas in the image are restored.
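The global SSD search described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the image is an RGB NumPy array and a boolean mask marks the known (non-specular) pixels; the function names, block size, and scan step are our own.

```python
import numpy as np

def ssd(target, candidate, known_mask):
    """SSD distance of Equation (11), summed over R, G, B, counting only
    the known pixels of the target block (known_mask is True where known)."""
    diff = (target.astype(np.float64) - candidate.astype(np.float64)) ** 2
    return float(np.sum(diff[known_mask]))

def best_match(image, known, center, half, step=1):
    """Global search: scan the whole image for the fully known candidate
    block with the lowest SSD to the target block centered at `center`."""
    cy, cx = center
    target = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
    tmask = known[cy - half:cy + half + 1, cx - half:cx + half + 1]
    h, w = image.shape[:2]
    best, best_pos = np.inf, None
    for y in range(half, h - half, step):
        for x in range(half, w - half, step):
            cand_known = known[y - half:y + half + 1, x - half:x + half + 1]
            if not cand_known.all():   # candidate must contain no unknown pixels
                continue
            cand = image[y - half:y + half + 1, x - half:x + half + 1]
            d = ssd(target, cand, tmask)
            if d < best:
                best, best_pos = d, (y, x)
    return best_pos, best
```

As the section notes, this exhaustive double loop over every pixel is exactly what makes the original algorithm slow on high-resolution images, which motivates the local search of Section 3.4.2.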
3.4.2. Improved Exemplar-Based Algorithm
- Local priority calculation: Figure 8 shows the calculation range of the highlight boundary priority, where {A, B, C, D, E} are the specular regions with their corresponding highlight boundaries. The original exemplar-based algorithm traverses the boundaries of all highlight regions in the image (blue boundaries in Figure 8a) and computes the priority of every boundary pixel to find the sample block with the highest priority. Consequently, the repair sequence may jump from highlight A to another highlight B, and the priorities of all global boundary pixels must be compared at every iteration. This global strategy may be effective for a single damaged region, but in endoscopic images the damage generally consists of multiple freely distributed highlights with no correlation between them, so there is no need to consider the priorities of other highlights when repairing one of them. Therefore, in the repair process, this paper repairs only one highlight at a time and then proceeds to the remaining highlights in the boundary set until the repair is complete. For example, when inpainting highlight A, only the priorities of pixels on the current boundary of A (red boundary in Figure 8b) need to be calculated to determine a target block, rather than the global highlight boundary. Only when highlight A has been completely repaired is the next highlight processed, and so on. This reduces the number of priority calculations and shortens the running time of the algorithm without degrading the repair quality.
- Improved priority function: The original exemplar-based algorithm determines the repair order according to the priority function, which causes two problems in practice: (1) The confidence term drops sharply and quickly tends to 0, leading to a nearly random selection of sample blocks during repair. To avoid this, Jing [29] introduced a regularization factor into the confidence term to control the smoothness of the confidence curve, and Ouattara [30] changed the multiplicative definition of the priority function to a weighted summation. (2) When the confidence term is large, the data term may be zero. To solve this, Yin [28] added a curvature term to the data term. In summary, this article defines the new priority function as follows:
- Adaptive local search range: The original exemplar-based algorithm adopts a global search strategy: every time a sample block is repaired, the whole image must be scanned for the best matching block, which is computationally expensive. Endoscopic images depict organ tissue, which is highly self-similar locally but varies significantly across the image, so global information may even reduce restoration quality. Through extensive statistical experiments, we found that the best matching blocks are usually distributed near the target block to be repaired. For these reasons, we replace the global search with an adaptive local search, using different expansion coefficients, chosen according to the contour length of the highlight, to determine the matching-block search area. The expansion factor is defined as:
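The adaptive local search can be sketched as below. The contour-length thresholds and expansion coefficients here are illustrative placeholders, not the paper's exact values from the expansion-factor equation; the function name and window convention are our own.

```python
def search_window(center, contour_len, half, shape):
    """Return (y0, y1, x0, x1), the inclusive range of candidate block
    centers around `center`, with the window radius growing with the
    highlight contour length. Thresholds/coefficients are illustrative."""
    if contour_len < 50:
        k = 3        # small highlight: search close by
    elif contour_len < 200:
        k = 5
    else:
        k = 8        # large highlight: widen the search
    cy, cx = center
    r = k * half     # window radius in pixels
    h, w = shape[:2]
    y0, y1 = max(half, cy - r), min(h - half - 1, cy + r)
    x0, x1 = max(half, cx - r), min(w - half - 1, cx + r)
    return y0, y1, x0, x1
```

The double loop of the global search then runs only over this window instead of the whole image, which is what yields the large speedups reported in Section 4.3.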
4. Experimental Results and Analysis
4.1. Datasets and Evaluation
- DYY-Spec: The DYY dataset was collected from the First Affiliated Hospital of Anhui Medical University, the Second Affiliated Hospital of Anhui Medical University, the First Affiliated Hospital of the University of Science and Technology of China, and Hefei Cancer Hospital of the Chinese Academy of Sciences. It was obtained from 4K video footage of real endoscopic surgery lasting about 180 min and consists of more than 5000 color images in 8-bit JPG format, from which the endoscopic highlight image dataset DYY-Spec was constructed. DYY-Spec contains 1000 endoscopic specular images of organs with different reflective properties: uterine fibroids, bladder, groin, nasal cavity, abdominal stomach, prostate, and liver. In addition, the DYY-Spec dataset includes specular ground truth (GT) labels annotated by experienced urologists.
- CVC-ClinicSpec: The CVC-ClinicSpec dataset contains 612 colonoscopy images with specular ground truth labels annotated by experienced radiologists.
- Hyper-Kvasir: Hyper-Kvasir is the largest publicly released gastrointestinal image dataset, containing a total of 110,079 images and 373 videos.
- Highlight detection evaluation criteria: To evaluate the segmentation performance of the detection algorithm on highlight areas, six indicators are used for quantitative evaluation: Accuracy, Precision, Recall, F1-score, Dice, and Jaccard. Accuracy is the ratio of correctly classified pixels to all pixels. Precision is the ratio of correctly detected highlight pixels to all pixels the algorithm detects as highlights. Recall is the ratio of correctly detected highlight pixels to the highlight pixels in the GT image. F1-score represents the comprehensive performance of the detection algorithm; it is a composite index that combines precision and recall. Dice and Jaccard indicate how similar the detection results are to the ground truth (GT). All indicators range over [0, 1]; the higher the value, the better the detection. Their calculation formulas are as follows, where TP (True Positive) denotes pixels that are highlights in both the GT label image and the detection result (correct segmentation); TN (True Negative) denotes pixels that are non-highlights in both (correct segmentation); FP (False Positive) denotes pixels that are non-highlights in the GT but highlights in the detection result (over-detection); and FN (False Negative) denotes pixels that are highlights in the GT but non-highlights in the detection result (missed detection).
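The six indicators follow directly from the four confusion counts; a minimal sketch (the function name is our own). Note that, computed from a single set of global counts as here, F1 and Dice coincide algebraically; when Dice is averaged per image, as is common, the two reported values can differ.

```python
def detection_metrics(tp, tn, fp, fn):
    """Pixel-level segmentation metrics from confusion counts TP/TN/FP/FN."""
    accuracy  = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)          # correct highlights / detected highlights
    recall    = tp / (tp + fn)          # correct highlights / GT highlights
    f1        = 2 * precision * recall / (precision + recall)
    dice      = 2 * tp / (2 * tp + fp + fn)
    jaccard   = tp / (tp + fp + fn)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "dice": dice, "jaccard": jaccard}
```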
- Highlight removal evaluation criteria: Considering that the endoscopic highlight dataset lacks highlight-free ground truth images, we use the no-reference (NR) image quality metric NIQE [40] and the Coefficient of Variation (COV) to quantitatively evaluate the proposed inpainting method. A no-reference metric directly assesses the visual quality of the restored image without a reference image. The smaller the NIQE value, the higher the image quality. COV is also a no-reference index, indicating the intensity uniformity within a region; the smaller the COV value, the better the highlight recovery. COV is calculated as follows:
- In Equation (24), σ and μ represent the standard deviation and mean of the intensity in the measurement area, respectively.
4.2. Specular Reflection Detection Results
4.3. Specular Reflection Suppression Results
4.4. Clinical Validation
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Luo, X.; Mori, K.; Peters, T.M. Advanced endoscopic navigation: Surgical big data, methodology, and applications. Annu. Rev. Biomed. Eng. 2018, 20, 221–251.
- Sdiri, B.; Beghdadi, A.; Cheikh, F.A.; Pedersen, M.; Elle, O.J. An adaptive contrast enhancement method for stereo endoscopic images combining binocular just noticeable difference model and depth information. Electron. Imaging 2016, 28, 1–7.
- Artusi, A.; Banterle, F.; Chetverikov, D. A Survey of Specularity Removal Methods. Comput. Graph. Forum 2011, 30, 2208–2230.
- Funke, I.; Bodenstedt, S.; Riediger, C.; Weitz, J.; Speidel, S. Generative adversarial networks for specular highlight removal in endoscopic images. In Proceedings of the Conference on Medical Imaging—Image-Guided Procedures, Robotic Interventions, and Modeling, Houston, TX, USA, 12–15 February 2018.
- Kudva, V.; Prasad, K.; Guruvare, S. Detection of Specular Reflection and Segmentation of Cervix Region in Uterine Cervix Images for Cervical Cancer Screening. IRBM 2017, 38, 281–291.
- Wang, X.X.; Li, P.; Du, Y.Z.; Lv, Y.C.; Chen, Y.L. Detection and Inpainting of Specular Reflection in Colposcopic Images with Exemplar-based Method. In Proceedings of the 2019 IEEE 13th International Conference on Anti-counterfeiting, Security, and Identification (ASID), Xiamen, China, 25–27 October 2019; pp. 90–94.
- Wang, X.; Li, P.; Lv, Y.; Xue, H.; Xu, T.; Du, Y.; Liu, P. Integration of Global and Local Features for Specular Reflection Inpainting in Colposcopic Images. J. Healthc. Eng. 2021, 2021, 5401308.
- Akbari, M.; Mohrekesh, M.; Najarian, K.; Karimi, N.; Samavi, S.; Soroushmehr, S.M.R. Adaptive specular reflection detection and inpainting in colonoscopy video frames. In Proceedings of the 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 3134–3138.
- Kacmaz, R.N.; Yilmaz, B.; Aydin, Z. Effect of interpolation on specular reflections in texture-based automatic colonic polyp detection. Int. J. Imaging Syst. Technol. 2021, 31, 327–335.
- Regeling, B.; Laffers, W.; Gerstner, A.O.; Westermann, S.; Muller, N.A.; Schmidt, K.; Bendix, J.; Thies, B. Development of an image pre-processor for operational hyperspectral laryngeal cancer detection. J. Biophotonics 2016, 9, 235–245.
- Oh, J.; Hwang, S.; Lee, J.; Tavanapong, W.; Wong, J.; Groen, P.C.d. Informative frame classification for endoscopy video. Med. Image Anal. 2007, 11, 110–127.
- Arnold, M.; Ghosh, A.; Ameling, S.; Lacey, G. Automatic Segmentation and Inpainting of Specular Highlights for Endoscopic Imaging. EURASIP J. Image Video Process. 2010, 2010, 814319.
- El Meslouhi, O.; Kardouchi, M.; Allali, H.; Gadi, T.; Benkaddour, Y.A. Automatic detection and inpainting of specular reflections for colposcopic images. Cent. Eur. J. Comput. Sci. 2011, 1, 341–354.
- Al-Surmi, A.; Wirza, R.; Dimon, M.Z.; Mahmod, R.; Khalid, F. Three dimensional reconstruction of human heart surface from single image-view under different illumination conditions. Am. J. Appl. Sci. 2013, 10, 669.
- Marcinczak, J.M.; Grigat, R.R. Closed contour specular reflection segmentation in laparoscopic images. Int. J. Biomed. Imaging 2013, 2013, 18.
- Alsaleh, S.M.; Aviles, A.I.; Sobrevilla, P.; Casals, A.; Hahn, J.K. Automatic and Robust Single-Camera Specular Highlight Removal in Cardiac Images. In Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 675–678.
- Prokopetc, K.; Bartoli, A. SLIM (slit lamp image mosaicing): Handling reflection artifacts. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 911–920.
- Alsaleh, S.M.; Aviles-Rivero, A.I.; Hahn, J.K. ReTouchImg: Fusioning from-local-to-global context detection and graph data structures for fully-automatic specular reflection removal for endoscopic images. Comput. Med. Imaging Graph. 2019, 73, 39–48.
- Shen, D.F.; Guo, J.J.; Lin, G.S.; Lin, J.Y. Content-aware specular reflection suppression based on adaptive image inpainting and neural network for endoscopic images. Comput. Methods Programs Biomed. 2020, 192, 105414.
- Li, R.; Pan, J.; Si, Y.; Yan, B.; Hu, Y.; Qin, H. Specular Reflections Removal for Endoscopic Image Sequences with Adaptive-RPCA Decomposition. IEEE Trans. Med. Imaging 2020, 39, 328–340.
- Yang, Q.; Tang, J.; Ahuja, N. Efficient and Robust Specular Highlight Removal. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1304–1311.
- Ren, W.; Tian, J.; Tang, Y. Specular Reflection Separation With Color-Lines Constraint. IEEE Trans. Image Process. 2017, 26, 2327–2337.
- Shen, H.-L.; Zheng, Z.-H. Real-time highlight removal using intensity ratio. Appl. Opt. 2013, 52, 4483–4493.
- Xia, W.; Chen, E.C.S.; Pautler, S.E.; Peters, T.M. A Global Optimization Method for Specular Highlight Removal from A Single Image. IEEE Access 2019, 7, 125976–125990.
- Bouwmans, T.; Javed, S.; Zhang, H.; Lin, Z.; Otazo, R. On the Applications of Robust PCA in Image and Video Processing. Proc. IEEE 2018, 106, 1427–1457.
- Vaswani, N.; Bouwmans, T.; Javed, S.; Narayanamurthy, P. Robust Subspace Learning: Robust PCA, robust subspace tracking, and robust subspace recovery. IEEE Signal Process. Mag. 2018, 35, 32–55.
- Criminisi, A.; Perez, P.; Toyama, K. Region Filling and Object Removal by Exemplar-Based Image Inpainting. IEEE Trans. Image Process. 2004, 13, 1200–1212.
- Yin, L.; Chang, C. An Effective Exemplar-based Image Inpainting Method. In Proceedings of the 14th IEEE International Conference on Communication Technology (ICCT), Chengdu, China, 9–11 November 2012; pp. 739–743.
- Wang, J.; Lu, K.; Pan, D.; He, N.; Bao, B.-K. Robust object removal with an exemplar-based image inpainting approach. Neurocomputing 2014, 123, 150–155.
- Ouattara, N.; Loum, G.L.; Pandry, G.K.; Atiampo, A.K. A new image inpainting approach based on Criminisi algorithm. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 423–433.
- Hui-Qin, W.; Qing, C.; Cheng-Hsiung, H.; Peng, Y. Fast exemplar-based image inpainting approach. In Proceedings of the 2012 International Conference on Machine Learning and Cybernetics, Xian, China, 15–17 July 2012; pp. 1743–1747.
- Gao, Y.; Yang, J.; Ma, S.; Ai, D.; Lin, T.; Tang, S.; Wang, Y. Dynamic Searching and Classification for Highlight Removal on Endoscopic Image. Procedia Comput. Sci. 2017, 107, 762–767.
- Sánchez, F.J.; Bernal, J.; Sánchez-Montes, C.; de Miguel, C.R.; Fernández-Esparrach, G. Bright spot regions segmentation and classification for specular highlights detection in colonoscopy videos. Mach. Vis. Appl. 2017, 28, 917–936.
- Ali, S.; Zhou, F.; Bailey, A.; Braden, B.; East, J.E.; Lu, X.; Rittscher, J. A deep learning framework for quality assessment and restoration in video endoscopy. Med. Image Anal. 2021, 68, 101900.
- Asif, M.; Song, H.; Chen, L.; Yang, J.; Frangi, A.F. Intrinsic layer based automatic specular reflection detection in endoscopic images. Comput. Biol. Med. 2021, 128, 104106.
- Huang, S.C.; Cheng, F.C.; Chiu, Y.S. Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Trans. Image Process. 2012, 22, 1032–1041.
- Sahnoun, M.; Kallel, F.; Dammak, M.; Kammoun, O.; Mhiri, C.; Ben Mahfoudh, K.; Ben Hamida, A. A Modified DWT-SVD Algorithm for T1-w Brain MR Images Contrast Enhancement. IRBM 2019, 40, 235–243.
- Palanisamy, G.; Shankar, N.B.; Ponnusamy, P.; Gopi, V.P. A hybrid feature preservation technique based on luminosity and edge based contrast enhancement in color fundus images. Biocybern. Biomed. Eng. 2020, 40, 752–763.
- Pogorelov, K.; Randel, K.R.; Griwodz, C.; Eskeland, S.L.; de Lange, T.; Johansen, D.; Spampinato, C.; Dang-Nguyen, D.T.; Lux, M.; Schmidt, P.T.; et al. KVASIR: A Multi-Class Image Dataset for Computer Aided Gastrointestinal Disease Detection. In Proceedings of the 8th ACM on Multimedia Systems Conference, Taipei, Taiwan, 20–23 June 2017.
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a 'Completely Blind' Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212.
Methods | Accuracy | Precision | Recall | F1-Score | Dice | Jaccard |
---|---|---|---|---|---|---|
Arnold et al. [12] | 0.9672 | 0.4968 | 0.7710 | 0.6042 | 0.5135 | 0.6619 |
Meslouhi et al. [13] | 0.9344 | 0.3725 | 0.7134 | 0.4900 | 0.2451 | 0.5394 |
Alsaleh et al. [18] | 0.9602 | 0.5167 | 0.5780 | 0.5456 | 0.2470 | 0.5509 |
Shen et al. [19] | 0.8618 | 0.6002 | 0.7143 | 0.6522 | 0.3035 | 0.5257 |
Asif et al. [35] | 0.9437 | 0.6280 | 0.7079 | 0.6655 | 0.3973 | 0.6011 |
Proposed | 0.9849 | 0.7575 | 0.8869 | 0.8171 | 0.7673 | 0.8286 |
Methods | Accuracy | Precision | Recall | F1-Score | Dice | Jaccard |
---|---|---|---|---|---|---|
Arnold et al. [12] | 0.9837 | 0.8938 | 0.6739 | 0.7684 | 0.4754 | 0.6589 |
Meslouhi et al. [13] | 0.9920 | 0.3744 | 0.8961 | 0.5281 | 0.4727 | 0.6616 |
Alsaleh et al. [18] | 0.9699 | 0.6016 | 0.8020 | 0.6875 | 0.4801 | 0.6016 |
Shen et al. [19] | 0.9767 | 0.9064 | 0.6683 | 0.7694 | 0.4690 | 0.6518 |
Asif et al. [35] | 0.9584 | 0.6972 | 0.7151 | 0.6489 | 0.4600 | 0.6333 |
Proposed | 0.9932 | 0.9083 | 0.7286 | 0.8085 | 0.6084 | 0.7153 |
Methods | NIQE | COV |
---|---|---|
Original | 6.4315 | 0.3235 |
Arnold et al. [12] | 6.3130 | 0.3157 |
Shen et al. [23] | 6.7897 | 0.3670 |
Li et al. [20] | 6.2093 | 0.3173 |
Asif et al. [35] | 6.1103 | 0.3059 |
Wang et al. [6] | 5.9965 | 0.3054 |
Yin et al. [28] | 5.8565 | 0.3052 |
Proposed | 5.7836 | 0.2950 |
Images | Size | Total Pixels | Highlight (%) | Highlight Pixels | Li et al. [20] | Wang et al. [6] | Yin et al. [28] | Proposed |
---|---|---|---|---|---|---|---|---|
1 | 288 × 384 | 110,592 | 0.62 | 689 | 8.31 | 30.92 | 29.81 | 0.43 |
2 | 288 × 384 | 110,592 | 1.15 | 1276 | 9.97 | 46.01 | 44.34 | 0.68 |
3 | 288 × 384 | 110,592 | 2.00 | 2184 | 8.50 | 49.56 | 48.01 | 0.77 |
4 | 485 × 506 | 245,410 | 1.51 | 3697 | 17.61 | 164 | 159 | 1.51 |
5 | 555 × 528 | 293,040 | 1.28 | 3753 | 21.11 | 209 | 201 | 3.01 |
6 | 419 × 798 | 334,362 | 1.38 | 4630 | 21.49 | 635 | 613 | 3.75 |
7 | 602 × 753 | 453,306 | 0.72 | 3250 | 35.18 | 300 | 286 | 1.74 |
8 | 652 × 824 | 537,248 | 1.43 | 7662 | 37.31 | 1592 | 1570 | 4.86 |
9 | 950 × 930 | 883,500 | 2.61 | 22,940 | 88.46 | 2676 | 2652 | 45.80 |
10 | 1280 × 720 | 921,600 | 0.74 | 6837 | 64.59 | 1385 | 1349 | 3.17 |
11 | 1057 × 1080 | 1,141,560 | 1.01 | 11,505 | 107.31 | 4737 | 4702 | 14.38 |
12 | 1026 × 1155 | 1,185,030 | 0.31 | 3630 | 104.74 | 1146 | 1095 | 1.74 |
13 | 1011 × 1210 | 1,223,310 | 0.67 | 8220 | 128.65 | 2471 | 2455 | 5.49 |
14 | 1024 × 1280 | 1,310,720 | 0.55 | 7160 | 159.85 | 2807 | 2576 | 5.99 |
15 | 1070 × 1348 | 1,442,360 | 0.58 | 8340 | 177.94 | 2301 | 2107 | 10.53 |
16 | 1001 × 1516 | 1,517,516 | 1.11 | 16,625 | 191.95 | 4711 | 4673 | 11.55 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Nie, C.; Xu, C.; Li, Z.; Chu, L.; Hu, Y. Specular Reflections Detection and Removal for Endoscopic Images Based on Brightness Classification. Sensors 2023, 23, 974. https://doi.org/10.3390/s23020974