Optimizing Image Enhancement: Feature Engineering for Improved Classification in AI-Assisted Artificial Retinas
Abstract
1. Introduction
1.1. Related Work
1.2. Methodology and Major Contributions
- Image Enhancement: Built on primary-object classification, this contribution plays a pivotal role in improving visual clarity for images containing single, dual, or multiple objects, as shown in Figures 4 and 5. It is achieved with the support of the proposed classification model described in Section 2.2 and filtering techniques [37].
- Classification Model: The proposed model uses a deep learning technique to classify multiple objects as primary or non-primary. It has two branches, which we term the image and object-specification branches; each extracts features, and the two feature sets are concatenated before the final linear layer is applied. This design was selected to improve the model's performance on the proposed set of selective features.
- Feature Selection and Dataset Preparation: We used the Common Objects in Context (COCO) dataset as the base for our work. SSD and depth estimation models were applied to the dataset so that the resulting annotations could be processed and added to it. Additional features were then calculated, such as the area, size, and depth of objects; these are key factors in improving the classification model's performance (a minimal sketch of this computation follows this list).
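The paper's preprocessing code is not reproduced here, so the following minimal sketch in Python/NumPy shows one plausible way to derive the engineered per-object features just listed from an SSD bounding box and a monocular depth map. The function name `object_features` and the mean-depth-over-box rule are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def object_features(x1, y1, x2, y2, depth_map):
    """Derive engineered features for one detected object.

    (x1, y1)/(x2, y2): top-left/bottom-right corners of the SSD box;
    depth_map: per-pixel monocular depth estimate (2D NumPy array).
    """
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0        # central coordinate
    area = max(0, x2 - x1) * max(0, y2 - y1)         # size proxy for the object
    box = depth_map[int(y1):int(y2), int(x1):int(x2)]
    depth = float(box.mean()) if box.size else 0.0   # object depth over the box
    return cx, cy, area, depth
```

These scalars, together with the class label and the detector's confidence score, would form the object-specification vector consumed by the second branch of the classification model (Section 2.2).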
2. Proposed System
2.1. Nomenclature
2.2. Network Architecture
3. Availability of Benchmark Dataset and Equipment
4. Experimental Results
4.1. Experimental Setup
4.2. Training
4.3. Evaluation Results
4.3.1. Optimized Image Enhancements
- Edges and Phosphene Comparisons: Figure 4 shows the results of several techniques applied to the original images. The first three columns contain the original image, its edge map, and the simulated phosphene; the next three contain the image reduced to its primary classified objects, its edge map, and the corresponding simulated phosphene. The objects in the object-focused columns are clearly identifiable compared with the first three, because unnecessary information is removed from the image with the support of the proposed multi-object classification model. The eight images in Figure 4 contain either a single object or two objects, and their simulated phosphene results can be considered satisfactory in terms of visual clarity compared with phosphenes rendered over the original images (a toy rendering sketch follows this list). Figure 5 shows a little ambiguity in identifying the overlapping objects, the cat and the television, in images 9 and 10. If images 11–13 are considered, however, the objects are easily identifiable even with multiple objects included, demonstrating the optimized image enhancement of the proposed approach.
Figure 4. Different techniques for original vs. object-focused images, illustrating singular and dual object enhancements.
Figure 5. Different techniques for original vs. object-focused images, illustrating multi-object enhancements.
- Intensity Profile Analysis: The intensity profile comparison between the original and proposed images is shown in Figure 6, where the blue and orange lines represent the original and the proposed enhanced profiles, respectively. This graph covers images containing one or two objects. The orange line exhibits increased contrast and sharper peaks, indicating that the images are enhanced significantly while visual clarity is preserved (a profiling sketch follows this list). A second intensity profile comparison, for images containing three or more objects, is shown in Figure 7. The proposed enhanced profiles again show increased contrast and sharper peaks, indicating that the proposed system copes with the challenges of multi-object scenes while maintaining visual clarity.
Figure 6. Intensity profile analysis comparison between the original and enhanced images featuring singular and dual objects.
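Since the rendering and profiling code is not reproduced in the paper, the two sketches below are illustrations only. The first is a toy edge-plus-phosphene renderer in the spirit of Figures 4 and 5; the Canny thresholds, the 32 × 32 electrode grid, and the Gaussian-dot phosphene model are our assumptions, not the simulator actually used.

```python
import cv2
import numpy as np

def simulate_phosphenes(image_gray, grid=32):
    """Toy phosphene rendering: Canny edges of a uint8 grayscale image
    are pooled onto a coarse electrode grid and each active cell is
    drawn as a soft dot."""
    edges = cv2.Canny(image_gray, 100, 200)          # thresholds assumed
    h, w = edges.shape
    # Pool edge density onto a grid x grid electrode layout.
    cell = cv2.resize(edges, (grid, grid), interpolation=cv2.INTER_AREA)
    out = np.zeros((h, w), np.float32)
    for gy, gx in zip(*np.nonzero(cell)):
        cy, cx = int((gy + 0.5) * h / grid), int((gx + 0.5) * w / grid)
        cv2.circle(out, (cx, cy), max(2, h // (3 * grid)),
                   float(cell[gy, gx]) / 255.0, -1)
    # Blur the dots so they resemble round phosphenes.
    return edges, cv2.GaussianBlur(out, (0, 0), sigmaX=h / (4 * grid))
```

The second sketch plots an intensity profile in the style of Figures 6 and 7; profiling the middle row of the image is our assumption, since the scanline used is not stated.

```python
import matplotlib.pyplot as plt

def plot_intensity_profile(original, enhanced, row=None):
    """Compare gray-level profiles of two images along one row."""
    row = original.shape[0] // 2 if row is None else row
    plt.plot(original[row, :], color="tab:blue", label="original")
    plt.plot(enhanced[row, :], color="tab:orange", label="proposed")
    plt.xlabel("pixel position")
    plt.ylabel("intensity")
    plt.legend()
    plt.show()
```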
4.3.2. Training and Validation Loss
4.3.3. Validation Accuracy
4.3.4. Average Loss—Test
4.3.5. Accuracy—Test
4.3.6. Confusion Matrix
4.3.7. Accuracy, Precision, Recall, Specificity
4.3.8. Precision Recall Area under Curve (PR-AUC)
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Peiroten, L.; Zrenner, E.; Haq, W. Artificial Vision: The High-Frequency Electrical Stimulation of the Blind Mouse Retina Decay Spike Generation and Electrogenically Clamped Intracellular Ca2+ at Elevated Levels. Bioengineering 2023, 10, 1208.
2. Eswaran, V.; Eswaran, U.; Eswaran, V.; Murali, K. Revolutionizing Healthcare: The Application of Image Processing Techniques. In Medical Robotics and AI-Assisted Diagnostics for a High-Tech Healthcare Industry; IGI Global: Hershey, PA, USA, 2024; pp. 309–324.
3. Mehmood, A.; Mehmood, F.; Song, W.C. Cloud based E-Prescription management system for healthcare services using IoT devices. In Proceedings of the 2019 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Republic of Korea, 16–18 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1380–1386.
4. Xu, J.; Park, S.H.; Zhang, X.; Hu, J. The improvement of road driving safety guided by visual inattentional blindness. IEEE Trans. Intell. Transp. Syst. 2021, 23, 4972–4981.
5. Wu, K.Y.; Mina, M.; Sahyoun, J.Y.; Kalevar, A.; Tran, S.D. Retinal Prostheses: Engineering and Clinical Perspectives for Vision Restoration. Sensors 2023, 23, 5782.
6. Bazargani, Y.S.; Mirzaei, M.; Sobhi, N.; Abdollahi, M.; Jafarizadeh, A.; Pedrammehr, S.; Alizadehsani, R.; Tan, R.S.; Islam, S.M.S.; Acharya, U.R. Artificial Intelligence and Diabetes Mellitus: An Inside Look Through the Retina. arXiv 2024, arXiv:2402.18600.
7. Li, J.; Han, X.; Qin, Y.; Tan, F.; Chen, Y.; Wang, Z.; Song, H.; Zhou, X.; Zhang, Y.; Hu, L.; et al. Artificial intelligence accelerates multi-modal biomedical process: A Survey. Neurocomputing 2023, 558, 126720.
8. Bernard, T.M.; Zavidovique, B.Y.; Devos, F.J. A programmable artificial retina. IEEE J. Solid-State Circuits 1993, 28, 789–798.
9. Mehmood, A.; Lee, K.T.; Kim, D.H. Energy Prediction and Optimization for Smart Homes with Weather Metric-Weight Coefficients. Sensors 2023, 23, 3640.
10. McDonald, S.M.; Augustine, E.K.; Lanners, Q.; Rudin, C.; Catherine Brinson, L.; Becker, M.L. Applied machine learning as a driver for polymeric biomaterials design. Nat. Commun. 2023, 14, 4838.
11. Pattanayak, S. Introduction to Deep-Learning Concepts and TensorFlow. In Pro Deep Learning with TensorFlow 2.0: A Mathematical Approach to Advanced Artificial Intelligence in Python; Apress: Berkeley, CA, USA, 2023; pp. 109–197.
12. Daich Varela, M.; Sen, S.; De Guimaraes, T.A.; Kabiri, N.; Pontikos, N.; Balaskas, K.; Michaelides, M. Artificial intelligence in retinal disease: Clinical application, challenges, and future directions. Graefe's Arch. Clin. Exp. Ophthalmol. 2023, 261, 3283–3297.
13. Chien, Y.; Hsiao, Y.J.; Chou, S.J.; Lin, T.Y.; Yarmishyn, A.A.; Lai, W.Y.; Lee, M.S.; Lin, Y.Y.; Lin, T.W.; Hwang, D.K.; et al. Nanoparticles-mediated CRISPR-Cas9 gene therapy in inherited retinal diseases: Applications, challenges, and emerging opportunities. J. Nanobiotechnol. 2022, 20, 511.
14. Kasture, K.; Shende, P. Amalgamation of Artificial Intelligence with Nanoscience for Biomedical Applications. Arch. Comput. Methods Eng. 2023, 30, 4667–4685.
15. Wan, C.; Zhou, X.; You, Q.; Sun, J.; Shen, J.; Zhu, S.; Jiang, Q.; Yang, W. Retinal image enhancement using cycle-constraint adversarial network. Front. Med. 2022, 8, 793726.
16. Athar, A.; Luiten, J.; Voigtlaender, P.; Khurana, T.; Dave, A.; Leibe, B.; Ramanan, D. Burst: A benchmark for unifying object recognition, segmentation and tracking in video. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 1674–1683.
17. Yu, Q.; Fan, K.; Zheng, Y. Domain Adaptive Transformer Tracking Under Occlusions. IEEE Trans. Multimed. 2023, 25, 1452–1461.
18. Muntarina, K.; Shorif, S.B.; Uddin, M.S. Notes on edge detection approaches. Evol. Syst. 2022, 13, 169–182.
19. Xiao, K.; Engstrom, L.; Ilyas, A.; Madry, A. Noise or signal: The role of image backgrounds in object recognition. arXiv 2020, arXiv:2006.09994.
20. Sheng, H.; Wang, S.; Yang, D.; Cong, R.; Cui, Z.; Chen, R. Cross-view recurrence-based self-supervised super-resolution of light field. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 7252–7266.
21. Yang, D.; Cui, Z.; Sheng, H.; Chen, R.; Cong, R.; Wang, S.; Xiong, Z. An Occlusion and Noise-aware Stereo Framework Based on Light Field Imaging for Robust Disparity Estimation. IEEE Trans. Comput. 2023, 73, 764–777.
22. Fu, C.; Yuan, H.; Xu, H.; Zhang, H.; Shen, L. TMSO-Net: Texture adaptive multi-scale observation for light field image depth estimation. J. Vis. Commun. Image Represent. 2023, 90, 103731.
23. Jiang, W.; Ren, T.; Fu, Q. Deep learning in the phase extraction of electronic speckle pattern interferometry. Electronics 2024, 13, 418.
24. Sarkar, P.; Dewangan, O.; Joshi, A. A Review on Applications of Artificial Intelligence on Bionic Eye Designing and Functioning. Scand. J. Inf. Syst. 2023, 35, 1119–1127.
25. Zheng, W.; Lu, S.; Yang, Y.; Yin, Z.; Yin, L. Lightweight transformer image feature extraction network. PeerJ Comput. Sci. 2024, 10, e1755.
26. Phan, H.L.; Yi, J.; Bae, J.; Ko, H.; Lee, S.; Cho, D.; Seo, J.M.; Koo, K.I. Artificial compound eye systems and their application: A review. Micromachines 2021, 12, 847.
27. Kaur, P.; Panwar, G.; Uppal, N.; Singh, P.; Shivahare, B.D.; Diwakar, M. A Review on Multi-Focus Image Fusion Techniques in Surveillance Applications for Image Quality Enhancement. In Proceedings of the 2022 5th International Conference on Contemporary Computing and Informatics (IC3I), Uttar Pradesh, India, 14–16 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 7–11.
28. Zhang, H.; Luo, G.; Li, J.; Wang, F.Y. C2FDA: Coarse-to-fine domain adaptation for traffic object detection. IEEE Trans. Intell. Transp. Syst. 2021, 23, 12633–12647.
29. Cui, Z.; Sheng, H.; Yang, D.; Wang, S.; Chen, R.; Ke, W. Light field depth estimation for non-lambertian objects via adaptive cross operator. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 1199–1211.
30. Zhang, Y.; Luo, L.; Dou, Q.; Heng, P.A. Triplet attention and dual-pool contrastive learning for clinic-driven multi-label medical image classification. Med. Image Anal. 2023, 86, 102772.
31. Han, J.K.; Yun, S.Y.; Lee, S.W.; Yu, J.M.; Choi, Y.K. A review of artificial spiking neuron devices for neural processing and sensing. Adv. Funct. Mater. 2022, 32, 2204102.
32. Khattak, K.N.; Qayyum, F.; Naqvi, S.S.; Mehmood, A.; Kim, J. A Systematic Framework for Addressing Critical Challenges in Adopting DevOps Culture in Software Development: A PLS-SEM Perspective. IEEE Access 2023, 11, 120137–120156.
33. Liu, H.; Yuan, H.; Hou, J.; Hamzaoui, R.; Gao, W. Pufa-gan: A frequency-aware generative adversarial network for 3d point cloud upsampling. IEEE Trans. Image Process. 2022, 31, 7389–7402.
34. Liu, H.; Yuan, H.; Liu, Q.; Hou, J.; Zeng, H.; Kwong, S. A hybrid compression framework for color attributes of static 3D point clouds. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 1564–1577.
35. van Steveninck, J.D.; van Gestel, T.; Koenders, P.; van der Ham, G.; Vereecken, F.; Güçlü, U.; van Gerven, M.; Güçlütürk, Y.; van Wezel, R. Real-world indoor mobility with simulated prosthetic vision: The benefits and feasibility of contour-based scene simplification at different phosphene resolutions. J. Vis. 2022, 22, 1.
36. He, B.; Lu, Q.; Lang, J.; Yu, H.; Peng, C.; Bing, P.; Li, S.; Zhou, Q.; Liang, Y.; Tian, G. A new method for CTC images recognition based on machine learning. Front. Bioeng. Biotechnol. 2020, 8, 897.
37. Li, R.; Li, K.; Kuo, Y.C.; Shu, M.; Qi, X.; Shen, X.; Jia, J. Referring image segmentation via recurrent refinement networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5745–5753.
38. Iqbal, A.; Aftab, S.; Ullah, I.; Saeed, M.A.; Husen, A. A classification framework to detect DoS attacks. Int. J. Comput. Netw. Inf. Secur. 2019, 11, 40–47.
39. Shifman, D.A.; Cohen, I.; Huang, K.; Xian, X.; Singer, G. An adaptive machine learning algorithm for the resource-constrained classification problem. Eng. Appl. Artif. Intell. 2023, 119, 105741.
40. Dong, Y.; Liu, Q.; Du, B.; Zhang, L. Weighted feature fusion of convolutional neural network and graph attention network for hyperspectral image classification. IEEE Trans. Image Process. 2022, 31, 1559–1572.
41. Lu, S.; Yang, J.; Yang, B.; Li, X.; Yin, Z.; Yin, L.; Zheng, W. Surgical instrument posture estimation and tracking based on LSTM. ICT Express 2024, in press.
42. Lee, W.; Kang, M.; Kim, S. Highly VM-Scalable SSD in Cloud Storage Systems. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2023, 43, 113–126.
43. Rahmani, A.M.; Azhir, E.; Ali, S.; Mohammadi, M.; Ahmed, O.H.; Ghafour, M.Y.; Ahmed, S.H.; Hosseinzadeh, M. Artificial intelligence approaches and mechanisms for big data analytics: A systematic study. PeerJ Comput. Sci. 2021, 7, e488.
44. Ma, P.; Li, C.; Rahaman, M.M.; Yao, Y.; Zhang, J.; Zou, S.; Zhao, X.; Grzegorzek, M. A state-of-the-art survey of object detection techniques in microorganism image analysis: From classical methods to deep learning approaches. Artif. Intell. Rev. 2023, 56, 1627–1698.
45. Touretzky, D.; Gardner-McCune, C.; Seehorn, D. Machine learning and the five big ideas in AI. Int. J. Artif. Intell. Educ. 2023, 33, 233–266.
46. Mehmood, F.; Ahmad, S.; Whangbo, T.K. Object detection based on deep learning techniques in resource-constrained environment for healthcare industry. In Proceedings of the 2022 International Conference on Electronics, Information, and Communication (ICEIC), Jeju, Republic of Korea, 6–9 February 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–5.
47. COCO. Coco Dataset Images. Available online: http://images.cocodataset.org/zips/train2017.zip (accessed on 26 February 2024).
48. Ehret, T. Monocular Depth Estimation: A Review of the 2022 State of the Art. Image Process. Line 2023, 13, 38–56.
49. Gupta, M.; Bhatt, S.; Alshehri, A.H.; Sandhu, R. Secure Virtual Objects Communication. In Access Control Models and Architectures For IoT and Cyber Physical Systems; Springer International Publishing: Cham, Switzerland, 2022; pp. 97–124.
50. Mazhar, T.; Malik, M.A.; Mohsan, S.A.; Li, Y.; Haq, I.; Ghorashi, S.; Karim, F.K.; Mostafa, S.M. Quality of Service (QoS) Performance Analysis in a Traffic Engineering Model for Next-Generation Wireless Sensor Networks. Symmetry 2023, 15, 513.
51. Jia, R. Introduction to Neural Networks; Computer Science Department (CSCI467), University of Southern California (USC): Los Angeles, CA, USA, 2023; Available online: https://usc-csci467.github.io/assets/lectures/10_neuralnets.pdf (accessed on 26 February 2024).
52. Mehmood, F.; Ahmad, S.; Whangbo, T.K. An Efficient Optimization Technique for Training Deep Neural Networks. Mathematics 2023, 11, 1360.
53. Qian, W.; Huang, J.; Xu, F.; Shu, W.; Ding, W. A survey on multi-label feature selection from perspectives of label fusion. Inf. Fusion 2023, 100, 101948.
54. Bharati, P.; Pramanik, A. Deep learning techniques—R-CNN to mask R-CNN: A survey. Comput. Intell. Pattern Recognit. Proc. CIPR 2020, 2019, 657–668.
55. Reddy, K.R.; Dhuli, R. A novel lightweight CNN architecture for the diagnosis of brain tumors using MR images. Diagnostics 2023, 13, 312.
56. Radhakrishnan, A.; Belkin, M.; Uhler, C. Wide and deep neural networks achieve consistency for classification. Proc. Natl. Acad. Sci. USA 2023, 120, e2208779120.
57. Cao, X.; Chen, H.; Gelbal, S.Y.; Aksun-Guvenc, B.; Guvenc, L. Vehicle-in-Virtual-Environment (VVE) method for autonomous driving system development, evaluation and demonstration. Sensors 2023, 23, 5088.
58. Shitharth, S.; Manoharan, H.; Alsowail, R.A.; Shankar, A.; Pandiaraj, S.; Maple, C.; Jeon, G. Development of Edge Computing and Classification using The Internet of Things with Incremental Learning for Object Detection. Internet Things 2023, 23, 100852.
59. Amanatidis, P.; Karampatzakis, D.; Iosifidis, G.; Lagkas, T.; Nikitas, A. Cooperative Task Execution for Object Detection in Edge Computing: An Internet of Things Application. Appl. Sci. 2023, 13, 4982.
60. Ban, Y.; Liu, M.; Wu, P.; Yang, B.; Liu, S.; Yin, L.; Zheng, W. Depth estimation method for monocular camera defocus images in microscopic scenes. Electronics 2022, 11, 2012.
61. Meimetis, D.; Daramouskas, I.; Perikos, I.; Hatzilygeroudis, I. Real-time multiple object tracking using deep learning methods. Neural Comput. Appl. 2023, 35, 89–118.
62. Singh, P.; Diwakar, M.; Cheng, X.; Shankar, A. A new wavelet-based multi-focus image fusion technique using method noise and anisotropic diffusion for real-time surveillance application. J. Real-Time Image Process. 2021, 18, 1051–1068.
| Symbol | Detail |
|---|---|
| i | Number of images. |
| o | Number of objects in an image. |
| j | Instances of a class of object. |
| — | Number of maximum objects. |
| — | Top-left coordinate of object o. |
| — | Bottom-right coordinate of object o. |
| — | Central coordinate of object o. |
| — | Class of object o. |
| — | Area of object o. |
| — | Depth estimations for image i. |
| — | Depth of object o. |
| — | Classification score of an object o. |
| — | Boolean representing primary/non-primary object o. |
| Component | Layer | Metric | Value |
|---|---|---|---|
| i-branch | Convolutions | kernel_size | — |
| | | stride | — |
| | Pools | kernel_size | — |
| | | stride | — |
| | Conv2D(1) | in_channels | 1 |
| | | out_channels | 15 |
| | Conv2D(2) | in_channels | 15 |
| | | out_channels | 30 |
| | Conv2D(3) | in_channels | 30 |
| | | out_channels | 60 |
| | Conv2D(4) | in_channels | 60 |
| | | out_channels | 120 |
| | GlobalAvg2D | output_size | — |
| o-branch | Convolutions | kernel_size | 3 |
| | | stride | 1 |
| | Pools | kernel_size | 1 |
| | | stride | 1 |
| | Conv1D(1) | in_channels | 1 |
| | | out_channels | 15 |
| | Conv1D(2) | in_channels | 15 |
| | | out_channels | 30 |
| | Conv1D(3) | in_channels | 30 |
| | | out_channels | 60 |
| | GlobalAvg1D | output_size | 1 |
| others | Dropout | zero_probability | — |
| | Fully connected linear layer | in_features | 180 |
| | | out_features | 5 |
| | Final layer | activation_func | sigmoid |
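Combined with Section 2.2, the table fixes most of the network's shape. The PyTorch sketch below wires those values together; the i-branch kernel sizes and strides, the ReLU activations, and the dropout probability are placeholders for values not recoverable from the table, and the o-branch pools (kernel 1, stride 1) are omitted as identity operations. Note that the concatenated 120 + 60 = 180 features match the linear layer's in_features.

```python
import torch
import torch.nn as nn

class TwoBranchClassifier(nn.Module):
    """Sketch of the image/object-specification branches from the table.

    Values missing from the table (i-branch kernel sizes, strides,
    dropout probability) are placeholders, not the authors' settings.
    """
    def __init__(self):
        super().__init__()
        # i-branch: 2D convolutions over the 256 x 256 grayscale/depth image.
        self.i_branch = nn.Sequential(
            nn.Conv2d(1, 15, kernel_size=3), nn.ReLU(),
            nn.Conv2d(15, 30, kernel_size=3), nn.ReLU(),
            nn.Conv2d(30, 60, kernel_size=3), nn.ReLU(),
            nn.Conv2d(60, 120, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())      # -> 120 features
        # o-branch: 1D convolutions over the per-object feature vector.
        self.o_branch = nn.Sequential(
            nn.Conv1d(1, 15, kernel_size=3, stride=1), nn.ReLU(),
            nn.Conv1d(15, 30, kernel_size=3, stride=1), nn.ReLU(),
            nn.Conv1d(30, 60, kernel_size=3, stride=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())      # -> 60 features
        self.head = nn.Sequential(
            nn.Dropout(p=0.5),                  # probability assumed
            nn.Linear(180, 5), nn.Sigmoid())    # 120 + 60 = 180 in_features

    def forward(self, image, obj_vec):
        # Concatenate both feature sets before the final linear layer.
        feats = torch.cat([self.i_branch(image), self.o_branch(obj_vec)], dim=1)
        return self.head(feats)
```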
| Column | Detail |
|---|---|
| — | Depth estimations for image i: a 256 × 256 grayscale image. |
| — | Top-left coordinate of object o from the bounding box. |
| — | Bottom-right coordinate of object o from the bounding box. |
| — | Central coordinate of object o, derived from the top-left and bottom-right coordinates. |
| — | Class representing the type of object o. |
| — | Area representing the size of object o. |
| — | Depth/distance of object o. |
| — | Classification score representing the confidence score for object o. |
| — | Boolean representing whether object o is primary or non-primary. |
| Metric | Value |
|---|---|
| — | 1000 |
| criterion | BCE Loss |
| optimizer | AdamW |
| — | 128 |
| threshold (train) | — |
| threshold (test) | — |
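A hypothetical training loop wiring up the recoverable settings: BCE loss and the AdamW optimizer, plus, on a plausible reading of the table, 1000 epochs and a batch size of 128. The learning rate and the dummy tensors are placeholders; TwoBranchClassifier refers to the sketch following the architecture table.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Dummy tensors stand in for the prepared COCO-derived dataset.
images = torch.randn(256, 1, 256, 256)          # grayscale/depth images
obj_vecs = torch.randn(256, 1, 9)               # per-object feature vectors
labels = torch.randint(0, 2, (256, 5)).float()  # primary/non-primary targets
loader = DataLoader(TensorDataset(images, obj_vecs, labels), batch_size=128)

model = TwoBranchClassifier()                   # sketch from Section 2.2
criterion = nn.BCELoss()                        # criterion from the table
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)  # lr assumed

for epoch in range(1000):                       # epoch count assumed
    for x_img, x_obj, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x_img, x_obj), y)  # sigmoid outputs vs. targets
        loss.backward()
        optimizer.step()
```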
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).