Comparative Evaluation of Color Correction as Image Preprocessing for Olive Identification under Natural Light Using Cell Phones
Abstract
1. Introduction
2. Materials and Methods
2.1. Experimental Set-Up
2.2. Image Color Correction
2.2.1. Color Correction Based on ColorChecker
2.2.2. Adaptive Histogram Equalization (AHE)
2.2.3. Histogram Equalization
2.3. Data Augmentation
2.4. Model
2.5. YOLOv7 Training
2.6. Metrics
2.6.1. Precision
2.6.2. Recall
2.6.3. mAP
2.6.4. Paired t-Test
2.6.5. ANOVA
2.6.6. Tukey
3. Results
- Significant differences were found between H1 and several other groups (H1.1, H1.2, H2.1, H2.2, H3.2, H4.1, H4.2), indicating that the use of any augmentation or preprocessing method generally improves performance compared to no augmentation/preprocessing.
- H1.1 (no preprocessing with image rotation augmentation) significantly differs from H3.1, H3.2, and H4.2, indicating differences in performance when comparing image rotation augmentation with various preprocessing methods.
- These comparisons show significant differences, suggesting that the type of preprocessing used with bbox rotation augmentation can impact performance.
- A significant difference exists, indicating that the choice between image rotation and bbox rotation in color correction preprocessing can impact the results.
- This comparison reveals a significant difference, suggesting that histogram equalization preprocessing combined with image rotation performs better than adaptive histogram equalization (AHE) preprocessing with the same augmentation.
- A significant difference is noted here, highlighting the impact of the type of augmentation (image rotation vs. bbox rotation) in histogram equalization preprocessing.
- H1.2 (No preprocessing with bbox rotation): while it showed a significant improvement over H1, it did not differ significantly from H4.2, and it significantly outperformed H3.1 and H3.2.
- H2.1 (Color correction with image rotation): this group did not differ significantly from H1.1, H1.2, or H4.1, and it was significantly better than H2.2, H3.1, H3.2, and H4.2.
- H4.1 (Histogram equalization with image rotation): this group stands out, as no other group significantly outperformed it in our tests, and it showed significant improvements over several others, including H3.1, H3.2, and H4.2 (an illustrative sketch of the testing procedure follows this list).
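The bullets above summarize the ANOVA and Tukey HSD comparisons of Sections 2.6.5 and 2.6.6. As an illustration of the procedure only (not a reproduction of the authors' code), the sketch below runs a paired t-test, a one-way ANOVA, and Tukey's HSD with SciPy and statsmodels on the per-cross-validation mAP values reported later in this article; whether the published tests were computed on exactly this grouping is an assumption.

```python
# Illustrative sketch of the significance tests in Sections 2.6.4-2.6.6.
# The mAP values are copied from the results table (CV1, CV2) for a subset
# of treatments; the grouping used by the authors is assumed.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

map_scores = {            # mAP per cross-validation run (CV1, CV2)
    "H1":   [0.641, 0.686],
    "H1.1": [0.840, 0.848],
    "H1.2": [0.821, 0.814],
    "H3.1": [0.713, 0.744],
    "H4.1": [0.832, 0.839],
}

# Paired t-test (Section 2.6.4): two treatments evaluated on the same CV runs.
t_stat, p_paired = stats.ttest_rel(map_scores["H1"], map_scores["H1.1"])
print(f"paired t-test H1 vs H1.1: t = {t_stat:.3f}, p = {p_paired:.4f}")

# One-way ANOVA (Section 2.6.5): is any treatment mean different?
f_stat, p_anova = stats.f_oneway(*map_scores.values())
print(f"ANOVA: F = {f_stat:.3f}, p = {p_anova:.4f}")

# Tukey HSD (Section 2.6.6): which pairs differ? The summary columns
# (group1, group2, meandiff, p-adj, lower, upper, reject) match the
# Tukey table reported below.
values = np.concatenate(list(map_scores.values()))
groups = np.repeat(list(map_scores.keys()), [len(v) for v in map_scores.values()])
print(pairwise_tukeyhsd(endog=values, groups=groups, alpha=0.05).summary())
```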
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
AHE | Adaptive Histogram Equalization |
CV | Cross-Validation |
CNN | Convolutional Neural Network |
HE | Histogram Equalization |
YOLO | You Only Look Once |
MSE | Mean Squared Error |
mAP | Mean Average Precision |
References
- The Brainy Insights. Olive Oil Market Size by Type (Extra Virgin, Virgin, Pure/Refined, and Others), By End-user (Foodservice/HoReCa, Household/Retail, Food Manufacturing, and Others), Regions, Global Industry Analysis, Share, Growth, Trends, and Forecast 2023 to 2032. 2023. Available online: https://www.thebrainyinsights.com/report/olive-oil-market-13494 (accessed on 12 November 2023).
- Rodrigues, N.; Casal, S.; Rodrigues, A.I.; Cruz, R.; Pereira, J.A. Impact of Frost on the Morphology and Chemical Composition of cv. Santulhana Olives. Appl. Sci. 2022, 12, 1222. [Google Scholar] [CrossRef]
- Khosravi, H.; Saedi, S.I.; Rezaei, M. Real-time recognition of on-branch olive ripening stages by a deep convolutional neural network. Sci. Hortic. 2021, 287, 110252. [Google Scholar] [CrossRef]
- Martínez, S.S.; Gila, D.M.M.; Beyaz, A.; Ortega, J.G.; García, J.G. A computer vision approach based on endocarp features for the identification of olive cultivars. Comput. Electron. Agric. 2018, 154, 341–346. [Google Scholar] [CrossRef]
- Roy, P.; Kislay, A.; Plonski, P.A.; Luby, J.; Isler, V. Vision-based preharvest yield mapping for apple orchards. Comput. Electron. Agric. 2019, 164, 104897. [Google Scholar] [CrossRef]
- Qureshi, W.; Payne, A.; Walsh, K.; Linker, R.; Cohen, O.; Dailey, M. Machine vision for counting fruit on mango tree canopies. Precis. Agric. 2017, 18, 224–244. [Google Scholar] [CrossRef]
- Bac, C.; Hemming, J.; Van Henten, E. Robust pixel-based classification of obstacles for robotic harvesting of sweet-pepper. Comput. Electron. Agric. 2013, 96, 148–162. [Google Scholar] [CrossRef]
- Underwood, J.P.; Hung, C.; Whelan, B.; Sukkarieh, S. Mapping almond orchard canopy volume, flowers, fruit and yield using lidar and vision sensors. Comput. Electron. Agric. 2016, 130, 83–96. [Google Scholar] [CrossRef]
- Zhao, Y.; Gong, L.; Zhou, B.; Huang, Y.; Liu, C. Detecting tomatoes in greenhouse scenes by combining AdaBoost classifier and colour analysis. Biosyst. Eng. 2016, 148, 127–137. [Google Scholar] [CrossRef]
- Nuske, S.; Wilshusen, K.; Achar, S.; Yoder, L.; Narasimhan, S.; Singh, S. Automated visual yield estimation in vineyards. J. Field Robot. 2014, 31, 837–860. [Google Scholar] [CrossRef]
- Aquino, A.; Millan, B.; Diago, M.P.; Tardaguila, J. Automated early yield prediction in vineyards from on-the-go image acquisition. Comput. Electron. Agric. 2018, 144, 26–36. [Google Scholar] [CrossRef]
- Ponce, J.M.; Aquino, A.; Millán, B.; Andújar, J.M. Olive-fruit mass and size estimation using image analysis and feature modeling. Sensors 2018, 18, 2930. [Google Scholar] [CrossRef]
- Ponce, J.M.; Aquino, A.; Millan, B.; Andújar, J.M. Automatic counting and individual size and mass estimation of olive-fruits through computer vision techniques. IEEE Access 2019, 7, 59451–59465. [Google Scholar] [CrossRef]
- Diaz, R.; Gil, L.; Serrano, C.; Blasco, M.; Moltó, E.; Blasco, J. Comparison of three algorithms in the classification of table olives by means of computer vision. J. Food Eng. 2004, 61, 101–107. [Google Scholar] [CrossRef]
- Hassan, H.; El-Rahman, A.A.; Attia, M. Color Properties of olive fruits during its maturity stages using image analysis. In Proceedings of the AIP Conference Proceedings, Omaha, NE, USA, 3–4 August 2011; American Institute of Physics: College Park, MD, USA, 2011; Volume 1380, pp. 101–106. [Google Scholar] [CrossRef]
- Puerto, D.A.; Martínez Gila, D.M.; Gámez García, J.; Gómez Ortega, J. Sorting olive batches for the milling process using image processing. Sensors 2015, 15, 15738–15754. [Google Scholar] [CrossRef]
- Ponce, J.F.; Aquino, A.; Andújar, J.M. Olive-fruit variety classification by means of image processing and convolutional neural networks. IEEE Access 2019, 7, 147629–147641. [Google Scholar] [CrossRef]
- Aquino, A.; Ponce, J.M.; Andújar, J.M. Identification of olive fruit, in intensive olive orchards, by means of its morphological structure using convolutional neural networks. Comput. Electron. Agric. 2020, 176, 105616. [Google Scholar] [CrossRef]
- Gatica, G.; Best, S.; Ceroni, J.; Lefranc, G. Olive fruits recognition using neural networks. Procedia Comput. Sci. 2013, 17, 412–419. [Google Scholar] [CrossRef]
- Figorilli, S.; Violino, S.; Moscovini, L.; Ortenzi, L.; Salvucci, G.; Vasta, S.; Tocci, F.; Costa, C.; Toscano, P.; Pallottino, F. Olive fruit selection through ai algorithms and RGB imaging. Foods 2022, 11, 3391. [Google Scholar] [CrossRef]
- Avila, F.; Mora, M.; Oyarce, M.; Zuñiga, A.; Fredes, C. A method to construct fruit maturity color scales based on support machines for regression: Application to olives and grape seeds. J. Food Eng. 2015, 162, 9–17. [Google Scholar] [CrossRef]
- Sola-Guirado, R.R.; Bayano-Tejero, S.; Aragón-Rodríguez, F.; Bernardi, B.; Benalia, S.; Castro-García, S. A smart system for the automatic evaluation of green olives visual quality in the field. Comput. Electron. Agric. 2020, 179, 105858. [Google Scholar] [CrossRef]
- Aguilera Puerto, D.; Cáceres Moreno, Ó.; Martínez Gila, D.M.; Gómez Ortega, J.; Gámez García, J. Online system for the identification and classification of olive fruits for the olive oil production process. J. Food Meas. Charact. 2019, 13, 716–727. [Google Scholar] [CrossRef]
- Aljaafreh, A.; Elzagzoug, E.Y.; Abukhait, J.; Soliman, A.H.; Alja’Afreh, S.S.; Sivanathan, A.; Hughes, J. A Real-Time Olive Fruit Detection for Harvesting Robot Based on Yolo Algorithms. Acta Technol. Agric. 2023, 26, 121–132. [Google Scholar] [CrossRef]
- Sharmila, G.; Rajamohan, K. A Systematic Literature Review on Image Preprocessing and Feature Extraction Techniques in Precision Agriculture. In Proceedings of the Congress on Intelligent Systems: CIS 2021, Bengaluru, India, 4–5 September 2021; Springer: Singapore, 2022; Volume 1, pp. 333–354. [Google Scholar] [CrossRef]
- Kiran, S.; Chandrappa, D. Plant Leaf Disease Detection Using Efficient Image Processing and Machine Learning Algorithms. J. Robot. Control 2023, 4, 840–848. [Google Scholar]
- Ojo, M.O.; Zahid, A. Improving Deep Learning Classifiers Performance via Preprocessing and Class Imbalance Approaches in a Plant Disease Detection Pipeline. Agronomy 2023, 13, 887. [Google Scholar] [CrossRef]
- Nugroho, B.; Yuniarti, A. Performance of contrast-limited AHE in preprocessing of face recognition with training image under various lighting conditions. In Proceedings of the 2020 6th Information Technology International Seminar (ITIS), Surabaya, Indonesia, 14–16 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 167–171. [Google Scholar] [CrossRef]
- Wosner, O.; Farjon, G.; Bar-Hillel, A. Object detection in agricultural contexts: A multiple resolution benchmark and comparison to human. Comput. Electron. Agric. 2021, 189, 106404. [Google Scholar] [CrossRef]
- Gongal, A.; Amatya, S.; Karkee, M.; Zhang, Q.; Lewis, K. Sensors and systems for fruit detection and localization: A review. Comput. Electron. Agric. 2015, 116, 8–19. [Google Scholar] [CrossRef]
- Liu, Z.; Liu, Y.X.; Gao, G.A.; Yong, K.; Wu, B.; Liang, J.X. An integrated method for color correction based on color constancy for early mural images in Mogao Grottoes. Front. Neurosci. 2022, 16, 1024599. [Google Scholar] [CrossRef]
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar] [CrossRef]
- Zimmerman, J.; Cousins, S.; Hartzell, K.; Frisse, M.; Kahn, M. A psychophysical comparison of two methods for adaptive histogram equalization. J. Digital Imaging 1989, 2, 82–91. [Google Scholar] [CrossRef]
- Khan, F.S.; van de Weijer, J.; Vanrell, M. Top-down color attention for object recognition. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 27 September–4 October 2009. [Google Scholar] [CrossRef]
- Luo, M.R.; Cui, G.; Rigg, B. The development of the CIE 2000 colour-difference formula: CIEDE2000. Color Res. Appl. 2001, 26, 340–350. [Google Scholar] [CrossRef]
- Fairchild, M.D. Color Appearance Models; Wiley: Hoboken, NJ, USA, 2013. [Google Scholar] [CrossRef]
- Finlayson, G.D.; Mackiewicz, M.; Hurlbert, A. Color Correction Using Root-Polynomial Regression. IEEE Trans. Image Process. 2015, 24, 1460–1470. [Google Scholar] [CrossRef]
- Heckbert, P. Graphics Gems IV (IBM Version); Elsevier: Amsterdam, The Netherlands, 1994; Chapter 5. [Google Scholar]
- Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vision Graph. Image Process. 1987, 39, 355–368. [Google Scholar] [CrossRef]
- Cheng, H.D.; Shi, X. A simple and effective histogram equalization approach to image enhancement. Digit. Signal Process. 2004, 14, 158–170. [Google Scholar] [CrossRef]
- Xiong, J.; Yu, D.; Wang, Q.; Shu, L.; Cen, J.; Liang, Q.; Chen, H.; Sun, B. Application of histogram equalization for image enhancement in corrosion areas. Shock Vib. 2021, 2021, 1–13. [Google Scholar] [CrossRef]
- Khalifa, N.E.; Loey, M.; Mirjalili, S. A comprehensive survey of recent trends in deep learning for digital images augmentation. Artif. Intell. Rev. 2022, 55, 2351–2377. [Google Scholar] [CrossRef]
- Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 1–48. [Google Scholar] [CrossRef]
- Quiroga, F.; Ronchetti, F.; Lanzarini, L.; Bariviera, A.F. Revisiting data augmentation for rotational invariance in convolutional neural networks. In Modelling and Simulation in Management Sciences: Proceedings of the International Conference on Modelling and Simulation in Management Sciences (MS-18), Girona, Spain, 28–29 June 2018; Springer: Cham, Switzerland, 2020; pp. 127–141. [Google Scholar] [CrossRef]
- Simard, P.Y.; Steinkraus, D.; Platt, J.C. Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis; IEEE: Piscataway, NJ, USA, 2003. [Google Scholar] [CrossRef]
- Badeka, E.; Karapatzak, E.; Karampatea, A.; Bouloumpasi, E.; Kalathas, I.; Lytridis, C.; Tziolas, E.; Tsakalidou, V.N.; Kaburlasos, V.G. A Deep Learning Approach for Precision Viticulture, Assessing Grape Maturity via YOLOv7. Sensors 2023, 23, 8126. [Google Scholar] [CrossRef]
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. arXiv 2017, arXiv:1612.08242. [Google Scholar] [CrossRef]
- Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar] [CrossRef]
- Wu, D.; Jiang, S.; Zhao, E.; Liu, Y.; Zhu, H.; Wang, W.; Wang, R. Detection of Camellia oleifera fruit in complex scenes by using YOLOv7 and data augmentation. Appl. Sci. 2022, 12, 11318. [Google Scholar] [CrossRef]
- Shankar, R.; Muthulakshmi, M. Comparing YOLOV3, YOLOV5 & YOLOV7 Architectures for Underwater Marine Creatures Detection. In Proceedings of the 2023 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE), Dubai, United Arab Emirates, 9–10 March 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 25–30. [Google Scholar] [CrossRef]
- Gallo, I.; Rehman, A.U.; Dehkordi, R.H.; Landro, N.; La Grassa, R.; Boschetti, M. Deep object detection of crop weeds: Performance of YOLOv7 on a real case dataset from UAV images. Remote Sens. 2023, 15, 539. [Google Scholar] [CrossRef]
- Zeng, Y.; Zhang, T.; He, W.; Zhang, Z. Yolov7-uav: An unmanned aerial vehicle image object detection algorithm based on improved yolov7. Electronics 2023, 12, 3141. [Google Scholar] [CrossRef]
- Fu, X.; Wei, G.; Yuan, X.; Liang, Y.; Bo, Y. Efficient YOLOv7-Drone: An Enhanced Object Detection Approach for Drone Aerial Imagery. Drones 2023, 7, 616. [Google Scholar] [CrossRef]
- Liu, K.; Sun, Q.; Sun, D.; Peng, L.; Yang, M.; Wang, N. Underwater target detection based on improved YOLOv7. J. Mar. Sci. Eng. 2023, 11, 677. [Google Scholar] [CrossRef]
- Ding, X.; Zhang, X.; Ma, N.; Han, J.; Ding, G.; Sun, J. Repvgg: Making vgg-style convnets great again. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13733–13742. [Google Scholar]
- Dang, F.; Chen, D.; Lu, Y.; Li, Z. YOLOWeeds: A novel benchmark of YOLO object detectors for multi-class weed detection in cotton production systems. Comput. Electron. Agric. 2023, 205, 107655. [Google Scholar] [CrossRef]
- Ariza-Sentís, M.; Baja, M.; Martín, S.V.V.; Valente, J.R.P. Object detection and tracking on UAV RGB videos for early extraction of grape phenotypic traits. Comput. Electron. Agric. 2023, 211, 108051. [Google Scholar] [CrossRef]
- Fazari, A.; Pellicer-Valero, O.; Gómez-Sanchís, J.; Bernardi, B.; Cubero, S.; Benalia, S.; Zimbalatti, G.; Blasco, J. Application of deep convolutional neural networks for the detection of anthracnose in olives using VIS/NIR hyperspectral images. Comput. Electron. Agric. 2021, 187, 106252. [Google Scholar] [CrossRef]
- Padilla, R.; Netto, S.L.; da Silva, E.A.B. A Survey on Performance Metrics for Object-Detection Algorithms; IEEE: Piscataway, NJ, USA, 2020. [Google Scholar] [CrossRef]
- Chen, B.; Wang, X.; Qiu, B.; Jia, B.; Li, X.; Wang, Y. An unsafe behavior detection method based on improved YOLO framework. Electronics 2022, 11, 1912. [Google Scholar] [CrossRef]
- Hsu, H.; Lachenbruch, P. Paired t test. In Wiley StatsRef: Statistics Reference Online; Wiley: Hoboken, NJ, USA, 2014. [Google Scholar] [CrossRef]
- Fisher, R. The Design of Experiments; Oliver & Boyd: Edinburgh, UK, 1949. [Google Scholar]
- Tukey, J.W. Comparing individual means in the analysis of variance. Biometrics 1949, 5, 99. [Google Scholar] [CrossRef]
- Guzmán, E.; Baeten, V.; Pierna, J.A.F.; García-Mesa, J.A. Determination of the olive maturity index of intact fruits using image analysis. J. Food Sci. Technol. 2013, 52, 1462–1470. [Google Scholar] [CrossRef]
- Ortenzi, L.; Figorilli, S.; Costa, C.; Pallottino, F.; Violino, S.; Pagano, M.; Imperi, G.; Manganiello, R.; Lanza, B.; Antonucci, F. A Machine Vision Rapid Method to Determine the Ripeness Degree of Olive Lots. Sensors 2021, 21, 2940. [Google Scholar] [CrossRef] [PubMed]
- Guo, J.; Ma, J.; García-Fernández, Á.F.; Zhang, Y.; Liang, H.N. A survey on image enhancement for low-light images. Heliyon 2023, 9, e14558. [Google Scholar] [CrossRef] [PubMed]
- Finlayson, G.D.; Darrodi, M.M.; Mackiewicz, M. The alternating least squares technique for nonuniform intensity color correction. Color Res. Appl. 2014, 40, 232–242. [Google Scholar] [CrossRef]
- Bortolotti, G.; Piani, M.; Mengoli, D.; Corelli Grappadelli, L.; Manfrini, L. Pilot study of a computer vision system for in-field peach fruit quality evaluation. Acta Hortic. 2022, 1352, 315–322. [Google Scholar] [CrossRef]
- Chen, Y.; Zhang, P.; Kong, T.; Li, Y.; Zhang, X.; Qi, L.; Sun, J.; Jia, J. Scale-aware automatic augmentations for object detection with dynamic training. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 2367–2383. [Google Scholar] [CrossRef]
Treatment | Precision (CV1) | Recall (CV1) | mAP (CV1) | Precision (CV2) | Recall (CV2) | mAP (CV2) |
---|---|---|---|---|---|---|
No preprocessing | | | | | | |
H1 | 0.593 | 0.672 | 0.641 | 0.572 | 0.756 | 0.686 |
H1.1 | 0.762 | 0.786 | 0.84 | 0.738 | 0.794 | 0.848 |
H1.2 | 0.651 | 0.872 | 0.821 | 0.669 | 0.828 | 0.814 |
ColorChecker preprocessing | | | | | | |
H2 | 0.6651 | 0.872 | 0.821 | 0.663 | 0.759 | 0.726 |
H2.1 | 0.718 | 0.842 | 0.847 | 0.78 | 0.78 | 0.86 |
H2.2 | 0.69 | 0.757 | 0.785 | 0.662 | 0.835 | 0.774 |
AHE | | | | | | |
H3 | 0.74 | 0.81 | 0.78 | 0.772 | 0.79 | 0.84 |
H3.1 | 0.633 | 0.729 | 0.713 | 0.66 | 0.736 | 0.744 |
H3.2 | 0.701 | 0.669 | 0.725 | 0.675 | 0.738 | 0.745 |
HE | | | | | | |
H4 | 0.676 | 0.793 | 0.775 | 0.664 | 0.814 | 0.785 |
H4.1 | 0.72 | 0.829 | 0.832 | 0.723 | 0.817 | 0.839 |
H4.2 | 0.647 | 0.785 | 0.752 | 0.661 | 0.809 | 0.784 |
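For reference, the Precision, Recall, and mAP columns follow the standard object-detection definitions behind Sections 2.6.1–2.6.3 (stated here in the usual form; the IoU threshold used for matching detections to ground truth, commonly 0.5, is an assumption since it is not restated in the table):

$$\mathrm{Precision}=\frac{TP}{TP+FP},\qquad \mathrm{Recall}=\frac{TP}{TP+FN},\qquad \mathrm{AP}=\int_{0}^{1} p(r)\,dr,$$

where TP, FP, and FN are the detector's true positives, false positives, and false negatives, and p(r) is precision as a function of recall. mAP is the mean of AP over classes; with a single olive class it coincides with AP.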
H1 | H2 | H3 | H4 |
---|---|---|---|
p-value | 0.3608 | 0.0326 | 0.0949 |
Group1 | Group2 | Meandiff | p-adj | Lower | Upper | Reject |
---|---|---|---|---|---|---|
H1 | H1.1 | 0.1805 | 0 | 0.1155 | 0.2455 | True |
H1 | H1.2 | 0.154 | 0.0001 | 0.089 | 0.219 | True |
H1 | H2.1 | 0.19 | 0 | 0.125 | 0.255 | True |
H1 | H2.2 | 0.116 | 0.0011 | 0.051 | 0.181 | True |
H1 | H3.1 | 0.065 | 0.0501 | 0 | 0.13 | False |
H1 | H3.2 | 0.0715 | 0.0294 | 0.0065 | 0.1365 | True |
H1 | H4.1 | 0.172 | 0 | 0.107 | 0.237 | True |
H1 | H4.2 | 0.1045 | 0.0024 | 0.0395 | 0.1695 | True |
H1.1 | H1.2 | -0.0265 | 0.7794 | -0.0915 | 0.0385 | False
H1.1 | H2.1 | 0.0095 | 0.9993 | -0.0555 | 0.0745 | False
H1.1 | H2.2 | -0.0645 | 0.0522 | -0.1295 | 0.0005 | False
H1.1 | H3.1 | -0.1155 | 0.0012 | -0.1805 | -0.0505 | True
H1.1 | H3.2 | -0.109 | 0.0018 | -0.174 | -0.044 | True
H1.1 | H4.1 | -0.0085 | 0.9997 | -0.0735 | 0.0565 | False
H1.1 | H4.2 | -0.076 | 0.0205 | -0.141 | -0.011 | True
H1.2 | H2.1 | 0.036 | 0.4779 | -0.029 | 0.101 | False
H1.2 | H2.2 | -0.038 | 0.4197 | -0.103 | 0.027 | False
H1.2 | H3.1 | -0.089 | 0.0074 | -0.154 | -0.024 | True
H1.2 | H3.2 | -0.0825 | 0.0123 | -0.1475 | -0.0175 | True
H1.2 | H4.1 | 0.018 | 0.9609 | -0.047 | 0.083 | False
H1.2 | H4.2 | -0.0495 | 0.1785 | -0.1145 | 0.0155 | False
H2.1 | H2.2 | -0.074 | 0.024 | -0.139 | -0.009 | True
H2.1 | H3.1 | -0.125 | 0.0006 | -0.19 | -0.06 | True
H2.1 | H3.2 | -0.1185 | 0.001 | -0.1835 | -0.0535 | True
H2.1 | H4.1 | -0.018 | 0.9609 | -0.083 | 0.047 | False
H2.1 | H4.2 | -0.0855 | 0.0097 | -0.1505 | -0.0205 | True
H2.2 | H3.1 | -0.051 | 0.1583 | -0.116 | 0.014 | False
H2.2 | H3.2 | -0.0445 | 0.2634 | -0.1095 | 0.0205 | False
H2.2 | H4.1 | 0.056 | 0.1053 | -0.009 | 0.121 | False
H2.2 | H4.2 | -0.0115 | 0.9974 | -0.0765 | 0.0535 | False
H3.1 | H3.2 | 0.0065 | 1 | -0.0585 | 0.0715 | False
H3.1 | H4.1 | 0.107 | 0.002 | 0.042 | 0.172 | True
H3.1 | H4.2 | 0.0395 | 0.379 | -0.0255 | 0.1045 | False
H3.2 | H4.1 | 0.1005 | 0.0032 | 0.0355 | 0.1655 | True
H3.2 | H4.2 | 0.033 | 0.5719 | -0.032 | 0.098 | False
H4.1 | H4.2 | -0.0675 | 0.0408 | -0.1325 | -0.0025 | True
Name | Hyperparameter | mAP |
---|---|---|
YOLOv5x | D | 0.7708 |
YOLOv5s | D | 0.7265 |
YOLOv5x | C | 0.7116 |
YOLOv5s | C | 0.6827 |
YOLOv5x | B | 0.733 |
YOLOv5s | B | 0.7384 |
YOLOv5x | A | 0.7559 |
YOLOv5s | A | 0.7413 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Mojaravscki, D.; Graziano Magalhães, P.S. Comparative Evaluation of Color Correction as Image Preprocessing for Olive Identification under Natural Light Using Cell Phones. AgriEngineering 2024, 6, 155-170. https://doi.org/10.3390/agriengineering6010010