Computer Vision and Transfer Learning for Grading of Egyptian Cotton Fibres
Abstract
1. Introduction
1.1. Why Egyptian Cotton Fibres?
1.2. Quality Attributes of Cotton Lint in General and Egyptian Cotton Fibre in Particular
1.3. Logistic Challenges in Egyptian Cotton Trade
1.4. Textile Fibre Quality Assessment Using Optical Sensors
1.5. Transfer Learning (TL)
2. Materials and Methods
2.1. Materials
2.2. Imaging System and Image Acquisition
2.3. Transfer Learning
2.4. Pretrained CNNs’ Architectures
2.4.1. AlexNet
2.4.2. GoogleNet (Inception v1)
2.4.3. SqueezeNet
2.4.4. VGG16 and VGG19
2.5. Data Processing
2.6. Fusion of Pretrained CNNs’ Models
2.7. Evaluation of Classification Models
3. Results and Discussion
Classification Results of Fused CNN Models
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Messiry, M.E.; Abd-Ellatif, S.A.M. Characterization of Egyptian cotton fibres. Indian J. Fibre Text. Res. 2013, 38, 109–113. [Google Scholar]
- Cotton Incorporated. Textile Fibers; Cotton Incorporated: Cary, NC, USA, 2013. [Google Scholar]
- FAO. Cotton Lint and Ginned Production; FAO: Rome, Italy, 2023. [Google Scholar]
- Norris, P.K. Cotton Production in Egypt; U.S. Department of Agriculture: Washington, DC, USA, 1934. [Google Scholar]
- Shalaby, M.E.S.; Ebaido, I.A.; Abd-El-Rahman, Y.S. Characterization of Egyptian Cotton Fiber Quality Using CCS. Eur. J. Agric. Food Sci. 2021, 3, 45–52. [Google Scholar] [CrossRef]
- Gourlot, J.-P.; Drieling, A. Interpretation and Use of Instrument Measured Cotton Characteristics: A Guideline; International Cotton Advisory Committee (ICAC): Washington, DC, USA, 2020. [Google Scholar]
- CATGO. Natural Characteristics of Egyptian Cotton Cultivars and Their Relationship with Different Grades (Season 2018/2019); CATGO: Cairo, Egypt, 2020; p. 69. [Google Scholar]
- El Saied, H. Egyptian Cotton. Handling and Ginning, and the Factors Affecting the Grade. 2022. Available online: https://misrelzraea.com/43153-2/ (accessed on 21 August 2024).
- Ahmed, Y.N.; Delin, H. Current Situation of Egyptian Cotton: Econometrics Study Using ARDL Model. J. Agric. Sci. 2019, 11, 88. [Google Scholar] [CrossRef]
- Fisher, O.J.; Rady, A.; El-Banna, A.A.; Watson, N.J.; Emaish, H.H. An image processing and machine learning solution to automate Egyptian cotton lint grading. Text. Res. J. 2023, 93, 2558–2575. [Google Scholar] [CrossRef]
- Hassanain, H.T. A Study of The Most Important Economic and Productive Factors Affecting the Relative and Competitive Advantage of Egyptian Cotton Crop. J. Adv. Agric. Res. 2021, 26, 197–212. [Google Scholar]
- Khalil, M.H. Economics of cultivation and production of cotton in Egypt and its contribution to economic development for the period (2000–2009). Univ. Kirkuk J. Adm. Econ. Sci. 2013, 3. [Google Scholar]
- Omar, S.; Morgan, J. Cotton and Products Annual-Egypt; United States Department of Agriculture-Foreign Agricultural Services: Washington, DC, USA, 2023; p. 15.
- Saiz-Rubio, V.; Rovira-Más, F. From smart farming towards agriculture 5.0: A review on crop data management. Agronomy 2020, 10, 207. [Google Scholar] [CrossRef]
- Oxford Business Group. Digitalisation Is Key to Bolstering Egypt’s Food and Water Security. The Report: Egypt 2022. Available online: https://oxfordbusinessgroup.com/reports/egypt/2022-report/economy/tools-of-the-trade-digitalisation-is-key-to-efforts-to-bolster-food-and-water-security (accessed on 22 March 2025).
- Ma, J.; Sun, D.W.; Qu, J.H.; Liu, D.; Pu, H.; Gao, W.H.; Zeng, X.A. Applications of computer vision for assessing quality of agri-food products: A review of recent research advances. Crit. Rev. Food Sci. Nutr. 2016, 56, 113–127. [Google Scholar] [CrossRef]
- Abdullah, M.Z. Image Acquisition Systems. In Computer Vision Technology for Food Quality Evaluation; Elsevier: Amsterdam, The Netherlands, 2016; pp. 3–43. [Google Scholar]
- Hernández-Sánchez, N.; Moreda, G.P.; Herrero-Langreo, A.; Melado-Herreros, Á. Assessment of internal and external quality of fruits and vegetables. In Imaging Technologies and Data Processing for Food Engineers; Springer: Cham, Switzerland, 2016; pp. 269–309. [Google Scholar]
- Altunkaynak, A.; Başakın, E.E. Estimation of Wheat Yield Based on Precipitation and Evapotranspiration Using Soft Computing Methods. In Computer Vision and Machine Learning in Agriculture; Springer: Cham, Switzerland, 2022; Volume 2, pp. 83–106. [Google Scholar]
- Rajathi, N.; Parameswari, P. Early Stage Prediction of Plant Leaf Diseases Using Deep Learning Models. In Computer Vision and Machine Learning in Agriculture; Springer: Cham, Switzerland, 2022; Volume 2, pp. 245–260. [Google Scholar]
- Holopainen-Mantila, U.; Raulio, M. Cereal grain structure by microscopic analysis. In Imaging Technologies and Data Processing for Food Engineers; Springer: Cham, Switzerland, 2016; pp. 1–39. [Google Scholar]
- Valous, N.A.; Zheng, L.; Sun, D.W.; Tan, J. Quality evaluation of meat cuts. In Computer Vision Technology for Food Quality Evaluation; Elsevier: Amsterdam, The Netherlands, 2016; pp. 175–193. [Google Scholar]
- Du, C.-J.; Iqbal, A.; Sun, D.-W. Quality measurement of cooked meats. In Computer Vision Technology for Food Quality Evaluation; Elsevier: Amsterdam, The Netherlands, 2016; pp. 195–212. [Google Scholar]
- Zhang, C.; Li, T.; Li, J. Detection of impurity rate of machine-picked cotton based on improved canny operator. Electronics 2022, 11, 974. [Google Scholar] [CrossRef]
- Wang, X.; Yang, W.; Li, Z. A fast image segmentation algorithm for detection of pseudo-foreign fibers in lint cotton. Comput. Electr. Eng. 2015, 46, 500–510. [Google Scholar] [CrossRef]
- Lv, Y.; Gao, Y.; Rigall, E.; Qi, L.; Gao, F.; Dong, J. Cotton appearance grade classification based on machine learning. Procedia Comput. Sci. 2020, 174, 729–734. [Google Scholar] [CrossRef]
- Jeyaraj, P.R.; Samuel Nadar, E.R. Computer vision for automatic detection and classification of fabric defect employing deep learning algorithm. Int. J. Cloth. Sci. Technol. 2019, 31, 510–521. [Google Scholar] [CrossRef]
- Yang, W.; Lu, S.; Wang, S.; Li, D. Fast recognition of foreign fibers in cotton lint using machine vision. Math. Comput. Model. 2011, 54, 877–882. [Google Scholar] [CrossRef]
- Wei, W.; Deng, D.; Zeng, L.; Zhang, C.; Shi, W. Classification of foreign fibers using deep learning and its implementation on embedded system. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419867600. [Google Scholar] [CrossRef]
- Zhou, J.; Yu, L.; Ding, Q.; Wang, R. Textile fiber identification using near-infrared spectroscopy and pattern recognition. Autex Res. J. 2019, 19, 201–209. [Google Scholar] [CrossRef]
- Jiang, Y.; Li, C. Detection and discrimination of cotton foreign matter using push-broom based hyperspectral imaging: System design and capability. PLoS ONE 2015, 10, e0121969. [Google Scholar] [CrossRef]
- Huang, J.; He, H.; Lv, R.; Zhang, G.; Zhou, Z.; Wang, X. Non-destructive detection and classification of textile fibres based on hyperspectral imaging and 1D-CNN. Anal. Chim. Acta 2022, 1224, 340238. [Google Scholar] [CrossRef]
- Xu, W.; Yang, W.; Chen, P.; Zhan, Y.; Zhang, L.; Lan, Y. Cotton fiber quality estimation based on machine learning using time series UAV remote sensing data. Remote Sens. 2023, 15, 586. [Google Scholar] [CrossRef]
- Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A comprehensive survey on transfer learning. Proc. IEEE 2020, 109, 43–76. [Google Scholar] [CrossRef]
- Hosna, A.; Merry, E.; Gyalmo, J.; Alom, Z.; Aung, Z.; Azim, M.A. Transfer learning: A friendly introduction. J. Big Data 2022, 9, 102. [Google Scholar] [CrossRef]
- Long, M.; Wang, J.; Ding, G.; Sun, J.; Yu, P.S. Transfer feature learning with joint distribution adaptation. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013. [Google Scholar]
- Yao, Y.; Doretto, G. Boosting for transfer learning with multiple sources. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010. [Google Scholar]
- Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
- Camgözlü, Y.; Kutlu, Y. Leaf image classification based on pre-trained convolutional neural network models. Nat. Eng. Sci. 2023, 8, 214–232. [Google Scholar]
- Patel, F.; Mewada, S.; Degadwala, S.; Vyas, D. Recognition of Pistachio Species with Transfer Learning Models. In Proceedings of the 2023 International Conference on Self Sustainable Artificial Intelligence Systems (ICSSAS), Erode, India, 18–20 October 2023. [Google Scholar]
- Peng, M.; Liu, Y.; Khan, A.; Ahmed, B.; Sarker, S.K.; Ghadi, Y.Y.; Bhatti, U.A.; Al-Razgan, M.; Ali, Y.A. Crop monitoring using remote sensing land use and land change data: Comparative analysis of deep learning methods using pre-trained CNN models. Big Data Res. 2024, 36, 100448. [Google Scholar] [CrossRef]
- Tasci, B. Automated ischemic acute infarction detection using pre-trained CNN models’ deep features. Biomed. Signal Process. Control 2023, 82, 104603. [Google Scholar] [CrossRef]
- Gao, Z.; Tian, Y.; Lin, S.C.; Lin, J. A CT image classification network framework for lung tumors based on pre-trained mobilenetv2 model and transfer learning, and its application and market analysis in the medical field. arXiv 2025, arXiv:2501.04996. [Google Scholar] [CrossRef]
- Krishnapriya, S.; Karuna, Y. Pre-trained deep learning models for brain MRI image classification. Front. Hum. Neurosci. 2023, 17, 1150120. [Google Scholar] [CrossRef]
- Hassan, E.; Shams, M.Y.; Hikal, N.A.; Elmougy, S. Detecting COVID-19 in chest CT images based on several pre-trained models. Multimed. Tools Appl. 2024, 83, 65267–65287. [Google Scholar] [CrossRef]
- Singh, S.A.; Desai, K.A. Comparative assessment of common pre-trained CNNs for vision-based surface defect detection of machined components. Expert Syst. Appl. 2023, 218, 119623. [Google Scholar] [CrossRef]
- Matarneh, S.; Elghaish, F.; Rahimian, F.P.; Abdellatef, E.; Abrishami, S. Evaluation and optimisation of pre-trained CNN models for asphalt pavement crack detection and classification. Autom. Constr. 2024, 160, 105297. [Google Scholar] [CrossRef]
- Feuz, K.D.; Cook, D.J. Transfer learning across feature-rich heterogeneous feature spaces via feature-space remapping (FSR). ACM Trans. Intell. Syst. Technol. (TIST) 2015, 6, 1–27. [Google Scholar] [CrossRef]
- Kaur, R.; Kumar, R.; Gupta, M. Review on Transfer Learning for Convolutional Neural Network. In Proceedings of the 2021 3rd International Conference on Advances in Computing, Communication Control and Networking (ICAC3N), Greater Noida, India, 17–18 December 2021. [Google Scholar]
- Dewan, J.H.; Das, R.; Thepade, S.D.; Jadhav, H.; Narsale, N.; Mhasawade, A.; Nambiar, S. Image Classification by Transfer Learning using Pre-Trained CNN Models. In Proceedings of the 2023 International Conference on Recent Advances in Electrical, Electronics, Ubiquitous Communication, and Computational Intelligence (RAEEUCCI), Chennai, India, 19–21 April 2023. [Google Scholar]
- Zhao, Z.; Alzubaidi, L.; Zhang, J.; Duan, Y.; Gu, Y. A comparison review of transfer learning and self-supervised learning: Definitions, applications, advantages and limitations. Expert Syst. Appl. 2024, 242, 122807. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1–9. [Google Scholar]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
- Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Ruta, D.; Gabrys, B. An overview of classifier fusion methods. Comput. Inf. Syst. 2000, 7, 1–10. [Google Scholar]
- Vujović, Ž. Classification model evaluation metrics. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 599–606. [Google Scholar] [CrossRef]
- Radiuk, P.M. Impact of training set batch size on the performance of convolutional neural networks for diverse datasets. Inf. Technol. Manag. Sci. 2017, 20, 20–24. [Google Scholar] [CrossRef]
- Bottou, L.; Curtis, F.E.; Nocedal, J. Optimization methods for large-scale machine learning. SIAM Rev. 2018, 60, 223–311. [Google Scholar] [CrossRef]
- Attard, M. What Is Stochastic Gradient Descent? 3 Pros and Cons. 2024. Available online: https://insidelearningmachines.com/stochastic_gradient_descent/ (accessed on 22 March 2025).
- Li, Q.; Yang, Y.; Guo, Y.; Li, W.; Liu, Y.; Liu, H.; Kang, Y. Performance evaluation of deep learning classification network for image features. IEEE Access 2021, 9, 9318–9333. [Google Scholar] [CrossRef]
- Hassanpour, M.; Malek, H. Learning document image features with SqueezeNet convolutional neural network. Int. J. Eng. 2020, 33, 1201–1207. [Google Scholar]
- Yang, Y.; Zhang, L.; Du, M.; Bo, J.; Liu, H.; Ren, L.; Li, X.; Deen, M.J. A comparative analysis of eleven neural networks architectures for small datasets of lung images of COVID-19 patients toward improved clinical decisions. Comput. Biol. Med. 2021, 139, 104887. [Google Scholar] [CrossRef] [PubMed]
- Ananda, A.; Ngan, K.H.; Karabağ, C.; Ter-Sarkisov, A.; Alonso, E.; Reyes-Aldasoro, C.C. Classification and visualisation of normal and abnormal radiographs; a comparison between eleven convolutional neural network architectures. Sensors 2021, 21, 5381. [Google Scholar] [CrossRef] [PubMed]
- Li, L.; Zhang, C.; Zheng, X. Segmentation Algorithm for Machine-Harvested Cotton based on S and I Regional Features. In Proceedings of the 5th International Conference on Vehicle, Mechanical and Electrical Engineering (ICVMEE 2019), Dalian, China, 28–30 September 2019. [Google Scholar]
- Li, D.; Yang, W.; Wang, S. Classification of foreign fibers in cotton lint using machine vision and multi-class support vector machine. Comput. Electron. Agric. 2010, 74, 274–279. [Google Scholar] [CrossRef]
- Tantaswadi, P.; Vilainatre, J.; Tamaree, N.; Viraivan, P. Machine vision for automated visual inspection of cotton quality in textile industries using color isodiscrimination contour. Comput. Ind. Eng. 1999, 37, 347–350. [Google Scholar] [CrossRef]
- Fisher, O.J.; Rady, A.; El-Banna, A.A.; Emaish, H.H.; Watson, N.J. AI-Assisted Cotton Grading: Active and Semi-Supervised Learning to Reduce the Image-Labelling Burden. Sensors 2023, 23, 8671. [Google Scholar] [CrossRef]
- Mehta, P.; Bukov, M.; Wang, C.H.; Day, A.G.; Richardson, C.; Fisher, C.K.; Schwab, D.J. A high-bias, low-variance introduction to machine learning for physicists. Phys. Rep. 2019, 810, 1–124. [Google Scholar] [CrossRef]
- Moreno-Seco, F.; Inesta, J.M.; De León, P.J.P.; Micó, L. Comparison of Classifier Fusion Methods for Classification in Pattern Recognition Tasks. In Structural, Syntactic, and Statistical Pattern Recognition: Joint IAPR International Workshops, SSPR 2006, and SPR 2006, Hong Kong, China, 17–19 August 2006; Proceedings; Springer: Cham, Switzerland, 2006. [Google Scholar]
- Dietterich, T.G. Ensemble Methods in Machine Learning. In International Workshop on Multiple Classifier Systems, Cagliari, Italy, 21–23 June 2000; Springer: Cham, Switzerland, 2000. [Google Scholar]
- Springenberg, M.; Frommholz, A.; Wenzel, M.; Weicken, E.; Ma, J.; Strodthoff, N. From modern CNNs to vision transformers: Assessing the performance, robustness, and classification strategies of deep learning models in histopathology. Med. Image Anal. 2023, 87, 102809. [Google Scholar] [CrossRef]
- Iman, M.; Arabnia, H.R.; Rasheed, K. A review of deep transfer learning and recent advancements. Technologies 2023, 11, 40. [Google Scholar] [CrossRef]
| Grade | Giza 86 (Long Staple) | Giza 87 (Extra-Long Staple) | Giza 90 (Long Staple) | Giza 94 (Long Staple) | Giza 96 (Extra-Long Staple) |
|---|---|---|---|---|---|
| Extra (E) | 0 | 0 | 0 | 0 | 0 |
| Fully Good to Extra (FGE) | 0 | 0 | 0 | 0 | 0 |
| Fully Good (FG) | 115 | 0 | 0 | 102 | 103 |
| Good to Fully Good (GFG) | 118 | 116 | 100 | 115 | 0 |
| Good (G) | 113 | 108 | 131 | 100 | 109 |
| Fully Good Fair to Good (FGFG) | 119 | 110 | 116 | 108 | 118 |
| Fully Good Fair (FGF) | 150 | 115 | 124 | 109 | 97 |
| Good Fair to Fully Good Fair (GFFGF) | 115 | 0 | 131 | 99 | 102 |
| Good Fair (GF) | 115 | 0 | 101 | 0 | 120 |
| Fully Fair to Good Fair (FFGF) | 0 | 0 | 0 | 0 | 0 |
| Fully Fair (FF) | 0 | 0 | 0 | 104 | 64 |
| Total | 845 | 449 | 703 | 737 | 713 |
| No. of classes | 7 | 4 | 6 | 7 | 7 |
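The grade counts above are moderately imbalanced (e.g., 150 FGF images versus 113 G images for Giza 86), so any train/validation/test partition should be stratified per grade. A minimal sketch in Python; the 70/15/15 ratio is an illustrative assumption, not the split reported in the paper:

```python
# Hypothetical stratified split of the Giza 86 image counts above.
# The 70/15/15 ratio is assumed for illustration only.
giza86_counts = {
    "FG": 115, "GFG": 118, "G": 113, "FGFG": 119,
    "FGF": 150, "GFFGF": 115, "GF": 115,
}

def stratified_split(counts, train=0.70, val=0.15):
    """Split each grade independently so class proportions are preserved."""
    split = {}
    for grade, n in counts.items():
        n_train = round(n * train)
        n_val = round(n * val)
        split[grade] = (n_train, n_val, n - n_train - n_val)
    return split

split = stratified_split(giza86_counts)
# The per-grade triples always sum back to the original counts,
# and the grand total matches the 845 Giza 86 images in the table.
```

Stratifying per grade matters here because a random split could leave a small grade underrepresented in the test set, inflating the variance of the reported accuracies.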
| CNN | Input Image Size | No. of Classes | No. of Parameters | No. of Deep Layers |
|---|---|---|---|---|
| AlexNet | 227 × 227 | 1000 | 60,000,000 | 8 (5 Conv. + 3 FC) |
| GoogleNet | 224 × 224 | 1000 | 6,797,700 | 27 (22 layers with weights + 5 pooling) |
| SqueezeNet | 227 × 227 | 1000 | 736,450 | 18 (Conv. + FC) |
| VGG16 | 224 × 224 | 1000 | 138,000,000 | 16 (13 Conv. + 3 FC) |
| VGG19 | 224 × 224 | 1000 | 144,000,000 | 19 (16 Conv. + 3 FC) |
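The parameter counts in the table above can be sanity-checked arithmetically. A short pure-Python script (not the authors' code; the layer configuration is taken from the VGG16 architecture of Simonyan and Zisserman) that reproduces the ~138 million figure for VGG16:

```python
# Sanity check of the VGG16 parameter count listed in the table:
# thirteen 3x3 convolutions followed by three fully connected layers.
VGG16_CONV = [  # (in_channels, out_channels) for each 3x3 convolution
    (3, 64), (64, 64),
    (64, 128), (128, 128),
    (128, 256), (256, 256), (256, 256),
    (256, 512), (512, 512), (512, 512),
    (512, 512), (512, 512), (512, 512),
]
VGG16_FC = [(512 * 7 * 7, 4096), (4096, 4096), (4096, 1000)]

def conv_params(cin, cout, k=3):
    return cout * (cin * k * k + 1)   # kernel weights + one bias per filter

def fc_params(fin, fout):
    return fout * (fin + 1)           # weights + biases

total = (sum(conv_params(cin, cout) for cin, cout in VGG16_CONV)
         + sum(fc_params(fin, fout) for fin, fout in VGG16_FC))
print(total)  # 138357544, i.e. the ~138,000,000 reported in the table
```

Note that roughly 89% of those parameters sit in the three fully connected layers, which is why replacing only the final FC layer keeps transfer learning cheap relative to training from scratch.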
| Cultivar | Pretrained CNN | Training Time (s) | Validation Accuracy (%) | Testing Accuracy (%) |
|---|---|---|---|---|
| Giza 86 (7 classes) | AlexNet | 113,720 | 90.6 | 64.3 |
| | GoogleNet | 113,720 | 81.2 | 62.9 |
| | SqueezeNet | 77,747 | 83.1 | 50.0 |
| | VGG16 | 171,418 | 87.9 | 72.9 |
| | VGG19 | 202,891 | 91.9 | 75.7 |
| Giza 87 (4 classes) | AlexNet | 487,345 | 95.9 | 85.0 |
| | GoogleNet | 26,670 | 91.0 | 75.0 |
| | SqueezeNet | 22,886 | 90.2 | 75.0 |
| | VGG16 | 62,937 | 79.2 | 77.5 |
| | VGG19 | 68,816 | 82.0 | 82.5 |
| Giza 90 (6 classes) | AlexNet | 82,392 | 84.3 | 61.7 |
| | GoogleNet | 48,467 | 80.5 | 58.3 |
| | SqueezeNet | 69,482 | 79.5 | 71.7 |
| | VGG16 | 151,989 | 90.5 | 65.0 |
| | VGG19 | 187,402 | 90.5 | 80.0 |
| Giza 94 (7 classes) | AlexNet | 134,831 | 88.6 | 77.1 |
| | GoogleNet | 84,894 | 89.5 | 74.3 |
| | SqueezeNet | 76,019 | 87.3 | 74.3 |
| | VGG16 | 169,532 | 82.4 | 70.0 |
| | VGG19 | 201,126 | 84.6 | 65.7 |
| Giza 96 (7 classes) | AlexNet | 1,182,656 | 86.5 | 85.7 |
| | GoogleNet | 74,081 | 83.2 | 90.0 |
| | SqueezeNet | 67,654 | 82.9 | 83.4 |
| | VGG16 | 150,190 | 85.1 | 77.1 |
| | VGG19 | 177,943 | 82.2 | 87.1 |
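Selecting the per-cultivar "optimal" network used in the per-grade results below amounts to an argmax over testing accuracy. A trivial sketch with the Giza 90 values from the table above:

```python
# Selecting the best individual CNN per cultivar by testing accuracy,
# using the Giza 90 testing accuracies from the table above.
giza90_test_acc = {"AlexNet": 61.7, "GoogleNet": 58.3, "SqueezeNet": 71.7,
                   "VGG16": 65.0, "VGG19": 80.0}
best = max(giza90_test_acc, key=giza90_test_acc.get)
print(best, giza90_test_acc[best])  # VGG19 80.0
```

Testing accuracy, not validation accuracy, is the right selection criterion here: for Giza 90, VGG16 and VGG19 tie on validation (90.5%) but differ by 15 points on the held-out test set.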
| Cultivar/Optimal CNN | Grade | Precision (%) | Recall (%) | F1-Score (%) |
|---|---|---|---|---|
| Giza 86/VGG19 | FG | 88.9 | 80.0 | 84.2 |
| | GFG | 81.8 | 90.0 | 85.7 |
| | G | 90.9 | 100 | 95.2 |
| | FGFG | 100 | 90.0 | 94.7 |
| | FGF | 41.2 | 70.0 | 51.9 |
| | GFFGF | 0 | 0 | N/A |
| | GF | 100 | 100 | 100 |
| Giza 87/AlexNet | GFG | 100 | 100 | 100 |
| | G | 100 | 100 | 100 |
| | FGFG | 75.0 | 60.0 | 66.7 |
| | FGF | 67.0 | 80.0 | 72.8 |
| Giza 90/VGG19 | GFG | 100 | 100 | 100 |
| | G | 100 | 100 | 100 |
| | FGFG | 53.3 | 80.0 | 64.0 |
| | FGF | 75.0 | 30.0 | 42.9 |
| | GFFGF | 77.8 | 70.0 | 73.7 |
| | GF | 83.3 | 100 | 90.9 |
| Giza 94/AlexNet | FG | 60.0 | 30.0 | 40.0 |
| | GFG | 57.1 | 80.0 | 66.6 |
| | G | 90.9 | 100 | 95.2 |
| | FGFG | 100 | 100 | 100 |
| | FGF | 100 | 30.0 | 46.2 |
| | GFFGF | 58.8 | 100 | 74.1 |
| | FF | 100 | 100 | 100 |
| Giza 96/GoogleNet | FG | 90.9 | 100 | 95.2 |
| | G | 100 | 70.0 | 82.4 |
| | FGFG | 80.0 | 80.0 | 80.0 |
| | FGF | 80.0 | 80.0 | 80.0 |
| | GFFGF | 83.1 | 100 | 90.8 |
| | GF | 100 | 100 | 100 |
| | FF | 100 | 100 | 100 |
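The per-grade metrics above follow the standard one-vs-rest definitions. A minimal helper (illustrative, not the authors' code) reproduces, for example, the Giza 86/VGG19 FGF row; the raw counts tp = 7, fp = 10, fn = 3 are inferred from the reported percentages, not taken from the paper:

```python
def precision_recall_f1(tp, fp, fn):
    """One-vs-rest precision, recall and F1 for a single grade, in %."""
    precision = 100 * tp / (tp + fp) if tp + fp else 0.0
    recall = 100 * tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Giza 86/VGG19, grade FGF: 7 of 10 test images recovered (recall 70%),
# with 10 images of other grades misassigned to FGF (precision 41.2%).
p, r, f1 = precision_recall_f1(tp=7, fp=10, fn=3)
print(round(p, 1), round(r, 1), round(f1, 1))  # 41.2 70.0 51.9
```

Reporting F1 alongside accuracy matters in this table precisely because of rows like FGF: a model can reach high overall accuracy while one confusable grade absorbs most of the misclassifications.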
| Fusion Rate (%) | Giza 86 | Giza 87 | Giza 90 | Giza 94 | Giza 96 |
|---|---|---|---|---|---|
| AlexNet | 8 | 14 | 29 | 14 | 8 |
| GoogleNet | 21 | 36 | 3 | 45 | 50 |
| SqueezeNet | 3 | 31 | 20 | 39 | 4 |
| VGG16 | 6 | 14 | 1 | 1 | 1 |
| VGG19 | 61 | 5 | 46 | 1 | 37 |
| Classification accuracy of optimal fused models (%) | 78.6 | 82.5 | 86.7 | 84.3 | 92.9 |
| Classification accuracy of optimal individual models (%) | 75.7 | 85.0 | 80.0 | 77.1 | 90.0 |
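Decision-level fusion with per-network weights (the fusion rates above) can be sketched as a weighted sum of each CNN's class-probability vector followed by an argmax. The weights below follow the Giza 96 column; the softmax outputs and the three-grade setup are made-up values for illustration:

```python
# Illustrative weighted decision-level fusion of five CNN outputs.
# Weights follow the Giza 96 fusion rates above (8/50/4/1/37, as fractions);
# the per-network probability vectors are hypothetical.
weights = {"AlexNet": 0.08, "GoogleNet": 0.50, "SqueezeNet": 0.04,
           "VGG16": 0.01, "VGG19": 0.37}

probs = {  # softmax outputs over three hypothetical grades for one image
    "AlexNet":    [0.6, 0.3, 0.1],
    "GoogleNet":  [0.2, 0.7, 0.1],
    "SqueezeNet": [0.5, 0.4, 0.1],
    "VGG16":      [0.3, 0.3, 0.4],
    "VGG19":      [0.1, 0.8, 0.1],
}

def fuse(weights, probs):
    """Weighted sum of class-probability vectors across networks."""
    n_classes = len(next(iter(probs.values())))
    fused = [0.0] * n_classes
    for net, w in weights.items():
        for c, p in enumerate(probs[net]):
            fused[c] += w * p
    return fused

fused = fuse(weights, probs)
predicted = fused.index(max(fused))  # class with the highest fused score
```

Because the weights sum to 1 and each softmax vector sums to 1, the fused vector is itself a valid probability distribution; here the heavily weighted GoogleNet and VGG19 pull the decision to the second grade even though two networks favour the first.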
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Rady, A.; Fisher, O.; El-Banna, A.A.A.; Emaish, H.H.; Watson, N.J. Computer Vision and Transfer Learning for Grading of Egyptian Cotton Fibres. AgriEngineering 2025, 7, 127. https://doi.org/10.3390/agriengineering7050127