Evaluating the FLUX.1 Synthetic Data on YOLOv9 for AI-Powered Poultry Farming
Abstract
1. Introduction
- Developed a hybrid dataset combining real and synthetic images generated by FLUX.1 to enhance AI-driven poultry monitoring;
- Evaluated the impact of synthetic data on YOLOv9-e model accuracy, demonstrating its feasibility in reducing reliance on real-world data;
- Implemented an automated annotation pipeline using Grounding DINO and SAM2, significantly reducing manual labeling effort;
- Analyzed different dataset compositions to determine the optimal balance between real and synthetic images for object detection in poultry farming;
- Provided a practical framework for integrating generative AI into smart agriculture applications, ensuring model generalization and efficiency;
- Demonstrated that a combination of real and synthetic images can achieve comparable accuracy to larger, real-only datasets, validating the use of synthetic data augmentation.
2. Related Work
3. Materials and Methods
3.1. Tools Selection and Setup
3.2. Dataset Collection and Annotation
3.3. Experiment Models and Setup
Algorithm 1. Implementation steps of generative AI for extending the dataset with synthetic data.
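The generative extension pipeline of Algorithm 1 can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: `generate` stands in for a FLUX.1 text-to-image call (e.g., via the Replicate API) and `annotate` for the Grounding DINO + SAM2 auto-labeling step; both are hypothetical callables here.

```python
# Hypothetical sketch of Algorithm 1: generate synthetic images with FLUX.1,
# auto-annotate them, and merge them with the real dataset. The generate()
# and annotate() callables are stand-ins for the actual model calls.
from dataclasses import dataclass, field

@dataclass
class Sample:
    image_id: str
    source: str                                 # "real" or "synthetic"
    boxes: list = field(default_factory=list)   # [x, y, w, h] per chicken

def extend_dataset(real, n_synthetic, generate, annotate):
    """Append n_synthetic auto-annotated synthetic samples to the real set."""
    out = list(real)
    for i in range(n_synthetic):
        img_id = f"flux_{i:04d}"
        image = generate(img_id)    # e.g., a FLUX.1 text-to-image call
        boxes = annotate(image)     # e.g., Grounding DINO + SAM2 labeling
        out.append(Sample(img_id, "synthetic", boxes))
    return out

# Stub generator/annotator for illustration only.
fake_generate = lambda img_id: object()
fake_annotate = lambda image: [[10, 10, 50, 50]]

real = [Sample("cam_0001", "real", [[5, 5, 40, 40]])]
hybrid = extend_dataset(real, 3, fake_generate, fake_annotate)
print(len(hybrid))  # 4
```

The resulting hybrid list can then be split into the real/synthetic compositions evaluated in the experiments.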
3.4. Evaluation Metrics
- For bounding box regression, YOLOv9 uses the CIoU (Complete Intersection over Union) loss, which incorporates center distance, aspect ratio, and overlap scale, yielding faster convergence and more precise localization [60].
- For classification, Binary Cross-Entropy (BCE) loss with sigmoid activation enables multi-label classification per bounding box [61].
- For objectness, focal loss down-weights easy negatives and focuses training on hard-to-classify cases, improving object–background discrimination [62].
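To make the loss terms above concrete, the following is a minimal, self-contained sketch of CIoU [60] and binary focal loss [62] for a single box or prediction (not the YOLOv9 training code, which operates on batched tensors):

```python
import math

def ciou(b1, b2):
    """Complete IoU between boxes (x1, y1, x2, y2); higher is better."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    iou = inter / (a1 + a2 - inter)
    # Squared center distance over squared enclosing-box diagonal.
    cx1, cy1 = (b1[0] + b1[2]) / 2, (b1[1] + b1[3]) / 2
    cx2, cy2 = (b2[0] + b2[2]) / 2, (b2[1] + b2[3]) / 2
    rho2 = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
    ex1, ey1 = min(b1[0], b2[0]), min(b1[1], b2[1])
    ex2, ey2 = max(b1[2], b2[2]), max(b1[3], b2[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    # Aspect-ratio consistency term.
    w1, h1 = b1[2] - b1[0], b1[3] - b1[1]
    w2, h2 = b2[2] - b2[0], b2[3] - b2[1]
    v = (4 / math.pi ** 2) * (math.atan(w2 / h2) - math.atan(w1 / h1)) ** 2
    alpha = v / (1 - iou + v) if v > 0 else 0.0
    return iou - rho2 / c2 - alpha * v

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one predicted probability p and label y in {0, 1}."""
    p_t = p if y == 1 else 1 - p
    a_t = alpha if y == 1 else 1 - alpha
    return -a_t * (1 - p_t) ** gamma * math.log(p_t)

print(ciou((0, 0, 10, 10), (0, 0, 10, 10)))         # 1.0 for identical boxes
print(focal_loss(0.9, 1) < focal_loss(0.6, 1))      # easy positive is down-weighted: True
```

The CIoU loss used for regression is 1 − CIoU, so identical boxes incur zero loss, while the focal term shrinks the contribution of confidently classified examples by the factor (1 − p_t)^γ.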
4. Results and Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| AI | Artificial Intelligence |
| AP | Average Precision |
| BCE | Binary Cross-Entropy |
| COCO | Common Objects in Context |
| CIoU | Complete Intersection over Union |
| CV | Computer Vision |
| DIoU | Distance Intersection over Union |
| DT | Digital Twin |
| GAN | Generative Adversarial Network |
| GenAI | Generative Artificial Intelligence |
| GIoU | Generalized Intersection over Union |
| HPC | High-Performance Computing |
| FP | False Positive |
| FN | False Negative |
| IoT | Internet of Things |
| IoU | Intersection over Union |
| LoRA | Low-Rank Adaptation |
| mAP | Mean Average Precision |
| ML | Machine Learning |
| RAM | Random Access Memory |
| R-CNN | Region-based Convolutional Neural Network |
| SAM | Segment Anything Model |
| SSD | Single Shot Detector |
| TP | True Positive |
| YOLO | You Only Look Once |
References
- USDA. Livestock and Poultry: World Markets and Trade, United States Department of Agriculture, Foreign Agriculture Service. 8 April 2022. Available online: https://www.fas.usda.gov/data/livestock-and-poultry-world-markets-and-trade (accessed on 25 February 2025).
- Ojo, R.O.; Ajayi, A.O.; Owolabi, H.A.; Oyedele, L.O.; Akanbi, L.A. Internet of Things and Machine Learning Techniques in Poultry Health and Welfare Management: A Systematic Literature Review. Comput. Electron. Agric. 2022, 200, 107266. [Google Scholar] [CrossRef]
- FAO. The Future of Food and Agriculture: Alternative Pathways to 2050; Food and Agriculture Organization of the United Nations: Rome, Italy, 2018; p. 228. [Google Scholar]
- Cakic, S.; Popovic, T.; Krco, S.; Nedic, D.; Babic, D.; Jovovic, I. Developing Edge AI Computer Vision for Smart Poultry Farms Using Deep Learning and HPC. Sensors 2023, 23, 3002. [Google Scholar] [CrossRef] [PubMed]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems, Cambridge, MA, USA, 8–13 December 2014; Volume 1, pp. 91–99. [Google Scholar] [CrossRef]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar] [CrossRef]
- Rabbani, A.; Hussain, M. YOLOv1 to YOLOv10: A Comprehensive Review of YOLO Variants and Their Application in the Agricultural Domain. arXiv 2024, arXiv:2406.10139. [Google Scholar] [CrossRef]
- Ezzeddini, L.; Ktari, J.; Frikha, T.; Alsharabi, N.; Alayba, A.; Alzahrani, A.; Jadi, A.; Alkholidi, A.; Hamam, H. Analysis of the Performance of Faster R-CNN and YOLOv8 in Detecting Fishing Vessels and Fishes in Real Time. PeerJ Comput. Sci. 2024, 10, e2033. [Google Scholar] [CrossRef]
- Sapkota, R.; Ahmed, D.; Karkee, M. Comparing YOLOv8 and Mask R-CNN for Instance Segmentation in Complex Orchard Environments. Artif. Intell. Agric. 2024, 13, 84–99. [Google Scholar] [CrossRef]
- Camacho, J.; Morocho-Cayamcela, M.E. Mask R-CNN and YOLOv8 Comparison to Perform Tomato Maturity Recognition. In Communications in Computer and Information Science; Springer: Berlin/Heidelberg, Germany, 2023; pp. 1–12. [Google Scholar] [CrossRef]
- Alif, M.; Hussain, M. YOLOv12: A Breakdown of the Key Architectural Features. arXiv 2025, arXiv:2502.14740. [Google Scholar] [CrossRef]
- Geetha, A.S.; Hussain, M. A Comparative Analysis of YOLOv5, YOLOv8, and YOLOv10 in Kitchen Safety. arXiv 2024, arXiv:2407.20872. [Google Scholar]
- Ultralytics. YOLOv11 Overview. Available online: https://docs.ultralytics.com/models/yolo11/#overview (accessed on 27 February 2025).
- Liu, J.; Zhou, Y.; Li, Y.; Li, Y.; Hong, S.; Li, Q.; Liu, X.; Lu, M.; Wang, X. Exploring the Integration of Digital Twin and Generative AI in Agriculture. In Proceedings of the 15th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Hangzhou, China, 26–27 August 2023; pp. 223–228. [Google Scholar] [CrossRef]
- Klair, Y.S.; Agrawal, K.; Kumar, A. Impact of Generative AI in Diagnosing Diseases in Agriculture. In Proceedings of the 2nd International Conference on Disruptive Technologies (ICDT), Greater Noida, India, 15–16 March 2024. [Google Scholar] [CrossRef]
- Narvekar, C.; Rao, M. Productivity Improvement with Generative AI Framework for Data Enrichment in Agriculture. Int. J. Recent Innov. Trends Comput. Commun. 2023, 11, 679–684. [Google Scholar] [CrossRef]
- Majumder, S.; Khandelwal, Y.; Sornalakshmi, K. Computer Vision and Generative AI for Yield Prediction in Digital Agriculture. In Proceedings of the 2nd International Conference on Networking and Communications (ICNWC), Chennai, India, 2–4 April 2024; pp. 1–6. [Google Scholar] [CrossRef]
- Yang, Z. The Ultimate FLUX.1 Hands-On Guide. Available online: https://medium.com/@researchgraph/the-ultimate-flux-1-hands-on-guide-067fc053fedd (accessed on 25 February 2025).
- Lei, J.; Zhang, R.; Hu, X.; Lin, W.; Li, Z.; Sun, W.; Du, R.; Zhuo, L.; Li, Z.; Li, X.; et al. IMAGINE-E: Image Generation Intelligence Evaluation of State-of-the-art Text-to-Image Models. arXiv 2025, arXiv:2501.13920. [Google Scholar] [CrossRef]
- Oñoro-Rubio, D.; López-Sastre, R.J. Towards Perspective-Free Object Counting with Deep Learning. In Computer Vision—ECCV 2016; Springer: Cham, Switzerland, 2016; Volume 9911. [Google Scholar] [CrossRef]
- Laradji, I.H.; Rostamzadeh, N.; Pinheiro, P.O.; Vazquez, D.; Schmidt, M. Where Are the Blobs: Counting by Localization with Point Supervision. In Computer Vision—ECCV 2018; Springer: Cham, Switzerland, 2018; Volume 11206. [Google Scholar] [CrossRef]
- Tian, M.; Guo, H.; Chen, H.; Wang, Q.; Long, Q.; Ma, Y. Automated pig counting using deep learning. Comput. Electron. Agric. 2019, 163, 104840. [Google Scholar] [CrossRef]
- Xu, B.; Wang, W.; Falzon, G.; Kwan, P.; Guo, L.; Chen, G.; Tait, A.; Schneider, D. Automated cattle counting using mask R-CNN in Quadcopter Vision System. Comput. Electron. Agric. 2020, 171, 105300. [Google Scholar] [CrossRef]
- Cao, L.; Xiao, Z.; Liao, X.; Yao, Y.; Wu, K.; Mu, J.; Li, J.; Pu, H. Automated Chicken Counting in Surveillance Camera Environments Based on the Point Supervision Algorithm: LC-DenseFCN. Agriculture 2021, 11, 493. [Google Scholar] [CrossRef]
- Cowton, J.; Kyriazakis, I.; Bacardit, J. Automated Individual Pig Localisation, Tracking and Behaviour Metric Extraction Using Deep Learning. IEEE Access 2019, 7, 108049–108060. [Google Scholar] [CrossRef]
- Lin, C.-Y.; Hsieh, K.-W.; Tsai, Y.-C.; Kuo, Y.-F. Automatic Monitoring of Chicken Movement and Drinking Time Using Convolutional Neural Networks. Trans. ASABE 2020, 63, 2029–2038. [Google Scholar] [CrossRef]
- Neethirajan, S. ChickTrack—A quantitative tracking tool for measuring chicken activity. Measurement 2022, 191, 110819. [Google Scholar] [CrossRef]
- Liu, H.-W.; Chen, C.-H.; Tsai, Y.-C.; Hsieh, K.-W.; Lin, H.-T. Identifying Images of Dead Chickens with a Chicken Removal System Integrated with a Deep Learning Algorithm. Sensors 2021, 21, 3579. [Google Scholar] [CrossRef]
- Elmessery, W.M.; Gutiérrez, J.; Abd El-Wahhab, G.G.; Elkhaiat, I.A.; El-Soaly, I.S.; Alhag, S.K.; Al-Shuraym, L.A.; Akela, M.A.; Moghanm, F.S.; Abdelshafie, M.F. YOLO-Based Model for Automatic Detection of Broiler Pathological Phenomena through Visual and Thermal Images in Intensive Poultry Houses. Agriculture 2023, 13, 1527. [Google Scholar] [CrossRef]
- Li, G.; Hui, X.; Lin, F.; Zhao, Y. Developing and Evaluating Poultry Preening Behavior Detectors via Mask R-CNN. Animals 2020, 10, 1762. [Google Scholar] [CrossRef]
- Yang, X.; Bist, R.; Paneru, B.; Chai, L. Monitoring Activity Index and Behaviors of Cage-Free Hens with Advanced Deep Learning Technologies. Poult. Sci. 2024, 103, 104193. [Google Scholar] [CrossRef]
- Ma, X.; Lu, X.; Huang, Y.; Yang, X.; Xu, Z.; Mo, G.; Ren, Y.; Li, L. An Advanced Chicken Face Detection Network Based on GAN and MAE. Animals 2022, 12, 3055. [Google Scholar] [CrossRef]
- Ye, C.; Yousaf, K.; Qi, C.; Liu, C.; Chen, K. Broiler Stunned State Detection Based on an Improved Fast R-CNN Algorithm. Poult. Sci. 2020, 99, 637–646. [Google Scholar] [CrossRef] [PubMed]
- Muhammad, A.; Salman, Z.; Lee, K.; Han, D. Harnessing the Power of Diffusion Models for Plant Disease Image Augmentation. Front. Plant Sci. 2023, 14, 1280496. [Google Scholar] [CrossRef] [PubMed]
- Lu, Y.; Chen, D.; Olaniyi, E.; Huang, Y. Generative Adversarial Networks (GANs) for Image Augmentation in Agriculture: A Systematic Review. Comput. Electron. Agric. 2022, 200, 107208. [Google Scholar] [CrossRef]
- Goyal, M.; Mahmoud, Q.H. A Systematic Review of Synthetic Data Generation Techniques Using Generative AI. Electronics 2024, 13, 3509. [Google Scholar] [CrossRef]
- Klein, J.; Waller, R.; Pirk, S.; Pałubicki, W.; Tester, M.; Michels, D.L. Synthetic Data at Scale: A Development Model to Efficiently Leverage Machine Learning in Agriculture. Front. Plant Sci. 2024, 15, 1360113. [Google Scholar] [CrossRef]
- Nikolenko, S.I. Synthetic Data for Deep Learning in Computer Vision: A Survey. arXiv 2019, arXiv:1909.11512. [Google Scholar]
- Seth, P.; Bhandari, A.; Lakara, K. Analyzing Effects of Fake Training Data on the Performance of Deep Learning Systems. arXiv 2023, arXiv:2303.01268. [Google Scholar]
- HuggingFace. FLUX.1-dev. Available online: https://huggingface.co/black-forest-labs/FLUX.1-dev (accessed on 25 February 2025).
- agroNET. Digital Farming Management. Available online: https://digitalfarming.eu (accessed on 26 February 2025).
- Liu, S.; Zeng, Z.; Ren, T.; Li, F.; Zhang, H.; Jiang, Q.; Yang, J.; Li, C.; Yang, J.; Su, H.; et al. Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection. arXiv 2023, arXiv:2303.05499. [Google Scholar]
- Jocher, G.; Chaurasia, A.; Qiu, J. Ultralytics YOLO. Available online: https://github.com/ultralytics/ultralytics (accessed on 25 February 2025).
- Roboflow (Version 1.0) [Software]. Available online: https://roboflow.com (accessed on 25 February 2025).
- Replicate, Inc. Replicate Platform. Available online: https://replicate.com (accessed on 25 February 2025).
- Belekar, A. Fine-Tuning Flux.1 with Your Own Images: Top 3 Methods. Available online: https://blog.segmind.com/fine-tuning-flux-1-with-your-own-images-top-3-methods/ (accessed on 12 November 2024).
- Mullins, C.C.; Esau, T.J.; Zaman, Q.U.; Toombs, C.L.; Hennessy, P.J. Leveraging Zero-Shot Detection Mechanisms to Accelerate Image Annotation for Machine Learning in Wild Blueberry (Vaccinium angustifolium Ait.). Agronomy 2024, 14, 2830. [Google Scholar] [CrossRef]
- Ravi, N.; Gabeur, V.; Hu, Y.-T.; Hu, R.; Ryali, C.; Ma, T.; Khedr, H.; Rädle, R.; Rolland, C.; Gustafson, L.; et al. SAM 2: Segment Anything in Images and Videos. arXiv 2024, arXiv:2408.00714. [Google Scholar] [CrossRef]
- Roboflow Team. Segment Anything: Roboflow Image Segmentation. Available online: https://ai.meta.com/blog/segment-anything-roboflow-image-segmentation/ (accessed on 12 November 2024).
- FLUX, GitHub. Available online: https://github.com/black-forest-labs/flux (accessed on 15 March 2025).
- Yang, C.; Liu, C.; Deng, X.; Kim, D.; Mei, X.; Shen, X.; Chen, L.-C. 1.58-bit FLUX: Efficient Quantization of Text-to-Image Models. arXiv 2024, arXiv:2412.18653. [Google Scholar]
- Flux.1-Architecture-Diagram, GitHub. Available online: https://github.com/brayevalerien/Flux.1-Architecture-Diagram/tree/main (accessed on 27 February 2025).
- Replicate, Inc. Replicate Pricing. Available online: https://replicate.com/pricing (accessed on 27 February 2025).
- ShakkerLabs. FLUX.1 dev LoRA Antiblur. Available online: https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-AntiBlur (accessed on 12 February 2025).
- ShakkerLabs. FLUX.1 dev LoRA Add Details. Available online: https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-add-details (accessed on 12 February 2025).
- Wang, C.-Y.; Yeh, I.-H.; Liao, H.-Y.M. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. arXiv 2024, arXiv:2402.13616. [Google Scholar]
- Tishby, N.; Pereira, F.C.; Bialek, W. The Information Bottleneck Method. arXiv 2000, arXiv:physics/0004057. [Google Scholar]
- Yaseen, M. What Is YOLOv9: An In-Depth Exploration of the Internal Features of the Next-Generation Object Detector. arXiv 2024, arXiv:2409.07813. [Google Scholar] [CrossRef]
- Zheng, Z.; Wang, P.; Ren, D.; Liu, W.; Ye, R.; Hu, Q.; Zuo, W. Enhancing Geometric Factors in Model Learning and Inference for Object Detection and Instance Segmentation. IEEE Trans. Cybern. 2021, 52, 8574–8586. [Google Scholar] [CrossRef]
- Terven, J.R.; Cordova-Esparza, D.M.; Ramirez-Pedraza, A.; Chavez-Urbiola, E.A.; Romero-Gonzalez, J.A. Loss Functions and Metrics in Deep Learning. arXiv 2024, arXiv:2307.02694. [Google Scholar]
- Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar]
- Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized Intersection over Union: A Metric and a Loss for Bounding Box Regression. In Proceedings of the IEEE CVPR, Long Beach, CA, USA, 15–20 June 2019; pp. 658–666. [Google Scholar]
- Zheng, Z.; Wang, P.; Liu, W.; Li, J.; Ye, R.; Ren, D. Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–8 February 2020. [Google Scholar]
- He, J.; Erfani, S.M.; Ma, X.; Bailey, J.; Chi, Y.; Hua, X.-S. Alpha-IoU: A Family of Power Intersection over Union Losses for Bounding Box Regression. In Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS), Online, 6–14 December 2021. [Google Scholar]
- Zhang, Y.-F.; Ren, W.; Zhang, Z.; Jia, Z.; Wang, L.; Tan, T. Focal and Efficient IOU Loss for Accurate Bounding Box Regression. Neurocomputing 2022, 506, 146–157. [Google Scholar] [CrossRef]
- Huang, H.-W.; Yang, C.-Y.; Sun, J.; Kim, P.-K.; Kim, K.-J.; Lee, K.; Huang, C.-I.; Hwang, J.-N. Iterative Scale-Up ExpansionIoU and Deep Features Association for Multi-Object Tracking in Sports. arXiv 2023, arXiv:2306.13074. [Google Scholar]
- Gevorgyan, Z. SIoU Loss: More Powerful Learning for Bounding Box Regression. arXiv 2022, arXiv:2205.12740. [Google Scholar]
| Experiment ID | Real | Synthetic | Augmented | Total | mAP |
|---|---|---|---|---|---|
| 1 | 0 | 0 | 0 | 0 | 0.245 |
| 2 | 100 | 0 | 0 | 100 | 0.796 |
| 3 | 400 | 0 | 0 | 400 | 0.822 |
| 4 | 0 | 100 | 0 | 100 | 0.793 |
| 5 | 0 | 400 | 0 | 400 | 0.789 |
| 6 | 50 | 50 | 0 | 100 | 0.801 |
| 7 | 200 | 200 | 0 | 400 | 0.821 |
| 8 | 300 | 100 | 0 | 400 | 0.829 |
| 9 | 100 | 300 | 0 | 400 | 0.820 |
| 10 | 400 | 0 | 800 | 1200 | 0.820 |
| 11 | 0 | 400 | 800 | 1200 | 0.792 |
| 12 | 200 | 200 | 800 | 1200 | 0.814 |
| 13 | 300 | 100 | 800 | 1200 | 0.812 |
| 14 | 100 | 300 | 800 | 1200 | 0.808 |
| 15 | 400 | 0 | 800 | 1200 | 0.827 |
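The experiment table can be queried programmatically; a small sketch (rows transcribed from the table above) picks out the best-performing dataset composition:

```python
# (experiment_id, real, synthetic, augmented, mAP) transcribed from the
# results table above.
rows = [
    (1, 0, 0, 0, 0.245), (2, 100, 0, 0, 0.796), (3, 400, 0, 0, 0.822),
    (4, 0, 100, 0, 0.793), (5, 0, 400, 0, 0.789), (6, 50, 50, 0, 0.801),
    (7, 200, 200, 0, 0.821), (8, 300, 100, 0, 0.829), (9, 100, 300, 0, 0.820),
    (10, 400, 0, 800, 0.820), (11, 0, 400, 800, 0.792),
    (12, 200, 200, 800, 0.814), (13, 300, 100, 800, 0.812),
    (14, 100, 300, 800, 0.808), (15, 400, 0, 800, 0.827),
]
best = max(rows, key=lambda r: r[-1])
print(best[0], best[-1])  # experiment 8 (300 real + 100 synthetic): mAP 0.829
```

Note that the best split mixes real and synthetic images, edging out the 400-image real-only baseline (experiment 3, mAP 0.822).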
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Cakic, S.; Popovic, T.; Krco, S.; Jovovic, I.; Babic, D. Evaluating the FLUX.1 Synthetic Data on YOLOv9 for AI-Powered Poultry Farming. Appl. Sci. 2025, 15, 3663. https://doi.org/10.3390/app15073663