Artificial Intelligence and Computer Vision Applications in Food Science and Industry

A special issue of Foods (ISSN 2304-8158).

Deadline for manuscript submissions: 15 December 2026 | Viewed by 7661

Special Issue Editors


Guest Editor
Rambam Research Institute, Rambam Health Care Campus, HaAliya HaShniya St 8, Haifa 3109601, Israel
Interests: food quality analysis; Nuclear Magnetic Resonance (NMR); statistical analysis; renal health

Guest Editor
Department of Biotechnology Engineering, Ben Gurion University of the Negev, Beer Sheva, Israel
Interests: food composition; lipids; oxidation; low field NMR relaxation applications; supramolecular chemistry; food quality control; emulsion stability

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) and computer vision (CV) are emerging as powerful tools driving innovation and efficiency in the rapidly evolving landscape of food science and technology. AI and CV applications are transforming food production, processing, and consumption by automating quality control, enhancing food safety, and optimizing production workflows. Integrating AI and CV into the food industry enhances operational efficiency and plays a critical role in meeting the growing demand for sustainable, safe, and high-quality food products. This Special Issue of Foods aims to bring together cutting-edge research, review articles, and case studies that explore the diverse applications of AI and CV in food science and industry. The objective is to provide a comprehensive overview of current advancements, challenges, and future directions in this rapidly expanding field. We invite contributions highlighting novel AI-driven approaches, the development and application of CV technologies, and interdisciplinary studies bridging the gap between food science and computational methodologies.

Topics of interest for this Special Issue include, but are not limited to:

  • AI-Driven Quality Control: Automated defect detection, grading, and sorting of food products.
  • Computer Vision for Food Safety: Applications of CV in microbial detection, contamination monitoring, and shelf-life prediction.
  • Optimization of Food Processing: AI models for streamlining workflows, improving efficiency, and reducing food waste.
  • Sustainable Food Production: Leveraging AI and CV for sustainable agriculture, innovative packaging, and resource management.
  • Predictive Analytics in Food Supply Chains: AI systems that enhance supply chain logistics, demand forecasting, and inventory control decision-making.
  • Food Recognition and Classification: CV-based systems for automatic recognition, sorting, and classification of food items in production environments.
  • AI and CV in Consumer Experience: Personalized nutrition, food product recommendations, and visual assessment of food quality via AI and CV.
  • Robotics and Automation in Food Production: Integration of AI-driven robotics in food handling, packaging, and distribution.
  • Data Fusion and Interdisciplinary Approaches: Combining AI and CV with sensory data, spectroscopy, and other technologies to enhance food characterization and processing.

We encourage submissions presenting novel methodologies and practical case studies, as well as review articles that synthesize current knowledge and propose new frameworks or directions for future research. This Special Issue is intended as a comprehensive resource and practical guide for scientists, researchers, industry professionals, and policymakers, showcasing the potential of AI and computer vision as vital tools in modern food science and technology, in practice as well as in theory.

Prof. Dr. Cristian Randieri
Dr. Salvatore Campisi-Pinto
Prof. Dr. Zeev Wiesman
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Foods is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2900 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • food analysis
  • computer vision
  • image analysis
  • machine learning
  • food processing
  • quality inspection
  • robotics
  • automation
  • food safety

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research


15 pages, 2005 KB  
Article
Image-Based Machine Learning for Predicting Acceptability Limits in Frozen Pizza Shelf Life
by Marika Valentino, Giulia Varutti, Sylvio Barbon Júnior and Maria Cristina Nicoli
Foods 2026, 15(8), 1348; https://doi.org/10.3390/foods15081348 - 13 Apr 2026
Viewed by 284
Abstract
Shelf life of frozen foods is intrinsically linked to consumer sensory acceptability. However, quantifying the synergistic impact of extended storage and variable thermal cycles on perception remains challenging. This study proposes a non-destructive, image-based approach for estimating the acceptability of frozen pizza using a machine learning model, identifying tomato sauce degradation as an indicator of product quality decay. Qualitative consumer feedback (90%) identified tomato sauce saturation as the primary driver of visual rejection. An image processing pipeline was developed to isolate the sauce region of each sample for color extraction (saturation in the HSV color space). A second-degree polynomial regression model was used to describe the saturation trend over time, and, in parallel, a logistic regression classifier was trained to predict binary consumer acceptability based on both saturation and storage duration. The models were evaluated using frozen pizzas stored at −12 and −18 °C for up to 200 days. The regression model achieved an R2 of 0.68 and an RMSE of 12.8, while the classifier attained an accuracy of 88.2% and an AUC of 0.93. The resulting framework enables early, non-invasive estimation of product acceptability and shows strong potential for practical application in shelf life studies within the frozen food industry.
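The pipeline this abstract describes can be sketched in outline: extract the mean HSV saturation of the segmented sauce region, then feed saturation and storage time to a logistic acceptability model. A minimal stdlib sketch; the coefficient values below are illustrative placeholders, not the fitted parameters from the study.

```python
import colorsys
import math

def mean_sauce_saturation(pixels):
    """Mean HSV saturation of the segmented sauce region.
    `pixels` is an iterable of (R, G, B) tuples in 0-255."""
    sats = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[1]
            for r, g, b in pixels]
    return sum(sats) / len(sats)

def p_acceptable(saturation, days, b0=6.0, b_sat=-8.0, b_days=-0.01):
    """Logistic model P(acceptable | saturation, storage days).
    Coefficients are illustrative, not the study's fitted values;
    a negative saturation coefficient encodes that rising sauce
    saturation drives visual rejection."""
    z = b0 + b_sat * saturation + b_days * days
    return 1.0 / (1.0 + math.exp(-z))
```

In a real application the coefficients would come from fitting the classifier to labeled consumer-acceptability data, as the study does.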

27 pages, 31298 KB  
Article
Automated Detection of Quality Deviations in Poultry Processing Using Step-Specific YOLOv12 Models
by Daniel Einsiedel, Marco Vita, Florian Kaltenecker, Bertus Dunnewind, Johan Meulendijks and Christian Krupitzer
Foods 2026, 15(6), 1019; https://doi.org/10.3390/foods15061019 - 13 Mar 2026
Viewed by 464
Abstract
Artificial intelligence (AI) and computer vision (CV) offer promising avenues for automated quality control in food manufacturing, yet many prior works in that sector focused on agricultural primary production tasks. This study evaluates object detection for in-line quality monitoring on a real production line for ready-to-eat chicken-type products. Overhead cameras captured images at four processing steps: forming, coating, frying, and cooking. For each step, we labeled 2000 images containing multiple products with multiple classes of quality deviations. Separate YOLOv12x models (default and hyperparameter-tuned) were trained per step and evaluated using mAP50–95, F1-curves, and confusion matrices. Step-specific models, i.e., models applicable solely for a specific processing step, achieved similar peak mAP50–95 (0.50–0.60), and hyperparameter tuning did not yield any major gains despite high computational cost. Performance was strongly tied to class frequency: common classes achieved high F1-Scores, whereas rare classes were often misclassified. To mitigate imbalance and improve robustness, we trained a single model on a combined dataset spanning all steps, which attained a higher peak mAP50–95 of 0.7331 ± 0.0040 and produced more balanced F1-curves, albeit with some loss of step-specific strengths, such as detection of certain deviations specific to that step. The results indicate that out-of-the-box detectors can add practical value to industrial CV-enhanced quality control in food processing, and that further improvements will primarily come from targeted data collection for minority classes, instance-centric datasets, higher-resolution or multi-scale training, and methods that address class imbalance.
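The mAP50–95 metric used above averages precision over intersection-over-union (IoU) cutoffs from 0.50 to 0.95 in steps of 0.05. A minimal sketch of the underlying IoU matching, independent of any YOLO implementation (function names are illustrative):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

def matches_at(threshold, pred_box, gt_box):
    """Whether a prediction counts as a true positive at a given IoU cutoff,
    as in the mAP50-95 sweep (0.50, 0.55, ..., 0.95)."""
    return iou(pred_box, gt_box) >= threshold
```

Averaging precision over these thresholds is what makes mAP50–95 stricter than mAP@0.5: a loosely localized box may match at 0.50 but fail at 0.90.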

24 pages, 9875 KB  
Article
Corn Kernel Segmentation and Damage Detection Using a Hybrid Watershed–Convex Hull Approach
by Yi Shen, Wensheng Wang, Xuanyu Luo, Feiyu Zou and Zhen Yin
Foods 2026, 15(2), 404; https://doi.org/10.3390/foods15020404 - 22 Jan 2026
Viewed by 457
Abstract
Accurate segmentation of adhered (sticky) corn kernels and reliable damage detection are critical for quality control in corn processing and kernel selection. Traditional watershed algorithms suffer from over-segmentation, whereas deep learning methods require large annotated datasets that are impractical in most industrial settings. This study proposes W&C-SVM, a hybrid computer vision method that integrates an improved watershed algorithm (Sobel gradient and Euclidean distance transform), convex hull defect detection and an SVM classifier trained on only 50 images. On an independent test set, W&C-SVM achieved the highest damage detection accuracy of 94.3%, significantly outperforming traditional watershed SVM (TW + SVM) (74.6%), GrabCut (84.5%) and U-Net trained on the same 50 images (85.7%). The method effectively separates severely adhered kernels and identifies mechanical damage, supporting the selection of intact kernels for quality control. W&C-SVM offers a low-cost, small-sample solution ideally suited for small-to-medium food enterprises and breeding laboratories.
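The watershed stage described above hinges on the Euclidean distance transform: interior peaks of the distance map seed one marker per kernel, so touching kernels split at the valley between peaks. A naive stdlib sketch for illustration; production code would use an optimized two-pass transform rather than this brute-force scan.

```python
import math

def distance_transform(mask):
    """Naive Euclidean distance transform: for each foreground cell (1),
    the distance to the nearest background cell (0). Brute force, so it
    is fine for small grids but far too slow for real images."""
    h, w = len(mask), len(mask[0])
    background = [(i, j) for i in range(h) for j in range(w)
                  if not mask[i][j]]
    dist = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                dist[i][j] = min(math.hypot(i - bi, j - bj)
                                 for bi, bj in background)
    return dist
```

Local maxima of this map mark kernel centers; flooding the inverted map from those markers is the watershed separation step.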

25 pages, 33596 KB  
Article
Fig-YOLO: An Improved YOLOv11-Based Fig Detection Algorithm for Complex Environments
by Zhihao Liang, Ruoyu Di, Fei Tan, Jinbang Zhang, Weiping Yan, Li Zhang, Wei Xu, Pan Gao and Zhewen Hao
Foods 2025, 14(23), 4154; https://doi.org/10.3390/foods14234154 - 3 Dec 2025
Cited by 4 | Viewed by 1115
Abstract
Accurate fig detection in complex environments is a significant challenge. Small targets, occlusion, and similar backgrounds are considered the main obstacles in intelligent harvesting. To address these challenges, this study proposes Fig-YOLO, an improved YOLOv11n-based detection algorithm with multiple targeted architectural innovations. First, a Spatial–Frequency Selective Convolution (SFSConv) module is introduced into the backbone to replace conventional convolution, enabling joint modeling of spatial structures and frequency-domain texture features for more effective discrimination of figs from visually similar backgrounds. Second, an enhanced bi-branch attention mechanism (EBAM) is incorporated at the network’s terminal stage to strengthen the representation of key regions and improve robustness under severe occlusion. Third, a multi-branch dynamic sampling convolution (MFCV) module replaces the original C3k2 structure in the feature fusion stage, capturing figs of varying sizes through dynamic sampling and residual deep-feature fusion. Experimental results show that Fig-YOLO achieves precision, recall, and mAP@0.5 of 89.2%, 78.4%, and 87.3%, respectively, substantially outperforming the baseline YOLOv11n. Further evaluation confirms that the model maintains stable performance across varying fruit sizes, occlusion levels, lighting conditions, and data sources. Fig-YOLO’s innovations offer solid support for intelligent orchard monitoring and harvesting.
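The precision and recall figures quoted above reduce to simple counts over matched and unmatched detections. A minimal helper, shown only to make the reported numbers concrete (the function name is illustrative):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP), recall = TP / (TP + FN).
    Returns 0.0 for a component whose denominator is empty."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

In detection benchmarks, a true positive is a prediction whose IoU with an unmatched ground-truth box exceeds the cutoff (0.5 for mAP@0.5); leftover predictions are false positives and unmatched ground truths are false negatives.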

17 pages, 1641 KB  
Article
A Coarse-to-Fine Feature Aggregation Neural Network with a Boundary-Aware Module for Accurate Food Recognition
by Shuang Liang and Yu Gu
Foods 2025, 14(3), 383; https://doi.org/10.3390/foods14030383 - 24 Jan 2025
Cited by 4 | Viewed by 2518
Abstract
Food recognition from images is crucial for dietary management, enabling applications like automated meal tracking and personalized nutrition planning. However, challenges such as background noise disrupting intra-class consistency, inter-class distinction, and domain shifts due to variations in capture angles, lighting, and image resolution persist. This study proposes a multi-stage convolutional neural network-based framework incorporating a boundary-aware module (BAM) for boundary region perception, deformable ROI pooling (DRP) for spatial feature refinement, a transformer encoder for capturing global contextual relationships, and a NetRVLAD module for robust feature aggregation. The framework achieved state-of-the-art performance on three benchmark datasets, with Top-1 accuracies of 99.80% on the Food-5k dataset, 99.17% on the Food-101 dataset, and 85.87% on the Food-2k dataset, significantly outperforming existing methods. This framework holds promise as a foundational tool for intelligent dietary management, offering robust and accurate solutions for real-world applications.
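Top-1 accuracy, the metric reported above, simply counts how often the highest-scoring class equals the ground-truth label. A minimal sketch (the function name and score layout are illustrative):

```python
def top1_accuracy(scores, labels):
    """Fraction of samples whose argmax class index equals the label.
    `scores` is a list of per-class score lists (one list per sample);
    `labels` is a list of integer class indices."""
    correct = sum(
        1 for per_class, y in zip(scores, labels)
        if max(range(len(per_class)), key=per_class.__getitem__) == y
    )
    return correct / len(labels)
```

Top-5 accuracy, often reported alongside it, instead checks whether the label appears among the five highest-scoring classes.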

Review


18 pages, 297 KB  
Review
Integrating Worker and Food Safety in Poultry Processing Through Human-Robot Collaboration: A Comprehensive Review
by Corliss A. O’Bryan, Kawsheha Muraleetharan, Navam S. Hettiarachchy and Philip G. Crandall
Foods 2026, 15(2), 294; https://doi.org/10.3390/foods15020294 - 14 Jan 2026
Viewed by 782
Abstract
This comprehensive review synthesizes current advances and persistent challenges in integrating worker safety and food safety through human-robot collaboration (HRC) in poultry processing. Rapid industry expansion and rising consumer demand for ready-to-eat poultry products have heightened occupational risks and foodborne contamination concerns, necessitating holistic safety strategies. The review examines ergonomic, microbiological, and regulatory risks specific to poultry lines, and maps how state-of-the-art collaborative robots (“cobots”)—including power and force-limiting arms, adaptive soft grippers, machine vision, and biosensor integration—can support safer, more hygienic, and more productive operations. The authors analyze technical scientific literature (2018–2025) and real-world case studies, highlighting how automation (e.g., vision-guided deboning and intelligent sanitation) can reduce repetitive strain injuries, lower contamination rates, and improve production consistency. The review also addresses the psychological and sociocultural dimensions that affect workforce acceptance, as well as economic and regulatory barriers to adoption, particularly in small- and mid-sized plants. Key research gaps include gripper adaptability, validation of food safety outcomes in mixed human-cobot workflows, and the need for deeper workforce retraining and feedback mechanisms. The authors propose a multidisciplinary roadmap: harmonizing ergonomic, safety, and hygiene standards; developing adaptive food-grade robotic end-effectors; fostering explainable AI for process transparency; and advancing workforce education programs. Ultimately, successful HRC deployment in poultry processing will depend on continuous collaboration among industry, researchers, and regulatory authorities to ensure both safety and competitiveness in a rapidly evolving global food system.