Article

An Intelligent Detection System for Wheat Appearance Quality

1 School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
2 Technical Center for Animal, Plant and Food Inspection and Quarantine of Shanghai Customs, Shanghai 200002, China
3 Vision Science and Rehabilitation Engineering Laboratory, Shanghai Jiao Tong University, Shanghai 200240, China
* Authors to whom correspondence should be addressed.
Agronomy 2024, 14(5), 1057; https://doi.org/10.3390/agronomy14051057
Submission received: 31 March 2024 / Revised: 26 April 2024 / Accepted: 15 May 2024 / Published: 16 May 2024
(This article belongs to the Special Issue In-Field Detection and Monitoring Technology in Precision Agriculture)

Abstract: In commercial trade, the appearance quality of wheat is a crucial metric for assessing its value and determining its grade. Traditionally, evaluating wheat appearance quality is a manual process conducted by inspectors, which is time-consuming, laborious, and error-prone. In this research, we developed an intelligent detection system for wheat appearance quality, leveraging state-of-the-art neural network technology for the efficient and standardized assessment of wheat appearance quality. The system integrates high-performance hardware components and sophisticated software solutions; central to its functionality is a detection model built upon multi-grained convolutional neural networks. This setup allows for the swift and precise evaluation and categorization of wheat quality. The system achieved an overall recognition accuracy of 99.45% for wheat grain categories, with a recognition efficiency approximately five times faster than manual recognition. It serves as a valuable tool for assisting inspectors, offering technical support for customs quarantine, grain reserves, and food safety.

1. Introduction

With the development of the economy and the improvement of living standards, the global supply of crops is becoming increasingly tight due to rising demand [1,2]. The international trade of crops through the global market not only helps balance the market demand of crop-deficient countries but also raises the income of farmers in crop-exporting countries, stimulating agricultural production and rural economic growth in major crop-producing nations [3]. Assessing crop quality before import and export is crucial to ensuring grain security; it also serves as the basis for crop grading and is an essential safeguard for food safety. In particular, appearance assessment of granular crops (such as wheat and soybeans) is one of the main methods for evaluating crop quality. Typically, a crop lot is considered high quality if the rate of kernels with unsatisfactory appearance is below 2%, while a rate exceeding 12% makes it unsuitable for human consumption, fit only for animal feed or ethanol processing [4,5]. In practice, the visual quality assessment of granular crops often requires inspectors to conduct manual sampling and sensory inspection of each kernel [6]. This method is labor-intensive, time-consuming, and cumbersome, falling far short of the actual needs of crop reserves, production processing, and expedited customs clearance. Moreover, subjective differences in inspection standards among inspectors and increased error rates due to fatigue make it difficult to achieve uniform standards across countries and laboratories [7]. Therefore, developing an efficient and standardized intelligent system for the visual quality assessment of crops is highly necessary.
With the advancement of computer vision in recent years, researchers are exploring the use of image processing techniques to replace manual detection. The objective is to enhance detection accuracy and efficiency while minimizing labor costs. Researchers employ conventional image processing methods to extract and refine features from gathered crop images (taking wheat as an example, including broken grains, insect grains, moldy grains, blemished grains, etc., as depicted in Figure 1), and subsequently analyze and process them to derive the classification outcomes of imperfect grains [8,9,10]. However, due to subtle visual feature discrepancies among different categories of imperfect grain images, such as blemished grains and black-tipped grains, as well as potentially significant visual feature differences within the same category, like broken grains, traditional image processing methods often encounter challenges in achieving satisfactory results regarding accuracy, repeatability, and generalization [11]. The robust feature learning capability of deep learning effectively addresses this issue [12,13,14]. Through extensive data training, the model grasps subtle feature distinctions and intricate relationships within the data, enabling better generalization to new, previously unseen data, and yielding more precise predictions [15,16,17].
In this study, we developed an intelligent wheat appearance quality detection system that integrates advanced hardware equipment and software technologies to ensure efficient and precise detection results. On the hardware side, the system was equipped with high-performance industrial computers, wireless touch displays, high-resolution cameras, and other devices, ensuring the stability of system operations. Additionally, we designed a specialized high-throughput sampling board for precise and comprehensive information collection of wheat samples. On the software side, we utilized advanced deep learning algorithms and AI technologies to construct a wheat appearance quality detection model based on multi-grained convolutional neural networks. Through extensive data training and analysis, the system can achieve rapid and accurate evaluation and classification of wheat quality. Furthermore, we developed an intuitive and user-friendly interface to assist operators in quickly picking up imperfect grains. Overall, our crop appearance quality detection system integrates advanced hardware equipment, powerful software technologies, and a user-friendly interface, providing users with a convenient and efficient experience. This system serves as an effective tool to assist inspectors, facilitating efficient and fast sorting operations, and providing technical support for customs quarantine, grain storage, and food safety initiatives.
The main contributions of this study are as follows:
(1) We designed a high-throughput wheat sampling module to ensure comprehensive image data acquisition, installing dual cameras in the system to capture images of both the top and bottom sides.
(2) We harnessed AI technology to construct a versatile fine-grained classification network model. To achieve fine-grained detection of wheat appearance quality, we independently curated a high-quality dataset of nearly 20,000 wheat images, then trained the network using transfer learning, reaching a 99.45% recognition accuracy (a minimal fine-tuning sketch follows this list).
(3) To assist in the manual sorting of imperfect grains, we created an interactive sorting interface that helps users swiftly and accurately locate and sort imperfect grains, facilitating subsequent operations such as weighing and statistical analysis.
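As an illustration of the transfer-learning step in contribution (2), the sketch below fine-tunes an ImageNet-pretrained ResNet-50 on the eight wheat categories. It assumes a recent PyTorch/torchvision; the dataset path, preprocessing, and hyperparameters are illustrative assumptions, not the authors' exact training configuration.

```python
# Minimal transfer-learning sketch (illustrative, not the authors' exact code):
# fine-tune an ImageNet-pretrained ResNet-50 on 8 wheat-grain classes.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 8  # normal, broken, sprouted, moldy, insect, blemished, black-tipped, red enzyme

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Hypothetical directory layout: wheat_dataset/train/<category>/<image>.png
train_set = datasets.ImageFolder("wheat_dataset/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the 1000-way ImageNet head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown; repeat until convergence
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```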

2. Materials and Methods

2.1. System Hardware Design

The overall structural design of the system, as depicted in Figure 2, primarily comprises two high-resolution cameras, four pairs of light sources (eight white light sources; model: MV-LLDS-192-28; power: 7.2 W (×4) and 9.0 W (×4); color temperature: 6000–7000 K), a high-throughput sampling board and shelf, an external industrial computer, a wireless electronic touchscreen, and a power supply module.

2.1.1. Comprehensive Image Data Capture

  • Image Acquisition Module
To minimize the impact of external light on the captured images, a sealed light-shielding enclosure made of metal was devised for this study, featuring only one inlet and outlet for sample insertion and retrieval (Figure 2A). Within this enclosure, a movable loading platform was installed, capable of movement along the X and Y axes to accommodate samples. The platform is driven by two stepper motors that control two pairs of sliders, ensuring precise sample positioning. Two 20-megapixel high-resolution cameras (Nikon Corporation, Tokyo, Japan) are positioned 18 cm above and below the loading platform to capture images of wheat (or other crops). Additionally, four long strip light sources surround the shooting plane approximately 10 cm above and below it, powered by the power supply module located beneath the enclosure.
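For illustration, the capture sequence might look like the sketch below. It assumes the stepper controller accepts plain-text motion commands over a serial link (a hypothetical protocol; the actual firmware interface is not described here) and that both cameras are exposed as OpenCV video devices.

```python
# Simplified positioning-and-capture sketch (protocol and device indices assumed).
import cv2
import serial  # pyserial

def capture_both_sides(x_mm, y_mm):
    # Position the loading platform; "MOVE x y" is an illustrative command format.
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as motor:
        motor.write(f"MOVE {x_mm} {y_mm}\n".encode())
        motor.readline()  # wait for the controller's acknowledgement

    frames = []
    for cam_index in (0, 1):  # 0 = top camera, 1 = bottom camera (assumed indices)
        cam = cv2.VideoCapture(cam_index)
        ok, frame = cam.read()
        cam.release()
        if not ok:
            raise RuntimeError(f"Camera {cam_index} failed to capture")
        frames.append(frame)
    return frames  # [top_image, bottom_image]
```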
  • Design of the High-Throughput Sampling Board
The high-throughput sampling board developed for our research is a device designed for arranging and placing granular crops. It features a composite double-layer structure with sieve holes, as depicted in Figure 3A. The outer layer is a transparent square box made of acrylic material, which enables the bottom camera to capture images of the back side. Inside, there is a square sieve plate made of nylon material with hollows, securely fixed at the bottom of the sieve box. Nylon material was chosen due to its resistance to discoloration from exposure to ultraviolet light, minimizing its impact on the captured crop images. The size and shape of the bottom sieve holes were customized based on the size and shape of the crop grains. For instance, using wheat as an example, we selected olive-shaped sieve holes with a diameter that allowed only one normal-sized grain to fit into each hole. Furthermore, the sieve holes were designed with double-sided chamfers, with varying chamfer sizes (as shown in Figure 3B). Holes farther from the camera have larger chamfers, while those closer to the camera have smaller chamfers. Holes directly below the camera have no chamfers, and the chamfer angles are consistent on both sides. This design effectively addresses edge obstruction issues, enabling the camera to capture more comprehensive double-sided crop image information.
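The chamfer rule follows directly from the camera geometry: to keep a hole's wall from occluding the grain, the chamfer angle must at least match the camera's viewing angle at that hole, which grows with lateral distance from the optical axis. A short worked example (the 18 cm camera height is from the image acquisition module above; the hole offsets are illustrative):

```python
# Worked example of the distance-dependent chamfer rule.
import math

CAMERA_HEIGHT_CM = 18.0  # camera-to-platform distance from Section 2.1.1

def required_chamfer_deg(offset_cm):
    """Viewing angle from vertical for a hole offset_cm from the optical axis."""
    return math.degrees(math.atan2(offset_cm, CAMERA_HEIGHT_CM))

for offset in (0.0, 2.0, 4.0, 6.0, 8.0):
    print(f"offset {offset:3.1f} cm -> chamfer >= {required_chamfer_deg(offset):4.1f} deg")
# offset 0.0 cm -> chamfer >=  0.0 deg  (hole directly below the camera: no chamfer)
# offset 8.0 cm -> chamfer >= 24.0 deg  (farther holes need larger chamfers)
```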
Compared to existing technologies [18,19], our device features a composite double-layer structure and a unidirectional double-sided chamfered sieve hole design, which facilitates the capture of additional image information by the cameras. Additionally, this design organizes crop grains in a regular pattern, simplifying image segmentation and the pairing of front and back sides, thereby improving the crop detection efficiency.

2.1.2. Vision-Guided Human–Machine Interactive Sorting

The vision-guided human–machine interactive sorting module features a wireless touch screen display device that provides the real-time sorting status for crops and highlights the position of grains to be picked up through flashing, as depicted in Figure 4. On the left side of the display screen, details about the grains slated for picking up are presented. Clicking on a specific type of grain triggers a color change from green to red in the corresponding box, accompanied by flashing. Simultaneously, the top of the screen displays “Please sort ** grains”, with images of the grains to be sorted shown on the left side. The grid area on the right side corresponds to the sample box, with the flashing area indicating the grains to be sorted. Inspectors utilize tweezers or a vacuum pickup pen to extract grains from the flashing positions and deposit them into the corresponding category containers. They repeat this process by clicking on the next button until all imperfect grains have been sorted out.
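The flashing-grid interaction can be illustrated with a minimal sketch. The production interface is more elaborate; the grid size, colors, and 500 ms blink interval below are assumptions.

```python
# Minimal flashing-grid sketch with tkinter (grid size and timing assumed).
import tkinter as tk

ROWS, COLS = 10, 10
targets = {(2, 3), (7, 8)}  # board positions of imperfect grains to pick up

root = tk.Tk()
root.title("Please sort imperfect grains")
cells = {}
for r in range(ROWS):
    for c in range(COLS):
        cell = tk.Label(root, width=4, height=2, bg="green", relief="ridge")
        cell.grid(row=r, column=c)
        cells[(r, c)] = cell

def blink(on=True):
    # Toggle the target cells between red and green every 500 ms.
    for pos in targets:
        cells[pos].config(bg="red" if on else "green")
    root.after(500, blink, not on)

blink()
root.mainloop()
```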
It is worth mentioning that in our equipment, both the assistance sorting module and the industrial computer are integrated into the entire system (Figure 2). The industrial computer is positioned at the back, while the assistance sorting touchscreen is positioned above the system. This integration allows for seamless operation and facilitates the work of inspectors. Simultaneously considering ergonomic factors, we designed a specific angle (approximately 25°) for placing the display touch screen, enhancing the comfort of quality inspectors when using the device.

2.2. Dataset Annotation of Wheat Grains

The dataset annotation process in this study comprised several steps: collection of grain seeds, image acquisition, image preprocessing, and sample labeling. The grain samples were provided by the Shanghai Customs Technical Center in China and covered 8 categories: normal grains, broken grains, sprouted grains, moldy grains, insect grains, blemished grains, black-tipped grains, and red enzyme grains, as depicted in Figure 1. Each sample was identified and confirmed by technical experts from customs, and the categories can be distinctly distinguished based on the surface characteristics of the grains. After collecting the 8 categories of grain samples, we proceeded to the image acquisition phase, capturing separate high-resolution images for each category using the image acquisition module described in Section 2.1.1. Subsequently, software built on Python 3.7 processed each pair of images, extracting the individual samples and matching their front and back views to create the sample set; empty positions on the sampling board were automatically excluded. Finally, the paired wheat images were labeled using one-hot encoding [20] to establish a database with truth labels, as shown in Figure 5. This labeling method is simple, intuitive, and easy to implement [21,22]. Following data labeling, we invited two technical experts to validate the annotated images, making corrections and deletions as necessary to ensure data quality.
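The pairing and labeling steps might look like the following sketch. The in-memory layout is an assumption; the key ideas from the text are matching each grain's front and back views by its board position (empty positions never appear) and encoding its category as a one-hot truth label.

```python
# Sketch of front/back pairing and one-hot labeling (data layout assumed).
import numpy as np

CATEGORIES = ["normal", "broken", "sprouted", "moldy",
              "insect", "blemished", "black_tipped", "red_enzyme"]

def one_hot(category):
    """Truth label as a one-hot vector, e.g. 'broken' -> [0,1,0,0,0,0,0,0]."""
    vec = np.zeros(len(CATEGORIES), dtype=np.float32)
    vec[CATEGORIES.index(category)] = 1.0
    return vec

def pair_samples(top_crops, bottom_crops, category):
    """top_crops/bottom_crops map board position (row, col) -> image array.
    Empty board positions are simply absent from both dicts."""
    dataset = []
    for pos in sorted(top_crops.keys() & bottom_crops.keys()):
        dataset.append({
            "front": top_crops[pos],
            "back": bottom_crops[pos],
            "label": one_hot(category),
        })
    return dataset
```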
The distribution of the various sample types in our dataset is shown in Table 1.

2.3. System Software Design

2.3.1. Software Design

The software design workflow of our system is depicted in Figure 6. Solid-line boxes denote tasks automated by the system, while unmarked areas require manual intervention. Upon opening the system software, it enters self-check mode, primarily evaluating the operational status of the light sources and cameras based on light intensity and image quality. If any anomalies are detected, manual intervention is needed for repairs, such as checking for loose power lines, replacing light sources, or adjusting the camera alignment. Once the self-check is passed, the system proceeds to the imperfect grain detection phase for wheat. Initially, camera parameters and model hyperparameters can be adjusted manually, or the software's default settings can be used. Subsequently, the sampling plate is positioned on the shelf (Figure 2) and sample insertion into the system is initiated by pressing the control button outside the device. Clicking the detection button on the software interface captures the final frame images from the upper and lower cameras. Leveraging the segmentation and matching module introduced in this study, the system extracts individual wheat grains from the images, then applies the trained fine-grained classification model to detect imperfect grains one by one and annotate their positions. Upon retrieval of the sampling plate, the software automatically opens the sorting interface (Figure 4), enabling the user to pick up the imperfect grains with the vision-guided human–machine interactive sorting module proposed in this study. The system then determines whether to proceed with the next batch; if so, the process repeats; if not, the weight of the picked-up imperfect grains is measured. The system automatically reads the weight results and presents them in a histogram on the interface for the operator to view and save.
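The self-check step can be sketched with two common heuristics: mean gray level for the light source and variance of the Laplacian for camera focus. The thresholds below are illustrative assumptions, not the system's actual values.

```python
# Plausible self-check sketch (thresholds assumed, not the system's actual values).
import cv2

BRIGHTNESS_RANGE = (60, 200)  # acceptable mean gray level, assumed
SHARPNESS_MIN = 100.0         # Laplacian-variance focus threshold, assumed

def self_check(camera_index):
    problems = []
    cam = cv2.VideoCapture(camera_index)
    ok, frame = cam.read()
    cam.release()
    if not ok:
        return [f"camera {camera_index}: no image (check power lines and cabling)"]
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    brightness = gray.mean()
    if not BRIGHTNESS_RANGE[0] <= brightness <= BRIGHTNESS_RANGE[1]:
        problems.append(f"camera {camera_index}: abnormal brightness {brightness:.0f} "
                        "(check or replace light sources)")
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance => blurry image
    if sharpness < SHARPNESS_MIN:
        problems.append(f"camera {camera_index}: image too blurry (adjust camera alignment)")
    return problems

issues = self_check(0) + self_check(1)
print("Self-check passed" if not issues else "\n".join(issues))
```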
The developed software interface is characterized by simplicity, clarity, strong consistency, feedback capability, and ease of navigation. Its simplicity and clarity allow users to quickly locate the functions and modules they require. Strong consistency ensures that the layout, style, and interaction elements of our interface remain consistent, aiding users in better understanding and utilizing the application. The feedback capability involves providing timely feedback to users, such as indicating the acceptance of their actions through progress bars and dialog boxes. Lastly, the ease of navigation ensures that users can effortlessly access the information they need, thanks to clear navigation menus, labels, and buttons.

2.3.2. Fine-Grained Classification Model for Wheat Quality

  • Model structure
In this study, we employed a weakly supervised neural network model named the Attention-based Cropping and Erasing Network (ACEN) [23], previously published in Neurocomputing, as the detection model, as depicted in Figure 7.
Initially, we extracted feature maps from a basic CNN model and then generated attention maps by applying a 1 × 1 convolution kernel to these feature maps. Subsequently, we utilized a bilinear global pooling framework [24] to combine the feature maps and attention maps into a final feature–attention matrix, which was fed into the classification head to determine the wheat category. By imposing alignment constraints, our model can create distinctive feature representations for each subclass using straightforward training strategies and minimal computational cost. To encourage multi-attention learning and mitigate overfitting, we employed attention-region cropping and erasing operations. Attention-cropped images highlight locally magnified discriminative object parts, while attention-erased images remove information-rich regions from the original image, prompting the model to focus on other informative regions. These augmented images were fed into our model to strengthen robust fine-grained feature learning, offering a more effective alternative to standard random data augmentation for fine-grained classification. During testing, the network first predicts the classification of the original image (the coarse prediction). Based on the confidence value and the attention map, a fine-grained cropped image is obtained, from which the model generates a fine-grained prediction. The coarse and fine predictions are then combined to form the final result.
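The mechanism above can be sketched structurally in PyTorch. This illustrates the ideas (1 × 1-convolution attention maps, bilinear attention pooling, and attention-guided cropping and erasing) rather than reproducing the published ACEN implementation; the number of attention maps and channel sizes are assumptions.

```python
# Structural sketch of bilinear attention pooling with attention-guided
# cropping/erasing (illustrative; not the published ACEN code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearAttentionHead(nn.Module):
    def __init__(self, in_channels=2048, num_attention=32, num_classes=8):
        super().__init__()
        self.attention = nn.Conv2d(in_channels, num_attention, kernel_size=1)
        self.classifier = nn.Linear(num_attention * in_channels, num_classes)

    def forward(self, feature_maps):                      # (B, C, H, W) from the backbone
        attn = torch.relu(self.attention(feature_maps))   # (B, M, H, W) attention maps
        # Bilinear pooling: pair each attention map with each feature channel,
        # averaged over spatial positions -> feature-attention matrix (B, M, C).
        matrix = torch.einsum("bmhw,bchw->bmc", attn, feature_maps)
        matrix = matrix / (feature_maps.shape[2] * feature_maps.shape[3])
        logits = self.classifier(matrix.flatten(1))
        return logits, attn

def attention_crop_and_erase(image, attn, threshold=0.5):
    """Attention-guided augmentation: zoom into the most activated region (crop)
    and blank it out in a copy (erase), forcing attention onto other regions.
    Assumes the randomly chosen attention map is not all zero."""
    k = int(torch.randint(attn.shape[1], (1,)))           # pick one attention map
    amap = F.interpolate(attn[:, k:k + 1], size=image.shape[2:],
                         mode="bilinear", align_corners=False)   # (B, 1, H, W)
    mask = amap > threshold * amap.amax(dim=(2, 3), keepdim=True)
    erased = image * (~mask)                              # attention erasing
    ys, xs = torch.where(mask[0, 0])                      # crop box for the first sample
    crop = image[0:1, :, ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    crop = F.interpolate(crop, size=image.shape[2:],
                         mode="bilinear", align_corners=False)   # attention cropping
    return crop, erased
```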
  • Performance of the ACEN model on the wheat dataset
The ACEN model, known for its universal fine-grained classification capabilities, has demonstrated high performance on three widely used fine-grained classification datasets (CUB-200-2011 [25], FGVC-Aircrafts [26], and Stanford Cars [27]). As a result, we sought to integrate it into our system. Given that our fine-grained classification task lies between coarse-grained and typical fine-grained tasks in terms of complexity, we speculated that utilizing a simpler backbone model like ResNet-50 could meet the practical requirements. To validate this speculation, we replaced the original ResNet-101 backbone in ACEN with ResNet-50 and evaluated both models’ performance on our wheat fine-grained classification dataset. Interestingly, both models achieved comparable recognition accuracy (as depicted in Table 2). This suggests that our task’s inherent complexity does not necessitate an intricate backbone to effectively learn data patterns and features.
Based on the aforementioned study, we compared the ACEN method with other advanced fine-grained classification models (all utilizing ResNet-50 as the backbone) on the wheat dataset, as shown in Table 3. The results demonstrate that ACEN achieved state-of-the-art performance compared with the other fine-grained classification network models. Consequently, we integrated the ACEN method into the software system for detecting imperfect wheat grains.

2.4. Testing and Evaluation Criteria

  • System recognition accuracy
To validate the system’s effectiveness, we evaluated its recognition accuracy. We randomly selected 10 batches of wheat samples, each containing 50 g of seeds (per the National Standard of the People’s Republic of China GB/T 5494-2019 [32], which specifies 50 g per batch for medium-sized seeds such as wheat and grain sorghum). The system detected each batch of wheat samples, and the detection results were evaluated grain by grain by experienced inspectors. We calculated the recognition accuracy for perfect grains, imperfect grains, and all samples using Formulas (1)–(3), as well as the system’s recognition accuracy for each type of imperfect grain (Formula (2)).
Perfect grain recognition accuracy = (NP/TP) × 100%  (1)
Imperfect grain recognition accuracy = (NIP/TIP) × 100%  (2)
Overall accuracy = ((NP + NIP)/(TP + TIP)) × 100%  (3)
Here, NP represents the number of perfect grains correctly identified by the machine; TP represents the total number of perfect grains identified by the machine; NIP represents the number of imperfect grains correctly identified by the machine; and TIP represents the total number of imperfect grains identified by the machine.
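Formulas (1)–(3) translate directly into code; the counts in the example call are illustrative only.

```python
# Formulas (1)-(3) as code, using the counts defined above.
def recognition_accuracies(n_p, t_p, n_ip, t_ip):
    """n_p/t_p: correctly identified / total perfect grains;
    n_ip/t_ip: correctly identified / total imperfect grains."""
    perfect = n_p / t_p * 100                      # Formula (1)
    imperfect = n_ip / t_ip * 100                  # Formula (2)
    overall = (n_p + n_ip) / (t_p + t_ip) * 100    # Formula (3)
    return perfect, imperfect, overall

# Illustrative counts for one batch (not the paper's measured data):
print(recognition_accuracies(n_p=965, t_p=969, n_ip=67, t_ip=70))
# -> (99.58..., 95.71..., 99.32...)
```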
  • System Detection Time
The efficiency in detection is a crucial measure of system usability for an automated intelligent detection system. To evaluate this, we compared the time taken for manual inspection by experienced quality inspectors with the time required for our system to detect samples in a single batch.

3. Results

3.1. System Recognition Performance

In this study, we used the developed system to analyze 10 batches of wheat samples to assess its recognition accuracy (depicted in Figure 8). Figure 8A shows the system’s recognition accuracy for each type of imperfect grain. The accuracy varied across categories of imperfect grains. This discrepancy may stem from certain categories having distinct features that make them easier for the model to identify (such as broken grains), whereas other categories exhibit less conspicuous characteristics, leading to lower accuracy (such as black-tipped grains). Moving forward, we plan to prioritize enhancing the model’s ability to recognize categories with less distinct features. Figure 8B shows the system’s average recognition accuracy for perfect grains, imperfect grains, and all test samples. Across all samples, the system exhibited an average recognition accuracy of 99.59% for perfect grains and 95.71% for imperfect grains; for the entire set of wheat samples, it achieved a comprehensive recognition accuracy of 99.45%. The recognition accuracy for imperfect grains was slightly lower than that for perfect grains. This disparity can be attributed to the fact that the model primarily relies on the global information of the entire sample when discriminating perfect grains, whereas for imperfect grains it must accurately discern local features of the wheat, which are influenced by factors such as lighting conditions, shooting angles, or subtle inter-category differences that are not easily distinguishable. These factors occasionally lead to confusion and misinterpretation in the model’s recognition process.

3.2. Efficiency of the Wheat Appearance Quality Intelligent Detection System

The purpose of designing this system was to replace traditional manual inspection with intelligent detection; evaluating the system’s operating efficiency is therefore essential. Table 4 compares the time required at each stage when using the system on 50 g of wheat (one sample batch, requiring two detection cycles) with the time required for manual operation. An experienced quality inspector takes approximately five times longer than the machine to complete the inspection of a batch of wheat. Hence, the development of this intelligent detection system is crucial.
Furthermore, it is important to highlight that our system is not limited to the intelligent detection of wheat samples but can also be adapted for assessing the appearance quality of various granular crops. The key difference is that when evaluating other crop categories, we must design a high-throughput sampling plate tailored to the specific crop under examination, along with gathering high-quality datasets for model training.

4. Discussion

4.1. Discussion about the System Hardware Design

In this study, we implemented a high-throughput sampling plate with a chamfered edge design (Section 2.1.1) to optimize the camera’s ability to capture detailed image information. For wheat appearance quality detection, even minor visual obstructions can significantly impact recognition accuracy. For example, if a small defect such as the borehole of an insect grain falls in the camera’s blind spot, the grain may be mistakenly identified as a normal grain; similarly, if the tip of a black-tipped grain goes unnoticed, the grain can also be misidentified as normal. Therefore, the targeted design of the sampling plate is crucial to ensuring the comprehensive capture of wheat information. Furthermore, within a single batch of samples, the proportion of imperfect grains is pivotal for evaluating the batch quality; hence, picking up and weighing the identified imperfect grains is necessary. The sorting interface developed in this study, combined with the densely arranged sampling plate, helps inspectors swiftly identify the various imperfect grain types and pick them up efficiently (as depicted in Figure 4). Moreover, this study integrated existing hardware into an intelligent system for wheat appearance quality detection. The choice of high-resolution cameras ensures that the captured wheat images meet the resolution requirements of the recognition algorithm while also factoring in equipment cost. The sampling plate material was screened from numerous options, considering factors such as transport durability and resistance to discoloration from long-term ultraviolet exposure. The sampling plate size also underwent rigorous testing: a size too large would hinder transport, while a size too small would necessitate multiple repetitions per batch, reducing the system’s efficiency. Overall, our hardware design aimed to enhance the practicality and effectiveness of the entire system. However, it is worth noting that this study did not achieve fully automated wheat quality detection, picking, and weighing. Future utilization of deep learning algorithms to directly map imperfect grain images to their weight may pave the way for complete system automation.

4.2. Discussion about the System Software Design

The focus of this study was the overall system design. To ensure optimal performance across all system modules, we developed corresponding software that was tailored to control the hardware components and execute specific functionalities through the integrated algorithmic modules. Our software initiates a system self-check during startup, analyzing the image quality captured by the camera to detect abnormalities in the light source or camera. Once the self-check is passed, a quality inspection of wheat’s appearance can be carried out. This phase incorporates segmentation and matching algorithms alongside a fine-grained classification network, enabling the precise grain-by-grain detection of wheat samples on the sampling board. Following this, our sorting interface guides inspectors in identifying and picking up various types of imperfect grains. Upon manual weighing, the results are fed into the software, which automatically generates a user-friendly histogram for convenient viewing and analysis. Therefore, we interconnected the system’s hardware via software, allowing it to execute detection tasks through human–machine interactions.

5. Conclusions

This study harnessed deep learning-based AI technology to develop a classification model for assessing wheat appearance quality, with the goal of categorizing and statistically analyzing the appearance quality of wheat samples. The model demonstrates exceptional accuracy, swift decision-making capabilities, robust generalization, and high repeatability. The innovative design of dual cameras (top and bottom) and the high-throughput sampling plate proposed in this study allows for comprehensive image capture from both sides of the target object. Additionally, the double-sided chamfer design of the sieve holes enhances the image capture comprehensiveness. Furthermore, the size and shape of the sieve holes can be tailored to different crop varieties, thus broadening the system’s applicability. Moreover, the incorporation of indicators for picking up imperfect grains streamlines the sorting process for inspectors, enabling quick and efficient crop sorting. These advancements collectively contribute to the efficacy and versatility of our system in intelligent crop quality assessment. Overall, our study presents a robust solution for wheat appearance quality assessment, leveraging cutting-edge technology to enhance the efficiency and accuracy of crop quality evaluation processes.

Author Contributions

Conceptualization, X.C. and H.L.; Methodology, J.L.; Software, M.Z.; Validation, M.Z. and F.X.; Formal Analysis, Y.X.; Investigation, J.L.; Resources, L.Y.; Data Curation, M.Z. and F.X.; Writing—Original Draft Preparation, J.L., J.C., and Y.X.; Writing—Review and Editing, J.L., J.C., X.C., and L.Y.; Visualization, H.L.; Supervision, X.C.; Project Administration, H.L.; Funding Acquisition, H.L. and X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Key R&D Program of China (grant numbers 2021YFD1400100 and 2021YFD1400102) and National Natural Science Foundation of China (grant numbers 62103269 and 62073221). The project was funded by the Med-X Research Fund of Shanghai Jiao Tong University (grant number YG2022QN077).

Data Availability Statement

Data are available upon request from researchers who meet the eligibility criteria. Please contact the first author by e-mail.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Albahri, G.; Alyamani, A.A.; Badran, A. Enhancing essential grains yield for sustainable food security and bio-safe agriculture through latest innovative approaches. Agronomy 2023, 13, 1709.
2. Bin Rahman, A.R.; Zhang, J. Trends in rice research: 2030 and beyond. Food Energy Secur. 2023, 12, e390.
3. Wang, Y.H.; Su, W.H. Convolutional neural networks in computer vision for grain crop phenotyping: A review. Agronomy 2022, 12, 2659.
4. Fan, L.; Ding, Y.; Fan, D. An annotated grain kernel image database for visual quality inspection. Sci. Data 2023, 10, 778.
5. Vithu, P.; Moses, J. Machine vision system for food grain quality evaluation: A review. Trends Food Sci. Technol. 2016, 56, 13–20.
6. Aviara, N.A.; Liberty, J.T.; Olatunbosun, O.S. Potential application of hyperspectral imaging in food grain quality inspection, evaluation and control during bulk storage. J. Agric. Food Res. 2022, 8, 100288.
7. Wei, W.; Yang, T.L.; Rui, L. Detection and enumeration of wheat grains based on a deep learning method under various scenarios and scales. J. Integr. Agric. 2020, 19, 1998–2008.
8. Genze, N.; Bharti, R.; Grieb, M. Accurate machine learning-based germination detection, prediction and quality assessment of three grain crops. Plant Methods 2020, 16, 157.
9. Škrubej, U.; Rozman, Č.; Stajnko, D. Assessment of germination rate of the tomato seeds using image processing and machine learning. Eur. J. Hortic. Sci. 2015, 80, 68–75.
10. Nguyen, T.T.; Hoang, V.N.; Le, T.L. A vision based method for automatic evaluation of germination rate of rice seeds. In Proceedings of the 2018 1st International Conference on Multimedia Analysis and Pattern Recognition (MAPR), Ho Chi Minh City, Vietnam, 5–6 April 2018; pp. 1–6.
11. Khakimov, A.; Salakhutdinov, I.; Omolikov, A. Traditional and current-prospective methods of agricultural plant diseases detection: A review. IOP Conf. Ser. Earth Environ. Sci. 2022, 951, 012002.
12. Menghani, G. Efficient deep learning: A survey on making deep learning models smaller, faster, and better. ACM Comput. Surv. 2023, 55, 259.
13. Janiesch, C.; Zschech, P.; Heinrich, K. Machine learning and deep learning. Electron. Mark. 2021, 31, 685–695.
14. Oikonomidis, A.; Catal, C.; Kassahun, A. Deep learning for crop yield prediction: A systematic literature review. N. Z. J. Crop Hortic. Sci. 2023, 51, 1–26.
15. Zheng, H.; Fu, J.; Zha, Z.J. Looking for the devil in the details: Learning trilinear attention sampling network for fine-grained image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5012–5021.
16. Li, X.; Wu, J.; Sun, Z. BSNet: Bi-similarity network for few-shot fine-grained image classification. IEEE Trans. Image Process. 2020, 30, 1318–1331.
17. Liu, C.; Huang, L.; Wei, Z. Subtler mixed attention network on fine-grained image classification. Appl. Intell. 2021, 51, 7903–7916.
18. Zhang, W.; Ma, H.; Li, X. Imperfect wheat grain recognition combined with an attention mechanism and residual network. Appl. Sci. 2021, 11, 5139.
19. Gao, H.; Zhen, T.; Li, Z. Detection of wheat unsound kernels based on improved ResNet. IEEE Access 2022, 10, 20092–20101.
20. Gu, B.; Sung, Y. Enhanced reinforcement learning method combining one-hot encoding-based vectors for CNN-based alternative high-level decisions. Appl. Sci. 2021, 11, 1291.
21. Li, J.; Si, Y.; Xu, T. Deep convolutional neural network based ECG classification system using information fusion and one-hot encoding techniques. Math. Probl. Eng. 2018, 2018, 7354081.
22. Sun, W.; Cai, Y.; Liu, Y. MSR14 Comparisons of encoding techniques for categorical features in linear regression models. Value Health 2022, 25, S520.
23. Chen, J.; Li, H.; Liang, J. Attention-based cropping and erasing learning with coarse-to-fine refinement for fine-grained visual classification. Neurocomputing 2022, 501, 359–369.
24. Hu, T.; Qi, H.; Huang, Q. See better before looking closer: Weakly supervised data augmentation network for fine-grained visual classification. arXiv 2019, arXiv:1901.09891.
25. Wah, C.; Branson, S.; Welinder, P. The Caltech-UCSD Birds-200-2011 Dataset. 2011. Available online: https://github.com/caltechvisionlab/caltechvisionlab.github.io/blob/master/_pages/datasets/cub_200_2011.md (accessed on 14 May 2024).
26. Maji, S.; Rahtu, E.; Kannala, J. Fine-grained visual classification of aircraft. arXiv 2013, arXiv:1306.5151.
27. Krause, J.; Stark, M.; Deng, J. 3D object representations for fine-grained categorization. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 1–8 December 2013; pp. 554–561.
28. He, K.; Zhang, X.; Ren, S. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
29. Yang, Z.; Luo, T.; Wang, D. Learning to navigate for fine-grained classification. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 420–435.
30. Du, R.; Chang, D.; Bhunia, A.K. Fine-grained visual classification via progressive multi-granularity training of jigsaw patches. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer International Publishing: Cham, Switzerland, 2020; pp. 153–168.
31. Zhuang, P.; Wang, Y.; Qiao, Y. Learning attentive pairwise interaction for fine-grained classification. AAAI Conf. Artif. Intell. 2020, 34, 13130–13137.
32. National Standard Disclosure System. Available online: https://openstd.samr.gov.cn/bzgk/gb/newGbInfo?hcno=EB37F2E3E8B0C26EBB3A329D6C0E390E (accessed on 31 March 2024).
Figure 1. Images of various categories of wheat.
Figure 2. Wheat appearance quality intelligent detection system. (A) Overall structure diagram; (B) actual photograph of prototype.
Figure 3. High-throughput sampling plate with chamfered design. (A) Actual photograph of prototype; (B) schematic diagram of the chamfered design.
Figure 4. Example of the assistance sorting. (A) Sample flat panel; (B,C) sorting display screens from different perspectives.
Figure 5. Image segmentation, pairing, and labeling.
Figure 6. The overall workflow diagram of software design.
Figure 7. The model architecture of the fine-grained classification network.
Figure 8. System recognition accuracy. (A) System’s recognition accuracy for each type of imperfect grain; (B) system’s average recognition accuracy results.
Table 1. The statistical count of various sample types.

Category | Blemished | Red Enzyme | Sprouted | Blacktip | Broken | Mold | Insect | Intact | Total
Count | 3188 | 2720 | 3014 | 2064 | 1648 | 1572 | 1854 | 2378 | 18,438
Table 2. The performance of different backbones in ACEN based on our dataset.

Base Model | Wheat Acc. (%)
ACEN (ResNet-50) | 96.0
ACEN (ResNet-101) | 95.9
Table 3. Different fine-grained classification methods’ performance on the wheat dataset.

Methods | Accuracy (%)
Baseline [28] | 93.0
NTS-Net [29] | 95.0
WSDAN [24] | 94.8
PMG [30] | 95.7
API-Net [31] | 95.8
ACEN | 96.0
Table 4. The time taken by the system compared to manual operations.

Function | Time/min
Capture Image and Preprocessing (Segmentation/Pairing) | ~(0.5 × 2)
Automatic Detection | ~(2.5 × 2)
Manually Picking Up | ~(3 × 2)
Total Duration | ~12
Time Taken by Inspector | ~60
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

