Next Article in Journal
Antibiotic-Free Poultry Meat Consumption and Its Determinants
Previous Article in Journal
Development and Quality Evaluation of Rigatoni Pasta Enriched with Hemp Seed Meal
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Deep Learning Network of Amomum villosum Quality Classification and Origin Identification Based on X-ray Technology

1
Xin-Huangpu Joint Innovation Institute of Chinese Medicine, Guangzhou 510715, China
2
State Key Laboratory of Component Traditional Chinese Medicine, Tianjin 301617, China
3
College of Pharmaceutical Engineering of Traditional Chinese Medicine, Tianjin University of Traditional Chinese Medicine, Tianjin 301617, China
4
Tianjin Modern Innovative TCM Technology Co., Ltd., Tianjin 300380, China
5
Haihe Laboratory of Modern Chinese Medicine, Tianjin 301617, China
*
Author to whom correspondence should be addressed.
Foods 2023, 12(9), 1775; https://doi.org/10.3390/foods12091775
Submission received: 7 March 2023 / Revised: 9 April 2023 / Accepted: 21 April 2023 / Published: 25 April 2023
(This article belongs to the Section Food Quality and Safety)

Abstract

:
A machine vision system based on a convolutional neural network (CNN) was proposed to sort Amomum villosum using X-ray non-destructive testing technology in this study. The Amomum villosum fruit network (AFNet) algorithm was developed to identify the internal structure for quality classification and origin identification in this manuscript. This network model is composed of experimental features of Amomum villosum. In this study, we adopted a binary classification method twice consecutive to identify the origin and quality of Amomum villosum. The results show that the accuracy, precision, and specificity of the AFNet for quality classification were 96.33%, 96.27%, and 100.0%, respectively, achieving higher accuracy than traditional CNN under the condition of faster operation speed. In addition, the model can also achieve an accuracy of 90.60% for the identification of places of origin. The accuracy of multi-category classification performed later with the consistent network structure is lower than that of the cascaded CNNs solution. With this intelligent feature recognition model, the internal structure information of Amomum villosum can be determined based on X-ray technology. Its application will play a positive role to improve industrial production efficiency.

1. Introduction

Amomum villosum has been traditionally used as a food flavoring [1,2]. The plant is distributed from Sri Lanka to the Himalayas, China, southeast Asia, Malaysia, and northern Australia [3]. Pharmacological studies showed that Amomum villosum has great anti-diarrheal, anti-ulcer, anti-inflammatory, and antibacterial effects [4]. Furthermore, Amomum villosum has also been commonly used in Chinese cooking as a spice to mask the fishy smell of meat [1]. However, the quality of Amomum villosum in the market needs to be better controlled to ensure its safe use [5]. Amomum villosum may become moldy, deteriorate, or burst during harvesting, transportation, and processing. This results in a mixed quality of Amomum villosum sold in the market [1]. To control its quality, Fourier transform near-infrared spectroscopy and gas chromatography was used to determine the sample of Amomum villosum [6].
However, traditional analysis methods cannot achieve rapid and non-destructive detection of Amomum villosum. In recent years, the machine vision approach has been widely employed in the quality inspection of vegetables and other foods. For example, a tomato grading machine vision system performed calyx and stalk scar detection at an average accuracy of 0.9515 for both defective and healthy tomatoes [7]. An automatic carrot grading system was developed to inspect the surface quality of carrots based on computer vision and deep learning [8]. An automatic sorting system was developed to detect the quality of fresh white button mushrooms based on image processing [9]. The studies above illustrate that machine visioning can be employed to detect defective foods intelligently. However, most of the defects of Amomum villosum are hard to identify from the outside. Therefore, a non-destructive detecting technology is urgently needed to identify internal defects to ensure quality. Convolutional neural networks have rapidly advanced in various fields within the past few years [10]. X-ray technology has been successfully combined with CNN for fish bone detection [11,12], radiological image analysis [10], and chest X-ray examination [13,14]. Low-power X-ray detection technology could be used to identify the internal characteristics of fruits within safety limits. Images of internal defects of food can be acquired using X-ray technology, and the food can then be classified as normal or defective based on the images. For example, X-ray technology combined with CNN can be used to locate seed spoilage in mango fruit [15]. In addition, X-ray imaging technology is also employed to inspect onions [16], deboned poultry [17], etc. Most studies identify the origin based on the analysis of the components, such as tea [18], orange [19], and P. notoginseng main roots [20]. In addition, quality defects were distinguished in most studies, but origin identification has often been overlooked, such as in the classification of mangoes [21], eggs [22], apples [23], etc. Generally, the components of plants will vary due to temperature, precipitation, climate, soil characteristics, and other conditions when they are from different places of origin. Such plants include Gentiana rigescens Franch [24], sesame seeds [25], and Lonicerae Japonicae Flos [26]. Therefore, identifying the origin of Amomum villosum based on X-ray technology is beneficial to maintain stable product quality.
In this study, we explored the possibility of identifying the origin of Amomum villosum directly from the images. A non-destructive inspection method was developed to detect the defective Amomum villosum fruits, which combines deep learning and X-ray imaging technology. A convolution neural network was developed to improve the accuracy of evaluating the quality of Amomum villosum. A new algorithm was developed to identify the origin of Amomum villosum, which achieved higher accuracy than the traditional convolutional neural network.

2. Materials and Methods

2.1. Samples

The Amomum villosum fruit was provided by Tianjin Shangyaotang Pharmaceutical Co., Ltd. (Tianjin, China). The samples were collected from two places of origin, Yunnan and Guangdong. All of them were divided into normal and defective categories according to their X-ray images through manual inspection.

2.2. X-ray Detection System

In this study, the experimental system, which has been used to capture the X-ray images of sterculia seeds, was used to obtain X-ray images of Amomum villosum to perform future research on origin identification [27]. In the online X-ray imaging system, a linear array X-ray detector was used to capture the gray lines generated by the X-rays passing through the fruits of Amomum villosum.
The image acquisition device consists of the following parts: servo driver, servo motor, X-ray source, linear array detector. The model of the servo driver is MADLT15SF (Panasonic Industry Co., Ltd., Tokyo, Japan) and the frequency response is set at 3.2 kHz. Meanwhile, the supply voltage is single/3-phase 200 V. The I/F classification of type is analog/pulse when the protocol is Modbus and the interface is RS485/RS232. The model of the servo motor is MHMF022L1U2M (Panasonic Industry Co., Ltd.). Its rated output is 200.0 W and the rated current is 1.4 A (rm s); moreover, the rated speed is 3000.0 r/min and the rated torque is 0.64 N m.
The X-ray source adopts the mini-focus X-ray system V.J IXS0808. Its output current is 0.2–0.7 mA and the output voltage is 20–80 KV. In addition, the maximum continuous output power is 50 W.
The linear array detector employed the data acquisition system XNDT and the technology is XNDT-04. It has a scanning range of 400–600 mm and the pixel size is between 50 μm and 0.8 mm. Furthermore, the SNR system is 25,000:1 and the electronic SNR is 50,000:1.
The acquired digital image is transmitted to the workstation through the network cable. The control interface program and hardware driver program were designed with C#.NET 4.0 (visual studio 2017).

2.3. X-ray Image Acquisition and Pre-Processing

The difference of gray value in the X-ray image can be explained by the principle of X-ray attenuation, as shown in Equation (1).
I I 0 = E X P ( μ m ρ L )
In Equation (1), I, I0, and µm represent the transmitted effective X-ray intensity, incident effective X-ray intensity, and the linear attenuation coefficient (cm−1), respectively. ρ is the material density and L is the material thickness. X-ray attenuation occurs through the fruit, and the degree of attenuation depends on the thickness and density of the material.
Amomum villosum fruit was placed on the conveyor belt and its X-ray image was photographed and saved on the computer. Next, 1600 X-ray images were manually divided into normal and defect groups for their quality. At the same time, they were also divided into the Yunnan and Guangdong groups for the study of their place of origin. All of the images are shown in Figure 1 for the various groups.
Figure 1a shows the normal sample produced in Yunnan. Meanwhile, Figure 1b shows the X-ray image of the normal sample produced in Guangdong. The shapes of the normal samples from various origins are different. For Figure 1a, the inner seed kernel is larger than that in Figure 1b. On the contrary, the X-ray image of the defective fruit samples produced in the same location has a blurred outer contour and small or no inner nuts in Figure 1c.
As shown in Figure 1, it can be found that there is an apparent structural difference between the normal and defect groups by the visual inspection of the X-ray images. For the normal groups, it is possible to distinguish various places of origin based on their appearance.
Based on the differences between the normal and defective images mentioned above, as well as the differences between the images of fruits from different regions, the dataset was divided into quality classification and region classification. It is worth mentioning that in order to explore the possibility of distinguishing fruit images of different origins and qualities at once, considering that there is no considerable difference in the images of defective fruits from different regions, the previous data was further divided into three classification datasets.
After labeling, the images of different datasets were divided into training sets, validation sets, and testing sets according to Table 1. From Table 1, it can be seen that the quality classification dataset has a total of 1600 images, which are divided into normal and defective according to the quality, with 1430 and 170 images, respectively. In addition, it is divided into training, validation, and testing sets according to the ratio of 3:1:1. The origin identification dataset contains 455 and 594 images of fruit from Yunnan and Guangdong, respectively, and only contains normal images from different origins after removing the defective images and atypical images. Finally, is the images were divided into training, validation, and test set according to the ratio of 7:2:2. The three-category classification dataset contains 452, 587, and 101 normal Yunnan, normal Guangdong, and defective images, respectively. It is divided into training, validation, and test sets according to the ratio of 4:1:1.
Before the quality classification, the fruit images were randomly converted and changed to increase the varieties of the dataset. The X-ray images were randomly rotated by 30° or moved horizontally and vertically by 10%. Then, they were flipped horizontally. Due to the original image size being around 400 pixels × 400 pixels and variable, all the images were uniformly adjusted to 50 pixels × 50 pixels to adapt to the later network structure. Finally, all the images were adjusted to 50 pixels × 50 pixels to adapt to the later network structure. After quality classification, the images of the normal fruits that were identified by the CNN model were preprocessed and used for origin identification. Filler white edges were added around the original images to make each image become 400 pixels × 400 pixels to avoid deforming the images in the deformation steps before the study of origin identification. The fruit from the Yunnan group was labeled as 1, and the fruit from the Guangdong group was labeled as 0.

2.4. The Analysis Process and Network Structure

2.4.1. The Analysis Process

In this paper, the network structure used to detect defective Amomum villosum is the same as that used to detect the place of origin of Amomum villosum. The methodology diagram is shown in Figure 2. Firstly, the X-ray images of Amomum villosum from various places of origin were input into the network. Next, the detected defective fruit was removed from the input. The remaining normal fruits were fed into the network model to identify the place of origin. The solid arrow represents the connection of the network layer. The dotted arrow indicates that the two networks have the same structure. The proposed AFNet architecture for non-destructive testing was implemented in Python 3.7 using the Tensorflow backend. The training process was implemented on a Windows 10 system with a 3080Ti GPU.

2.4.2. Structure of the Proposed Amomum villosum Fruit Network

As shown in Table 2, a convolutional neural network including 3 convolution layers and 3 full connection layers was used in this study. The convolutional layers consist of 4, 8, and 16 filters with a filter size of 3 × 3. A 2 × 2 max-pooling function was used to modify the output results of the convolutional layer. In the AFNet, the L2 regularization method and dropout layer was used to prevent overfitting. The set strategy of the dropout rate for each layer is shown in Table 2. In this work, the rectified linear unit (ReLU) was used as an activation function overall in the network. In the output layer, a Softmax activation was employed to solve the binary classification problem. Sparse categorical cross-entropy loss function was used to train the network.

2.5. Quality Classification of Amomum villosum Fruit

Training Process of Quality Classification

In this study, the X-ray images of Amomum villosum fruit from two places of origin were mixed and then divided into two groups, namely, normal and defective by quality. In the normal group, the samples must have full contents. On the contrary, others can be classified as defective. Then, the neural network was trained to divide the input fruit pictures into normal and defective ones and output them as results.

2.6. Origin Identification

Training Process of Origin Identification

After the study of quality classification, the normal fruit images were sent to the network again for training to distinguish the place of origin of Amomum villosum. The preprocessed images were mixed into training set and validation set, and the validation set was randomly shuffled to ensure that the neural network encountered images of various labels.

2.7. The Multi-Category Classification of Amomum villosum Fruits

2.7.1. Training Process of Multi-Category Classification

The obtained X-ray images were placed directly into the AFNet model for multi-category classification in the following research, as shown in Figure 3. The X-ray images of Amomum villosum from the two places of origin with a mixture of qualities were divided into three types. The type of non-defective sample from Yunnan, the type of non-defective sample from Guangdong, and the defective sample were labeled as 0, 1, and 2, respectively. After that, the images of Amomum villosum fruit were sent to the same neural network as the binary classification for prediction.

2.7.2. Parameters Optimization

All the X-ray images of Amomum villosum were input into the convolutional neural network model. In addition, the network structure was modified to achieve the optimal classification performance. At the same time, the hyperparameters of the network model were adjusted to achieve the highest accuracy, as well as the binary classification. The optimal prediction performance was obtained while the model was kept consistent with the original network structure. The batch size among the four levels was optimized as a value between 64 and 8. Furthermore, the learning rate among the five levels was adjusted between 5.0 × 10−4 and 1.0 × 10−5.

2.8. Model Comparison

Traditional CNN models VGG16, ResNet18, and Inception were used to compare the classification performance with the AFNet model. The accuracy, precision, and specificity were used to measure their performance. The same parameters and data were used for training and testing. In this work of quality classification, the batch size was set as 32, the learning rate was set as 8.0 × 10−5, and the epoch was set as 300.

2.9. Evaluation Standards

The binary classification performance of AFNet is measured by three criteria: accuracy, specificity, and precision. These parameters were calculated by the following Equations (2)–(4) [28]:
A c c u r a c y = T P + T N T P + T N + F P + F N
P r e c i s i o n = T P T P + F P
S p e c i f i c i t y = T P T P + F N
TP, FN, FP, and TN represent the number of true positives, false negatives, false positives, and true negatives, respectively.
The multi-class classification performance of AFNet is measured by three standards: average accuracy, precision, and recall. These parameters are calculated using the following Equations (5)–(7) [29]:
Average   accuracy = i = 1 l t p i + t n i t p i + f n i + f p i + t n i l
Precision M = i = 1 l t p i t p i + f p i l
Recall M = i = 1 l t p i t p i + f n i l
Tpi, fpi, fni, and tni are true positive, false positive, false negative, and true negative counts for Ci, respectively. M indices represent macro-averaging.

3. Results

3.1. Performance of AFNet in Detecting Defective Fruits

Confusion Matrix of the Validation Dataset

The robustness of the AFNet model was tested using a validation set containing 386 X-ray images of Amomum villosum from different place of origin. As shown in Figure 4, the accuracy, precision, and specificity of AFNet can be calculated as 96.33%, 96.27%, and 100.0%, respectively, according to the confusion matrix.

3.2. The Performance of AFNet in Distinguishing Places of Origin

3.2.1. Accuracy and Loss Curve

As shown in Figure 5, the accuracy rate of the training set shows a gradual upward trend with the increase of iterations. The accuracy of the validation set fluctuated violently in the first 200 epochs, and gradually stabilized after 200 epochs and fitted the accuracy of the training datasets. The accuracy reached the highest value at the 300th epoch. In the first 50 epochs, the loss function curve of the training dataset decreased rapidly. The decline rate of the loss function curve of the training dataset before 300 epochs slowed down. The loss function curve of the validation dataset decreased rapidly in the first 100 epochs and tended to be gentle before 300 epochs. Overall, the curves of the validation dataset and training dataset are relatively close, indicating that there is no or slight overfitting.

3.2.2. Confusion Matrix of the Validation Dataset

An independent dataset containing 149 X-ray images of Amomum villosum was used to validate the robustness of the proposed model. As shown in Figure 6, the confusion matrix was drawn on the basis of the obtained results. According to the confusion matrix, the accuracy, precision, and specificity of AFNet in identifying the place of origin can be calculated as 90.60%, 91.11%, and 80.39%, respectively.

3.3. Performance of AFNet in Multi-Category Classification

3.3.1. Parameter Optimization

Figure 7 shows that the relationship of accuracy changed with the learning rate. When the learning rate was set to 8 × 10−5, the accuracy rate reached the highest. Then, the batch size was set to 32, followed by a batch size set to 64.
Figure 8 shows the condition of the batch size that was set to 32. As the learning rate gradually decreased, the accuracy reached the peak with the condition that the learning rate was set to 8 × 10−5. Therefore, the parameter configuration achieved optimal network performance when the batch size and the learning rate were set to 32 and 8 × 10−5, respectively.

3.3.2. Accuracy and Loss Curve

As can be seen from Figure 9, the overall accuracy curves dramatically increased and remained stable at around 0.9 after about 300 iterations. The loss curves of the training and validation datasets declined dramatically after the first 50 iterations. The gap between the validation and training loss curves tended to stabilize after epoch 50.

3.3.3. Confusion Matrix of the Validation Dataset

An independent dataset containing 209 X-ray images of Amomum villosum was used to validate the robustness of the proposed model. Figure 10 reports that the confusion matrix was obtained for the multi-category grading model. According to the confusion matrix, the average accuracy, precision, and recall of AFNet in multi-category was 90.08%, 86.48%, and 88.47%, respectively.

3.4. Comparison with Traditional CNN Model

A validation dataset containing 386 X-ray images of Amomum villosum was used to test the performance of the AFNet model and traditional CNN models. As shown in Figure 4, the confusion matrix was drawn based on the experimental results. As shown in Table 3, AFNet achieved similar or higher accuracy than the traditional CNN model.

4. Discussion

In this study, a convolutional neural network model based on X-ray technology was established with the deep learning method. The model performed well for both quality classification and origin identification. In terms of quality classification, AFNet achieved similar accuracy compared with the traditional algorithm methods. Furthermore, it will be suitable for industrial production scenarios that require higher detection speed when its network layers are significantly less than traditional algorithm methods. In terms of origin identification, this model can be used to detect the origins of Amomum villosum not limited to Yunnan and Guangdong. It has also achieved the desired classification effect. Moreover, as was mentioned in the introduction, there are differences in the fruit components of various origins, which further affects the quality of downstream products. Therefore, this model can be used to control the fruit quality, preferably through the identification of the origin, which is also the novelty of this approach. However, 10 non-defective samples from Yunnan were wrongly classified. The reason for this might be that the images from another place of origin were substantially identical and only different in contour shape. Therefore, the number of samples in the training dataset can be increased to improve the accuracy of origin identification in the future. Moreover, CT-scanning can be used to create artificial X-ray images and reduce data acquisition [30]. In the actual production process, the disordered background may interfere with the detection of the internal information of the Amomum villosum. Therefore, we can use image preprocessing methods, such as image binarization, to extract the regions of interest before detection.
In the muti-category classification experiment added in this paper, the number of original network layers was adjusted accordingly to achieve optimal performance. However, the final result shows that the accuracy of multi-classification is lower than the cascaded CNNs solution.
With improved model accuracy, the detection method can potentially switch from one-by-one-sample mode to multiple-samples-at-a-time mode to improve the efficiency of classification. At the same time, it can also be combined with other detection methods to capture different images, such as the RGB image of the fruit to determine the external rot, mildew, and other information about the fruit. A variety of methods have been adopted to detect food quality in previous studies. In the case of orange juice, double-effect real-time PCR was used to exploit the adulteration of mandarin juice [31]. Sandra et al. used hyperspectral images to detect the internal mechanical damage of persimmon fruit [32]. Furthermore, compared with previous research, the method adopted in this study has dramatically improved the detection speed while also ensuring accuracy, and can more readily adapt to the needs of industrial online detection. In addition, sampling inspection has been adopted to control fruits in some studies. However, the full detection method that can control the quality of the final product more comprehensively was adopted for the fruit in this paper. In addition, this experiment explored how to use a single model to classify Amomum villosum images on the basis of multi-category training. It provides a reference for the classification of other fruits in various places of origin. In the future, this algorithm can be combined with new technologies, such as hyperspectral image analysis, to detect the internal components of fruit and be applied to other kernels, such as peach kernels and walnuts. This model can also be applied to the quality inspection of food, wood, and other items with internal defects to meet the needs of online sorting and replace manual quality detection in various industrial scenarios.

5. Conclusions

Based on CNN deep learning and X-ray technology, this study developed a new model for rapid nondestructive quality classification and origin identification of Amomum villosum fruits. A total of 1600 X-ray images were used to train and test the proposed model. The accuracy of quality classification can reach 96.33%. Meanwhile, the accuracy of origin identification can reach 90.60%. The developed model can potentially be applied in the industrial production process to improve accuracy and efficiency.

Author Contributions

Conceptualization, Z.W., Q.X. and Y.C.; methodology, Q.X. and C.L.; software, P.M.; validation, K.M. and X.L.; writing—original draft preparation, Z.W.; writing—review and editing, Y.Y.; supervision, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Joint Innovation Foundation of JIICM (2022IR025), the Tianjin University Student Innovation and Entrepreneurship Training Program (202210063017), the Innovation Team and Talents Cultivation Program of National Administration of Traditional Chinese Medicine (No: ZYYCXTD-D-202002), and the National Natural Science Foundation of China (Grant No. 82074276).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare that they have no conflict with any organization with a direct or indirect financial interest in the subject matter discussed in the manuscript. The authors declare that they have no direct or potential conflicts of interest with Tianjin Modern Innovation Traditional Chinese Medicine Technology Co., Ltd., Tianjin, China. Peiqi Miao is an employee of the Tianjin Modern Innovation Traditional Chinese Medicine Technology Co., Ltd., Tianjin, China (hereinafter referred to as “the company”) and hereby declare the following regarding conflicts of interest: I promise to avoid conflicts of interest (even superficial conflicts) with the company, its shareholders, and its customers.

References

  1. Ai, Z.P.; Mowafy, S.; Liu, Y.H. Comparative analyses of five drying techniques on drying attributes, physicochemical aspects, and flavor components of Amomum villosum fruits. LWT Food Sci. Technol. 2022, 154, 112879. [Google Scholar] [CrossRef]
  2. Doh, E.J.; Kim, J.H.; Lee, G. Identification and monitoring of Amomi fructus and its Adulterants Based on DNA Barcoding Analysis and Designed DNA Markers. Molecules 2019, 24, 4193. [Google Scholar] [CrossRef] [PubMed]
  3. Droop, A.J.; Newman, M.F. A revision of Amomum (Zingiberaceae) in sumatra. Edinb. J. Bot. 2014, 71, 193–258. [Google Scholar] [CrossRef]
  4. Huang, Q.L.; Duan, Z.G.; Yang, J.F.; Ma, X.Y.; Zhan, R.T.; Xu, H.; Chen, W.W. SNP Typing for Germplasm Identification of Amomum villosum Lour. Based on DNA Barcoding Markers. PLoS ONE 2014, 9, e114940. [Google Scholar] [CrossRef]
  5. Ao, H.; Wang, J.; Chen, L.; Li, S.M.; Dai, C.M. Comparison of Volatile Oil between the Fruits of Amomum villosum Lour. and Amomum villosum Lour. var. xanthioides T. L. Wu et Senjen Based on GC-MS and Chemometric Techniques. Molecules 2019, 24, 1663. [Google Scholar] [CrossRef]
  6. Guo, H.-J.; Weng, W.-F.; Zhao, H.-N.; Wen, J.-F.; Li, R.; Li, J.-N.; Zeng, C.-B.; Ji, S.-G. Application of Fourier transform near-infrared spectroscopy combined with GC in rapid and simultaneous determination of essential components in Amomum villosum. Spectrochim. Acta Part A 2021, 251, 119426. [Google Scholar] [CrossRef]
  7. Ireri, D.; Belal, E.; Okinda, C.; Makange, N.; Ji, C. A computer vision system for defect discrimination and grading in tomatoes using machine learning and image processing. Artif. Intell. Agric. 2019, 2, 28–37. [Google Scholar] [CrossRef]
  8. Deng, L.; Li, J.; Han, Z. Online defect detection and automatic grading of carrots using computer vision combined with deep learning methods. LWT Food Sci. Technol. 2021, 2, 111832. [Google Scholar] [CrossRef]
  9. Wang, F.; Zheng, J.; Tian, X.; Wang, J.; Niu, L.; Feng, W. An automatic sorting system for fresh white button mushrooms based on image processing. Comput. Electron. Agric. 2018, 151, 416–425. [Google Scholar] [CrossRef]
  10. Soffer, S.; Ben-Cohen, A.; Shimon, O.; Amitai, M.M.; Greenspan, H.; Klang, E. Convolutional Neural Networks for Radiologic Images: A Radiologist’s Guide. Radiology 2019, 290, 590–606. [Google Scholar] [CrossRef]
  11. Urazoe, K.; Kuroki, N.; Maenaka, A.; Tsutsumi, H.; Iwabuchi, M.; Fuchuya, K.; Hirose, T.; Numa, M. Automated Fish Bone Detection in X-Ray Images with Convolutional Neural Network and Synthetic Image Generation. IEEJ Trans. Electr. Electron. Eng. 2021, 16, 1510–1517. [Google Scholar] [CrossRef]
  12. Mery, D.; Lillo, I.; Loebel, H.; Riffo, V.; Soto, A.; Cipriano, A.; Aguilera, J.M. Automated fish bone detection using X-ray imaging. J. Food Eng. 2011, 105, 485–492. [Google Scholar] [CrossRef]
  13. Soric, M.; Pongrac, D.; Inza, I. Using Convolutional Neural Network for Chest X-ray Image classification. In Proceedings of the 2020 43rd International Convention on Information, Communication and Electronic Technology (MIPRO 2020), Opatija, Croatia, 28 September–2 October 2020. [Google Scholar] [CrossRef]
  14. Farooq, M.S.; Rehman, A.U.; Idrees, M.; Raza, M.A.; Ali, J.; Masud, M.; Al-Amri, J.F.; Kazmi, S.H.R. An Effective Convolutional Neural Network Model for the Early Detection of COVID-19 Using Chest X-ray Images. Appl. Sci. 2021, 11, 10301. [Google Scholar] [CrossRef]
  15. Ansah, F.A.; Amo-Boateng, M.; Siabi, E.K.; Bordoh, P.K. Location of seed spoilage in mango fruit using X-ray imaging and convolutional neural networks. Sci. Afr. 2023, 20, e01649. [Google Scholar] [CrossRef]
  16. Tollner, E.W.; Gitaitis, R.D.; Seebold, K.W.; Maw, B.W. Experiences with a food product X-ray inspection system for classifying onions. Appl. Eng. Agric. 2005, 21, 907–912. [Google Scholar] [CrossRef]
  17. Tao, Y.; Chen, Z.; Jing, H.; Walker, J. Internal inspection of deboned poultry using X-ray imaging and adaptive thresholding. Trans. ASAE 2001, 44, 1005–1009. [Google Scholar]
  18. Ye, X.; Jin, S.; Wang, D.; Zhao, F.; Yu, Y.; Zheng, D.; Ye, N. Identification of the Origin of White Tea Based on Mineral Element Content. Food Anal. Methods 2017, 10, 191–199. [Google Scholar] [CrossRef]
  19. Dan, S.J. NIR Spectroscopy Oranges Origin Identification Framework Based on Machine Learning. Int. J. Semant. Web Inf. Syst. 2022, 18, 16. [Google Scholar] [CrossRef]
  20. Cui, Z.Y.; Liu, C.L.; Li, D.D.; Wang, Y.Z.; Xu, F.R. Anticoagulant activity analysis and origin identification of Panax notoginseng using HPLC and ATR-FTIR spectroscopy. Phytochem. Anal. 2022, 33, 971–981. [Google Scholar] [CrossRef]
  21. Patel, K.K.; Kar, A.; Khan, M.A. Monochrome computer vision for detecting common external defects of mango. J. Food Sci. Technol. 2021, 58, 4550–4557. [Google Scholar] [CrossRef]
  22. Nasiri, A.; Omid, M.; Taheri-Garavand, A. An automatic sorting system for unwashed eggs using deep learning. J. Food Eng. 2020, 283, 110036. [Google Scholar] [CrossRef]
  23. Fan, S.; Li, J.; Zhang, Y.; Tian, X.; Huang, W. On line detection of defective apples using computer vision system combined with deep learning methods. J. Food Eng. 2020, 286, 110102. [Google Scholar] [CrossRef]
  24. Liu, C.; Shen, T.; Xu, F.; Wang, Y. Main components determination and rapid geographical origins identification in Gentiana rigescens Franch. based on HPLC, 2DCOS images combined to ResNet. Ind. Crops Prod. 2022, 187, 115430. [Google Scholar] [CrossRef]
  25. Hika, W.A.; Atlabachew, M.; Amare, M. Geographical origin discrimination of Ethiopian sesame seeds by elemental analysis and chemometric tools. Food Chem. 2023, 17, 100545. [Google Scholar] [CrossRef]
  26. Gu, L.; Xie, X.; Wang, B.; Jin, Y.; Wang, L.; Wang, J.; Yin, G.; Bi, K.; Wang, T. Discrimination of Lonicerae japonicae Flos according to species, growth mode, processing method, and geographical origin with ultra-high performance liquid chromatography analysis and chemical pattern recognition. J. Pharm. Biomed. Anal. 2022, 219, 114924. [Google Scholar] [CrossRef]
  27. Xue, Q.L.; Miao, P.Q.; Miao, K.H.; Yu, Y.; Li, Z. X-ray-based machine vision technique for detection of internal defects of sterculia seeds. J. Food Sci. 2022, 87, 3386–3395. [Google Scholar] [CrossRef]
  28. Yadav, S.; Sengar, N.; Singh, A.; Singh, A.; Dutta, M.K. Identification of disease using deep learning and evaluation of bacteriosis in peach leaf. Ecol. Inform. 2021, 61, 101247. [Google Scholar] [CrossRef]
  29. Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437. [Google Scholar] [CrossRef]
  30. Andriiashen, V.; van Liere, R.; van Leeuwen, T.; Batenburg, K.J. CT-based data generation for foreign object detection on a single X-ray projection. Sci. Rep. 2023, 13, 1881. [Google Scholar] [CrossRef]
  31. Dasenaki, M.E.; Thomaidis, N.S. Quality and Authenticity Control of Fruit Juices—A Review. Molecules 2019, 24, 1014. [Google Scholar] [CrossRef]
  32. Munera, S.; Rodriguez-Ortega, A.; Aleixos, N.; Cubero, S.; Gomez-Sanchis, J.; Blasco, J. Detection of Invisible Damages in “Rojo Brillante” Persimmon Fruit at Different Stages Using Hyperspectral Imaging and Chemometrics. Foods 2021, 10, 2170. [Google Scholar] [CrossRef] [PubMed]
Figure 1. X-ray images of normal and defective samples in Yunnan and Guangdong. (a) Normal samples from Yunnan; (b) normal samples from Guangdong; (c) defective samples.
Figure 1. X-ray images of normal and defective samples in Yunnan and Guangdong. (a) Normal samples from Yunnan; (b) normal samples from Guangdong; (c) defective samples.
Foods 12 01775 g001
Figure 2. The diagram of the analysis process of the overall architecture of AFNet.
Figure 2. The diagram of the analysis process of the overall architecture of AFNet.
Foods 12 01775 g002
Figure 3. Flowchart of the proposed multi-category classification.
Figure 3. Flowchart of the proposed multi-category classification.
Foods 12 01775 g003
Figure 4. Confusion matrix of the proposed AFNet model when detecting normal and defective samples from various places.
Figure 4. Confusion matrix of the proposed AFNet model when detecting normal and defective samples from various places.
Foods 12 01775 g004
Figure 5. Training and testing curves of AFNet in distinguishing the place of origin of the fruits. (a) Accuracy curves; (b) Loss curves. In Figure 5, the x-axis is the epochs; in (a), the y-axis is accuracy; and in (b), the y-axis is the loss value.
Figure 5. Training and testing curves of AFNet in distinguishing the place of origin of the fruits. (a) Accuracy curves; (b) Loss curves. In Figure 5, the x-axis is the epochs; in (a), the y-axis is accuracy; and in (b), the y-axis is the loss value.
Foods 12 01775 g005
Figure 6. Confusion matrix of the proposed AFNet model when distinguishing various places of origin.
Figure 6. Confusion matrix of the proposed AFNet model when distinguishing various places of origin.
Foods 12 01775 g006
Figure 7. Validation accuracy results under various learning rates.
Figure 7. Validation accuracy results under various learning rates.
Foods 12 01775 g007
Figure 8. Validation results of various batch sizes.
Figure 8. Validation results of various batch sizes.
Foods 12 01775 g008
Figure 9. Training and testing curves of AFNet in muti-category. (a) Accuracy curves; (b) Loss curves.
Figure 9. Training and testing curves of AFNet in muti-category. (a) Accuracy curves; (b) Loss curves.
Foods 12 01775 g009
Figure 10. Confusion matrix of the proposed AFNet model in muti-category classification.
Figure 10. Confusion matrix of the proposed AFNet model in muti-category classification.
Foods 12 01775 g010
Table 1. Composition and division of the dataset.
Table 1. Composition and division of the dataset.
Dataset NameCategoriesTotal SizeProportion
quality classificationNormal14303:1:1
Defective170
origin identificationYunnan4557:2:2
Guangdong594
three-category classificationYunnan4524:1:1
Guangdong587
Defective101
Table 2. Architecture of the AFNet.
Table 2. Architecture of the AFNet.
LayersNumber of FiltersSize of FiltersStride
Input---
1st Conv + Relu43 × 31
Dropout30%--
2nd Conv + Relu83 × 31
Dropout40%--
3rd Conv + Relu163 × 31
MaxPooling-2 × 22
Dropout50%--
Flatten Layer---
1st Dense Layer---
Dropout20%--
2nd Dense Layer---
Dropout30%--
3rdt Dense Layer---
Dropout30%--
Output Layer + Softmax---
Table 3. The validation result of different CNN models.
Table 3. The validation result of different CNN models.
No.ModelAccuracyPrecisionSpecificity
1AFNet model96.33%96.27%100.0%
2BSSNet model94.05%95.56%96.16%
3VGG16 model96.13%96.86%97.89%
4Resnet18 model94.33%93.38%99.29%
5Inception model95.87%95.27%99.29%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Wu, Z.; Xue, Q.; Miao, P.; Li, C.; Liu, X.; Cheng, Y.; Miao, K.; Yu, Y.; Li, Z. Deep Learning Network of Amomum villosum Quality Classification and Origin Identification Based on X-ray Technology. Foods 2023, 12, 1775. https://doi.org/10.3390/foods12091775

AMA Style

Wu Z, Xue Q, Miao P, Li C, Liu X, Cheng Y, Miao K, Yu Y, Li Z. Deep Learning Network of Amomum villosum Quality Classification and Origin Identification Based on X-ray Technology. Foods. 2023; 12(9):1775. https://doi.org/10.3390/foods12091775

Chicago/Turabian Style

Wu, Zhouyou, Qilong Xue, Peiqi Miao, Chenfei Li, Xinlong Liu, Yukang Cheng, Kunhong Miao, Yang Yu, and Zheng Li. 2023. "Deep Learning Network of Amomum villosum Quality Classification and Origin Identification Based on X-ray Technology" Foods 12, no. 9: 1775. https://doi.org/10.3390/foods12091775

APA Style

Wu, Z., Xue, Q., Miao, P., Li, C., Liu, X., Cheng, Y., Miao, K., Yu, Y., & Li, Z. (2023). Deep Learning Network of Amomum villosum Quality Classification and Origin Identification Based on X-ray Technology. Foods, 12(9), 1775. https://doi.org/10.3390/foods12091775

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop