Article

Automated Grading of Red Ginseng Using DenseNet121 and Image Preprocessing Techniques

Minhyun Kim, Jiyoon Kim, Jung Soo Kim, Jeong-Ho Lim and Kwang-Deog Moon
1 School of Food Science and Technology, Kyungpook National University, Daegu 41566, Republic of Korea
2 Food Safety and Distribution Research Group, Korea Food Research Institute, Wanju-gun 55365, Republic of Korea
3 Food and Bio-Industry Research Institute, Kyungpook National University, Daegu 41566, Republic of Korea
* Author to whom correspondence should be addressed.
Agronomy 2023, 13(12), 2943; https://doi.org/10.3390/agronomy13122943
Submission received: 22 October 2023 / Revised: 12 November 2023 / Accepted: 28 November 2023 / Published: 29 November 2023
(This article belongs to the Section Precision and Digital Agriculture)

Abstract

Red ginseng is ginseng that has been steamed and dried, giving it more functional properties and a longer shelf-life. Red ginseng is graded by appearance and inner quality. However, this conventional process is costly in terms of time and human resources, and has the disadvantage of subjective assessment results. Therefore, a convolutional neural network (CNN) method was proposed to automate the grading process of red ginseng, optimize the preprocessing method, select an accurate and efficient deep learning model, and explore the feasibility of grade discrimination based solely on external quality information, without considering internal quality characteristics. In this study, the effect of five distinct preprocessing methods, namely RGB, binary, gray, contrast-limited adaptive histogram equalization (CLAHE), and Gaussian blur, on the grading accuracy of red ginseng images was investigated. Furthermore, a comparative analysis was conducted on the performance of four different models: one CNN model and three transfer learning models, VGG19, MobileNet, and DenseNet121. Among them, DenseNet121 with CLAHE preprocessing reported the best performance; its accuracy on the Dataset 2 test set was 95.11%. This finding suggests that deep learning techniques can provide an objective and efficient solution for the grading process of red ginseng without an inner quality inspection.

1. Introduction

Ginseng is obtained from the root of Panax ginseng Meyer and has been consumed for centuries in East Asia as a high-quality herbal product [1]. It is known for reducing fatigue, strengthening the immune system, and enhancing bone health [2,3,4]. Red ginseng, made by repeatedly steaming and drying ginseng, can be stored for a long time. This processing not only extends the storage period but also improves functionality over fresh ginseng, owing to the production of physiologically active substances found only in red ginseng, such as Rh2, Rg3, Rh1, and Rh4 [5,6]. After steaming and drying, twigs are trimmed off to obtain a specific morphology with a head, body, and one or two primary legs. Subsequently, professional inspectors divide the trimmed red ginseng into four grades: first, second, third, and out-of-grade.
The grade depends on the appearance and inner quality of the red ginseng. Appearance is the most critical factor in grading and includes color, leg number, length, the proportion of each part, morphology, and outer defects such as cracks and wounds [7]. Internal quality assessment evaluates inner whitening, inner holes, and tissue compactness. To evaluate inner quality, inspectors check for changes caused by differences in internal light permeability when illuminating red ginseng with intense light [8]. However, these processes incur high costs in terms of time and human resources, and have the disadvantage of subjective assessment results [9].
Previous studies have attempted to set up automatic and reliable systems to classify red ginseng grades based on appearance and inner quality. Numerical appearance traits such as leg number [10], leg and body length and ratio [11], and head area and color [12] were extracted to evaluate exterior quality, but these approaches achieved low accuracy and do not reflect the overall shape and comprehensive appearance. To inspect internal quality, nuclear magnetic resonance [8], magnetic resonance imaging [9], and infrared radiation [13] have been considered. However, these methods require additional expense in terms of equipment and time.
Deep learning is a sub-field of machine learning that utilizes multiple layers of an artificial neural network. Remarkably, convolutional neural networks (CNNs), built on convolutional layers, have shown high performance in image analysis, including object detection, segmentation, and pattern recognition [14]. A CNN-based model comprehensively extracts features that are otherwise challenging to represent with numerical values, such as color distribution, morphology, and direction, without humans extracting them directly [15]. Therefore, CNN-based methods have been successfully used to inspect comprehensive factors for sorting or grading in agriculture, including for apples [16], okra [17], and carrots [18].
Recent studies have shown that CNN-based models can grade ginseng by appearance quality [19,20]. However, while fresh ginseng is graded only on its external quality, red ginseng is also graded on its internal quality, which develops during processing. Therefore, a new research challenge is to explore the possibility and limitations of evaluating the total red ginseng grade based on external quality alone.
Transfer learning is a method of training a new model on the basis of a pre-trained model; it extracts image features by retaining not only the structure of the pre-trained model but also its weights. In image analysis with a conventional convolutional model, a large amount of computation must be performed to adjust the weights during learning. Because transfer learning fixes the learned weights, it offers quick convergence and high accuracy even with a small dataset [20].
Because deep learning extracts features automatically, image preprocessing not only ensures proper learning but also improves accuracy and learning efficiency [21]. Image preprocessing includes removing surplus information such as background or noise [22,23] and adjusting pixel values. Preprocessing can highlight image characteristics by adjusting pixel values via smoothing and color conversion. However, the preprocessing technique must be matched to the features of the target; otherwise, important information may be removed or irrelevant information may be highlighted unintentionally [24,25].
In this study, several deep learning models were trained on red, green, and blue (RGB) images to automate red ginseng grading, reducing human resource requirements and creating an objective grading technique. The aims of this study were threefold: first, to examine the possibility of automatic grade discrimination based solely on comprehensive external quality, without internal quality information, and to evaluate its influence on each grade; second, to compare various preprocessing methods applied to red ginseng images to determine the image characteristics and select the optimal method; and finally, to optimize an accurate and efficient model by training several deep learning models, including transfer learning models, on small datasets.

2. Materials and Methods

2.1. Sample Preparation

Six-year-old red ginseng samples were obtained from the Punggi Ginseng Cooperative Association (Punggi, Republic of Korea). All samples were graded as first, second, third, or out-of-grade by professional graders according to standard red ginseng grading measures [26]. A total of 1500 red ginseng roots, with 375 roots per grade, were prepared.

2.2. Image Acquisition

The image data were obtained in an illumination chamber that blocked external light (Figure 1). The chamber was equipped with 4-way LEDs, an illuminance meter (K14649784, SK electronic Co., Gwangju, Republic of Korea) used to set the light intensity to 450 ± 10 lux, and a CMOS digital camera (PowerShot G7X Mark III, Canon Inc., Tokyo, Japan). The images were acquired with the camera conditions fixed at ISO 125 sensitivity, f/2.8 aperture, 1/15 s exposure time, no zoom, no flash, 72 dpi resolution, and 9 mm focal length, with the camera positioned 32 cm from the sample. Two images, of the front and back of the red ginseng root, were acquired, and a total of 3000 images of 750 pieces of red ginseng, sorted by grade, were saved in JPG format at a resolution of 5472 × 3648 (Figure 2). Finally, 20% of the images, i.e., 600 images (150 per grade), were designated as test data, and 2400 images were designated as training data.

2.3. Physical Characteristics

The acquired red ginseng images were analyzed using ImageJ 1.46r software (National Institutes of Health, Bethesda, MD, USA) to measure body length, leg length, and diameter. The body–leg ratio was calculated as the measured body length divided by the leg length, and the body–diameter ratio as the body length divided by the diameter.

2.4. Image Preprocessing

To extract the red ginseng area by removing the background and shadow from the input image, the RGB image was converted into a hue, saturation, and value (HSV) image. Since the white background and shadow have low saturation in the HSV image, the area with a saturation value of 50 or higher was designated as the red ginseng area. Small holes and noise were removed using the morphological operations of closing and opening [27]. Subsequently, the HSV image was converted back to RGB and all background pixels were set to zero. The object was then centered and the image cropped to the extent of each root. After this common preprocessing, CLAHE, Gaussian blur, gray, or binary conversion was applied, and the image was resized to 224 × 224.
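A minimal sketch of this background-removal pipeline using OpenCV is given below. The text fixes the saturation threshold (50), the closing/opening operations, and the 224 × 224 resize; the 5 × 5 structuring element and the bounding-box crop via NumPy are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_ginseng(path, out_size=(224, 224)):
    """Remove the white background/shadow and crop to the root region."""
    img = cv2.imread(path)                          # BGR image
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Background and shadow have low saturation; keep S >= 50 as ginseng.
    mask = cv2.inRange(hsv[:, :, 1], 50, 255)
    # Closing then opening removes small holes and speckle noise.
    kernel = np.ones((5, 5), np.uint8)              # kernel size is an assumption
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    img[mask == 0] = 0                              # zero out background pixels
    # Crop to the bounding box of the root, then resize for the network.
    ys, xs = np.where(mask > 0)
    crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return cv2.resize(crop, out_size)
```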

2.4.1. Contrast-Limited Adaptive Histogram Equalization

Contrast-limited adaptive histogram equalization (CLAHE) is an algorithm that addresses the shortcoming of adaptive histogram equalization, which increases contrast excessively and thus amplifies noise. CLAHE redistributes pixel values by limiting the height of the histogram so that the contrast of the image is adjusted appropriately [28]. The calculation of CLAHE can be represented by the following equation:
g = (g_max − g_min) × P(f) + g_min
where g is the calculated pixel value, and g_max and g_min are the maximum and minimum pixel values of the image, respectively. P(f) is the cumulative probability distribution [29]. Red ginseng images were converted to CIE Lab color, which contains lightness, red/green, and blue/yellow channels. CLAHE was then applied to each channel, and the CIE Lab image was converted back to an RGB image. The clip limit was set to 2, and the block size was (10, 10).
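A minimal sketch of this step with OpenCV's built-in CLAHE, using the stated clip limit of 2 and a 10 × 10 tile grid (the use of `cv2.createCLAHE` itself is an assumption; the paper does not name the implementation):

```python
import cv2

def apply_clahe(img_bgr):
    """Apply CLAHE to each CIE Lab channel, then convert back to RGB space."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(10, 10))
    # Equalize the L (lightness), a (red/green), and b (blue/yellow) channels.
    channels = [clahe.apply(c) for c in cv2.split(lab)]
    return cv2.cvtColor(cv2.merge(channels), cv2.COLOR_LAB2BGR)
```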

2.4.2. Gaussian Blur

Gaussian blur is widely used as a smoothing method that removes noise and high-frequency components from an image. The Gaussian blur kernel can be expressed using the following equation:
P(x, y) = (1 / (2πσ²)) × e^(−(x² + y²) / (2σ²))
where x and y are the horizontal and vertical distances from the origin pixel, respectively, and σ is the standard deviation of the distribution. The sigma value was set to 5.
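With OpenCV, this smoothing step could look like the following sketch; passing a zero kernel size so that it is derived from sigma is an illustrative choice, as the text specifies only σ = 5.

```python
import cv2

# Smooth with a Gaussian kernel; sigma = 5 as stated in the text.
# ksize=(0, 0) lets OpenCV derive the kernel size from sigma.
blurred = cv2.GaussianBlur(img_bgr, ksize=(0, 0), sigmaX=5)
```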

2.4.3. Grayscale

RGB images contain three channels: red, green, and blue. A grayscale image, by contrast, has a single channel that expresses brightness without color. The gray conversion can be expressed using the following equation:
Y = 0.299 × R + 0.587 × G + 0.114 × B
where Y is the pixel value of the gray image, and the values of the red, green, and blue dimensions are represented by R, G, and B, respectively.

2.4.4. Binary

A binary image is obtained by binarizing the RGB image. All pixels of the red ginseng area in the three-channel RGB image were set to a single-channel value of 255, and the background was set to zero. Binary images are effective for analyzing size and body shape information [30].
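The gray and binary conversions of Sections 2.4.3 and 2.4.4 might be combined as in this sketch, which assumes the background has already been zeroed by the earlier masking step:

```python
import cv2

# Grayscale: one brightness channel via Y = 0.299R + 0.587G + 0.114B.
gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
# Binary: any nonzero (root) pixel becomes 255; the zeroed background stays 0.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
```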

2.5. Architecture of Convolutional Neural Networks

A generalized CNN contains convolution layers, pooling layers, and fully connected (FC) layers. Among them, the convolutional layer, the core of the convolutional model, automatically extracts image features [31]. Pooling layers reduce the dimensionality of the feature maps by combining a set of values into a mean, maximum, or minimum value, which enables the removal of irrelevant information. A flatten layer collapses the spatial dimensions into one-dimensional features. Two FC layers were set to select the features extracted from the convolutional layers [32]. The two FC layers had 128 and 64 nodes, respectively, each with a Rectified Linear Unit (ReLU) activation function and a dropout of 0.3, and the grades were output by a softmax function.
The CNN model was trained using the structure shown in Table 1, constructed by referring to Agarwal et al. [33]. The hyperparameters were a batch size of 32, an initial learning rate of 0.001, and the Adam optimizer [34]. Finally, 20% of the training data were designated as validation data, and categorical cross-entropy loss was used to evaluate performance during model training.
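The following sketch assembles a Keras model along the lines of Table 1 and the stated hyperparameters; the ReLU activations and "same" padding in the convolutional layers are assumptions the table does not specify.

```python
from tensorflow.keras import layers, models, optimizers

def build_cnn(num_classes=4):
    """CNN following Table 1: four convolution blocks, then two FC layers."""
    m = models.Sequential([layers.InputLayer(input_shape=(224, 224, 3))])
    m.add(layers.Conv2D(16, 3, activation="relu", padding="same"))
    m.add(layers.MaxPooling2D(pool_size=3, strides=2))
    for _ in range(3):  # Conv2-Conv4: pairs of 32-kernel convolutions + pooling
        m.add(layers.Conv2D(32, 3, activation="relu", padding="same"))
        m.add(layers.Conv2D(32, 3, activation="relu", padding="same"))
        m.add(layers.MaxPooling2D(pool_size=3, strides=2))
    m.add(layers.Flatten())
    for units in (128, 64):  # two FC layers, each followed by dropout 0.3
        m.add(layers.Dense(units, activation="relu"))
        m.add(layers.Dropout(0.3))
    m.add(layers.Dense(num_classes, activation="softmax"))
    m.compile(optimizer=optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])
    return m
```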
In this study, three models, i.e., MobileNet [35], DenseNet121 [36], and VGG19 [37], trained on ImageNet data, were used for transfer learning. ImageNet is a dataset of approximately 1.4 million images classified into approximately 1000 classes; models trained on it extract general image features from massive data and are therefore the most commonly used models in transfer learning [20]. As shown in Figure 3, the convolutional layers were replaced with the transfer learning models. Two trainable fully connected layers were set to select features from the transfer model. All fully connected layers were configured identically to those of the CNN model, with 128 and 64 nodes, a ReLU activation function, and a dropout of 0.3. Finally, a softmax layer was set to decide the grade.
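A rough sketch of this setup, assuming the Keras DenseNet121 application with frozen ImageNet weights; substituting MobileNet or VGG19 changes only the base model.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

def build_transfer_model(num_classes=4):
    """Frozen DenseNet121 feature extractor plus a trainable FC head."""
    base = DenseNet121(include_top=False, weights="imagenet",
                       input_shape=(224, 224, 3))
    base.trainable = False  # keep the pre-trained ImageNet weights fixed
    return models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(128, activation="relu"), layers.Dropout(0.3),
        layers.Dense(64, activation="relu"), layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
```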

2.6. Dataset

To check how separable the grade groups were for the deep learning model, three different datasets were used for training. Dataset 1 included all four grades. Then, the first and second grades were merged into a single group, high grade, and Dataset 2 comprised three grades: high, third, and out-of-grade. Finally, Dataset 3 contained only the first and second grades.
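In code, the three datasets amount to label remapping; the integer encoding below is an illustrative assumption, not given in the text.

```python
# Assumed integer encoding: 0 = first, 1 = second, 2 = third, 3 = out-of-grade.
DATASET_1 = {0: 0, 1: 1, 2: 2, 3: 3}  # all four grades
DATASET_2 = {0: 0, 1: 0, 2: 1, 3: 2}  # first and second merged into "high"
DATASET_3 = {0: 0, 1: 1}              # first vs. second only

def relabel(labels, mapping):
    """Keep samples whose grade appears in the mapping and remap their labels."""
    return [mapping[y] for y in labels if y in mapping]
```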

2.7. Training Environment

All procedures were implemented in Python 3.7 with Visual Studio Code 1.7, using TensorFlow 2.7 for deep learning and OpenCV 4.6 for image processing. The hardware included an AMD Ryzen 5 5600H CPU (Santa Clara, CA, USA) with 4 GB of RAM and an NVIDIA GeForce RTX 3060 Laptop GPU (Santa Clara, CA, USA).

2.8. Performance Evaluation

The performances of the models were evaluated using accuracy, precision, recall, and F1 score. All measurements were reported as macro averages. They are expressed by the following equations:
Accuracy = (TP + TN)/(TP + TN + FP + FN),
Precision = TP/(TP + FP),
Recall = TP/(TP + FN),
F1score = (2 × Precision × Recall)/(Precision + Recall) = 2TP/(2TP + FP + FN),
where true positive (TP) is the number of correctly classified positive grades, and true negative (TN) is the number of correctly classified negative grades. False positive (FP) is the number of incorrectly classified positive grades, and false negative (FN) is the number of incorrectly classified negative grades.
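These macro-averaged metrics could be computed, for example, with scikit-learn (an assumption; the library is not named in the text):

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

def evaluate(y_true, y_pred):
    """Macro-averaged metrics over the grade classes."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
        "f1": f1_score(y_true, y_pred, average="macro"),
    }
```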

3. Results and Discussion

3.1. Physical Characteristics

Table 2 shows the physical characteristics of each grade of red ginseng. In body length, only the out-of-grade roots, at 5.40 cm, were shorter than the other grades. In leg length, the first grade, at 5.51 cm, was longer than the other grades. The diameter of the red ginseng was 1.65 cm for first grade, 1.77 cm for second, 1.83 cm for third, and 1.74 cm for out-of-grade. The body-to-leg ratio was 1.96 for third grade, which differed from the other grades, and the body-to-diameter ratio was 3.20 for out-of-grade, which likewise differed from the other grades. Meanwhile, the first and second grades showed no difference in body-to-leg and body-to-diameter ratios.

3.2. Preprocessing

As shown in Table 3, CLAHE preprocessing gave the highest performance for CNN, DenseNet121, and MobileNet. This was because, as shown in Figure 4, the histogram pixel values were more widely distributed, producing a clearer image and better recognition of defects and color in the appearance [38]. In the blurred image, the pixel values were clustered around certain values, in contrast to CLAHE, especially in the G and R channels. This was because the legs of the red ginseng were browned more than the body, so the body and legs differed in color [11]. For VGG19, Blur showed the highest accuracy among the preprocessing methods, 79.83%, but the differences between methods were small.
Figure 5 shows that the difference between gray and binary images lies in the texture information expressed by brightness [30,39]. For all models except CNN, Gray was more accurate than Binary and RGB. In particular, the accuracy of gray preprocessing on MobileNet was 82.5%, the same as CLAHE. This suggests that texture information is important for the classification of red ginseng grade.
According to the preprocessing results, CLAHE was the best preprocessing method for all models except VGG19. Therefore, all subsequent experiments used the CLAHE-preprocessed dataset.

3.3. Model Selection

The number of epochs is one of the hyperparameters that should be appropriately specified to avoid underfitting and overfitting. Underfitting refers to the state in which the model fails to properly find the rules of the training set, and overfitting is a phenomenon in which the model becomes excessively fitted to the training set, resulting in testing performance degradation [40]. Therefore, finding the appropriate number of epochs is an important factor in model learning. The appropriate epoch number can be found by comparing the loss function values of the training data and the validation data [41].
Figure 6 shows the loss and accuracy of the training and validation data over the epochs for each model. The losses and accuracies of the training and validation data for all models were close until approximately epoch 10–20 and then started to diverge. The difference between the CNN and the transfer models was the gap between the training and validation data. While the CNN model showed a continually increasing gap between the training and validation data as the epochs increased, the transfer learning models maintained this gap or showed only a modest increase. This is due to the effectiveness of transfer learning in suppressing overfitting [42]. The same pattern can be observed in the accuracy trend over the epochs in Table 4. All models except VGG19 showed a peak in accuracy at epoch 20 followed by a decline, with VGG19 showing its highest accuracy at epoch 30. This indicates that test accuracy was low due to underfitting before the appropriate epoch interval and due to overfitting in the later epoch interval. Additionally, all transfer learning models were more accurate than the CNN model, indicating that transfer learning improves the accuracy of red ginseng grading. Among them, DenseNet121 was found to be the most suitable model for grading, with an accuracy of 84.67% at epoch 20.
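The epoch count here was chosen by inspecting the loss curves; as an automated alternative, which is not the procedure used in this study, validation-based early stopping could be sketched as follows (model, x_train, and y_train are placeholders):

```python
from tensorflow.keras.callbacks import EarlyStopping

# Monitor validation loss instead of fixing the epoch count in advance;
# restore_best_weights keeps the weights from the best epoch seen.
history = model.fit(x_train, y_train,
                    validation_split=0.2, batch_size=32, epochs=100,
                    callbacks=[EarlyStopping(monitor="val_loss", patience=5,
                                             restore_best_weights=True)])
```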
Table 5 shows the number of parameters, training and test times, and size of each model. DenseNet121 had the highest accuracy, but also the longest test time, 5.05 ms. Although VGG19 had fewer trainable parameters than DenseNet121, its training time was the longest due to its large number of transferred and total parameters. However, VGG19’s accuracy was 80.33%, the lowest among the transfer learning models, showing that computation and model size do not guarantee accuracy. The CNN model had the shortest test time, 1.47 ms, as its parameter count and model size were only approximately one-twentieth those of the transfer learning models. MobileNet had the second highest accuracy, 82.50%, and its training and test times were only approximately half those of the other transfer learning models. Thus, although DenseNet121 had the highest accuracy, MobileNet had the highest time and memory efficiency. As a result, MobileNet could be a suitable feature-extracting model in a limited environment; however, DenseNet121 is the most accurate model for grading red ginseng.

3.4. Model Optimization

In the original DenseNet121, a global pooling layer is used in place of FC layers to reduce the number of parameters and avoid overfitting. However, the original FC layer structure may not perform well on the target dataset because it was designed for a different source task [43]. Therefore, seven different FC layer structures, namely Dense1, Dense2, Dense3, Pool, Pool + Dense1, Pool + Dense2, and Pool + Dense3, were tested. Table 6 shows the accuracy, number of parameters, model size, training time, and test time for each FC layer structure with DenseNet121. Models with the global pooling layer generally had fewer parameters and took less time to train and test. This shows that the global pooling layer dramatically reduces the number of parameters and the model size, although it did not lead to a significant reduction in training and test time. Given that the original DenseNet121 structure, Pool, had the lowest accuracy, 79.17%, changing the fully connected layer structure in transfer learning can improve accuracy. Among the modified structures, Dense2, with no pooling layer and two FC layers, had the highest accuracy, 84.67%.
Table 7 shows the accuracy of the modified DenseNet121 by optimizer and learning rate. The learning rate determines the rate at which the weights are updated, and the optimizer determines the way in which they are updated, so both are key factors in model performance [44,45]. Four optimizers, Adam, Adagrad, RMSprop, and SGD, were tested on the modified DenseNet121. As a result, the RMSprop optimizer with a learning rate of 0.0001 showed the highest accuracy, 85.17%.
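The search in Table 7 amounts to a small grid, sketched here under the assumption that build_transfer_model from the earlier sketch yields the modified DenseNet121:

```python
from tensorflow.keras import optimizers

# Optimizers and learning rates from Table 7.
optimizer_classes = {"Adam": optimizers.Adam, "Adagrad": optimizers.Adagrad,
                     "RMSprop": optimizers.RMSprop, "SGD": optimizers.SGD}
for name, opt_cls in optimizer_classes.items():
    for lr in (0.1, 0.01, 0.001, 0.0001, 0.00001):
        model = build_transfer_model()  # fresh weights for every run
        model.compile(optimizer=opt_cls(learning_rate=lr),
                      loss="categorical_crossentropy", metrics=["accuracy"])
        # model.fit(...) and model.evaluate(...) would follow here
```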
As shown in Table 8, the optimized DenseNet121 model performed better on Dataset 2 than on Dataset 1, with an accuracy of 94.89%. This is consistent with the finding of Nagpal et al. [46] that model performance increased when two similar classes were grouped into one class. On Dataset 3, the discrimination accuracy was 75.67%, lower than on Dataset 1. In Figure 7a, among the 89 misclassified cases in the confusion matrix, 60 were between the first and second grades. Chang et al. [11] explained that small defects play a decisive role in grading high-grade red ginseng. Furthermore, an image of one side may not capture the parts involved in the grading. Additionally, Chung and Shin [47] showed that internal quality, i.e., the presence of inner whitening or holes, was a significant factor in distinguishing high grades. In conclusion, classifying first and second grades by appearance alone is of limited accuracy, and a method to evaluate internal quality is required to compensate. However, the three groups, high, third, and out-of-grade, can be determined by external factors alone.

4. Conclusions

This study aimed to automate the grading process of red ginseng by applying various preprocessing methods to deep learning models, selecting an accurate and efficient model, and exploring the possibility of classification based on comprehensive appearance. The classification performance varied depending on the preprocessing technique: RGB with CLAHE processing improved the deep learning models’ performance in red ginseng analysis. The optimized DenseNet121 model demonstrated the highest accuracy among the models, 85.17%. However, its performance varied when trained on different datasets. Specifically, when trained on Dataset 2, which groups first- and second-grade ginseng into a single high-grade class, the model achieved an accuracy of 94.89%. Conversely, when trained on only the first and second grades in Dataset 3, the model’s accuracy decreased to 75.67%. These findings suggest that classifying first- and second-grade red ginseng solely from RGB images using deep learning has limitations and requires internal inspection. Nonetheless, the method shows great potential for classifying the third grade, out-of-grade, and high grades. Future studies can explore the development of an automated red ginseng grading system with practical experiments in an offline environment using deep learning methods, and investigate other methods that can inspect internal quality to improve the classification accuracy of first- and second-grade red ginseng.

Author Contributions

Conceptualization, M.K.; methodology, M.K.; software, M.K.; validation, M.K.; formal analysis, M.K.; investigation, M.K.; resources, M.K.; data curation, J.S.K. and J.K.; writing—original draft preparation, M.K.; writing—review and editing, J.K. and K.-D.M.; visualization, M.K.; supervision, K.-D.M.; project administration, J.-H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Korea Institute of Planning and Evaluation for Technology in Food, Agriculture and Forestry (IPET) through High Value-added Food Technology Development Program, funded by Ministry of Agriculture, Food and Rural Affairs (MAFRA) (321049-5).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pan, Y.X.; Yang, M.H.; Zhang, M.; Jia, M.Y. Rapid Discrimination of Commercial American Ginseng and Asian Ginseng According to Diols Composition Using a Colorimetric Sensor Array. Sens. Actuators B Chem. 2019, 294, 48–54. [Google Scholar] [CrossRef]
  2. Arring, N.M.; Millstine, D.; Marks, L.A.; Nail, L.M. Ginseng as a Treatment for Fatigue: A Systematic Review. J. Altern. Complement. Med. 2018, 24, 624–633. [Google Scholar] [CrossRef] [PubMed]
  3. Ratan, Z.A.; Youn, S.H.; Kwak, Y.S.; Han, C.K.; Haidere, M.F.; Kim, J.K.; Min, H.; Jung, Y.J.; Hosseinzadeh, H.; Hyun, S.H.; et al. Adaptogenic Effects of Panax ginseng on Modulation of Immune Functions. J. Ginseng Res. 2021, 45, 32–40. [Google Scholar] [CrossRef] [PubMed]
  4. Avsar, U.; Karakus, E.; Halici, Z.; Bayir, Y.; Bilen, H.; Aydin, A.; Avsar, U.Z.; Ayan, A.; Aydin, S.; Karadeniz, A. Prevention of Bone Loss by Panax ginseng in a Rat Model of Inflammation-Induced Bone Loss. Cell. Mol. Biol. 2013, 59, 1835–1841. [Google Scholar] [CrossRef]
  5. Choi, J.E.; Nam, K.Y.; Li, X.; Kim, B.Y.; Cho, H.S.; Hwang, K.B. Changes of chemical Compositions and Ginsenoside Contents of Different Root Parts of Ginsengs with Processing Method. Korean J. Med. Crop Sci. 2010, 18, 118–125. [Google Scholar]
  6. Kim, Y.J.; Jeon, J.N.; Jang, M.G.; Oh, J.Y.; Kwon, W.S.; Jung, S.K.; Yang, D.C. Ginsenoside Profiles and Related Gene Expression during Foliation in Panax ginseng Meyer. J. Ginseng Res. 2014, 38, 66–72. [Google Scholar] [CrossRef]
  7. Son, J.R.; Lee, G.J.; Gang, S.W. Development of External Appearance Quality Evaluation Algorithm-Branch Detect Algorithm. Proc. Korean Soc. Agric. Mach. Conf. 2005, 10, 245–248. [Google Scholar]
  8. Kim, S.M.; Lim, J.G. Analysis of Magnetic Resonance Characteristics and Image of Korean Red Ginseng. J. Biosyst. Eng. 2003, 28, 253–260. [Google Scholar] [CrossRef]
  9. Kim, C.S.; Jung, I.C.; Kim, S.B. Distinction of Internal Tissue of Red Ginseng Using Magnetic Resonance Image. J. Ginseng Res. 2008, 32, 332–336. [Google Scholar] [CrossRef]
  10. Jeong, S.; Lee, Y.M.; Lee, S. Development of an Automatic Sorting System for Fresh Ginsengs by Image Processing Techniques. Hum.-Centric Comput. Inf. Sci. 2017, 7, 41. [Google Scholar] [CrossRef]
  11. Chang, Y.H.; Chang, D.I.; Bhang, S.H. Development of a Korean red-ginseng’s shape Sorting System Using Image Processing. J. Biosyst. Eng. 2001, 26, 279–286. [Google Scholar]
  12. Park, J.; Lee, S. A Red Ginseng Internal Measurement System Using Back-Projection. KIPS Trans. Softw. Data Eng. 2018, 7, 377–382. [Google Scholar] [CrossRef]
  13. Lu, J.Z.; Tan, L.J.; Jiang, H.Y. Review on Convolutional Neural Network (CNN) Applied to Plant Leaf Disease Classification. Agriculture 2021, 11, 707. [Google Scholar] [CrossRef]
  14. Wang, Z.Q.; Li, M.; Wang, H.X.; Jiang, H.Y.; Yao, Y.D.; Zhang, H.; Xin, J.C. Breast Cancer Detection Using Extreme Learning Machine Based on Feature Fusion with CNN Deep Features. IEEE Access 2019, 7, 105146–105158. [Google Scholar] [CrossRef]
  15. Li, Y.F.; Feng, X.Y.; Liu, Y.D.; Han, X.C. Apple Quality Identification and Classification by Image Processing Based on Convolutional Neural Networks. Sci. Rep. 2021, 11, 16618. [Google Scholar] [CrossRef] [PubMed]
  16. Raikar, M.M.; Meena, S.; Kuchanur, C.; Girraddi, S.; Benagi, P. Classification and Grading of Okra (Ladies Finger) Using Deep Learning. Procedia Comput. Sci. 2020, 171, 2380–2389. [Google Scholar] [CrossRef]
  17. Deng, L.M.; Li, J.; Han, Z.Z. Online Defect Detection and Automatic Grading of Carrots Using Computer Vision Combined with Deep Learning Methods. LWT Food Sci. Technol. 2021, 149, 111832. [Google Scholar] [CrossRef]
  18. Li, D.M.; Piao, X.R.; Lei, Y.; Li, W.; Zhang, L.J.; Ma, L.A. Grading Method of Ginseng (Panax ginseng C. A. Meyer) Appearance Quality Based on an Improved ResNet50 Model. Agronomy 2022, 12, 2925. [Google Scholar] [CrossRef]
  19. Li, D.M.; Zhai, M.T.; Piao, X.R.; Li, W.; Zhang, L.J. A Ginseng Appearance Quality Grading Method Based on an Improved ConvNeXt Model. Agronomy 2023, 13, 1770. [Google Scholar] [CrossRef]
  20. Morid, M.A.; Borjali, A.; Del Fiol, G. A Scoping Review of Transfer Learning Research on Medical Image Analysis Using ImageNet. Comput. Biol. Med. 2021, 128, 104115. [Google Scholar] [CrossRef]
  21. Zhao, M.A.; Shi, P.X.; Xu, X.Q.; Xu, X.Y.; Liu, W.; Yang, H. Improving the Accuracy of an R-CNN-Based Crack Identification System Using Different Preprocessing Algorithms. Sensors 2022, 22, 7089. [Google Scholar] [CrossRef] [PubMed]
  22. Fang, W.; Ding, Y.W.; Zhang, F.H.; Sheng, V.S. DOG: A New Background Removal for Object Recognition from Images. Neurocomputing 2019, 361, 85–91. [Google Scholar] [CrossRef]
  23. Buades, A.; Coll, B.; Morel, J.M. A Review of Image Denoising Algorithms, with a New One. Multiscale Model. Simul. 2005, 4, 490–530. [Google Scholar] [CrossRef]
  24. Fan, Q.N.; Yang, J.L.; Wipf, D.; Chen, B.Q.; Tong, X. Image Smoothing via Unsupervised Learning. ACM Trans. Graph. 2018, 37, 1–14. [Google Scholar] [CrossRef]
  25. Goh, Y.Z.; Teoh, A.B.J.; Goh, M.K.O. Wavelet Local Binary Patterns Fusion as Illuminated Facial Image Preprocessing for Face Verification. Expert Syst. Appl. 2011, 38, 3959–3972. [Google Scholar] [CrossRef]
  26. Lee, J.H.; Lee, J.S.; Kwon, W.S.; Kang, J.Y.; Lee, D.Y.; In, J.G.; Kim, Y.S.; Seo, J.; Baeg, I.H.; Chang, I.M.; et al. Characteristics of Korean Ginseng Varieties of Gumpoong, Sunun, Sunpoong, Sunone, Cheongsun, and Sunhyang. J. Ginseng Res. 2015, 39, 94–104. [Google Scholar] [CrossRef]
  27. Jones, R.; Svalbe, I. The Design of Morphological Filters using Multiple Structuring Elements, Part II: Open (Close) and Close (Open). Pattern Recognit. Lett. 1992, 13, 175–181. [Google Scholar] [CrossRef]
  28. Sonali; Sahu, S.; Singh, A.K.; Ghrera, S.P.; Elhosen, M. An Approach for De-Noising and Contrast Enhancement of Retinal Fundus Image Using CLAHE. Opt. Laser Technol. 2019, 110, 87–98. [Google Scholar] [CrossRef]
  29. Garg, D.; Garg, N.K.; Kumar, M. Underwater Image Enhancement using Blending of CLAHE and Percentile Methodologies. Multimed. Tools Appl. 2018, 77, 26545–26561. [Google Scholar] [CrossRef]
  30. Schiele, S.; Arndt, T.T.; Martin, B.; Miller, S.; Bauer, S.; Banner, B.M.; Brendel, E.M.; Schenkirsch, G.; Anthuber, M.; Huss, R.; et al. Deep Learning Prediction of Metastasis in Locally Advanced Colon Cancer Using Binary Histologic Tumor Images. Cancers 2021, 13, 2074. [Google Scholar] [CrossRef]
  31. Fan, S.X.; Li, J.B.; Zhang, Y.H.; Tian, X.; Wang, Q.Y.; He, X.; Zhang, C.; Huang, W.Q. On Line Detection of Defective Apples using Computer Vision System Combined with Deep Learning Methods. J. Food Eng. 2020, 286, 110102. [Google Scholar] [CrossRef]
  32. Thirumaladevi, S.; Swamy, K.V.; Sailaja, M. Remote Sensing Image Scene Classification by Transfer Learning to Augment the Accuracy. Meas. Sens. 2023, 25, 100645. [Google Scholar] [CrossRef]
  33. Agarwal, M.; Singh, A.; Arjaria, S.; Sinha, A.; Gupta, S. ToLeD: Tomato Leaf Disease Detection using Convolution Neural Network. Procedia Comput. Sci. 2020, 167, 293–301. [Google Scholar] [CrossRef]
  34. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014. [Google Scholar] [CrossRef]
  35. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017. [Google Scholar] [CrossRef]
  36. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar] [CrossRef]
  37. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014. [Google Scholar] [CrossRef]
  38. Fu, Q.Q.; Celenk, M.; Wu, A.P. An Improved Algorithm Based on CLAHE for Ultrasonic Well Logging Image Enhancement. Clust. Comput. 2019, 22, 12609–12618. [Google Scholar] [CrossRef]
  39. Bui, H.M.; Lech, M.; Cheng, E.; Neville, K.; Burnett, I.S. Using Grayscale Images for Object Recognition with Convolutional-Recursive Neural Network. In Proceedings of the 2016 IEEE Sixth International Conference on Communications and Electronics (ICCE), Ha Long, Vietnam, 27–29 July 2016; pp. 321–325. [Google Scholar] [CrossRef]
  40. Gombolay, G.Y.; Gopalan, N.; Bernasconi, A.; Nabbout, R.; Megerian, J.T.; Siegel, B.; Hallman-Cooper, J.; Bhalla, S.; Gombolay, M.C. Review of Machine Learning and Artificial Intelligence (ML/AI) for the Pediatric Neurologist. Pediatr. Neurol. 2023, 141, 42–51. [Google Scholar] [CrossRef]
  41. An, H.; Park, S.; Lee, J.; Kang, L. Study on the Application of Deep Learning Model for Estimation of Activity Duration in Railway Construction Project. J. Korean Soc. Railw. 2020, 23, 615–624. [Google Scholar] [CrossRef]
  42. Espejo-Garcia, B.; Mylonas, N.; Athanasakos, L.; Fountas, S.; Vasilakoglou, I. Towards Weeds Identification Assistance through Transfer Learning. Comput. Electron. Agric. 2020, 171, 105306. [Google Scholar] [CrossRef]
  43. Basha, S.H.S.; Vinakota, S.K.; Dubey, S.R.; Pulabaigari, V.; Mukherjee, S. AutoFCL: Automatically Tuning Fully Connected Layers for Handling Small Dataset. Neural Comput. Appl. 2021, 33, 8055–8065. [Google Scholar] [CrossRef]
  44. Jayapal, P.K.; Park, E.; Faqeerzada, M.A.; Kim, Y.S.; Kim, H.; Baek, I.; Kim, M.S.; Sandanam, D.; Cho, B.K. Analysis of RGB Plant Images to Identify Root Rot Disease in Korean Ginseng Plants Using Deep Learning. Appl. Sci. 2022, 12, 2489. [Google Scholar] [CrossRef]
  45. Saleem, M.H.; Potgieter, J.; Arif, K.M. Plant Disease Classification: A Comparative Evaluation of Convolutional Neural Networks and Deep Learning Optimizers. Plants 2020, 9, 1319. [Google Scholar] [CrossRef] [PubMed]
  46. Nagpal, K.; Foote, D.; Tan, F.S.; Liu, Y.; Chen, P.H.C.; Steiner, D.F.; Manoj, N.; Olson, N.; Smith, J.L.; Mohtashamian, A.; et al. Development and Validation of a Deep Learning Algorithm for Gleason Grading of Prostate Cancer from Biopsy Specimens. JAMA Oncol. 2020, 6, 1372–1380. [Google Scholar] [CrossRef]
  47. Chung, C.M.; Shin, J.S. Comparison of Quality on the Raw and Red Ginseng in Korean and American Ginseng. Korean J. Med. Crop Sci. 2006, 14, 183–187. [Google Scholar]
Figure 1. Photograph of the light chamber for red ginseng image acquisition.
Figure 2. Images of red ginseng: (a) first grade; (b) second grade; (c) third grade; and (d) out-of-grade.
Figure 3. Overall framework of the transfer models.
Figure 4. Preprocessed images and histograms: (a) CLAHE, (b) Original, and (c) Blur images; (d–f) histograms of CLAHE, Original, and Blur, respectively. The blue, green, and red channel values are represented by B, G, and R.
Figure 5. Color-transformed images: (a) RGB; (b) Gray; (c) Binary.
Figure 6. Loss and accuracy of each model’s training and validation: (a) CNN; (b) DenseNet121; (c) MobileNet; and (d) VGG19.
Figure 7. Confusion matrices of (a) Dataset 1, (b) Dataset 2, and (c) Dataset 3.
Table 1. Specific configurations of the CNN model.

| Layer | Operation | Kernel Shape | Number of Kernels | Stride | Number of Neurons | Dropout |
|---|---|---|---|---|---|---|
| Conv1 | Convolution | 3 × 3 | 16 | 1 | | |
| Pool1 | Pooling | 3 × 3 | | 2 | | |
| Conv2-1 | Convolution | 3 × 3 | 32 | 1 | | |
| Conv2-2 | Convolution | 3 × 3 | 32 | 1 | | |
| Pool2 | Pooling | 3 × 3 | | 2 | | |
| Conv3-1 | Convolution | 3 × 3 | 32 | 1 | | |
| Conv3-2 | Convolution | 3 × 3 | 32 | 1 | | |
| Pool3 | Pooling | 3 × 3 | | 2 | | |
| Conv4-1 | Convolution | 3 × 3 | 32 | 1 | | |
| Conv4-2 | Convolution | 3 × 3 | 32 | 1 | | |
| Pool4 | Pooling | 3 × 3 | | 2 | | |
| Flatten | | | | | | |
| Dense1 | Full connection | | | | 128 | 0.3 |
| Dense2 | Full connection | | | | 64 | 0.3 |
Table 2. Physical characteristics of red ginseng by grade.

| Grade | Body Length (cm) | Leg Length (cm) | Diameter (cm) | Body–Leg Ratio | Body–Diameter Ratio |
|---|---|---|---|---|---|
| First | 6.38 ± 1.12 b (1)(2) | 5.51 ± 0.89 b | 1.65 ± 0.16 a | 1.21 ± 0.40 a | 3.91 ± 0.85 b |
| Second | 6.37 ± 1.19 b | 5.02 ± 0.99 a | 1.77 ± 0.47 b | 1.37 ± 0.62 a | 3.74 ± 0.96 b |
| Third | 6.62 ± 2.05 b | 4.68 ± 1.86 a | 1.83 ± 0.30 b | 1.96 ± 1.78 b | 3.77 ± 1.44 b |
| Out-of-grade | 5.40 ± 1.43 a | 4.74 ± 1.43 a | 1.74 ± 0.25 ab | 1.35 ± 1.02 a | 3.20 ± 1.08 a |

(1) Values are mean ± standard deviation (n = 100). (2) a, b: values within a column with different letters are significantly different between grades by Duncan’s multiple range test (p < 0.05).
Table 3. Performance of models by preprocessing (unit: %).

| Model | CLAHE (1) | RGB | Blur | Gray | Binary |
|---|---|---|---|---|---|
| CNN | 78.67 | 73.00 | 73.17 | 72.33 | 74.33 |
| DenseNet121 | 84.67 | 78.83 | 78.33 | 81.00 | 75.33 |
| MobileNet | 82.50 | 82.17 | 76.83 | 82.50 | 74.83 |
| VGG19 | 79.33 | 79.17 | 79.83 | 79.33 | 76.00 |

(1) CLAHE, CLAHE-preprocessed image; RGB, red, green, blue color image; Blur, Gaussian-blurred image; Gray, lightness-value image; Binary, image with 0 or 255 pixel values.
Table 4. Accuracy of models by epoch (unit: %).

| Model (1) | Epoch 10 | Epoch 20 | Epoch 30 | Epoch 50 | Epoch 100 |
|---|---|---|---|---|---|
| CNN | 72.00 | 78.67 | 74.00 | 73.67 | 73.33 |
| DenseNet121 | 61.67 | 84.67 | 84.17 | 82.00 | 73.33 |
| MobileNet | 82.33 | 82.50 | 82.00 | 81.50 | 81.33 |
| VGG19 | 80.33 | 79.33 | 80.83 | 79.67 | 79.50 |

(1) Transfer learning models: DenseNet121, MobileNet, and VGG19; non-transfer learning model: CNN.
Table 5. Accuracy, model size, and training and test times of each model.

| Model (1) | Accuracy (%) | Total Parameters | Non-Trainable Parameters | Trainable Parameters | Model Size (MB) | Training Time (s/Epoch) | Test Time (ms/Image) |
|---|---|---|---|---|---|---|---|
| CNN | 78.67 | 391,748 | - | 391,748 | 4.58 | 2.55 | 1.47 |
| DenseNet121 | 84.67 | 13,468,676 | 7,037,504 | 6,431,172 | 101.59 | 7.42 | 2.93 |
| MobileNet | 82.50 | 9,660,036 | 3,228,864 | 6,431,172 | 86.19 | 2.35 | 1.73 |
| VGG19 | 80.83 | 23,244,292 | 20,024,384 | 3,219,908 | 113.34 | 9.90 | 4.70 |

(1) Transfer learning models: DenseNet121, MobileNet, and VGG19; non-transfer learning model: CNN.
Table 6. Comparison of fully connected layer structures in the DenseNet121 model.

| FC Structure (1) | Accuracy (%) | Total Parameters | Non-Trainable Parameters | Trainable Parameters | Model Size (MB) | Training Time (s/Epoch) | Test Time (ms/Image) |
|---|---|---|---|---|---|---|---|
| Dense1 | 83.33 | 13,460,676 | 7,037,504 | 6,423,172 | 101.49 | 7.33 | 2.92 |
| Dense2 | 84.67 | 13,468,676 | 7,037,504 | 6,431,172 | 101.59 | 7.42 | 2.93 |
| Dense3 | 82.23 | 13,472,836 | 7,037,504 | 6,435,332 | 101.65 | 7.47 | 2.91 |
| Pool (2) | 79.17 | 7,041,604 | 7,037,504 | 4,100 | 28.02 | 7.06 | 2.80 |
| Pool + Dense1 | 83.83 | 7,169,220 | 7,037,504 | 131,716 | 29.49 | 7.52 | 3.01 |
| Pool + Dense2 | 79.83 | 7,177,220 | 7,037,504 | 139,716 | 29.59 | 7.36 | 2.87 |
| Pool + Dense3 | 79.67 | 7,181,380 | 7,037,504 | 143,876 | 29.65 | 7.40 | 2.92 |

(1) Dense1, a flatten layer with one dense layer; Dense2, a flatten layer with two dense layers; Dense3, a flatten layer with three dense layers; Pool, a global pooling layer; Pool + Dense1, a global pooling layer with a flatten layer and a dense layer; Pool + Dense2, a global pooling layer with a flatten layer and two dense layers; Pool + Dense3, a global pooling layer with a flatten layer and three dense layers. (2) Original DenseNet121 fully connected layer structure.
Table 7. Accuracy of models by optimizer and learning rate (unit: %).

| Optimizer | LR 0.1 | LR 0.01 | LR 0.001 | LR 0.0001 | LR 0.00001 |
|---|---|---|---|---|---|
| Adam | 25.00 | 77.00 | 84.67 | 82.83 | 83.17 |
| Adagrad | 25.00 | 82.50 | 83.83 | 80.33 | 68.00 |
| RMSprop | 25.00 | 47.17 | 80.67 | 85.17 | 81.67 |
| SGD | 25.00 | 81.83 | 82.83 | 80.00 | 69.33 |
Table 8. Performance of optimized DenseNet121 by dataset (unit: %).

| Dataset (1) | Recall | Precision | F1 Score | Accuracy |
|---|---|---|---|---|
| Dataset 1 | 85.17 | 85.17 | 85.12 | 85.17 |
| Dataset 2 | 94.89 | 94.89 | 94.89 | 94.89 |
| Dataset 3 | 75.67 | 77.33 | 75.29 | 75.67 |

(1) Dataset 1 contains first, second, third, and out-of-grade; Dataset 2 contains high, third, and out-of-grade; Dataset 3 contains first and second grades.
