Article

Four-Dimension Deep Learning Method for Flower Quality Grading with Depth Information

College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China
* Authors to whom correspondence should be addressed.
The two authors contributed equally to this work.
Electronics 2021, 10(19), 2353; https://doi.org/10.3390/electronics10192353
Submission received: 31 August 2021 / Revised: 23 September 2021 / Accepted: 24 September 2021 / Published: 26 September 2021
(This article belongs to the Section Computer Science & Engineering)

Abstract

Grading the quality of fresh cut flowers is an important practice in the flower industry. A classification method based on deep learning and depth information was proposed for grading flower quality according to the flower bud maturing status. Firstly, the RGB image and the depth image of a flower bud were collected and fused into RGBD information. Then, the RGBD information of a flower was used as the input of a convolutional neural network to determine the flower bud maturing status. Four convolutional neural network models (VGG16, ResNet18, MobileNetV2, and InceptionV3) were adjusted for a four-dimensional (4D) RGBD input to classify flowers, and their classification performances were compared with and without depth information. The experimental results show that the classification accuracy was improved with depth information and that the improved InceptionV3 network with RGBD input achieved the highest classification accuracy (up to 98%), which means that the depth information can effectively reflect the characteristics of the flower bud and is helpful for classifying the maturing status. These results are of practical significance for the intelligent classification and sorting of fresh flowers.

1. Introduction

Flowers are a kind of ornamental plant. In recent years, the fresh-cut flower industry in the Yunnan Province of China has made notable achievements across the whole industrial chain, including improved seed breeding, green production, primary processing, and deep processing. The daily trading volume of the Yunnan Dounan Flower Auction Center reaches 3–3.5 million flowers, and millions of flowers are shipped all over the world every day. Flowers are part of everyday life, bringing substantial economic value to the flower industry.
During the process of flower sales, flower quality grade identification is an important and arduous task. Traditionally, the classification and grade identification of fresh cut flowers have relied on manual work, which has low efficiency and poor accuracy. Manual classification and grading cannot keep pace with the short preservation period of fresh flowers, the rapid growth of transportation, and market demand. Moreover, manual classification suffers from subjectivity and fatigue and requires professionally trained personnel. The lack of flower classification methods, sorting technology, and equipment has hindered the deep processing of cut flowers and the development of China’s flower industry.
In the competitive global flower market, an effective evaluation of flower product quality is key to maintaining high standards. The quality of flower products has an important impact on flower industrial sales, and non-destructive testing (NDT) is one of the effective ways of ensuring the quality of flowers. In recent years, the development of sensor technology and computer vision technology has made NDT more effective and convenient [1,2]. Therefore, computer vision technology is applied to automatically identify the quality grade of flowers with a smart camera and intelligent algorithms.
Machine learning methods have been used with some success in the classification and variety identification of flowers; these methods include the support vector machine [3], k-nearest neighbor [4], random forest classifier [5], and some combined methods [6]. However, these traditional machine learning methods based on manually designed features did not achieve high classification accuracy, partly due to the limitations of the feature design. Therefore, other methods with automatic feature extractors were proposed to improve the classification accuracy, such as the scale-invariant feature transform and segmentation-based fractal texture analysis [7]. Albadarneh et al. [8] proposed an automatic flower species detection method based on digital images by extracting color, texture, and shape features in the selected part of the image, and the recognition accuracy exceeded 83% on the Oxford 17 dataset.
In order to further improve the classification accuracy, deep learning was used to identify or classify flowers. Abu et al. [9] used a deep neural network to classify five kinds of flower images, and the overall accuracy rate was over 90%. Hu et al. [10] used convolutional neural networks (CNN) to learn salient features for detecting flower varieties. However, using a common CNN directly to identify flower species in industry applications is still unsatisfactory. Therefore, some combined methods were developed. Cıbuk et al. [11] proposed a combination of CNN and SVM with an RBF kernel to classify flower species and achieved 96% accuracy on the Flower102 dataset. Hiary et al. [12] used a Fully Convolutional Network (FCN) segmentation method and the VGG16 pretrained model to classify flowers and obtained an accuracy of more than 97%. Tian et al. [13] proposed a deep learning method based on an improved tiny darknet, and the accuracy on the Oxford 17-flower dataset was 92%; the high accuracy resulted from the obvious differences between flower types in this dataset, which made them easy to detect. Anjani et al. [14] used a CNN with RGB images to classify rose flowers, and the classification accuracy reached 96% with a small number of classes. Wang et al. [15] proposed a deep learning method based on a pretrained MobileNet with the weighted average method for flower image classification; the mean classification accuracy on one test set was 92%, but only 87% on the other test set. Prasad et al. [16] proposed a deep CNN method to classify flower images, which achieved a 98% recognition rate on their flower dataset. Gavai et al. [17] retrained a MobileNet model on the TensorFlow platform using flower category datasets; although MobileNet makes the network structure smaller and more efficient, it sacrifices some accuracy.
From the research above, some novel CNN models, or CNNs combined with other methods, such as VGG, MobileNet, and ResNet, have performed well in flower classification. On the other hand, transfer learning can improve the initial performance, convergence ability, and training speed of a neural network. Mehdipour et al. [18] used the transfer learning method with a deep neural network for flower species identification. Cengil et al. [19] used a transferred VGG16 model to achieve the best performance in a multi-class flower classification problem, with a best validation accuracy of 94%. A prediction task with little training data can be learned by loading a pretrained model. Moreover, transfer learning can improve the generalization ability of the model, providing good performance on a new dataset.
The research above mainly identifies the species of flowers, and the use of deep convolutional networks can improve flower recognition accuracy. However, there are few studies on the detection of fresh flower quality. This research focuses on the grading of flower quality: a deep convolutional neural network combined with transfer learning is used to classify the grade of flowers.
In this study, a flower sorting method based on deep learning is used: color images and depth information of flowers are collected, and an improved convolutional neural network is then used to jointly analyze the image and depth data. According to the analysis results of the algorithm, the quality grade of fresh flowers is determined. In particular, the contributions of this research are as follows: (1) a method of grading the quality of fresh flowers is proposed based on fused depth information; (2) a set of four-dimensional (RGBD) deep convolutional neural network classification models is proposed.
This paper is organized as follows: the data collection and the flower classification algorithms are introduced in Section 2, the experimental results and discussion are provided in Section 3, and the conclusion is given in Section 4.

2. Materials and Methods

2.1. Classification Criteria for Flowers

According to the classification standards of the Dounan fresh cut flowers in Yunnan Province, the factors that affect the quality of fresh cut flowers include the diameter and area of flower buds, the length of flower stems, and the maturing status of flower buds. In the standard of Product Quality Grade for Cut Flowers Auction (SB/T 11098.2-2014), the maturing status is divided into five grades, as shown in Table 1 and Figure 1. The maturing status of the rose represents the development and maturity of flowers, which is an important indicator of fresh cut flowers. Flowers with high maturity should be harvested as they have better ornamental value. Flowers with low maturity may still be underdeveloped. It is very important to accurately determine the grade of maturing status for the quality of flower products.
Different varieties of flowers have different requirements for the quality grade of stem length. Diana Rose is a kind of short-branch flower. The minimum length requirements corresponding to each classification level are shown in Table 2. The stem length is the length from the bottom of the bud to the bottom of the stem. Different actual lengths of the flower stem correspond to different grades of flower stems. Some basic flower characteristics can also be used to measure the quality of flowers such as the color, diameter, and area. The diameter of the flower is measured at the largest swelling point of the flower bud. The flower bud area indicates the effective area of the flower bud region.
This research mainly determines the maturing status of flowers using a deep learning method. In addition, the traditional machine learning algorithm uses information such as the area, diameter, and color of the flower bud and the length of the flower stem to classify the maturing status of flowers. After that, the traditional method is compared with the deep learning method.

2.2. Data Collection

Cut flower images were collected by a depth camera (Intel RealSense D435i). The depth camera contains depth sensors, an RGB sensor, and an infrared projector. The active infrared projector illuminates objects to enhance the depth data, and a pair of image sensors captures the disparity between images at resolutions of up to 1280 × 720. The field of view is 86° × 57° (±3°), and the depth error is less than 2% at 2 m. The depth data are expressed with a precision of 1 mm. The camera measures 90 × 25 × 25 mm (length × depth × height). The depth camera’s advantages include its small volume, portability, and high precision. The experimental flowers were 160 Diana Roses purchased from the Dounan flower market in Yunnan, China. The depth camera captured the flower bud RGB image and depth data with the flower bud facing upward, and captured the image of the flower stem with the stem placed horizontally. The flower bud RGB image and depth map image can be seen in Figure 2a,b; the depth map represents the distance from each pixel to the camera. The flower stem RGB image can be seen in Figure 2c. The collected flower bud RGB images were 640 × 480 × 3 (width × height × color channels), the flower bud depth images were 640 × 480 × 1 (width × height × depth data), and the flower stem RGB images were 640 × 480 × 3 (width × height × color channels).
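As an illustration of this acquisition step, the sketch below shows how aligned 640 × 480 color and depth frames might be captured from the D435i with Intel's pyrealsense2 SDK; the stream settings mirror the resolutions reported above, while the frame rate and variable names are illustrative assumptions rather than the exact acquisition code used in this work.

```python
import numpy as np
import pyrealsense2 as rs

# Configure the D435i to stream 640 x 480 color and depth frames (30 fps assumed).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Align depth pixels to the color image so RGB and depth can be fused later.
align = rs.align(rs.stream.color)
try:
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)
    depth_frame = aligned.get_depth_frame()
    color_frame = aligned.get_color_frame()

    # 640 x 480 x 3 color image and 640 x 480 depth map.
    color = np.asanyarray(color_frame.get_data())
    depth_raw = np.asanyarray(depth_frame.get_data()).astype(np.float32)

    # Convert raw depth units to millimeters using the device depth scale.
    depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
    depth_mm = depth_raw * depth_scale * 1000.0
finally:
    pipeline.stop()
```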
After image processing, the contour and a rectangle region of the flower bud were obtained using edge detection, as shown in Figure 3. The obtained rectangular region of the flower bud was used as the dataset for detecting the maturing status using the deep learning method. The dataset contained the RGB image of the flower bud, the depth information of the flower bud, and the maturing status grade of the flower bud.
The collected RGB images and depth information of the 160 flower bud regions were expanded by horizontal flipping, vertical flipping, and diagonal mirror flipping, as shown in Figure 4. The resulting 640 RGB images with their corresponding depth information were used as the dataset for the convolutional neural network. Seventy percent of the flower data were used as the training set, and 30% were used as the testing set. The sample dataset distribution is shown in Table 3.
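For illustration, a minimal NumPy sketch of the three flip augmentations is given below; the same flip is applied to the RGB image and the depth map so the two stay aligned, and interpreting diagonal mirror flipping as a flip along both axes is an assumption.

```python
import numpy as np

def augment(rgb: np.ndarray, depth: np.ndarray):
    """Return the original sample plus horizontal, vertical, and diagonal-mirror flips.

    rgb: H x W x 3 array, depth: H x W array; the same flip is applied to both
    so that color and depth stay aligned.
    """
    samples = []
    for flip in (lambda a: a,              # original
                 lambda a: a[:, ::-1],     # horizontal flip
                 lambda a: a[::-1, :],     # vertical flip
                 lambda a: a[::-1, ::-1]): # diagonal mirror (flip along both axes, assumed)
        samples.append((flip(rgb).copy(), flip(depth).copy()))
    return samples
```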

2.3. Flower Quality Grading Based on Deep Learning

The process of the flower classification method is shown in Figure 5, and it includes two main parts: (1) Flower Bud Region Detection: the flower region is obtained through the image processing of the original RGB image. Then, the RGB image and depth data of this region are fused to obtain four-channel flower bud information. (2) Flower Bud Maturing Status Classification: the four-channel fused feature information of fresh flowers is analyzed through the convolutional neural network model based on deep learning. After the calculation of the multilayer network, the flower maturing status grade is obtained.

2.3.1. Flower Bud Region Detection

The RGB image of the flower bud collected by the camera was used as the original image, as shown in Figure 6a. Several image processing steps were used to segment the flower bud. First, the B-channel image of the original image was processed by a Gaussian blur with a 5 × 5 filter; the gray image after filtering is shown in Figure 6b. The grayscale image was segmented according to the flower bud threshold value, and the binary image was then obtained through morphological erosion and dilation operations, as shown in Figure 6c. After that, the edge of the binary image was obtained by edge detection, and the contour of the flower bud was quickly found using the OpenCV findContours function. Finally, the flower bud region of the flower was drawn, as shown in Figure 6d. The red border in Figure 6d is the contour of the flower bud, and the green frame is the rectangular region of the flower bud.
Based on the flower bud region obtained by the edge detection algorithm, the area of the flower bud region and the average diameter of the flower bud can then be calculated. After flower bud detection, the rectangular region was extracted, and the RGB and depth information in this region were used together as the flower bud dataset for determining the maturing status of the flower (outlined in the next section).
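A minimal OpenCV sketch of this detection step, following the processing described above, might look as follows; the gray-level threshold and the diameter estimate are illustrative assumptions rather than the exact values used in the paper.

```python
import cv2
import numpy as np

def detect_bud_region(bgr: np.ndarray, thresh: int = 120):
    """Segment the flower bud and return its contour, bounding box, area, and mean diameter.

    thresh is an illustrative gray-level threshold; the bud is segmented on the
    B channel after a 5 x 5 Gaussian blur, as described in the text.
    """
    blue = bgr[:, :, 0]                                   # OpenCV stores images as BGR
    blurred = cv2.GaussianBlur(blue, (5, 5), 0)
    _, binary = cv2.threshold(blurred, thresh, 255, cv2.THRESH_BINARY)

    kernel = np.ones((5, 5), np.uint8)
    binary = cv2.erode(binary, kernel)                    # remove small noise
    binary = cv2.dilate(binary, kernel)                   # restore the bud shape

    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    bud = max(contours, key=cv2.contourArea)              # largest contour = flower bud
    x, y, w, h = cv2.boundingRect(bud)                    # rectangular bud region
    area = cv2.contourArea(bud)
    mean_diameter = (w + h) / 2.0                         # simple diameter estimate (assumption)
    return bud, (x, y, w, h), area, mean_diameter
```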

2.3.2. Maturing Status Classification Based on CNN Models

The maturing status grade of fresh flowers is usually judged by observing the degree of development of the flower buds, which is difficult to express directly with data. Machine vision technology can analyze the image of flowers and can better determine the maturing status grade of flowers. This research uses the deep learning method to analyze the maturing status of flower buds, and uses convolutional neural networks to process the flower images. Fusing the collected RGB images with depth information allows the characteristics of the flower buds to be analyzed more effectively: since the depth information expresses the distribution of the flower bud surface, it can reflect the open state of the flowers. The depth data represent the distance of the flower bud surface from the camera, expressed in millimeters. Since the depth values are usually several hundred millimeters, standardizing the data is helpful for the computation of the neural network. The depth information was processed using min–max standardization on the original depth data $x_{i,j}$ ($1 \le i \le n$, $1 \le j \le m$), where $n$ is the width of the original depth data and $m$ is the length of the original depth data. This transforms the data into values between 0 and 1, giving the new data $y_{i,j} \in [0, 1]$, as shown below:

$$y_{i,j} = \frac{x_{i,j} - \min_{1 \le i \le n,\, 1 \le j \le m} \{x_{i,j}\}}{\max_{1 \le i \le n,\, 1 \le j \le m} \{x_{i,j}\} - \min_{1 \le i \le n,\, 1 \le j \le m} \{x_{i,j}\}}$$

where $\min_{1 \le i \le n,\, 1 \le j \le m} \{x_{i,j}\}$ and $\max_{1 \le i \le n,\, 1 \le j \le m} \{x_{i,j}\}$ denote the minimum and maximum values of $x_{i,j}$, respectively. The same standardization was used to preprocess the RGB data. The three-channel RGB information of the flower bud was combined with the standardized depth information to obtain four-channel RGBD flower bud data.
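A compact NumPy sketch of this standardization and fusion step is shown below; applying a single global min–max over each array (rather than per channel) is an assumption made for illustration.

```python
import numpy as np

def fuse_rgbd(rgb: np.ndarray, depth_mm: np.ndarray) -> np.ndarray:
    """Min-max standardize the depth map and RGB image and stack them into H x W x 4 RGBD data."""
    d = depth_mm.astype(np.float32)
    d_norm = (d - d.min()) / (d.max() - d.min() + 1e-8)   # y_ij in [0, 1]

    c = rgb.astype(np.float32)
    c_norm = (c - c.min()) / (c.max() - c.min() + 1e-8)   # same standardization for RGB

    return np.dstack([c_norm, d_norm])                    # four-channel RGBD array
```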
This research used four convolutional neural networks: VGG16 [20], ResNet18 [21], MobileNetV2 [22], and InceptionV3 [23] for maturing status detection experiments.
The fused flower bud data were used as the input of the convolutional neural network, with VGG16, ResNet18, MobileNetV2, and InceptionV3 used as the network backbones. Since the original flower bud images were of different sizes, converting the images to a common size is helpful for the calculation of the neural network; we therefore resized the fused RGB and depth information according to the input requirements of each convolutional neural network. Through the convolution, pooling, fully connected, and activation layers of the neural network, the maturing status of the flowers was finally output.
For VGG16, MobileNetV2, and ResNet18, the input data were converted to 224 × 224 × 4 for the operation of the convolutional neural network. For VGG16, due to the addition of the depth information channel, we changed the first layer of the network structure to 4 channels and 64 convolution kernels (the size of the convolution kernel and the stride were unchanged, 3 × 3 and 1, respectively), and the number of output features of the last fully connected layer was changed to 5. The improved convolutional neural network structure based on VGG16 is shown in Figure 7.
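In PyTorch, this adaptation of VGG16 can be sketched as follows, starting from torchvision's standard VGG16; the layer indices refer to torchvision's implementation and the snippet illustrates only the structural change, not the full training code.

```python
import torch.nn as nn
from torchvision import models

# Start from the standard torchvision VGG16 and adapt it to 4-channel RGBD input
# and 5 output maturing-status grades.
model = models.vgg16(pretrained=True)

# First convolution: 4 input channels, 64 kernels, 3 x 3, stride 1 (as described above).
model.features[0] = nn.Conv2d(4, 64, kernel_size=3, stride=1, padding=1)

# Last fully connected layer: 5 grades instead of 1000 ImageNet classes.
model.classifier[6] = nn.Linear(4096, 5)
```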
For ResNet18, the input data were converted to 224 × 224 × 4 for network feature extraction. We changed the first layer of the network structure to 4 channels and 64 convolution kernels. The size of the convolution kernel was 7 × 7, and the stride remained at 2. According to the five-grade classification of flower maturing status, the output features of the last fully connected layer of the network were changed to 5. ResNet18 has a special residual network module, which can better extract feature information.
For MobileNetV2, the input data were converted to 224 × 224 × 4 for network feature extraction. The first feature layer structure was changed to 4 channels and 32 convolution kernels. The size of the convolution kernel was 3 × 3, and the stride was 2. The number of output features of the last classifier linear layer was changed to 5.
For the InceptionV3 network, the input data were converted to 299 × 299 × 4 for network feature extraction. We changed the first Conv2d_1a convolutional layer of InceptionV3 to 4 input channels. The size of the convolution kernel was 3 × 3, the stride remained at 2, and the number of output features of the last fully connected layer was changed to 5.
Through these improvements to the four network structures, the input and output sizes were changed. When constructing each model, the first convolutional layer was initialized with parameters for four channels, and the parameter format of the fourth channel was the same as that of the first three channels. The number of network layers, input sizes, and parameters of the improved VGG16, ResNet18, MobileNetV2, and InceptionV3 are shown in Table 4.
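One way to realize this initialization while still benefiting from the pretrained weights is sketched below for ResNet18; copying the pretrained RGB filters and filling the depth channel with their mean is an assumption about the initialization scheme, which the paper does not specify in detail.

```python
import torch
import torch.nn as nn
from torchvision import models

def adapt_first_conv(pretrained_conv: nn.Conv2d) -> nn.Conv2d:
    """Build a 4-input-channel copy of a pretrained 3-channel first convolution."""
    new_conv = nn.Conv2d(4, pretrained_conv.out_channels,
                         kernel_size=pretrained_conv.kernel_size,
                         stride=pretrained_conv.stride,
                         padding=pretrained_conv.padding,
                         bias=pretrained_conv.bias is not None)
    with torch.no_grad():
        new_conv.weight[:, :3] = pretrained_conv.weight          # keep pretrained RGB filters
        # Assumption: initialize the depth channel with the mean of the RGB filters,
        # so its parameter format matches that of the first three channels.
        new_conv.weight[:, 3] = pretrained_conv.weight.mean(dim=1)
        if pretrained_conv.bias is not None:
            new_conv.bias.copy_(pretrained_conv.bias)
    return new_conv

# Example: adapt ResNet18 for RGBD input and 5 output grades.
resnet = models.resnet18(pretrained=True)
resnet.conv1 = adapt_first_conv(resnet.conv1)   # 7 x 7 kernel, stride 2, now 4 channels
resnet.fc = nn.Linear(resnet.fc.in_features, 5)
```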

3. Results and Discussion

3.1. Experimental Environment

The experiments were carried out using the PyCharm software; the hardware and software parameters of the experiment are shown in Table 5. When training the convolutional neural network models, the GPU was used to improve the training speed. The experiments used the PyTorch deep learning framework to build the convolutional neural networks.

3.2. Performance of Different CNN Models on Flower Maturing Status

Two input configurations, RGB images and fused RGBD information, were used for the experiments. Their performance in flower maturing status detection was compared to analyze whether the depth information could improve the detection accuracy. Using VGG16, ResNet18, MobileNetV2, and InceptionV3 as the main network structures, the experiments for detecting the maturing status of flowers were carried out. In the training of the four networks, the transfer learning method was used; loading a pretrained model can improve the speed and accuracy of network learning.
First, without using depth information, only the RGB images of flower buds were used for the maturing status detection experiments. The dataset was converted to a size of 224 × 224 × 3 for the VGG16, MobileNetV2, and ResNet18 networks, and to a size of 299 × 299 × 3 for the InceptionV3 network. Then, the RGB image and depth information in the dataset were combined to generate four-channel RGBD data. Due to the addition of depth information, the input size of the fused RGBD data was 224 × 224 × 4 for VGG16, MobileNetV2, and ResNet18, and 299 × 299 × 4 for InceptionV3. The batch size was set to 32, and the cross-entropy loss function was used for training. The learning rate was set to 0.01 for VGG16, ResNet18, MobileNetV2, and InceptionV3, and the optimizer was SGD (stochastic gradient descent). As shown in Table 6, the best validation-set accuracies of the four network models with and without depth data were compared.
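A condensed sketch of this training configuration is given below (batch size 32, cross-entropy loss, SGD with a learning rate of 0.01); the epoch count, tensor-based data loading, and device handling are illustrative simplifications, and InceptionV3's auxiliary classifier output would need additional handling.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train(model, train_inputs, train_labels, epochs=50, device="cuda"):
    """Train a 4-channel CNN with the settings reported above.

    train_inputs: N x 4 x H x W float tensor of fused RGBD data,
    train_labels: N-element tensor of grade indices 0-4. The epoch count is illustrative.
    """
    loader = DataLoader(TensorDataset(train_inputs, train_labels), batch_size=32, shuffle=True)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    model = model.to(device)
    model.train()
    for epoch in range(epochs):
        for rgbd, grade in loader:
            rgbd, grade = rgbd.to(device), grade.to(device)
            optimizer.zero_grad()
            loss = criterion(model(rgbd), grade)
            loss.backward()
            optimizer.step()
    return model
```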
It can be seen from Table 6 that the accuracies of VGG16, ResNet18, MobileNetV2, and InceptionV3 were all improved by adding the depth information: VGG16 increased by 3.28%, ResNet18 by 2.6%, MobileNetV2 by 0.52%, and InceptionV3 by 1.04%. Among the four network models, InceptionV3 performed best, with an accuracy of 97.40% using RGB information and 98.44% after adding depth information. Table 6 also shows that VGG16 had the most obvious improvement after using depth information: without depth information, its accuracy was similar to that of ResNet18, but it increased by 3.28% after adding the depth information. Figure 8 shows a comparison of the best accuracies of the four CNN models.
Table 7 shows the accuracy of the four network models, the time required for training, and the time required to test a single sample. It can be seen that the training times of VGG16 and InceptionV3 were longer. The average test time per sample was very close for all four networks, around 0.02 s, which is a very small amount of computation time; ResNet18 required about 17 ms per sample and InceptionV3 about 25 ms per sample.
Figure 9 shows the confusion matrix of the four CNN networks with RGBD data for classifying the five flower grades. It can be seen that the other three networks performed worse on the five-grade flower classification than the InceptionV3-based model. For the InceptionV3-based model, the classification accuracies of Grade 1 and Grade 2 reached 100%, which may be because these two grades are distinct from the others. However, 2.08% of the Grade 3 samples were classified as Grade 4, and 3.92% of the Grade 5 samples were classified as Grade 4. This may be due to the similar appearance of flower buds at these grades, which are distinguished by how open the petals are. Using the proposed method, the classification accuracy for each grade exceeded 96%, and the overall accuracy exceeded 98%. These results are satisfactory.
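The per-grade figures above can be read off a confusion matrix computed from the test-set predictions; a minimal evaluation sketch with scikit-learn is shown below, with illustrative variable names.

```python
import torch
from sklearn.metrics import confusion_matrix

@torch.no_grad()
def evaluate(model, test_inputs, test_labels, device="cuda"):
    """Return the 5 x 5 confusion matrix of predicted vs. true maturing-status grades."""
    model.eval()
    logits = model(test_inputs.to(device))
    predictions = logits.argmax(dim=1).cpu().numpy()
    return confusion_matrix(test_labels.numpy(), predictions, labels=[0, 1, 2, 3, 4])
```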

3.3. Performance of Different Machine Learning Methods on Flower Maturing Status

In order to further evaluate the performance of the proposed model, another group of experiments was carried out to compare it with traditional machine learning methods. Commonly used machine learning algorithms—k-nearest neighbor (KNN) [24], support vector machine (SVM) [3], Random Forest (RF) [25], and logistic regression—were selected to classify the maturing status of the flowers. The experimental results of the four machine learning algorithms are shown in Table 8.
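A sketch of this baseline comparison is shown below, assuming a feature table built from the hand-designed descriptors mentioned in Section 2.1 (bud area, diameter, color, and stem length); the feature composition, classifier hyperparameters, and data split are illustrative assumptions rather than the exact setup reported in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def run_baselines(features: np.ndarray, grades: np.ndarray):
    """Train the four traditional classifiers on hand-designed flower features.

    features: N x F array (e.g. bud area, diameter, mean color, stem length),
    grades: N-element array of maturing-status grades.
    """
    x_train, x_test, y_train, y_test = train_test_split(
        features, grades, test_size=0.3, random_state=0)

    baselines = {
        "KNN": KNeighborsClassifier(),
        "SVM": SVC(),
        "Random Forest": RandomForestClassifier(),
        "Logistic Regression": LogisticRegression(max_iter=1000),
    }
    return {name: clf.fit(x_train, y_train).score(x_test, y_test)
            for name, clf in baselines.items()}
```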
As shown in Table 8, among the four traditional classification algorithms, logistic regression performed best, reaching 70.83%, and KNN performed worst, at only 62.5%. Because the sizes and colors of some fresh flowers were close, it was very difficult to analyze the maturing status of the flower buds using these traditional features. In contrast, the proposed maturing status grade classification algorithm, based on the improved InceptionV3 model with fused RGBD information, performed well, with a highest classification accuracy of 98.44%. From these results, it can be concluded that manually designed features perform worse in maturity classification than features automatically extracted by the network.

3.4. Performance of Different Deep Learning Methods in Flower Classification

In order to illustrate the effectiveness of the proposed method, we compared its accuracy with those of other deep learning methods for flower classification, as shown in Table 9. The classification accuracies of these methods range from 87.6% to 98.44%. The proposed method achieved the highest classification accuracy, followed by the FCN with VGG16 model at 97.10%; the A-LDCNN method performed the worst, with 87.6% accuracy on the test set.

4. Conclusions

This paper proposed a classification method for flower quality grading based on deep learning. The RGB images and depth information of fresh flowers were collected as the dataset, and the maturing status of fresh flowers was detected using four convolutional neural networks: VGG16, ResNet18, MobileNetV2, and InceptionV3. The accuracy of flower maturity detection was improved by using the depth information, and the maturing status grade classification accuracy of the improved InceptionV3-based model with depth data reached 98%. The depth data can well reflect the characteristics of flower buds and facilitate the classification of the maturing status.
The method used in this paper produced good classification results, offering a scheme for flower quality grading. It also shows how classification can benefit from added depth information, which can be applied in other industries. In the future, we will try to fuse the depth data and the corresponding RGB data in a more suitable way to further improve the classification accuracy. Additionally, we will try to cooperate with a manufacturing company to design mechanical equipment that keeps the flower upright while the picture of the flower bud is taken, and then apply the proposed method in industry.

Author Contributions

All authors designed this work; X.S. and Z.L. carried out the experiments and validation of this work; X.S. and T.Z. prepared and wrote the original draft of the manuscript; T.Z. and C.N. reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Program of China, grant number 31570714.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sunny, A.I.; Zhang, J.; Tian, G.Y.; Tang, C.; Rafique, W.; Zhao, A.; Fan, M. Temperature independent defect monitoring using passive wireless RFID sensing system. IEEE Sens. J. 2019, 19, 1525–1532. [Google Scholar] [CrossRef] [Green Version]
  2. Gracia, L.; Perez-Vidal, C.; Gracia, C. Computer vision applied to flower, fruit and vegetable processing. World Acad. Sci. Eng. Technol. 2011, 78, 430–436. [Google Scholar] [CrossRef]
  3. Liu, W.; Rao, Y.; Fan, B.; Song, J.; Wang, Q. Flower classification using fusion descriptor and SVM. In Proceedings of the 2017 International Smart Cities Conference (ISC2), Wuxi, China, 14–17 September 2017. [Google Scholar] [CrossRef]
  4. Tiay, T.; Benyaphaichit, P.; Riyamongkol, P. Flower recognition system based on image processing. In Proceedings of the 2014 3rd ICT International Student Project Conference (ICT-ISPC), Nakhonpathom, Thailand, 26–27 March 2014; pp. 99–102. [Google Scholar] [CrossRef]
  5. Paper, C.; Sripian, P.; Mongkut, K.; Tho, T. Flower Identification System by Image Processing. In Proceedings of the 3rd International Conference on Creative Technology (CRETECH), Bangkok, Thailand, 24–26 August 2016. [Google Scholar]
  6. Soleimanipour, A.; Chegini, G.R.; Massah, J. Classification of anthurium flowers using combination of PCA, LDA and support vector machine. Agric. Eng. Int. CIGR J. 2018, 20, 219–228. [Google Scholar]
  7. Zawbaa, H.M.; Abbass, M.; Basha, S.H.; Hazman, M.; Hassenian, A.E. An automatic flower classification approach using machine learning algorithms. In Proceedings of the 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Delhi, India, 24–27 September 2014; pp. 895–901. [Google Scholar] [CrossRef]
  8. Albadarneh, A.; Ahmad, A. Automated Flower Species Detection and Recognition from Digital Images. Int. J. Comput. Sci. Netw. Secur. 2017, 17, 144. [Google Scholar]
  9. Abu, M.A.; Indra, N.H.; Rahman, A.H.A.; Sapiee, N.A.; Ahmad, I. A study on image classification based on deep learning and tensorflow. Int. J. Eng. Res. Technol. 2019, 12, 563–569. [Google Scholar]
  10. Hu, F.; Yao, F.; Pu, C. Learning Salient Features for Flower Classification Using Convolutional Neural Network. In Proceedings of the 2020 IEEE International Conference on Artificial Intelligence and Information Systems (ICAIIS), Dalian, China, 20–22 March 2020; pp. 476–479. [Google Scholar] [CrossRef]
  11. Cıbuk, M.; Budak, U.; Guo, Y.; Cevdet Ince, M.; Sengur, A. Efficient deep features selections and classification for flower species recognition. Meas. J. Int. Meas. Confed. 2019, 137, 7–13. [Google Scholar] [CrossRef]
  12. Hiary, H.; Saadeh, H.; Saadeh, M.; Yaqub, M. Flower classification using deep convolutional neural networks. IET Comput. Vis. 2018, 12, 855–862. [Google Scholar] [CrossRef]
  13. Tian, M.; Chen, H.; Wang, Q. Flower identification based on Deep Learning. J. Phys. Conf. Ser. 2019, 1237, 022060. [Google Scholar] [CrossRef]
  14. Anjani, I.A.; Pratiwi, Y.R.; Norfa Bagas Nurhuda, S. Implementation of Deep Learning Using Convolutional Neural Network Algorithm for Classification Rose Flower. J. Phys. Conf. Ser. 2021, 1842, 012002. [Google Scholar] [CrossRef]
  15. Wang, Z.; Wang, K.; Wang, X.; Pan, S. A convolutional neural network ensemble for flower image classification. ACM Int. Conf. Proc. Ser. 2020, 225–230. [Google Scholar] [CrossRef]
  16. Prasad, M.V.D.; Lakshmamma, B.J.; Chandana, A.H.; Komali, K.; Manoja, M.V.N.; Kumar, P.R.; Prasad, C.R.; Inthiyaz, S.; Kiran, P.S. An efficient classification of flower images with convolutional neural networks. Int. J. Eng. Technol. 2018, 7, 384–391. [Google Scholar] [CrossRef] [Green Version]
  17. Gavai, N.R.; Jakhade, Y.A.; Tribhuvan, S.A.; Bhattad, R. MobileNets for flower classification using TensorFlow. In Proceedings of the 2017 International Conference on Big Data, IoT and Data Science (BID), Pune, India, 20–22 December 2017; pp. 154–158. [Google Scholar] [CrossRef]
  18. Mehdipour Ghazi, M.; Yanikoglu, B.; Aptoula, E. Plant identification using deep neural networks via optimization of transfer learning parameters. Neurocomputing 2017, 235, 228–235. [Google Scholar] [CrossRef]
  19. Cengil, E.; Cinar, A. Multiple classification of flower images using transfer learning. In Proceedings of the 2019 International Artificial Intelligence and Data Processing Symposium (IDAP), Malatya, Turkey, 21–22 September 2019. [Google Scholar] [CrossRef]
  20. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2015, arXiv:1409.1556. [Google Scholar]
  21. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef] [Green Version]
  22. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar] [CrossRef] [Green Version]
  23. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar] [CrossRef] [Green Version]
  24. Guru, D.S.; Manjunath, S. Texture Features and KNN in Classification of Flower Images. IJCA 2010, 1, 21–29. [Google Scholar]
  25. Belgiu, M.; Drăgu, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  26. Qin, M.; Xi, Y.; Jiang, F. A New Improved Convolutional Neural Network Flower Image Recognition Model. In Proceedings of the 2019 IEEE Symposium Series on Computational Intelligence (SSCI), Xiamen, China, 6–9 December 2019; pp. 3110–3117. [Google Scholar] [CrossRef]
  27. Bae, K.I.; Park, J.; Lee, J.; Lee, Y.; Lim, C. Flower classification with modified multimodal convolutional neural networks. Expert Syst. Appl. 2020, 159, 113455. [Google Scholar] [CrossRef]
Figure 1. Images of flowers of different maturing status.
Figure 2. RGB images and depth map of Diana Roses, where (a) is an RGB image of the flower bud, (b) is a depth map of the flower bud, and (c) is the RGB image of the flower stem.
Figure 3. Detection of flower bud region.
Figure 4. Flower data expansion, where (a) is the original image, (b) is after horizontal flipping, (c) is after vertical flipping, and (d) is after diagonal mirror flipping.
Figure 5. Diagram of the proposed method for flower classification.
Figure 6. Flower bud image detection: (a) original image, (b) grayscale image, (c) binary image, and (d) flower bud region.
Figure 7. Structure of improved VGG16 network, where Conv represents the convolutional layer, FC represents the fully connected layer, Pool represents the pooling layer, and Softmax corresponds to an activation function.
Figure 8. Comparison of CNN model performance on classifying the flower grades.
Figure 9. Confusion matrix of the improved InceptionV3-based model using RGBD data.
Table 1. Maturing status coding standard.

Grade of Maturing Status | Description
1 | Sepals stay upright, but do not open from petals
2 | There are 3–5 petals separated from the top
3 | More than 5 petals open and separate from the top
4 | 50% of the petals open from the top
5 | More than 50% of the petals open from the top
Table 2. Requirements of stem length grade for short-branch flowers.

Grade | 1 | 2 | 3 | 4 | 5
Stem Length (cm) | ≥60 | ≥55 | ≥50 | ≥40 | ≥30
Table 3. Sample dataset distribution.

Data Set | Grade 1 | Grade 2 | Grade 3 | Grade 4 | Grade 5 | Total
Training Set | 71 | 52 | 101 | 106 | 118 | 448
Testing Set | 21 | 20 | 43 | 50 | 58 | 192
Total | 92 | 72 | 144 | 156 | 176 | 640
Table 4. Input size and number of model parameters.

Network | Layer Number | Input Size | Parameters (Millions)
VGG16 | 16 | 224 × 224 × 4 | 134.3
ResNet18 | 18 | 224 × 224 × 4 | 11.2
MobileNetV2 | 54 | 224 × 224 × 4 | 2.2
InceptionV3 | 48 | 299 × 299 × 4 | 21.8
Table 5. Hardware and software of the experimental environment.

Devices | Description
CPU | Intel (R) Core (TM) i7-8700 3.2 GHz
Memory | 32.00 GB
Graphics card | NVIDIA GTX 2080Ti
Operating System | Linux Ubuntu 18.04
Software environment | Python 3.6, PyTorch 1.4.0, CUDA 9.2
Table 6. Comparison of different model performances on classifying the flowers with the validation set.

Network | Accuracy without Depth Data | Accuracy with Depth Data
VGG16 | 93.60% | 96.88%
ResNet18 | 92.19% | 94.79%
MobileNetV2 | 95.31% | 95.83%
InceptionV3 | 97.40% | 98.44%
Table 7. Time requirements of four CNN networks.

Network | Accuracy | Training Time | Test Time
VGG16 | 96.88% | 41 min 27 s | 0.0188 s
ResNet18 | 94.79% | 26 min 26 s | 0.0171 s
MobileNetV2 | 95.83% | 26 min 9 s | 0.0193 s
InceptionV3 | 98.44% | 36 min 40 s | 0.0252 s
Table 8. Comparison of classification models’ accuracy.

Method | Accuracy
KNN | 62.50%
SVM | 64.58%
Random Forest | 68.75%
Logistic Regression | 70.83%
InceptionV3 | 98.44%
Table 9. Classification accuracy of different deep learning models.

Method | Accuracy
A-LDCNN [26] | 87.60%
MobileNet with weighted average [15] | 91.76%
Modified m-CNN [27] | 93.69%
FCN+VGG16 [12] | 97.10%
The proposed method with InceptionV3 | 98.44%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
