Article

Growth Analysis of Plant Factory-Grown Lettuce by Deep Neural Networks Based on Automated Feature Extraction

Taewon Moon, Woo-Joo Choi, Se-Hun Jang, Da-Seul Choi and Myung-Min Oh *
1 Research Institute of Agriculture and Life Sciences, Seoul National University, Seoul 08826, Republic of Korea
2 Division of Animal, Horticultural and Food Sciences, Chungbuk National University, Cheongju 28644, Republic of Korea
3 Brain Korea 21 Center for Bio-Health Industry, Chungbuk National University, Cheongju 28644, Republic of Korea
* Author to whom correspondence should be addressed.
Horticulturae 2022, 8(12), 1124; https://doi.org/10.3390/horticulturae8121124
Submission received: 5 November 2022 / Revised: 24 November 2022 / Accepted: 28 November 2022 / Published: 29 November 2022

Abstract

The growth mechanisms of lettuce in plant factories with artificial lighting (PFALs) are well known, and the crop is widely used as a model in horticultural science. Deep learning has likewise been applied to PFAL data several times. Despite its numerous advantages, the performance of deep learning models is commonly evaluated based only on accuracy. Therefore, the objective of this study was to train deep neural networks and analyze the deeper abstraction of the trained models. In total, 443 images of three lettuce cultivars were used for model training, and several deep learning algorithms were compared against multivariate linear regression. Except for linear regression, all models showed adequate accuracy for the given task, and the convolutional neural network (ConvNet) model showed the highest accuracy. Based on the color mapping and distribution of the two-dimensional t-distributed stochastic neighbor embedding (t-SNE) results, the ConvNet effectively perceived the differences among the lettuce cultivars under analysis. Extending target domain knowledge with complex models and sufficient data, as with the ConvNet with multitask learning, is thus possible. Therefore, deep learning algorithms should be investigated from the perspective of feature extraction.

1. Introduction

Lettuce is a relatively easy-to-grow leafy vegetable that has been thoroughly characterized physiologically, such that it is widely used as a model crop in horticultural science and is commercially cultivated in plant factories with artificial lighting (PFALs) worldwide [1]. Various advanced technologies have been tested based on the technology-intensive character of PFALs [2,3,4]. Our extensive knowledge of the growth mechanisms of lettuce in PFALs thus facilitates analyzing the influence of new technologies.
As one such advanced technology, deep learning has been applied in several studies [5] and has shown state-of-the-art performance in many fields [6,7,8,9]. Deep learning performs high-level abstraction based on automated feature extraction. Furthermore, deep learning algorithms are highly adaptable, which enables trained models to be reused and easily adapted to other fields of knowledge.
Recently, the application of deep learning in agriculture has increased [9,10,11]. In particular, diverse deep learning models have been applied to horticultural data generated in controlled environments, such as greenhouses and PFALs, and the models are also applicable to open-field horticulture. However, most reports in agricultural science and engineering have focused on technical improvements in model accuracy, while other advantages of deep learning, such as automated feature extraction and high adaptability, have been neglected. The regression and classification of limited data are relatively easy tasks for machine learning techniques; adequate training might therefore obviate unnecessarily complicated models [12,13]. High accuracy on agricultural tasks can easily be achieved with canonical models that have long been in use. Deep learning models offer different advantages compared with existing models; therefore, evaluating their performance based on accuracy alone wastes much of the analytical potential of the technology.
Automated feature extraction and high-level abstraction not only provide high performance but also enable the development of novel analytical methods [14]. Achieving high performance on a given task with complicated inputs and structures requires sophisticated optimization, which facilitates the extraction of unprecedented information that is difficult to obtain with existing methodologies, such as multivariate regression. This potential has led to the development of analytical methods for neural networks, even though deep learning models are black boxes [15,16,17]. An adequate analysis of a trained neural network might provide additional value for formulating horticultural hypotheses. Therefore, the objective of this study was to train deep neural networks and analyze the deeper abstraction of the trained models. The target growth variables were fresh weight (FW), dry weight (DW), number of leaves (NL), leaf area (LA), and soil plant-analysis development (SPAD). The trained models were analyzed with two-dimensional t-distributed stochastic neighbor embedding (t-SNE), which is often used for dimension reduction to understand entangled representation spaces and can thus help interpret the behavior of deep learning algorithms. This study revealed that it is possible to extend target domain knowledge with complex models and sufficient data, as with a convolutional neural network (ConvNet) trained with multitask learning. Therefore, deep learning algorithms should be investigated from the perspective of feature extraction.

2. Materials and Methods

2.1. Plant Materials and Growth Conditions

Seeds of lettuce (Lactuca sativa L.) cultivars Corbana, Caipira, and Fairy (Enza Zaden, Enkhuizen, The Netherlands) were sown in sponge cubes (35 × 35 × 30 mm, L × W × H) and placed on 60-hole trays in a PFAL module. During the first three days, lighting was provided continuously by warm, white light-emitting diodes (LEDs) at an intensity of 230 μmol m−2 s−1, and the ambient temperature and relative humidity were set to 24 °C and 90%, respectively. During this period, the germination sponge cubes were sub-irrigated with distilled water; over the following 11 days, Sonneveld nutrient solution [18] was supplied, with the electrical conductivity (EC) and pH maintained at 0.8 dS m−1 and 5.8, respectively. The seedlings of each cultivar were then transplanted to three cultivation shelves with semi-deep-flow-technique (semi-DFT) water channels (3540 × 80 × 58 mm, L × W × H) covered with perforated tops, which allowed individual plants to grow 11 cm apart from each other. After transplanting, the environmental conditions were changed as follows: a light period of 16 h, an ambient temperature of 21 °C, and a relative humidity of 80%. The cultivation space was divided into nine sections, and 32 plants per section were cultivated for four weeks (Figure 1). After transplanting, the EC and pH of the Sonneveld nutrient solution were changed to 1.2 dS m−1 and 5.8, respectively.

2.2. Plant Growth and Image Data Collection

Three independent growth experiments were conducted, from which growth data were collected at different intervals to acquire unbiased data (Figure 2). Shoot fresh weight was measured using a digital precision scale (SI-234, Denver Instruments, Denver, CO, USA), and leaf area was measured using a leaf area meter (Li-3100C, LI-COR, Lincoln, NE, USA). The shoots were dried in a forced-air drying oven (VS-1202D3, Vision, Daejeon, Korea) at 70 °C for more than 72 h to determine the dry weight. Soil plant-analysis development (SPAD), which represents the chlorophyll content of leaves, was measured using a SPAD meter (SPAD-502, Konica Minolta, Tokyo, Japan).
A blackout box (560 × 560 × 560 mm, L × W × H) equipped with white LEDs was used to obtain top-view images of the crops (Figure 3A). Each plant was placed in the center of the box, and images were taken at a height of 50 cm using a smartphone (Galaxy S20, Samsung Electronics Inc., Suwon, Korea). The image size was 3024 × 3024 pixels with RGB channels (Figure 3B).

2.3. Model Structure

In this study, we adopted convolutional neural networks (ConvNets) as the model structure because the specific task was to convert lettuce images into growth parameters. ConvNets consist of several convolution layers, and the convolution process effectively abstracts the input into the desired output [5]. In particular, the ConvNet is a state-of-the-art algorithm that has shown the highest performance in computer vision [10,19].
Additionally, a variation of the ConvNet was included for comparison. Multitask learning helps a model generalize relations in the data [20,21], and because the target parameters in this study were distinct, multitask learning was introduced.
The feedforward neural network (FFNN) and bidirectional long short-term memory (BiLSTM) algorithms were used as comparison deep learning models because they are representative deep neural network architectures. Further, multivariate linear regression (LinReg) was used as a conventional baseline. Model structures and hyperparameters were optimized empirically (Table 1 and Table 2).
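For illustration, a minimal sketch of the Table 1 ConvNet using the Keras API of TensorFlow (the framework used in this study) is shown below. The ReLU activation, the layer name "shared", and the omission of the normalization layers listed in Table 2 are our assumptions for readability, not details confirmed by the paper.

```python
# Minimal sketch of the ConvNet in Table 1 (Keras / TensorFlow).
# Activation (ReLU) and the "shared" layer name are assumptions;
# the normalization layers of Table 2 are omitted for brevity.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_convnet(multitask: bool = False) -> Model:
    inputs = layers.Input(shape=(128, 128, 3))      # RGB top-view image
    x = inputs
    # Conv3-N blocks (3 x 3 kernels), each followed by max pooling (Table 1)
    for filters in (32, 64, 128, 128, 256, 512):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu", name="shared")(x)  # shared Dense-128
    if multitask:
        # One Dense-1 head per growth parameter (footnote b of Table 1)
        outputs = [layers.Dense(1, name=n)(x)
                   for n in ("FW", "DW", "NL", "LA", "SPAD")]
    else:
        outputs = layers.Dense(5)(x)                # single Dense-5 head
    return Model(inputs, outputs)

model = build_convnet(multitask=True)
model.summary()
```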

2.4. Data Preprocessing

A total of 443 images were collected. The images were resized and augmented to serve as model inputs: the original images were resized to 128 × 128 pixels and augmented by flipping, shifting, and rotating. The processed images were fed directly to the ConvNets and were rearranged for training the FFNN and BiLSTM (Table 1). For linear regression, the values of the red, green, and blue channels were summed. The growth data were normalized to the 0–1 range, and the input and output data were combined into datasets.
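One plausible implementation of this preprocessing, assuming the Keras preprocessing layers available in TensorFlow 2.10, is sketched below; the augmentation ranges and the placeholder image array are assumptions, since the paper names only the operations.

```python
# Sketch of the Section 2.4 preprocessing. Augmentation ranges are assumed.
import numpy as np
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.Resizing(128, 128),                     # resize to 128 x 128
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),  # flipping
    tf.keras.layers.RandomTranslation(0.1, 0.1),            # shifting (range assumed)
    tf.keras.layers.RandomRotation(0.25),                   # rotating (range assumed)
])

raw = np.random.rand(16, 256, 256, 3).astype("float32")  # stand-in (real: 3024 x 3024)
images = augment(raw, training=True).numpy()             # random layers act in training mode

# Rearrangement of the resized images for each model input (Table 1)
x_conv = images                                  # (N, 128, 128, 3) for the ConvNet
x_ffnn = images.reshape(len(images), -1)         # (N, 49152) for the FFNN
x_lstm = images.reshape(len(images), 128, 384)   # (N, 128, 384) for the BiLSTM

def normalize_targets(y: np.ndarray) -> np.ndarray:
    # Scale each growth parameter (FW, DW, NL, LA, SPAD) to the 0-1 range
    return (y - y.min(axis=0)) / (y.max(axis=0) - y.min(axis=0))
```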

2.5. Model Training, Validation, and Evaluation

All models were trained to minimize the mean squared error (MSE). Five-fold cross-validation was conducted to ensure model robustness with a limited amount of data; the datasets were divided into training and validation sets for each fold at a ratio of 8:2. Models were evaluated based on R2 and the root mean square error (RMSE). After training, the ConvNet with multitask learning was examined using two-dimensional t-distributed stochastic neighbor embedding (t-SNE), which is often used to explore the black-box behavior of deep learning models [16].
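A sketch of this procedure with scikit-learn utilities is given below, reusing build_convnet from the Section 2.3 sketch; the epoch count and placeholder data are assumptions, and the 8:2 split follows directly from K = 5.

```python
# Sketch of five-fold cross-validation with MSE training and R2/RMSE
# evaluation (Section 2.5). Placeholder data and epoch count are assumed.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score, mean_squared_error

images = np.random.rand(443, 128, 128, 3).astype("float32")  # placeholder inputs
targets = np.random.rand(443, 5).astype("float32")           # FW, DW, NL, LA, SPAD (0-1)

kf = KFold(n_splits=5, shuffle=True, random_state=0)   # each fold is an 8:2 split
r2s, rmses = [], []
for train_idx, val_idx in kf.split(images):
    model = build_convnet()                            # fresh model per fold (Section 2.3 sketch)
    model.compile(optimizer="adam", loss="mse")        # trained to minimize MSE
    model.fit(images[train_idx], targets[train_idx],
              validation_data=(images[val_idx], targets[val_idx]),
              epochs=100, batch_size=128, verbose=0)   # batch size from Table 2
    pred = model.predict(images[val_idx], verbose=0)
    r2s.append(r2_score(targets[val_idx], pred))
    rmses.append(np.sqrt(mean_squared_error(targets[val_idx], pred)))

print(f"mean R2 = {np.mean(r2s):.2f}, mean RMSE = {np.mean(rmses):.2f}")
```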

2.6. Computation

The Adam optimizer was used for model training [22], and TensorFlow (v. 2.10.0, Google Inc., Mountain View, CA, USA) was used for the deep learning computations [23]. A Linux server with a performance of 35.58 TFLOPS (RTX 3090, NVIDIA, Santa Clara, CA, USA) was used for all computations.
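The Adam settings below mirror Table 2; the paper does not state how the learning-rate decay of 0.1 is scheduled, so the exponential schedule and its step count are assumptions.

```python
# Adam configured with the Table 2 hyperparameters (ConvNet column).
# The decay schedule type and decay_steps are assumptions.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.0015,  # 0.001 for the FFNN and BiLSTM (Table 2)
    decay_steps=10_000,            # step count assumed
    decay_rate=0.1)                # learning-rate decay (Table 2)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-8)
```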

3. Results and Discussion

3.1. Model Accuracy

Except for linear regression, all models showed adequate accuracy for the given task (Table 3). Specifically, the deep learning models yielded R2 values higher than 0.7, whereas linear regression had relatively low accuracy for all growth-related parameters except the leaf area. Furthermore, the ConvNets showed the highest accuracy among the trained models, and multitask learning did not remarkably improve the accuracy of the ConvNet architecture.
Overall, the models exhibited only subtle differences. Neural network theory holds that, in principle, any function can be approximated using only two hidden layers [24]. Simple regression tasks are therefore not difficult even for shallow deep learning models, although the inputs here were images, which are complicated data for conventional models.
Conventional models, such as linear regression, can also be optimized to improve accuracy. In our study, the five target growth parameters were regressed simultaneously in the linear regression procedure for model comparison, but reducing the number of outputs yields a higher R2 [25]. Because the linear regression procedure is not complicated, five individual regressions would be easily possible. Moreover, various other types of regression, such as Bayesian regression and support vector machines, could be used to pursue the highest accuracy [26,27].
Conversely, our trained models were not fully optimized as deep learning algorithms. In general, sophisticated parameter optimization is required to train deep learning models on big data [5]. In this study, however, it was difficult to determine the prediction limit of each model because of the limited amount of data. Hyperparameters and structures should thus be tuned for better model optimization; that is, deep learning models can also reach higher accuracy depending on the empirical skill of their users.
Therefore, the exact performance of the deep learning models could not be determined from the prediction accuracy obtained with approximately 400 images. Because complex deep learning algorithms require large amounts of data for robustness, an adequate methodology should be adopted for the given task, considering the available data and computing power. The models could easily be improved with some technical parameter fitting and model optimization. For example, a low learning rate and a long training time could make a model converge more precisely; sensitive activation functions, such as the hyperbolic tangent and sigmoid, combined with a shallow network structure could help with regression tasks; a thorough exploration of the model parameters could achieve the highest performance for the given data; and hyperparameter optimization methods, such as Bayesian optimization, could reduce the time cost of that exploration.
However, this does not mean that the ConvNet failed to recognize the target properly. Flipping, shifting, and rotating the original images should strengthen the relation between the plants in the images and the target growth factors; that is, with such transformation-based training, the ConvNet models can effectively determine what plants are in the image.
The two ConvNet models achieved the same average accuracy, although they differed slightly for individual targets. The leaf area was the easiest target for the trained models, whereas the number of leaves (NL) was predicted less accurately by the deep learning models, which otherwise showed similar overall accuracy. Most models tended to underestimate the five target variables (Figure 4); in particular, the models underestimated the number of leaves.
As all models generally underestimated the target variables toward the latter part of the cultivation period, it can be concluded that the top-view images do not contain enough information about growth during the second half of cultivation. Since this phenomenon has been reported previously, it may be due to a limitation of the top view itself [28,29,30]. Therefore, images capturing multiple perspectives at the same time should be considered, even for rosette leafy vegetables. In this study, well-trained models such as the ConvNet showed adequate robustness, but additional images or environmental and growth data would be helpful for further improving estimation precision.

3.2. t-SNE Analysis

Output values from the terminal hidden layer of the ConvNet trained with multitask learning were extracted and examined using t-SNE. In particular, the best-trained ConvNet distinctly recognized the distribution of Corbana (Figure 5).
According to the color mapping and distribution of the t-SNE results, the ConvNet effectively differentiated the three tested cultivars, which are distinguishable in appearance by their number of leaves and SPAD. In practice, Corbana showed distinctive growth compared with the other cultivars [31]. t-SNE-based analysis could be expanded to more complicated interpretations depending on the data and task. This study revealed that it is possible to extend target domain knowledge with complex models and sufficient data, as with the ConvNet with multitask learning used here. Therefore, deep learning algorithms should be investigated from the perspective of feature extraction.
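The sketch below outlines how such an analysis can be reproduced: activations of the shared 128-node hidden layer (named "shared" in the Section 2.3 sketch) are extracted and embedded with scikit-learn's t-SNE. The perplexity and the cultivar labels are placeholders rather than values from the paper.

```python
# Sketch of the Section 3.2 t-SNE analysis of the terminal hidden layer.
# Layer name, perplexity, and label array are assumptions/placeholders;
# "model" and "images" come from the earlier sketches.
import numpy as np
import tensorflow as tf
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Sub-model that outputs the 128-dimensional shared representation
feature_model = tf.keras.Model(model.input, model.get_layer("shared").output)
features = feature_model.predict(images, verbose=0)      # (N, 128)

embedding = TSNE(n_components=2, perplexity=30).fit_transform(features)

cultivar_labels = np.random.randint(0, 3, len(images))   # placeholder cultivar index
plt.scatter(embedding[:, 0], embedding[:, 1], c=cultivar_labels)
plt.xticks([]); plt.yticks([])   # absolute t-SNE coordinates are not meaningful
plt.show()
```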

4. Conclusions

In this study, the growth characteristics of three lettuce cultivars grown in a PFAL, including fresh weight, dry weight, number of leaves, leaf area, and SPAD, were estimated using deep neural networks based on top-view images. The trained ConvNet models showed the highest accuracy, and the ConvNet with multitask learning was further examined using t-SNE. The 2D t-SNE distribution showed that the trained ConvNet recognized the differences between the cultivars from the raw images. Because the number of input images in this study was not very large, model accuracy can be improved in the future with intensive optimization of the model parameters. Hence, evaluating the performance of deep learning models based on accuracy alone is not recommended. Our t-SNE results demonstrate the potential of analyzing the automated feature extraction of deep learning algorithms, a strategy that can aid the scientific discovery of target domain knowledge in horticultural science.

Author Contributions

Conceptualization, T.M. and M.-M.O.; methodology, T.M.; validation, T.M. and W.-J.C.; formal analysis, T.M., W.-J.C., and M.-M.O.; investigation, T.M., S.-H.J., D.-S.C., and W.-J.C.; writing—original draft preparation, T.M.; writing—review and editing, T.M., W.-J.C., and M.-M.O.; visualization, T.M. and M.-M.O.; supervision, M.-M.O.; project administration, M.-M.O.; funding acquisition, M.-M.O.; and data curation, M.-M.O. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture, and Forestry (IPET) and the Korea Smart Farm R&D Foundation (KosFarm), through the Smart Farm Innovation Technology Development Program funded by the Ministry of Agriculture, Food, and Rural Affairs (MAFRA); the Ministry of Science and ICT (MSIT); and the Rural Development Administration (RDA) (421033-4).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kozai, T.; Niu, G.; Takagaki, M. Plant Factory: An Indoor Vertical Farming System for Efficient Quality Food Production; Academic Press: Cambridge, MA, USA, 2019; ISBN 978-0-12-816692-5. [Google Scholar] [CrossRef]
  2. Marondedze, C.; Liu, X.; Huang, S.; Wong, C.; Zhou, X.; Pan, X.; An, H.; Xu, N.; Tian, X.; Wong, A. Towards a Tailored Indoor Horticulture: A Functional Genomics Guided Phenotypic Approach. Hortic. Res. 2018, 5, 68. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Buxbaum, N.; Lieth, J.H.; Earles, M. Non-Destructive Plant Biomass Monitoring with High Spatio-Temporal Resolution via Proximal RGB-D Imagery and End-to-End Deep Learning. Front. Plant Sci. 2022, 13, 758818. [Google Scholar] [CrossRef] [PubMed]
  4. Lin, Z.; Fu, R.; Ren, G.; Zhong, R.; Ying, Y.; Lin, T. Automatic Monitoring of Lettuce Fresh Weight by Multi-Modal Fusion Based Deep Learning. Front. Plant Sci. 2022, 13, 980581. [Google Scholar] [CrossRef]
  5. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  6. Aloysius, N.; Geetha, M. A Review on Deep Convolutional Neural Networks. In Proceedings of the 2017 International Conference on Communication and Signal Processing (ICCSP), Melmaruvathur, India, 6–8 April 2017; pp. 0588–0592. [Google Scholar] [CrossRef]
  7. Kamilaris, A.; Kartakoullis, A.; Prenafeta-Boldú, F.X. A Review on the Practice of Big Data Analysis in Agriculture. Comput. Electron. Agric. 2017, 143, 23–37. [Google Scholar] [CrossRef]
  8. Dhillon, A.; Verma, G.K. Convolutional Neural Network: A Review of Models, Methodologies and Applications to Object Detection. Prog. Artif. Intell. 2020, 9, 85–112. [Google Scholar] [CrossRef]
  9. Benos, L.; Tagarakis, A.C.; Dolias, G.; Berruto, R.; Kateris, D.; Bochtis, D. Machine Learning in Agriculture: A Comprehensive Updated Review. Sensors 2021, 21, 3758. [Google Scholar] [CrossRef]
  10. Kamilaris, A.; Prenafeta-Boldú, F.X. A Review of the Use of Convolutional Neural Networks in Agriculture. J. Agric. Sci. Technol. 2018, 156, 312–322. [Google Scholar] [CrossRef]
  11. Hasan, A.S.M.M.; Sohel, F.; Diepeveen, D.; Laga, H.; Jones, M.G.K. A Survey of Deep Learning Techniques for Weed Detection from Images. Comput. Electron. Agric. 2021, 184, 106067. [Google Scholar] [CrossRef]
  12. Hornik, K.; Stinchcombe, M.; White, H. Multilayer Feedforward Networks are Universal Approximators. Neural Netw. 1989, 2, 359–366. [Google Scholar] [CrossRef]
  13. Hornik, K. Approximation Capabilities of Multilayer Feedforward Networks. Neural Netw. 1991, 4, 251–257. [Google Scholar] [CrossRef]
  14. Moon, T.; Park, J.; Son, J.E. Prediction of the Fruit Development Stage of Sweet Pepper (Capsicum Annum Var. Annuum) by an Ensemble Model of Convolutional and Multilayer Perceptron. Biosyst. Eng. 2021, 210, 171–180. [Google Scholar] [CrossRef]
  15. Medina, J.R.; Kalita, J. Parallel Attention Mechanisms in Neural Machine Translation. In Proceedings of the 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 17–20 December 2018; pp. 547–552. [Google Scholar] [CrossRef] [Green Version]
  16. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G.; et al. Human-Level Control through Deep Reinforcement Learning. Nature 2015, 518, 529–533. [Google Scholar] [CrossRef] [PubMed]
  17. Vinyals, O.; Babuschkin, I.; Czarnecki, W.M.; Mathieu, M.; Dudzik, A.; Chung, J.; Choi, D.H.; Powell, R.; Ewalds, T.; Georgiev, P.; et al. Grandmaster Level in StarCraft II Using Multi-Agent Reinforcement Learning. Nature 2019, 575, 350–354. [Google Scholar] [CrossRef]
  18. Sonneveld, C.; Straver, N. Nutrient Solutions for Vegetables and Flowers Grown in Water or Substrates. Voedingsoplossingen Glas. 1988, 8, 33. [Google Scholar]
  19. Rawat, W.; Wang, Z. Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review. Neural Comput. 2017, 29, 2352–2449. [Google Scholar] [CrossRef]
  20. Caruana, R. Multitask Learning. Mach. Learn. 1997, 28, 41–75. [Google Scholar] [CrossRef]
  21. Crawshaw, M. Multi-Task Learning with Deep Neural Networks: A Survey. arXiv 2020, arXiv:2009.09796. [Google Scholar]
  22. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2017, arXiv:1412.6980. [Google Scholar]
  23. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A System for Large-Scale Machine Learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, USA, 2–4 November 2016; USENIX Association: Savannah, GA, USA, 2016; pp. 265–283. [Google Scholar]
  24. Rosenblatt, F. The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychol. Rev. 1958, 65, 386–408. [Google Scholar] [CrossRef]
  25. Rousseeuw, P.J.; Van Aelst, S.; Van Driessen, K.; Gulló, J.A. Robust Multivariate Regression. Technometrics 2004, 46, 293–305. [Google Scholar] [CrossRef]
  26. Slovic, P.; Lichtenstein, S. Comparison of Bayesian and Regression Approaches to the Study of Information Processing in Judgment. Organ. Behav. Hum. Perform. 1971, 6, 649–744. [Google Scholar] [CrossRef]
  27. Noble, W.S. What Is a Support Vector Machine? Nat. Biotechnol. 2006, 24, 1565–1567. [Google Scholar] [CrossRef] [PubMed]
  28. Patrício, D.I.; Rieder, R. Computer Vision and Artificial Intelligence in Precision Agriculture for Grain Crops: A Systematic Review. Comput. Electron. Agric. 2018, 153, 69–81. [Google Scholar] [CrossRef] [Green Version]
  29. Moon, T.; Park, J.; Son, J.E. Estimation of Sweet Pepper Crop Fresh Weight with Convolutional Neural Network. J. Bioenviron. Contr. 2020, 29, 381–387. [Google Scholar] [CrossRef]
  30. Moon, T.; Kim, D.; Kwon, S.; Ahn, T.I.; Son, J.E. Non-Destructive Monitoring of Crop Fresh Weight and Leaf Area with a Simple Formula and a Convolutional Neural Network. Sensors 2022, 22, 7728. [Google Scholar] [CrossRef]
  31. Noh, K.; Jeong, B.R. Optimizing Temperature and Photoperiod in a Home Cultivation System to Program Normal, Delayed, and Hastened Growth and Development Modes for Leafy Oak-Leaf and Romaine Lettuces. Sustainability 2021, 13, 10879. [Google Scholar] [CrossRef]
Figure 1. Cultivation system for the experimental lettuce cultivars under study: Corbana (A), Caipira (B), and Fairy (C). A semi-DFT system was installed in the plant factory with an artificial lighting module.
Figure 2. Sampling time points for image acquisition and destructive sampling during the 4-week cultivation period. Each horizontal block represents an independent experiment.
Figure 3. A box covered with a blackout curtain was used for image collection (A); representative images of each lettuce cultivar (B).
Figure 4. Validation results of the trained models: linear regression (LinReg), feedforward neural network (FFNN), bidirectional long short-term memory (BiLSTM), convolutional neural network (ConvNet), and ConvNet trained with multitask learning (Multitask). The target outputs were fresh weight (FW), dry weight (DW), number of leaves (NL), leaf area (LA), and soil plant-analysis development (SPAD). The title of each column indicates the x- and y-axis labels. The values in each graph represent the root mean square error (RMSE); the units of the RMSE are omitted. As model validation is more important than model training results, only the validation results from each cross-validation set are shown.
Figure 5. Two-dimensional t-distributed stochastic neighbor embedding (t-SNE) for the ConvNet with multitask learning. The target outputs included fresh weight (FW), dry weight (DW), number of leaves (NL), leaf area (LA), and soil plant-analysis development (SPAD). Since the absolute distance between the depicted points is not meaningful, the axis values are omitted.
Table 1. Architectures of the deep learning models.

Model         FFNN         BiLSTM       ConvNet
Input size    49,152 × 1   128 × 384    128 × 128 × 3
Layers        Dense-256    BiLSTM-512   Conv3-32
              Dense-256    BiLSTM-512   MaxPool
              Dense-5      Dense-32     Conv3-64
                           Dense-5      MaxPool
                                        Conv3-128
                                        MaxPool
                                        Conv3-128
                                        MaxPool
                                        Conv3-256
                                        MaxPool
                                        Conv3-512
                                        MaxPool
                                        Flatten
                                        Dense-128 a
                                        Dense-5 b
Output size   5 × 1        5 × 1        5 × 1

a The ConvNet trained using multitask learning had branches for each task from this layer. b Five sets of Dense-1 layers were used for multitask learning. FFNN, feedforward neural network; BiLSTM, bidirectional long short-term memory; ConvNet, convolutional neural network. Dense is a fully connected layer, the basic form of a neural network; Conv is a convolution layer; MaxPool and Flatten represent maximum pooling and flattening, respectively. Parameters for Conv layers are denoted as "{type of layer}{kernel size}-{number of filters}," and parameters for the other layers are denoted as "{type of layer}-{number of nodes in the layer}."
Table 2. Parameters used for model construction and training to estimate plant fresh weight.

Hyperparameter          FFNN       BiLSTM     ConvNet
Nonlinearity function   -
Normalization           Batch      Layer      Batch
Batch size              128        128        128
Kernel initializer      -          -          Glorot normal
Learning rate           0.001      0.001      0.0015
Epsilon                 10−8       10−8       10−8
β1                      0.9        0.9        0.9
β2                      0.999      0.999      0.999
Learning-rate decay     0.1        0.1        0.1
Output size             5 × 1

Hyphens represent unused values for the corresponding model.
Table 3. R2 values of each target output and the average of the trained models.

Target output        LinReg   FFNN   BiLSTM   ConvNet   Multitask
Fresh weight         0.67     0.72   0.73     0.77      0.77
Dry weight           0.70     0.74   0.76     0.75      0.77
Number of leaves     0.62     0.76   0.75     0.76      0.76
Leaf area            0.75     0.80   0.81     0.84      0.83
SPAD                 0.48     0.79   0.76     0.86      0.85
Average              0.64     0.76   0.76     0.80      0.80

LinReg represents multivariate linear regression.
