Article

Development of Lettuce Growth Monitoring Model Based on Three-Dimensional Reconstruction Technology

College of Horticulture, South China Agricultural University, Guangzhou 510642, China
* Author to whom correspondence should be addressed.
Agronomy 2025, 15(1), 29; https://doi.org/10.3390/agronomy15010029
Submission received: 25 November 2024 / Revised: 19 December 2024 / Accepted: 25 December 2024 / Published: 26 December 2024

Abstract

Crop monitoring can promptly reflect the growth status of crops. However, conventional growth-monitoring methods, although simple and direct, have limitations such as destructive sampling, reliance on human experience, and slow detection speed. This study estimated the fresh weight of lettuce (Lactuca sativa L.) in a plant factory with artificial light based on three-dimensional (3D) reconstruction technology. Data from different growth stages of lettuce were collected as the training dataset, while data from lettuce with different plant forms were used as the validation dataset. The partial least squares regression (PLSR) method was utilized for modeling, and K-fold cross-validation was performed to evaluate the model. On the testing dataset, the model achieved a coefficient of determination (R2) of 0.9693, with root mean square error (RMSE) and mean absolute error (MAE) values of 3.3599 and 2.5232, respectively. Based on the performance of the validation set, the model was adapted to estimate lettuce fresh weight under far-red light conditions. In addition, to simplify the estimation model, reduce estimation costs, enhance estimation efficiency, and improve the lettuce growth monitoring method in plant factories, plant height and canopy width data were extracted to estimate the fresh weight of lettuce. On the testing dataset, this new model achieved an R2 value of 0.8970, with RMSE and MAE values of 3.1206 and 2.4576, respectively.

1. Introduction

In recent years, the rapid development of computer technology, artificial intelligence, and mechanization has significantly advanced agricultural production techniques. Plant factories with artificial lighting (PFALs) are distinguished by high efficiency, labor savings, and stable cultivation. PFALs offer advantages such as high resource utilization, higher annual yield per unit area, superior crop quality, year-round production capability, and lower site selection requirements [1]. Despite their high efficiency and productivity, PFALs still face issues such as substantial investment costs, elevated operational expenses, considerable energy consumption, and less-than-ideal economic returns, with labor constituting 25% to 30% of costs [2]. The automation of PFALs relies on the synergy between mechanization and digitalization. Within this context, crop monitoring technology is a pivotal component in advancing the automation of PFALs; it plays a crucial role in crop production by enabling timely reflection of crop growth and precise management. Machine vision can acquire, process, and interpret image sequences of target objects. Its application in agriculture has developed significantly, for example in precision agriculture, postharvest quality and safety inspection of products, classification and sorting of crops, and automatic monitoring of production processes. Machine vision systems not only recognize the size, shape, color, and texture of monitored objects but also provide quantitative attributes of the imaged subjects or scenes [3]. In intelligent plant factories, this technology is nearly ubiquitous, encompassing all stages of production from seedling cultivation, transplanting, daily production management, and harvesting to product grading [4].
Three-dimensional (3D) reconstruction is a technique that involves analyzing and processing machine data or two-dimensional (2D) photographs to reconstruct 3D objects or scenes from reality. Active 3D reconstruction techniques involve the emission of specific signals, such as infrared rays and LiDAR, by hardware devices that pass through or are reflected by objects to acquire 3D information of the subject. This primarily includes structured light methods [5], 3D laser scanning methods [6], and time-of-flight methods [7]. The structured light method projects special light gratings or patterns composed of multiple or arbitrary stripes onto the object, thereby obtaining depth data along with various physical attributes such as color, texture, shape, and size [8], providing a solution for precise 3D characterization. This technology is known for its high speed, high accuracy, and robustness [9] and is widely applied across various fields. However, structured light methods are susceptible to interference from ambient light, and their accuracy decreases with increasing measurement distance [10]. Three-dimensional laser scanning employs precise laser projection and highly sensitive receivers to acquire the 3D morphology and surface information of plants, and it offers high efficiency, high precision, and non-contact measurement [11]. However, the high cost of 3D laser scanners and the vast amount of data they generate make the phenotypic measurement and data processing of plants complex, limiting their widespread application [12]. Time-of-flight methods involve emitting continuous pulses of light towards an object; when the light signal encounters the object and reflects, the reflection is captured by a receiving sensor.
The time difference between emission and reception is calculated to determine the object’s distance and depth information. This approach has been widely applied in plant 3D reconstruction and can be used to measure plant parameters, including height and biomass estimation [13,14]. Depth cameras are the equipment most commonly employed in this method. The depth camera RealSense D435i, in combination with SFM technology, can achieve 3D reconstruction of cabbage plants [15]. However, depth cameras are highly susceptible to environmental influences, leading to the introduction of noise.
Passive 3D reconstruction typically relies on ambient light sources, capturing images of the subject with a camera and then employing specific algorithms to construct a stereoscopic 3D model [16]. Structure from motion (SFM) is a 3D reconstruction technique predicated on the fundamental principles of multi-view geometry [17]. In contrast to the Kinect camera, SFM employs standard RGB cameras without additional hardware, thereby offering a more cost-effective solution with simplified operation and reduced environmental constraints [18,19]. However, a limitation of SFM technology is that it can only conduct measurements at a fixed distance, without the capability to adjust the measurement range [20], which introduces additional uncertainty into the reconstructed models. Three-dimensional reconstruction techniques demand the processing and assembly of extensive image datasets, which also necessitates calibration of the data. These requirements impose significant demands on computer hardware and software capabilities. Moreover, given the substantial variability in color and shape among different types of vegetables, it is imperative to develop tailored models and algorithms to ensure the precision and completeness of data processing. Furthermore, the costs associated with the equipment and labor required for 3D reconstruction are substantial, particularly when considering the scale of production environments.
Lettuce (Lactuca sativa L.) is the most widely consumed and cultivated leafy vegetable worldwide and is the main vegetable grown in PFALs. Non-destructive detection of plant growth is important and necessary for more efficient production in PFALs, as it enables automatic and precise growth management. The 3D reconstruction of lettuce plants with a Kinect camera can acquire phenotypic data such as plant height and leaf area, as well as estimates of fresh weight [21,22]. By integrating depth cameras with computer vision algorithms, 3D reconstruction of lettuce plants has been achieved, enabling fresh weight estimation and aiding in determining the optimal plant spacing and harvest timing during production [23]. A series of lettuce images was captured using smartphones and subsequently converted into 3D point clouds based on SFM technology; lettuce plant height could be calculated from these point clouds [24]. However, research on fresh weight estimation of lettuce plants in PFALs is relatively scarce. Additionally, machine vision technology based on 3D reconstruction of lettuce plants is predominantly single-perspective and involves lettuce plants with relatively uniform morphologies, which might not correspond to actual production conditions. On the other hand, the use of multiple industrial cameras for image capture can lead to increased data processing time and higher costs [25]. In this study, lettuce plant fresh weight was estimated by 3D reconstruction technology using two undistorted USB cameras to capture images at 0° and 45° angles. Lettuce at various growth stages provided the training set data, and lettuce of different shapes provided the validation set data. The partial least squares regression (PLSR) method was applied for model construction, and the model was evaluated using K-fold cross-validation.
Concurrently, plant height and canopy width data extracted from the RGB images obtained in the aforementioned experiments were used to estimate the fresh weight of lettuce, in order to simplify the estimation model, reduce estimation costs, and enhance estimation efficiency, thereby refining the monitoring methods for lettuce growth in PFALs.

2. Materials and Methods

2.1. Plant Material and Growth Conditions

The study was conducted in a PFAL at South China Agricultural University. Seeds of leaf lettuce (Lactuca sativa L. ‘Green Butter’) were sown on sponge blocks. After germination, the seedlings were cultivated with modified Hoagland nutrient solution under white light-emitting diodes (LEDs) at a photosynthetic photon flux density (PPFD) of 250 µmol·m−2·s−1, with a 10/14 h light/dark period. The air temperature was 22–26 °C with 65–75% relative humidity. Seedlings at the third-true-leaf stage were transplanted to a hydroponic system 14 days after sowing.
Lettuce plants were cultivated in a hydroponic system (1/2 strength Hoagland nutrient solution, pH of 6.6–7.0, electrical conductivity of 1.45–1.55 mS·cm−1) at a density of 24 plants per plate (95 × 60 × 3 cm3), which was equivalent to 42 plants per m2. Adjustable LED panels (Chenghui Equipment Co., Ltd., Guangzhou, China; 150 × 30 cm2) containing white (peaking at 440 nm), red (660 ± 10 nm; R), and far-red (730 ± 10 nm; FR) LEDs were used as light sources.

2.2. Image Acquisition and Processing System

In order to obtain the original images of lettuce, we constructed a shooting platform. Each lettuce plant was placed on a custom-made turntable, and two undistorted USB cameras (FY-0050A + 180, 2560 × 1944 pixels; Huayun Technology Co., Ltd., Shenzhen, China) were used to capture images at 0° and 45° angles, each placed 25 cm from the subject.
The acquired images were processed using Agisoft Metashape software v.1.8.2 (Agisoft LLC, St. Petersburg, Russia), a self-contained software program that facilitates the photogrammetric analysis of digital images and produces 3D spatial data. The underlying principle of the software involves utilizing the imported image sequence to generate a sparse point cloud using the SFM technique, followed by employing the multi-view stereo (MVS) technique to establish a dense point cloud, ultimately culminating in the accomplishment of 3D reconstruction. The specific flow chart is shown in Figure 1.

2.3. Establishment of Lettuce Growth Models Under Consistent Light Conditions

White and R LED lights (250 and 40 µmol·m−2·s−1, respectively) were used as cultivation light sources after transplanting. Three treatments were set as follows: sampling at 21 (D21), 28 (D28), and 35 (D35) days after sowing. There were 6 repetitions for each treatment; each plate (24 plants per plate) was defined as a repeat. The schematic diagram of the experimental setup and the spectral composition of the LEDs are shown in Figure 2 and Figure 3.
Each treatment involved the selection of 75 lettuce plants for 3D reconstruction. For each plant, a total of 48 images were captured, including both frontal and 45° top views. Consequently, a grand total of 225 lettuce plants were utilized in the construction of the predictive model. The model was fitted using the volume extracted from the 3D reconstruction and the corresponding fresh weight. The collected dataset of 225 samples was randomly partitioned into training and testing sets at an 8:2 ratio, resulting in 180 training samples and 45 testing samples.
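The random 8:2 partition described above can be sketched as follows; this is a minimal NumPy illustration of an index-based split, not the authors' exact procedure, and the seed is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(42)   # fixed seed only for reproducibility of the sketch

n_samples = 225                   # lettuce plants in the dataset
indices = rng.permutation(n_samples)

n_train = int(round(n_samples * 0.8))          # 8:2 ratio -> 180 training samples
train_idx, test_idx = indices[:n_train], indices[n_train:]

print(len(train_idx), len(test_idx))           # sizes of the two partitions
```

The same index arrays would then be used to slice both the feature matrix (reconstructed volumes) and the fresh weight vector, so that each plant appears in exactly one partition.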

2.4. Establishment of Validation Set Under Diverse Shapes of Lettuce

Four light treatments were set as follows: entire growth stage under white LEDs (W), which was considered the basal light; entire growth stage under basal light with supplemental R and FR light (A); the first 10 days under basal light with FR light and the following 10 days with R light (FRR); and the first 10 days under basal light with R light and the following 10 days with FR light (RFR). The photon flux density of the white LEDs was set to 250 µmol·m−2·s−1; R and FR were both set to 40 µmol·m−2·s−1. There were 3 repetitions for each light treatment; each plate (24 plants per plate) was defined as a repeat. All light treatments used the same photoperiod of 10/14 h light/dark [26]. The ratios of R to FR and R to B can be seen in Table 1.
Each treatment involved the selection of 24 lettuce plants for 3D reconstruction. A set of 96 lettuce plants with diverse morphological characteristics was selected as the validation dataset for the trained model. This approach aimed to enhance the generalization capability and adaptability of the predictive model.

2.5. Establishment of Lettuce Fresh Weight Estimation Model Based on 2D Metrics

Based on the RGB images from the aforementioned experiments, lettuce plant height and canopy width data were extracted and fitted to their corresponding fresh weight data to establish a 2D metric-based biomass estimation model for lettuce. A schematic diagram of the specific experimental setup is shown in Figure 4. The collected datasets of 120 samples were randomly partitioned into training and testing sets at an 8:2 ratio, resulting in 96 training samples and 24 testing samples.

2.6. Measurement of Plant Morphology and Growth Characteristics

Eight uniform plants were randomly selected from each treatment for measuring the biometric indicators. The fresh weight of shoots and roots was weighed separately with an electronic balance. Leaf area was measured with ImageJ 1.52V (National Institutes of Health, Bethesda, MD, USA). To measure dry weight, the samples were deactivated at 105 °C and weighed with an electronic balance after tissue dehydration at 75 °C for 48 h. The actual volume of the lettuce shoot was measured using the water displacement method: a measuring cylinder containing a known amount of water was prepared, the lettuce was submerged in the cylinder, and the change in water volume was read as the actual volume of the lettuce [22].

2.7. Model Establishment and Evaluation

To predict the fresh weight of lettuce, the partial least squares regression (PLSR) method [27] was employed to construct the model. The model was developed and run in MATLAB R2022b software (MathWorks, Natick, MA, USA). The coefficient of determination (R2), root mean square error (RMSE) and mean absolute error (MAE) were used as evaluation metrics. In this study, K-fold cross-validation [28] was also employed to assess the performance of the predictive model constructed on the given datasets. The formulas for calculating these parameters are as follows:
$R^2 = 1 - \dfrac{\sum_{i}(\hat{y}_i - y_i)^2}{\sum_{i}(y_i - \bar{y})^2}$,

$\mathrm{RMSE} = \sqrt{\dfrac{\sum_{i=1}^{n}(\hat{y}_i - y_i)^2}{n}}$,

$\mathrm{MAE} = \dfrac{\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right|}{n}$,

where $y_i$ represents the true fresh weight of lettuce sample $i$, $\hat{y}_i$ represents the fresh weight estimated by the model, $\bar{y}$ represents the mean of the true fresh weight values in the dataset, and $n$ represents the number of samples.
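The three evaluation metrics can be implemented directly from the formulas above; a minimal NumPy sketch (not the study's MATLAB code):

```python
import numpy as np

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_pred - y_true) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean square error."""
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_pred - y_true))
```

For example, with true values `[1, 2, 3]` and a constant prediction of `2`, `r2` returns 0 (the model explains no more variance than the mean), `rmse` returns √(2/3), and `mae` returns 2/3.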

2.8. Computing Environment

The computer experiments were performed on a Lenovo laptop (Legion Y7000p2021, CPU: i7-11800H @ 2.30 GHz, Guangzhou, China).

2.9. Statistical Analysis

Data were expressed as mean ± standard error (n = 8 replicates). Analysis of variance (ANOVA) followed by Duncan’s multiple range test was conducted using IBM SPSS Statistics 25 software (SPSS Inc., Chicago, IL, USA).

3. Results

3.1. Lettuce Growth Models Under Consistent Light Conditions

3.1.1. Plant Morphology and Growth Characteristics

Through sampling at three different growth stages, significant differences in the morphology and growth characteristics of lettuce were observed across different growth stages (Figure 5).
At different growth stages, there were significant differences in the biomass and growth characteristics of the lettuce (Figure 6a–d). As the lettuce grew, the biomass of roots and shoots, leaf area, and number of leaves exhibited similar increasing trends. At D35, shoot fresh weight was significantly increased by 16.4 and 1.7 times compared with D21 and D28, respectively; leaf area was significantly enhanced by 10.2 and 1.5 times; and the number of leaves was enhanced by 2.1 and 1.5 times, respectively.

3.1.2. Linear Relationship Between Plant Morphological Indicators and Fresh Weight

In this study, the overhead projected area, true volume, and corresponding shoot fresh weight of 225 lettuce samples were extracted, and linear relationships were established. The linear relationship between projected area and fresh weight (Figure 7a) yielded an R2 value of 0.8489, indicating considerable scatter and a relatively weak linear relationship. In contrast, the linear relationship between true volume and fresh weight (Figure 7b) yielded an R2 value of 0.9902, demonstrating a high level of explanatory power. Thus, estimating the fresh weight of lettuce based on its volume offers greater accuracy and stability.
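A least-squares line and its R² of the kind reported here can be computed as below; the volume and fresh weight numbers are synthetic and purely illustrative, not the study's measurements:

```python
import numpy as np

# Synthetic, illustrative data: shoot volume (cm^3) and fresh weight (g)
volume = np.array([40.0, 85.0, 150.0, 220.0, 310.0, 420.0])
fw     = np.array([ 9.0, 20.0,  36.0,  52.0,  74.0,  99.0])

# Fit fw = slope * volume + intercept by ordinary least squares
slope, intercept = np.polyfit(volume, fw, 1)
fw_hat = slope * volume + intercept

# Coefficient of determination for the linear fit
ss_res = np.sum((fw - fw_hat) ** 2)
ss_tot = np.sum((fw - fw.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.4f}")
```

With real data, the same computation applied once to projected area and once to true volume would reproduce the comparison in Figure 7.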

3.1.3. Linear Relationship Between 3D Reconstruction Volume and Correlative Metrics

Based on the aforementioned findings, this study extracted volumetric data from the 3D reconstructions and performed linear fitting against the measured volumes and their corresponding fresh weights, as depicted in Figure 8. After 3D reconstruction, the lettuce models were rendered as enclosed solid models. Although this resulted in a significant numerical discrepancy between the reconstructed and measured volumes, a high level of explanatory power was still observed, with an R2 value of 0.9633. Building on this volume-based correlation, linear fitting between the 3D reconstructed volumes of lettuce and their fresh weights achieved an R2 value of 0.9743. These results indicate that estimating the fresh weight of lettuce from the 3D reconstructed volume data is feasible and accurate.

3.1.4. Model Performance Based on 3D

As shown in Figure 9, the training and test sets achieved impressive R2 values of 0.9727 and 0.9693, RMSE values of 3.2697 and 3.3599, and MAE values of 2.1251 and 2.5232, respectively. In addition, the model was fitted separately for the three growth stages (Figure 9c–e). As the lettuce grew, the model’s R2 first decreased and then increased, while RMSE and MAE increased. To further evaluate the constructed lettuce fresh weight estimation model, we employed a 10-fold cross-validation approach with RMSE as the evaluation metric. The results are presented in Table 2. The average RMSE obtained through 10-fold cross-validation was 3.2295.

3.2. Validation Set Under Diverse Shapes of Lettuce

3.2.1. Plant Morphology and Validation Set

In order to enhance the generalization capability and adaptability of the prediction model, this study utilized four distinct staged supplementary lighting strategies to obtain four different lettuce morphologies (Figure 10) as the validation datasets. After the different light treatments, a set of 96 lettuce plants with diverse morphological characteristics was selected as the validation dataset for the model (Figure 11a). As shown in Figure 9a, the R2 value was 0.9105; however, there was a significant estimation error in the validation dataset, with RMSE and MAE reaching 24.7689 and 21.1536, respectively.
In addition, the model was fitted separately for the four light treatments (Figure 11b–e). Compared to the other treatments, treatment W exhibited the best performance in the predictive model, with RMSE and MAE values of 4.3371 and 3.3947, respectively. Following supplementation with far-red light, treatments A, FRR, and RFR all showed overestimation in the predictive models. The largest prediction error was observed in treatment A, with RMSE reaching 31.1259 and MAE reaching 29.2695. This indicated that the morphological characteristics of lettuce plants had a certain impact on the fit of the estimation model.

3.2.2. Optimization of the Model Under Far-Red Light Supplementation Conditions

Based on the performance of the validation set for the estimation model, it was observed that the datasets associated with FR light supplementation treatments A, FRR, and RFR all exhibited a tendency towards overestimation. In order to further optimize the model and enhance its generalization capability under FR light supplementation conditions, two distinct dataset construction approaches were utilized to refine the original estimation model.
By integrating the validation set data with the original model data to form a new estimation model (Model 1, Figure 12), the training set achieved an R2 of 0.9260, with RMSE and MAE values of 6.8476 and 5.8219, respectively. The test set demonstrated a similar performance, with an R2 of 0.9312 and RMSE and MAE values of 6.5161 and 5.5407, respectively.
Incorporating the datasets from treatments A, FRR, and RFR as the new estimation model datasets (Model 2, Figure 13), the model constructed using this approach achieved an R2 of 0.8435 for the training set, with RMSE and MAE values of 5.5518 and 4.6590, respectively. The test set exhibited a comparable performance, with an R2 of 0.8879 and RMSE and MAE values of 4.3660 and 3.2299, respectively.
Compared to the test set of the lettuce fresh weight estimation model (Figure 9b), Model 1 (Figure 12b) exhibited a 3.93% decrease in R2, with RMSE and MAE increasing by 93.94% and 119.59%, respectively. Meanwhile, Model 2 (Figure 13b) showed an 8.40% decrease in R2, with RMSE and MAE increasing by 29.94% and 28.01%, respectively. Thus, the morphological characteristics of lettuce plants were influenced by FR light regulation, and the fit of the estimation model was affected. Compared to Model 1, Model 2 demonstrated higher estimation accuracy. Thus, establishing a separate estimation model specifically for lettuce data under FR light supplementation was more feasible.
To further assess the estimation models constructed by these two methods, a 10-fold cross-validation approach was employed, and RMSE was selected as the metric for cross-validation assessment. The results of the cross-validation are presented in Table 3. The average RMSE values obtained through the 10-fold cross-validation were close to the RMSE values of their corresponding models, indicating that both methods possessed good generalization capabilities on unseen data or specific tasks.

3.3. Lettuce Fresh Weight Estimation Model Based on 2D Metrics

3.3.1. Model Performance Based on 2D

Based on the performance of the estimation model with datasets from different growth stages and plant types of lettuce, 120 experimental samples were extracted. The plant height and canopy width data of these samples were fitted to their corresponding fresh weights. PLSR was employed to construct a fresh weight estimation model for lettuce, with the training and testing set performances depicted in Figure 14. Using this method to build a 2D metric-based fresh weight estimation model, the training set achieved an R2 of 0.8846, with RMSE and MAE values of 3.3627 and 2.8859, respectively. The test set demonstrated a similar performance, with an R2 of 0.8970 and RMSE and MAE values of 3.1206 and 2.4576, respectively. It was observed that the model did not exhibit signs of under-fitting or over-fitting and possessed a certain degree of stability and reliability.

3.3.2. K-Fold Cross-Validation

To further assess the 2D metric-based fresh weight estimation model, a 10-fold cross-validation method was employed, with RMSE selected as the assessment metric. The results of the cross-validation are presented in Table 4. The average RMSE obtained through the 10-fold cross-validation was 3.3039, which was close to the RMSE values of both the training and test sets. These results indicated that the 2D metric-based fresh weight estimation model possessed good generalization capability on unseen data. Compared to the method using the 3D reconstruction volume for estimation, the RMSE for fresh weight estimation using plant height and canopy width increased by only 2.30%, but the operational efficiency and the adaptability of the equipment to the PFAL production environment were significantly improved, thereby reducing monitoring costs.

4. Discussion

PFALs offer a conducive environment and platform for crop cultivation, increased yield per unit area, efficient cultivation, and year-round production [1]. Moreover, PFALs exhibit high adaptability to machinery and equipment, which enhances their operational efficiency. These benefits have facilitated the availability of a substantial number of samples for agricultural machine vision research and have propelled the development of technologies such as machine vision, non-destructive detection, and digital twinning [29].
The selection and analysis of crop traits hold significant importance for the construction of estimation models. By screening and analyzing crop traits, the most relevant and representative phenotypic characteristics can be matched with estimation indicators, thereby enhancing the model’s predictive accuracy. Eliminating irrelevant or redundant phenotypic traits also reduces noise and interference, allowing the model to more accurately capture the relationship between traits and targets. Moreover, in agricultural production, large-scale crop data collection and processing can be time-consuming, labor-intensive, and resource-heavy. By screening and analyzing phenotypic traits, the number of traits that need to be collected and processed can be reduced, thereby lowering costs and workload. Additionally, reasonable feature selection and analysis make the features used by the model more interpretable and understandable [30]. Through 3D reconstruction of crops, phenotypic traits such as plant height, leaf area, projected area, volume, and compactness can be extracted [31]. When lettuce was segmented from colored 3D point clouds to estimate its fresh weight, the estimated fresh weight was found to be linearly related to volume and non-linearly related to other phenotypic traits; quadratic functions were therefore employed to model the non-linear relationships [32]. In this study, the linear regression relationships between the overhead projected area and actual volume of lettuce and their fresh weight were investigated (Figure 7). A linear relationship was established between the actual volume and fresh weight, confirming a good fit between plant volume and fresh weight, with an R2 value reaching 0.9902.
Further investigation involved linear fitting of the 3D reconstructed volume of lettuce against the measured volume and fresh weight, achieving R2 values of 0.9633 and 0.9743, respectively, indicating a high level of explanatory power. Based on this fundamental logic, phenotypic traits for estimating the fresh weight of lettuce were established. Of course, 3D reconstruction still faces challenges such as the complex shape of lettuce, curled leaves, and severe occlusions. Extracting additional morphological indicators from 3D reconstruction and fitting non-linearly related morphological indicators with quadratic functions to enhance the model’s accuracy and generalization capability are directions for future improvement.
PLSR is considered a superior alternative to conventional multiple linear regression and principal component regression methods due to its greater robustness [27]. Capitalizing on the ability of PLSR and ridge regression to accurately address collinearity among predictive variables, it was possible to precisely estimate plant biomass in situations where plant morphology and growth habits exhibited significant differences [33]. With the advent of deep learning and neural networks, the application of PLSR analysis has become increasingly widespread. Statistical models established using remote sensing data and PLSR, in conjunction with artificial neural networks, enabled the estimation of sunflower leaf area index, plant height, and seed yield [34]. Concurrently, the errors associated with PLSR were reduced when combined with CNN [32]. In this study, the PLSR model was employed to predict the fresh weight of lettuce, achieving R2 values of 0.9727 and 0.9693 for the training and testing sets (Figure 9), indicating a high degree of model fit with no signs of under-fitting or over-fitting. Compared to other predictive models, the PLSR model demonstrated both strengths and weaknesses. By utilizing oblique views and a position-guided network for estimating the fresh weight of lettuce, a previously constructed model achieved a favorable R2 value of 0.9223 and an RMSE of 1.8939 on the test datasets. Although the R2 of the PLSR model constructed for estimating the fresh weight of lettuce in this study was superior to that of PosNet, VGG19, ResNet50, and DenseNet121, its RMSE was significantly higher than those models [35]. Compared to the individual datasets at the D21, D28, and D35 stages, the merged dataset enhanced R2 values by 10.60%, 35.44%, and 20.97%, respectively. These results indicated that incorporating data from multiple time points could augment the adaptability and accuracy of the estimation model.
To circumvent over-fitting arising from a fixed validation dataset, K-fold cross-validation is commonly employed in predictive modeling [36]. In this study, 10-fold cross-validation yielded an average RMSE of 3.2295 (Table 2), further confirming the stability and reliability of the model.
The specificity and complexity of plant morphology pose challenges to the accuracy of 3D reconstruction, thereby affecting the precision and stability of estimation models. During growth and development, the leaves and stems might curl, so not all 3D reconstruction methods can provide accurate results [37]. In a study estimating the fresh weight of lettuce at different growth stages [35], the estimated values in the early growth stage were higher than the actual values, with an average error of 1.323. In the mid-growth stage, the estimated values fluctuated, and the average error remained at 1.397. In the later growth stage, however, the average error reached 2.055, with estimated values lower than the actual values. In this study, the lettuce fresh weight estimation model was constructed using data from three growth stages: D21, D28, and D35 (Figure 9c–e). As the lettuce grew, the plant structure became increasingly complex and leaf occlusion increased, leading to higher RMSE and MAE values. During the D21 stage, the plant structure was relatively simple and leaf occlusion was not yet severe, resulting in the best R2, RMSE, and MAE among the three growth stages. At the D28 stage, the estimated values also fluctuated. The data from the D35 stage tended toward underestimation with increased errors, which might be attributed to the thickening of the peripheral leaves, which increased the likelihood of occlusion and caused the loss of image features in the occluded areas. These results indicated that integrating data from different growth stages of lettuce, i.e., incorporating data from multiple time points, enhanced the accuracy and generalization capability of the model.
Given that the current data cover only three growth stages, the collection of data at additional time points was an aspect that required further optimization in this study.
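The pooling of growth stages described above can be illustrated with a small sketch. All values here are synthetic and hypothetical: the stage means, the noise levels (made larger for later stages to mimic increasing occlusion), and the simple linear model are assumptions, not the study's data or method.

```python
# Illustrative sketch: pool synthetic D21/D28/D35 samples into one training
# set, then evaluate per stage. Noise grows with stage by construction.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# stage name -> (mean volume in cm^3, noise sd in g); invented values
stages = {"D21": (60, 1.5), "D28": (150, 2.5), "D35": (280, 4.0)}

X_parts, y_parts, labels = [], [], []
for name, (mu, sd) in stages.items():
    v = rng.normal(mu, mu * 0.2, 30)                # 30 plants per stage
    X_parts.append(v)
    y_parts.append(0.25 * v + rng.normal(0, sd, 30))  # fresh weight (g)
    labels += [name] * 30

X = np.concatenate(X_parts).reshape(-1, 1)
y = np.concatenate(y_parts)
labels = np.array(labels)

model = LinearRegression().fit(X, y)                # trained on pooled data

for name in stages:
    mask = labels == name
    rmse = float(np.sqrt(np.mean((model.predict(X[mask]) - y[mask]) ** 2)))
    print(name, f"RMSE = {rmse:.2f} g")
```

Because the model is fit on all stages jointly, the per-stage errors reflect how well a single pooled model covers each growth phase, mirroring the D21/D28/D35 comparison in Figure 9c–e.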
Plant growth and morphology are influenced by light intensity and spectral quality. Supplemental FR light altered the morphology of lettuce leaves and canopies, increasing leaf length, leaf width, and the projected area of the canopy [38]. These alterations in leaf dimensions and canopy size enhanced the plants' ability to capture incident light, increasing lettuce biomass [39]. When plants detect a decreased red to far-red (R:FR) ratio (low R:FR), the shade avoidance syndrome is triggered, inducing shade-avoiding morphologies [40]. Compared to fluorescent light treatments, varying R:FR ratios (0.7, 1.2, 4.1, and 8.6) facilitated lettuce cell division and significantly increased biomass, leaf length, and leaf area [41]. In this study, different light supplementation strategies were employed for lettuce with varying architecture, involving four distinct R:FR ratios, namely 12.36 (W), 2.17 (W + R + FR), 1.44 (W + FR), and 14.22 (W + R). During the initial phase of the lighting treatments, a decrease in the R:FR ratio led to a more upright plant architecture in treatments A and FRR. When the lighting treatment changed, the R:FR ratio in treatment FRR increased from 1.44 to 14.22, causing the initially upright plant architecture to gradually flatten. Conversely, in treatment RFR, the R:FR ratio decreased from 14.22 to 1.44, and the plant architecture exhibited the opposite trend [26]. These observations suggest that the shade-avoiding architecture of lettuce plants might primarily be induced by lower R:FR ratios. In the validation set, treatments A, FRR, and RFR, which involved FR light supplementation, showed significant differences in lettuce architecture compared to the original datasets, and overestimation was observed in the model predictions (Figure 11c–e).
This could be attributed to FR light supplementation causing the leaves to become thinner, so that the apparent volume increased more than the actual biomass, resulting in an overestimation of the volume obtained by 3D reconstruction. Comparisons between Model 1 and Model 2 revealed that a new estimation model built specifically on lettuce data under FR light supplementation (Model 2) yielded higher estimation accuracy (Figure 12 and Figure 13). However, compared to the test set of the original estimation model (Figure 9b), the R2 decreased by 8.40%, and the RMSE and MAE increased by 29.94% and 28.01%, respectively. Additionally, the RMSE values from the 10-fold cross-validation exhibited considerable variation, which might be due to the limited number of samples used. Further optimization of the estimation model for lettuce fresh weight under FR light supplementation, along with expansion of the dataset, remains an avenue for improvement in this study.
Compared to 2D vision, 3D vision has the capability to reconstruct the 3D structure and information of a scene from one or more 2D images. It focuses on understanding and reconstructing the spatial structure of objects and includes depth information, which can more intuitively present the phenotypic characteristics of crops and more easily estimate plant growth indicators. Conventional methods for 3D reconstruction of vegetables often require multiple scans and stitching, which are costly and operationally complex. In contrast, end-to-end deep learning methods can automatically extract features and learn from plant images at multiple angles or different cross-sections through deep neural networks. By optimizing the integration of these features, high-quality 3D models can be generated, offering greater flexibility, speed, and convenience. Li et al. proposed an end-to-end segmentation method based on a fully convolutional network, PlantU-net, which improved the high-throughput segmentation performance of seedling group top-view images and achieved accurate extraction of phenotypic data [42]. However, deep learning models require a large amount of annotated data for training, thus necessitating significant time and labor costs for data collection and annotation [43]. These models also impose hardware requirements and have complex parameter structures that require extensive hyperparameter tuning [44], increasing the complexity of vegetable 3D reconstruction. In this study, lettuce image sequences were collected and combined with SFM and MVS technologies to achieve 3D reconstruction of lettuce plants with simple operation. Based on this method, the lettuce fresh weight estimation model was constructed, achieving an R2 of 0.9693 on the test dataset, with RMSE and MAE values of 3.3599 and 2.5232, respectively, demonstrating good accuracy and stability.
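The three metrics reported throughout this study (R2, RMSE, MAE) can be computed with a small helper. This is not the authors' code; the `evaluate` function and the tiny synthetic check are illustrative only.

```python
# Minimal helper for the evaluation metrics used in this study:
# coefficient of determination (R2), RMSE, and MAE.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def evaluate(y_true, y_pred):
    """Return (R2, RMSE, MAE) for measured vs. estimated fresh weight."""
    r2 = r2_score(y_true, y_pred)
    rmse = mean_squared_error(y_true, y_pred) ** 0.5
    mae = mean_absolute_error(y_true, y_pred)
    return r2, rmse, mae

# Tiny synthetic check: every estimate is off by exactly 2 g,
# so RMSE and MAE both equal 2.0.
measured = np.array([20.0, 30.0, 40.0, 50.0])
estimated = measured + 2.0
print(evaluate(measured, estimated))
```

A constant offset of 2 g gives RMSE = MAE = 2.0; RMSE exceeds MAE whenever the errors are unequal, which is why the study reports both.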
For comparison, using a colored 3D point cloud segmentation method to extract leaf area for estimating fresh weight, the R2 could reach up to 0.94 [32], and estimating fresh weight based on projected area and other parameters extracted from 3D point cloud segmentation, the R2 could reach 0.966 [45]. The method used in this study, which obtains the 3D reconstruction volume via SFM and MVS technology, therefore achieved estimation accuracy similar to that of 3D point cloud segmentation methods, with high stability. Although constructing a 3D reconstruction model of lettuce with this method was relatively simple, too few image sequences could generate erroneous points in the 3D point cloud because of large variations between consecutive images. Conversely, a large number of images leads to processing redundant information, inevitably increasing software operation and computation time and reducing efficiency. Moreover, in PFAL production, conditions for obtaining lettuce images from multiple angles are limited. Therefore, how to progress from 3D reconstruction of individual lettuce plants to 3D reconstruction of cultivated groups should be studied.
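One simple way to turn a reconstructed point cloud into a scalar volume is the convex hull. This is a stand-in illustration on random points: the study's actual volume computation from the SFM/MVS reconstruction is not specified here, and a convex hull would overestimate the volume of a concave canopy.

```python
# Hedged sketch: convex-hull volume of a synthetic "point cloud".
# Points are sampled uniformly in a 2 x 2 x 2 cube (true volume 8), so the
# hull volume approaches 8 as the sample grows.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)
points = rng.uniform(-1, 1, size=(500, 3))  # stand-in for a lettuce point cloud
hull = ConvexHull(points)
print(f"hull volume ≈ {hull.volume:.2f}")
```

For a real canopy, erroneous points from too few images would inflate this volume directly, which is one way sparse image sequences degrade volume-based fresh weight estimates.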
Compared to 3D vision, 2D vision possesses the capability to understand and analyze the content within 2D images, including tasks such as object detection, recognition, and classification. Although 2D vision cannot directly obtain depth information, it focuses on the analysis and comprehension of image content and acquires 2D phenotypic data of crops more rapidly. Building upon the exploration of lettuce 3D reconstruction, this study further extracted sample data on plant height and canopy width and fitted these to the corresponding fresh weight data to construct a 2D metric-based fresh weight estimation model. The R2 of this model's test dataset reached 0.8970, with RMSE and MAE values of 3.1206 and 2.4576, respectively. Compared to the test dataset of the fresh weight estimation model based on 3D reconstruction, the R2 for estimating fresh weight using lettuce plant height and canopy width decreased by 7.46%, while the RMSE and MAE also decreased by 7.12% and 2.60%, respectively. In comparison with the cross-validation results, the RMSE increased by 2.30%, possibly due to the limited sample size used to build the 2D metric-based model. In research on estimating the fresh weight of lettuce from 2D images, the sample size for training the model determines the model's credibility and accuracy. By rotating the acquired lettuce RGB images by 90, 180, and 270 degrees, horizontally and vertically flipping them, and adjusting their brightness to 80–120%, the training sample size could be expanded 26-fold [46]. Utilizing images of various lettuce varieties and different seasonal plant types in combination with deep neural networks to estimate growth indicators could achieve an R2 of up to 0.9277, enhancing the model's generalization capability and enabling classification.
Collecting image data of lettuce at 5, 8, 11, 14, 17, and 20 days after sowing to capture the plant types at different growth stages, and training on these images in conjunction with neural networks, could increase the training sample size while improving model accuracy [47]. In contrast, the sample size for the 2D metric-based fresh weight estimation model in this study was limited by the quantity of lettuce cultivated. Expanding the training sample size through image rotation and brightness adjustment and integrating neural networks might therefore further improve estimation accuracy. Compared to methods using the 3D reconstruction volume for estimation, although the model accuracy slightly decreased, the speed of data acquisition, operational efficiency, and the adaptability of the equipment to the production environment of PFALs were greatly improved, thereby reducing the cost of crop monitoring in PFALs.
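The augmentation scheme discussed above (90/180/270-degree rotations, horizontal and vertical flips, brightness scaling to 80–120%) can be sketched with plain NumPy arrays as stand-ins for RGB images. This is a hypothetical enumeration, not the cited pipeline: the 26-fold expansion in [46] follows its own combination of transforms, and some geometric variants below coincide (the square-symmetry group has only 8 distinct members), so the count here is illustrative.

```python
# Sketch of image augmentation: rotations, flips, and brightness scaling.
import numpy as np

def augment(img):
    """Yield augmented copies of an HxWx3 uint8 image (illustrative scheme)."""
    geometric = []
    for k in range(4):                      # 0, 90, 180, 270 degree rotations
        rot = np.rot90(img, k)
        geometric.append(rot)
        geometric.append(np.fliplr(rot))    # horizontal flip
        geometric.append(np.flipud(rot))    # vertical flip
    out = []
    for v in geometric:
        for b in (0.8, 1.0, 1.2):           # brightness 80%, 100%, 120%
            out.append(np.clip(v.astype(np.float32) * b, 0, 255).astype(np.uint8))
    return out

img = np.zeros((64, 48, 3), dtype=np.uint8)  # placeholder "lettuce image"
print(len(augment(img)))  # 12 geometric variants x 3 brightness levels = 36
```

Expanding a training set this way multiplies the number of samples without new cultivation, which is the main lever suggested above for the sample-limited 2D metric-based model.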

5. Conclusions

In this study, 3D reconstruction technology was employed to construct 3D models of lettuce plants. Utilizing lettuce at various growth stages as the training dataset, a fresh weight estimation model for lettuce was established, with lettuce of different plant types serving as the validation dataset. PLSR was applied for model construction, and K-fold cross-validation was used to assess the model. Concurrently, plant height and canopy width data extracted from the aforementioned experiments were used to estimate the fresh weight of lettuce.
Lettuce volume was highly correlated with fresh weight, with a linear regression R2 of 0.9902, indicating that estimating the fresh weight of lettuce based on its volume offers high accuracy and stability. The PLSR method proved highly feasible for constructing models to estimate the fresh weight of lettuce using both single and multiple factors. The lettuce fresh weight estimation model constructed using PLSR demonstrated good stability and reliability, with an R2 of 0.9693 for the test dataset and RMSE and MAE values of 3.3599 and 2.5232, respectively. Moreover, integrating data from lettuce at different growth stages resulted in a more accurate model with better generalization capability. FR light supplementation influenced the architecture of lettuce plants; a new estimation model established using data from lettuce under FR light supplementation achieved an R2 of 0.8879, with RMSE and MAE values of 4.3660 and 3.2299, respectively. Estimating the fresh weight of lettuce using plant height and canopy width data also demonstrated good stability and reliability, with an R2 of 0.8970 for the test dataset and RMSE and MAE values of 3.1206 and 2.4576, respectively. Compared to the method using 3D reconstruction volume for estimation, although the model accuracy slightly decreased, with a 2.30% increase in cross-validated RMSE, the speed of data acquisition, operational efficiency, and the adaptability of the equipment to the production environment of PFALs were significantly improved.
In summary, this study provided novel and efficient fresh weight estimation methods for the growth monitoring of lettuce in PFALs from both 3D and 2D perspectives. For intelligent production in PFALs, combining 3D reconstruction of plants with multi-spectral and hyper-spectral devices to estimate crop nutritional quality, thereby constructing a platform that monitors crop architecture while dynamically tracking nutritional quality, is a challenge to be addressed in the next steps.

Author Contributions

Conceptualization, J.J. and H.L.; Data curation, J.J.; Formal analysis, J.J.; Funding acquisition, H.L.; Methodology, J.J., M.Z., Y.Z., Q.C., Y.G., Y.Y., Z.W. and H.L.; Project administration, H.L.; Resources, H.L.; Software, J.J., M.Z. and Z.W.; Supervision, Q.C., Y.G., Y.Y. and H.L.; Validation, J.J. and Y.Z.; Writing—original draft, J.J.; Writing—review & editing, Y.H., X.L. and J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (2021YFD2000701).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kozai, T. Plant Factories with Artificial Lighting (PFALs): Benefits, Problems, and Challenges. In Smart Plant Factory; Kozai, T., Ed.; Springer: Singapore, 2018; pp. 15–29. ISBN 9789811310645. [Google Scholar]
  2. Kozai, T.; Niu, G.; Takagaki, M. (Eds.) Plant Factory: An Indoor Vertical Farming System for Efficient Quality Food Production, 2nd ed.; Academic Press: London, UK; Elsevier: San Diego, CA, USA, 2020; ISBN 978-0-12-816691-8. [Google Scholar]
  3. Chen, Y.-R.; Chao, K.; Kim, M.S. Machine Vision Technology for Agricultural Applications. Comput. Electron. Agric. 2002, 36, 173–191. [Google Scholar] [CrossRef]
  4. Tian, Z.; Ma, W.; Yang, Q.; Duan, F. Application Status and Challenges of Machine Vision in Plant Factory—A Review. Inf. Process. Agric. 2022, 9, 195–211. [Google Scholar] [CrossRef]
  5. Wang, J.; Zhang, Y.; Gu, R. Research Status and Prospects on Plant Canopy Structure Measurement Using Visual Sensors Based on Three-Dimensional Reconstruction. Agriculture 2020, 10, 462. [Google Scholar] [CrossRef]
  6. Cheng, L.; Chen, S.; Liu, X.; Xu, H.; Wu, Y.; Li, M.; Chen, Y. Registration of Laser Scanning Point Clouds: A Review. Sensors 2018, 18, 1641. [Google Scholar] [CrossRef] [PubMed]
  7. Gokturk, S.B.; Yalcin, H.; Bamji, C. A Time-Of-Flight Depth Sensor—System Description, Issues and Solutions. In Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop, Washington, DC, USA, 27 June–2 July 2004; IEEE: Piscataway, NJ, USA, 2004; p. 35. [Google Scholar]
  8. Hennad, A.; Cockett, P.; McLauchlan, L.; Mehrubeoglu, M. Characterization of Irregularly-Shaped Objects Using 3D Structured Light Scanning. In Proceedings of the 2019 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 5–7 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 600–605. [Google Scholar]
  9. Lin, H.; Gao, J.; Mei, Q.; He, Y.; Liu, J.; Wang, X. Adaptive Digital Fringe Projection Technique for High Dynamic Range Three-Dimensional Shape Measurement. Opt. Express 2016, 24, 7703. [Google Scholar] [CrossRef] [PubMed]
  10. Hu, Y.; Chen, Q.; Feng, S.; Zuo, C. Microscopic Fringe Projection Profilometry: A Review. Opt. Lasers Eng. 2020, 135, 106192. [Google Scholar] [CrossRef]
  11. Paulus, S. Measuring Crops in 3D: Using Geometry for Plant Phenotyping. Plant Methods 2019, 15, 103. [Google Scholar] [CrossRef]
  12. Teng, X.; Zhou, G.; Wu, Y.; Huang, C.; Dong, W.; Xu, S. Three-Dimensional Reconstruction Method of Rapeseed Plants in the Whole Growth Period Using RGB-D Camera. Sensors 2021, 21, 4628. [Google Scholar] [CrossRef] [PubMed]
  13. Vázquez-Arellano, M.; Paraforos, D.S.; Reiser, D.; Garrido-Izard, M.; Griepentrog, H.W. Determination of Stem Position and Height of Reconstructed Maize Plants Using a Time-of-Flight Camera. Comput. Electron. Agric. 2018, 154, 276–288. [Google Scholar] [CrossRef]
  14. Xie, W.; Wei, S.; Yang, D. Morphological Measurement for Carrot Based on Three-Dimensional Reconstruction with a ToF Sensor. Postharvest Biol. Technol. 2023, 197, 112216. [Google Scholar] [CrossRef]
  15. Guo, R. Improved 3D Point Cloud Segmentation for Accurate Phenotypic Analysis of Cabbage Plants Using Deep Learning and Clustering Algorithms. Comput. Electron. Agric. 2023, 211, 108014. [Google Scholar] [CrossRef]
  16. Moons, T. 3D Reconstruction from Multiple Images Part 1: Principles. Found. Trends® Comput. Graph. Vis. 2008, 4, 287–404. [Google Scholar] [CrossRef]
  17. Iglhaut, J.; Cabo, C.; Puliti, S.; Piermattei, L.; O’Connor, J.; Rosette, J. Structure from Motion Photogrammetry in Forestry: A Review. Curr. For. Rep. 2019, 5, 155–168. [Google Scholar] [CrossRef]
  18. Liu, R.; Yao, L.; Yan, L.; Li, H.; Liu, X.; Yang, Y. Research on Real Scene 3D Modelling Based on Multi-View. J. Phys. Conf. Ser. 2021, 1852, 022080. [Google Scholar] [CrossRef]
  19. Xiao, S.; Chai, H.; Shao, K.; Shen, M.; Wang, Q.; Wang, R.; Sui, Y.; Ma, Y. Image-Based Dynamic Quantification of Aboveground Structure of Sugar Beet in Field. Remote Sens. 2020, 12, 269. [Google Scholar] [CrossRef]
  20. Lu, Y.; Wang, Y.; Chen, Z.; Khan, A.; Salvaggio, C.; Lu, G. 3D Plant Root System Reconstruction Based on Fusion of Deep Structure-from-Motion and IMU. Multimed. Tools Appl. 2021, 80, 17315–17331. [Google Scholar] [CrossRef]
  21. Wang, L.; Zheng, L.; Wang, M.; Wang, Y.; Li, M. Kinect-Based 3D Reconstruction of Leaf Lettuce. In Proceedings of the 2020 ASABE Annual International Virtual Meeting, American Society of Agricultural and Biological Engineers, Omaha, NE, USA, 13–15 July 2020. [Google Scholar]
  22. Lou, M.; Lu, J.; Wang, L.; Jiang, H.; Zhou, M. Growth Parameter Acquisition and Geometric Point Cloud Completion of Lettuce. Front. Plant Sci. 2022, 13, 947690. [Google Scholar] [CrossRef] [PubMed]
  23. Petropoulou, A.S.; Van Marrewijk, B.; De Zwart, F.; Elings, A.; Bijlaard, M.; Van Daalen, T.; Jansen, G.; Hemming, S. Lettuce Production in Intelligent Greenhouses—3D Imaging and Computer Vision for Plant Spacing Decisions. Sensors 2023, 23, 2929. [Google Scholar] [CrossRef] [PubMed]
  24. Chong, J.E.; Harith, H.H. Performance of Structure-from-Motion Approach on Plant Phenotyping Using Images from Smartphone. IOP Conf. Ser. Earth Environ. Sci. 2022, 1038, 012031. [Google Scholar] [CrossRef]
  25. Wada, K.C.; Hayashi, A.; Lee, U.; Tanabata, T.; Isobe, S.; Itoh, H.; Maeda, H.; Fujisako, S.; Kochi, N. A Novel Method for Quantifying Plant Morphological Characteristics Using Normal Vectors and Local Curvature Data via 3D Modelling—A Case Study in Leaf Lettuce. Sensors 2023, 23, 6825. [Google Scholar] [CrossRef]
  26. Ju, J.; Zhang, S.; Hu, Y.; Zhang, M.; He, R.; Li, Y.; Liu, X.; Liu, H. Effects of Supplemental Red and Far-Red Light at Different Growth Stages on the Growth and Nutritional Properties of Lettuce. Agronomy 2023, 14, 55. [Google Scholar] [CrossRef]
  27. Geladi, P.; Kowalski, B.R. Partial Least-Squares Regression: A Tutorial. Anal. Chim. Acta 1986, 185, 1–17. [Google Scholar] [CrossRef]
  28. Fushiki, T. Estimation of Prediction Error by Using K-Fold Cross-Validation. Stat. Comput. 2011, 21, 137–146. [Google Scholar] [CrossRef]
  29. Ariesen-Verschuur, N.; Verdouw, C.; Tekinerdogan, B. Digital Twins in Greenhouse Horticulture: A Review. Comput. Electron. Agric. 2022, 199, 107183. [Google Scholar] [CrossRef]
  30. Kumar, J.; Pratap, A.; Kumar, S. (Eds.) Phenomics in Crop Plants: Trends, Options and Limitations; Springer: New Delhi, India, 2015; ISBN 978-81-322-2225-5. [Google Scholar]
  31. Wu, S.; Wen, W.; Gou, W.; Lu, X.; Zhang, W.; Zheng, C.; Xiang, Z.; Chen, L.; Guo, X. A Miniaturized Phenotyping Platform for Individual Plants Using Multi-View Stereo 3D Reconstruction. Front. Plant Sci. 2022, 13, 897746. [Google Scholar] [CrossRef] [PubMed]
  32. Mortensen, A.K.; Bender, A.; Whelan, B.; Barbour, M.M.; Sukkarieh, S.; Karstoft, H.; Gislum, R. Segmentation of Lettuce in Coloured 3D Point Clouds for Fresh Weight Estimation. Comput. Electron. Agric. 2018, 154, 373–381. [Google Scholar] [CrossRef]
  33. Ohsowski, B.M.; Dunfield, K.E.; Klironomos, J.N.; Hart, M.M. Improving Plant Biomass Estimation in the Field Using Partial Least Squares Regression and Ridge Regression. Botany 2016, 94, 501–508. [Google Scholar] [CrossRef]
  34. Zeng, W.; Xu, C.; Gang, Z.; Wu, J.; Huang, J. Estimation of Sunflower Seed Yield Using Partial Least Squares Regression and Artificial Neural Network Models. Pedosphere 2018, 28, 764–774. [Google Scholar] [CrossRef]
  35. Tan, J.; Hou, J.; Xu, W.; Zheng, H.; Gu, S.; Zhou, Y.; Qi, L.; Ma, R. PosNet: Estimating Lettuce Fresh Weight in Plant Factory Based on Oblique Image. Comput. Electron. Agric. 2023, 213, 108263. [Google Scholar] [CrossRef]
  36. Gang, M.-S.; Kim, H.-J.; Kim, D.-W. Estimation of Greenhouse Lettuce Growth Indices Based on a Two-Stage CNN Using RGB-D Images. Sensors 2022, 22, 5499. [Google Scholar] [CrossRef] [PubMed]
  37. Paturkar, A.; Sen Gupta, G.; Bailey, D. Making Use of 3D Models for Plant Physiognomic Analysis: A Review. Remote Sens. 2021, 13, 2232. [Google Scholar] [CrossRef]
  38. Tan, T.; Li, S.; Fan, Y.; Wang, Z.; Ali Raza, M.; Shafiq, I.; Wang, B.; Wu, X.; Yong, T.; Wang, X.; et al. Far-Red Light: A Regulator of Plant Morphology and Photosynthetic Capacity. Crop J. 2022, 10, 300–309. [Google Scholar] [CrossRef]
  39. Legendre, R.; Van Iersel, M.W. Supplemental Far-Red Light Stimulates Lettuce Growth: Disentangling Morphological and Physiological Effects. Plants 2021, 10, 166. [Google Scholar] [CrossRef]
  40. Xie, Y.; Liu, Y.; Wang, H.; Ma, X.; Wang, B.; Wu, G.; Wang, H. Phytochrome-Interacting Factors Directly Suppress MIR156 Expression to Enhance Shade-Avoidance Syndrome in Arabidopsis. Nat. Commun. 2017, 8, 348. [Google Scholar] [CrossRef] [PubMed]
  41. Lee, M.-J.; Park, S.-Y.; Oh, M.-M. Growth and Cell Division of Lettuce Plants under Various Ratios of Red to Far-Red Light-Emitting Diodes. Hortic. Environ. Biotechnol. 2015, 56, 186–194. [Google Scholar] [CrossRef]
  42. Li, Y.; Wen, W.; Guo, X.; Yu, Z.; Gu, S.; Yan, H.; Zhao, C. High-Throughput Phenotyping Analysis of Maize at the Seedling Stage Using End-to-End Segmentation Network. PLoS ONE 2021, 16, e0241528. [Google Scholar] [CrossRef] [PubMed]
  43. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep Learning in Agriculture: A Survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef]
  44. Yu, S.; Ma, J.; Wang, W. Deep Learning for Denoising. Geophysics 2019, 84, 1ND-Z34. [Google Scholar] [CrossRef]
  45. Zhang, Y. Multi-Phenotypic Parameters Extraction and Biomass Estimation for Lettuce Based on Point Clouds. Measurement 2022, 204, 112094. [Google Scholar] [CrossRef]
  46. Xu, D.; Chen, J.; Li, B.; Ma, J. Improving Lettuce Fresh Weight Estimation Accuracy through RGB-D Fusion. Agronomy 2023, 13, 2617. [Google Scholar] [CrossRef]
  47. Zhang, L.; Xu, Z.; Xu, D.; Ma, J.; Chen, Y.; Fu, Z. Growth Monitoring of Greenhouse Lettuce Based on a Convolutional Neural Network. Hortic. Res. 2020, 7, 124. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The process of (a) image acquisition and (b) 3D reconstruction.
Figure 2. The schematic diagram of the specific setup of the experiment. D21, 21 days after sowing; D28, 28 days after sowing; D35, 35 days after sowing.
Figure 3. (a) Spectral composition of the light-emitting diode (LED) treatments measured at the top of the plant canopy and (b) the process of cultivation.
Figure 4. The schematic diagram of the 2D metric-based biomass estimation model.
Figure 5. The lettuce plants at 3 different growth stages: D21, 21 days after sowing; D28, 28 days after sowing; D35, 35 days after sowing.
Figure 6. The (a) dry weight, (b) fresh weight, (c) leaf area, and (d) number of leaves of lettuce at different growth stages. D21, 21 days after sowing; D28, 28 days after sowing; D35, 35 days after sowing. Different letters indicate significant differences among growth stages according to Duncan’s multiple-range test (p ≤ 0.05).
Figure 7. (a) The linear relationship between overhead projected area and fresh weight of lettuce. (b) The linear relationship between actual volume and fresh weight of lettuce.
Figure 8. (a) The linear relationship between 3D reconstruction volume and actual volume of lettuce. (b) The linear relationship between 3D reconstruction volume and fresh weight of lettuce.
Figure 9. Performance of the model on (a) the training set, (b) test set, (c) D21 datasets, (d) D28 datasets, and (e) D35 datasets. D21, 21 days after sowing; D28, 28 days after sowing; D35, 35 days after sowing.
Figure 10. The 360° viewing angle of lettuce under different light treatments. W, entire growth stage used white light; A, entire growth stage used basal light with supplemental red and far-red light; FRR, the first 10 days of white light with far-red light and another 10 days with red light; RFR, the first 10 days of white light with red light and another 10 days with far-red light.
Figure 11. Performance of the model on (a) validation set, (b) treatment W datasets, (c) treatment A datasets, (d) treatment FRR datasets, and (e) treatment RFR datasets. W, entire growth stage used white light; A, entire growth stage used basal light with supplemental red and far-red light; FRR, the first 10 days of white light with far-red light and another 10 days with red light; RFR, the first 10 days of white light with red light and another 10 days with far-red light.
Figure 12. Performance of Model 1 on (a) the training set and (b) the test set.
Figure 13. Performance of Model 2 on (a) the training set and (b) the test set.
Figure 14. Performance of the model based on 2D metrics on (a) the training set and (b) the test set.
Table 1. The ratios of R to FR and R to B in different spectral composition and light treatments.
| Spectral Composition / Light Treatment | R:FR (Spectral Composition) | R:FR (First 10 Days) | R:FR (Last 10 Days) | R:B (Spectral Composition) | R:B (First 10 Days) | R:B (Last 10 Days) |
|---|---|---|---|---|---|---|
| W | 12.36 | 12.36 | 12.36 | 0.93 | 0.93 | 0.93 |
| W + R | 14.22 | - | - | 1.47 | - | - |
| W + FR | 1.44 | - | - | 0.93 | - | - |
| W + R + FR | 2.17 | - | - | 1.47 | - | - |
| A | - | 2.17 | 2.17 | - | 1.47 | 1.47 |
| FRR | - | 1.44 | 14.22 | - | 0.93 | 1.47 |
| RFR | - | 14.22 | 1.44 | - | 1.47 | 0.93 |
W, white; R, red; FR, far-red; B, blue; R:FR, the ratio of red to far-red; R:B, the ratio of red to blue; W + R, combination of white and red; W + FR, combination of white and far-red; W + R + FR, combination of white, red, and far-red; A, entire growth stage used basal light with supplemental red and far-red light; FRR, the first 10 days of white light with far-red light and another 10 days with red light; RFR, the first 10 days of white light with red light and another 10 days with far-red light.
Table 2. Ten-fold cross-validation results of lettuce fresh weight estimation model.
| Iteration | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Mean |
|---|---|---|---|---|---|---|---|---|---|---|---|
| RMSE (g) | 3.9469 | 2.0599 | 3.0317 | 2.7317 | 3.4318 | 4.5367 | 3.0517 | 3.7983 | 3.6038 | 2.1027 | 3.2295 |
RMSE, root mean square error.
Table 3. Ten-fold cross-validation results of Model 1 and Model 2.
| Iteration | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Mean |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Model 1 RMSE (g) | 6.3165 | 7.5457 | 6.9247 | 5.7077 | 7.7619 | 7.2109 | 6.3770 | 6.4998 | 6.4326 | 7.1671 | 6.7944 |
| Model 2 RMSE (g) | 4.8308 | 4.9606 | 5.4462 | 4.7951 | 3.5798 | 6.6737 | 4.5376 | 4.0055 | 7.5108 | 6.7830 | 5.3123 |
RMSE, root mean square error.
Table 4. Ten-fold cross-validation results of 2D metric-based fresh weight estimation model.
| Iteration | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Mean |
|---|---|---|---|---|---|---|---|---|---|---|---|
| RMSE (g) | 3.7770 | 3.0540 | 2.9767 | 2.5218 | 4.0552 | 3.2874 | 2.3303 | 4.8870 | 2.5086 | 3.6407 | 3.3039 |
RMSE, root mean square error.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Ju, J.; Zhang, M.; Zhang, Y.; Chen, Q.; Gao, Y.; Yu, Y.; Wu, Z.; Hu, Y.; Liu, X.; Song, J.; et al. Development of Lettuce Growth Monitoring Model Based on Three-Dimensional Reconstruction Technology. Agronomy 2025, 15, 29. https://doi.org/10.3390/agronomy15010029


