1. Introduction
Driven by a complex genetic mechanism, plant height is an important agronomic and phenotypic characteristic that varies according to the agricultural crop [
1]. It is typically measured manually using a ruler, which limits the sample size in the field and makes the process labor-intensive, low-throughput, and prone to human error [
2]. Therefore, the employment of techniques and technologies aimed at indirectly measuring variables in the field, including plant height, is encouraged whenever they can capture the true characteristics of the plants. Additionally, plant height enables important estimations such as leaf area index [
3], biomass [
4], and productivity [
5], highlighting the need for this variable to be quantified with caution and adequately represent what is observed in the field.
Alternative methods have been developed by employing techniques derived from remote sensing, which are particularly valuable for generating highly reliable products that assist in the study and monitoring of agricultural crops. Remote sensing techniques allow for the collection of information from the Earth’s surface without direct contact between the sensor and the imaged object [
6], enabling the study of important characteristics of agricultural crops. They also make it possible to estimate the same variable with different sensor systems and to identify which one best approximates the real condition of the plant in the field.
Three-dimensional point clouds of a digitized field can be obtained with sensors carried by Remotely Piloted Aircraft (RPA). RPA are unmanned aircraft remotely controlled through interfaces such as computers, simulators, digital devices, or remote controllers, and can be programmed to execute semi-automated flight plans while allowing operator intervention at any time [
7]. The airborne sensors can be conventional high-resolution RGB (Red, Green, Blue) type, with images collected as sequences of overlapping images that, when processed using Structure-from-Motion (SfM) algorithms, generate geometrically accurate point cloud structures [
8]. The LiDAR (Light Detection and Ranging) sensor, on the other hand, actively scans the terrain by emitting high-frequency laser pulses towards the object, recording the reflected responses and transmission time, and producing a dense point cloud of the sampled terrain [
9]. These sensors can, therefore, produce structural parameters of crops, such as height, for example. It is noteworthy that the SfM algorithm allows high-resolution RGB images to be used to produce point clouds with quality similar to those obtained by LiDAR, which is important for the precise study of crop structure [
10].
Studies employing point clouds obtained by RGB and LiDAR sensors are already described in the literature, aiming to compare the effectiveness of these two methods in measuring plant characteristics. Malambo et al. [
11] used RGB sensors on RPA and terrestrial LiDAR for multitemporal crop modeling and developed and evaluated a methodology to estimate plant height data from generated point clouds, obtaining Root Mean Square Errors (RMSE) of 0.01 and 0.12 m, respectively, compared to ground-truth data. Holman et al. [
12] developed and evaluated a method to rapidly derive crop height and growth rate from multitemporal digital surface models obtained by RGB and LiDAR sensors, with an RMSE of 0.03 m when compared to ground truth. Ziliani et al. [
13] aimed to evaluate the capability of point clouds derived from RGB and LiDAR sensors to capture soil surface variability and spatially variable crop height, observing a strong correlation between the heights derived from the employed sensors. Shendryk et al. [
14] estimated height and biomass using an RPA equipped with LiDAR and multispectral sensors, demonstrating similarity between the prediction results. Thus, all these results show a potential path to reduce the time spent on manual field measurement and expand the use of indirect measurements, as also evidenced in other studies described by Bareth et al. [
15], Wallace et al. [
16], Li et al. [
17], and Guerra-Hernández et al. [
18].
The acquisition of plant height data through products derived from remote sensing enables the indirect estimation of other relevant variables, such as soil penetration resistance. Soil compaction results in the formation of dense layers that reduce porosity and hinder root development, limiting the absorption of water and nutrients by plants. This condition compromises plant growth, especially in perennial crops such as coffee, directly impacting their vigor and, consequently, the height observed in the field [
19]. Thus, plant height can be used as an indirect indicator of soil penetration resistance, allowing the occurrence of compaction to be inferred from the spatial variability of vegetative development [
20]. This approach is especially useful in large areas, where direct measurement of soil penetration resistance may be unfeasible, and contributes to the monitoring and adequate management of soil compaction in agricultural systems.
The integration of technologies for measuring variables in the field should be applied especially to agricultural crops of importance in the global economic balance, enabling more accurate targeting of products, increased productivity gains, and greater profitability for agricultural areas. In this scenario, coffee cultivation holds a relevant position, contributing to the global economic balance with total production estimates for the 2023/2024 coffee year of 178 million 60 kg bags, with Brazil being the world’s leader in coffee production and export [
21]. Internationally, coffee cultivation generates significant revenue among producing and exporting countries, with important earnings for farmers and agricultural workers involved in the activity. In this context, this study aims to estimate plant height and soil penetration resistance in coffee plantations through a dense point cloud-based three-dimensional digital model using two sensors, RGB and LiDAR, airborne by RPA, and to evaluate the accuracy of the proposed estimates.
2. Materials and Methods
2.1. Study Area
Located in the municipality of Santo Antônio do Amparo, Minas Gerais, Brazil, the study area consists of a coffee plantation (
Coffea arabica L.) established in December 2016, covering an area of 2.10 hectares. The crop was planted with a spacing of 3.50 m between rows and 0.50 m between plants, using the Mundo Novo Vermelho (IAC 379/19) cultivar, according to the National Cultivar Registry (Registro Nacional de Cultivares—RNC) of the Ministry of Agriculture, Livestock, and Supply (Ministério da Agricultura, Pecuária e Abastecimento—Mapa) (
Figure 1).
The area has an average altitude of 1022 m and is part of the Atlantic Forest biome, with soil classified as dystrophic Red–Yellow Latosol [
22]. The climate of the study area is characterized as humid and temperate according to Köppen's classification (Cwa), with dry winters and rainy summers, and average annual precipitation and temperature of 1530 mm and 19.40 °C, respectively [
23].
The experimental design in this study area involves 18 systematically distributed points with variations in plant height (9 points for tall plants and 9 points for short plants), pre-defined based on aerial survey by RPA, as described in the study by Bento et al. [
19].
The experimental area is in a region with predominantly flat terrain, a characteristic that favors the uniformity of topographic conditions throughout the evaluated area. This homogeneity is particularly relevant in studies involving the generation of digital models derived from remote sensing imagery, as it minimizes potential distortions associated with significant elevation changes. The flat topography thus contributes to greater consistency in the comparative analysis of the sensors used.
2.2. Data Acquisition
The imaging was carried out using the RPA Matrice 300 RTK (DJI, Shenzhen, China). According to the manufacturer’s specifications, this equipment has a flight time of up to 55 min, advanced artificial intelligence capabilities, and a six-directional detection and positioning system. It also features an optimized transmission system with real-time automatic frequency switching between the 2.40 and 5.80 GHz bands, resulting in greater flight stability in high-interference environments, such as transmission lines.
The Matrice 300 RTK was equipped with the Zenmuse L1 (DJI, Shenzhen, China) sensor for capturing RGB and LiDAR data. This sensor integrates a Livox LiDAR module with a 903 nm wavelength, a high-precision Inertial Measurement Unit (IMU), and a 20 MP RGB camera with a 1-inch CMOS sensor, an 8.80 mm focal length, and a mechanical shutter, mounted on a 3-axis stabilized gimbal (
Figure 2A). Data collection was performed using the integrated GNSS RTK L1 L2 system (China Resources Building, Chongqing, China) on the RPA and corrected using the DJI DRTK-2 (DJI, Shenzhen, China) base station (
Figure 2B), with the base station position adjusted using IBGE's Precise Point Positioning (PPP) service. The altimetry was referenced to IBGE's hgeoHNOR2020 model, whose vertical datum is mean sea level at the Imbituba-SC tide gauge.
The aerial survey was conducted using the DJI Pilot 2 (version 7.1) application to define the semi-automated flight plan. The flight parameters considered include the aircraft’s takeoff and landing location (home point), wind direction, topographic conditions of the area, and other information described in
Table 1.
2.3. Data Processing
The RGB sensor data was processed using Pix4D Mapper software (version 4.5.6) to generate sparse and dense point clouds based on the SfM technique. Initially, image alignment was performed, an automated phototriangulation process where internal camera orientation parameters and external orientation parameters of the aerial photographs are determined, resulting in a sparse point cloud formed by homologous point identification. Subsequently, the point cloud obtained in the previous processing stage was densified, increasing the number of points in the cloud and reducing empty spaces. Finally, the final RGB orthomosaic was generated, a process in which orthorectification of the images was performed through orthogonal reprojection and with a constant scale, eliminating or minimizing distortions caused by the sensor system and the surface. Additionally, the Digital Surface Model (DSM) and Digital Terrain Model (DTM) were produced.
The LiDAR sensor data was processed using DJI Terra software (version V4.4.6). The point clouds were directly georeferenced using the RTK sensor of the Matrice 300, followed by automatic and subsequent manual classification to define terrain representative points. From these, the DSM and DTM were generated.
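For readers who want to reproduce the general idea outside DJI Terra or Pix4D, the sketch below shows, in simplified form, how a classified point cloud can be gridded into a DSM (highest return per cell) and a DTM (ground-classified returns only). It is a minimal NumPy illustration with synthetic points and an assumed 0.90 m cell size, not the proprietary workflow used in this study.

```python
import numpy as np

def grid_points(points, bounds, cell=0.90, keep_max=True):
    """Rasterize (x, y, z) points: keep the highest z per cell for a DSM-like
    grid or the lowest for a DTM-like grid. Empty cells remain NaN."""
    xmin, ymin, xmax, ymax = bounds
    ncols = int(np.ceil((xmax - xmin) / cell))
    nrows = int(np.ceil((ymax - ymin) / cell))
    grid = np.full((nrows, ncols), np.nan)
    cols = np.clip(((points[:, 0] - xmin) / cell).astype(int), 0, ncols - 1)
    rows = np.clip(((points[:, 1] - ymin) / cell).astype(int), 0, nrows - 1)
    for r, c, z in zip(rows, cols, points[:, 2]):
        if np.isnan(grid[r, c]) or (z > grid[r, c]) == keep_max:
            grid[r, c] = z
    return grid

# Synthetic stand-in for a classified cloud: columns are x, y, z, class
# (class 2 = ground, following the ASPRS LAS convention).
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 30, 2000), rng.uniform(0, 30, 2000),
                       rng.uniform(1020, 1025, 2000), rng.integers(2, 6, 2000)])
bounds = (0.0, 0.0, 30.0, 30.0)
dsm = grid_points(pts, bounds)                                   # all returns -> surface
dtm = grid_points(pts[pts[:, 3] == 2], bounds, keep_max=False)   # ground returns -> terrain
```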
2.4. Canopy Height Model
The Canopy Height Model (CHM) was determined from the two analyzed products (RGB and LiDAR), providing coffee plant height information for the field. The CHM expresses the height of objects above the terrain, describing their continuous horizontal distribution together with the vertical variation in height given by the three-dimensional structure. For the calculation, Equation (1) was used, with the DTM and the DSM as input data. The DTM represents only the elevation of the bare terrain, while the DSM additionally includes the elevation of features above the ground, such as plants, trees, and buildings. The spatial resolution of the generated products is 0.15 m (RGB sensor) and 0.90 m (LiDAR sensor).
CHM = DSM − DTM    (1)

where CHM is the Canopy Height Model (m), DSM is the Digital Surface Model (m), and DTM is the Digital Terrain Model (m).

This analysis was performed using QGIS software (version 3.6.2) (QGIS Development Team, Open Source Geospatial Foundation) utilizing the Map Algebra tool. Additionally, to demonstrate the differences between the CHMs obtained from the two sensors, a cross-sectional profile was drawn across the sample area, and plant height data were extracted using the 3D Analyst toolbar in QGIS (version 3.6.2).
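As a complement to the QGIS Map Algebra step, the following minimal sketch applies Equation (1) cell by cell with NumPy. The tiny DSM/DTM grids are illustrative placeholders; in practice the two rasters must share the same extent and cell size (which here requires resampling, since the RGB and LiDAR products were generated at 0.15 m and 0.90 m, respectively).

```python
import numpy as np

# Illustrative, already-aligned grids (elevations in m); the real inputs are the
# sensor-derived DSM and DTM rasters resampled to a common grid.
dsm = np.array([[1023.8, 1024.1],
                [1023.2, 1024.4]])
dtm = np.array([[1021.5, 1021.6],
                [1021.4, 1021.5]])

# Equation (1): CHM = DSM - DTM, applied cell by cell.
chm = np.clip(dsm - dtm, 0.0, None)   # clip small negative residuals (noise) to zero
print(chm)                            # plant/object heights in m
```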
2.5. Soil Penetration Resistance
To estimate the soil penetration resistance in the coffee plantation based on the plant height data obtained from the RGB/SfM and LiDAR point clouds of the two study sensors, the method described by Bento et al. [
19] was applied, using Equation (2) in QGIS software (version 3.6.2).
where Y is the soil penetration resistance (MPa), and X is the canopy height (m).
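Since the coefficients of Equation (2) belong to the model fitted by Bento et al. [19] and are not reproduced here, the sketch below only illustrates the mechanics of applying a height-to-resistance regression to the CHM raster; the linear form and the coefficient values are placeholders, not the published model.

```python
import numpy as np

# Hypothetical CHM values (m); in the study, this is the CHM raster from Section 2.4.
chm = np.array([[2.3, 2.6],
                [1.8, 2.9]])

# Placeholder linear form Y = a*X + b; the actual functional form and coefficients
# are those of Bento et al. [19] and are not reproduced here.
A, B = -4.5, 17.0   # illustrative values chosen only to give a plausible MPa range

spr = A * chm + B   # estimated soil penetration resistance (MPa), cell by cell
print(spr)
```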
2.6. Accuracy Assessment
In this study, plant height was initially defined based on the dense point cloud obtained by LiDAR, which served as the reference. The plant height derived from the RGB sensor using the SfM algorithm was then compared to the reference LiDAR plant height. The correlation between the plant height values estimated by LiDAR and the RGB sensor was examined, and a linear model describing the correlation was established. Statistical measures including the Coefficient of Determination (R2), Correlation Coefficient (R), and Root Mean Square Error (RMSE) were employed to assess the accuracy of the study products for plant height estimation.
The R2 value was used to assess the agreement between the estimated values, the R value to measure the intensity and direction of linear relationships, and the RMSE to quantify the deviation between the estimated values. A higher R2 and R value indicate a better fit of the data, while a lower RMSE indicates greater estimation accuracy.
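These agreement statistics can be computed directly from paired height samples; the sketch below is a Python equivalent of the analyses that the study ran in R, using Pearson's correlation, its square as the R2 of the simple linear fit, and the usual RMSE formula. The height values are hypothetical, for illustration only.

```python
import numpy as np

def agreement_metrics(reference, estimate):
    """R, R2 and RMSE between reference (LiDAR) and estimated (RGB/SfM) values."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    r = np.corrcoef(reference, estimate)[0, 1]            # Pearson correlation
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))  # root mean square error
    return {"R": r, "R2": r ** 2, "RMSE": rmse}

# Hypothetical plant heights (m) at a few sample points, for illustration only.
h_lidar = [2.9, 3.1, 2.7, 1.8, 2.0, 1.6]
h_rgb = [2.7, 3.0, 2.5, 1.7, 1.9, 1.6]
print(agreement_metrics(h_lidar, h_rgb))
```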
Subsequently, statistical differences between the estimated plant heights from the RGB and LiDAR sensors were analyzed. Initially, data normality was assessed using the Anderson–Darling statistical test. After confirming data normality, the Tukey multiple comparison test at a 5% probability level was applied.
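A Python analogue of these tests (the study itself used R) is sketched below with hypothetical sensor heights: SciPy's Anderson–Darling test checks normality for each sensor, and statsmodels' Tukey HSD compares the two group means at the 5% level (with only two groups this reduces to a single pairwise comparison).

```python
import numpy as np
from scipy.stats import anderson
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-point heights (m) from the two sensors, for illustration only.
h_lidar = np.array([2.9, 3.1, 2.7, 1.8, 2.0, 1.6, 2.4, 2.2, 3.0])
h_rgb = np.array([2.7, 3.0, 2.5, 1.7, 1.9, 1.6, 2.3, 2.1, 2.8])

# Anderson-Darling normality check for each sensor.
for name, sample in (("LiDAR", h_lidar), ("RGB", h_rgb)):
    result = anderson(sample, dist="norm")
    print(name, "A2 =", round(result.statistic, 3),
          "critical values:", result.critical_values)

# Tukey multiple comparison of means at a 5% probability level.
values = np.concatenate([h_lidar, h_rgb])
groups = ["LiDAR"] * len(h_lidar) + ["RGB"] * len(h_rgb)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```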
Finally, the quality of the soil compaction estimate via penetration resistance was verified for sample points in high-plant zones and low-plant zones, observing the agreement between the values estimated by the two study sensors based on the statistical analyses of R2, R, RMSE, and the Tukey multiple comparison test at a 5% probability level, with prior normality analysis using the Anderson–Darling statistical test.
All statistical analyses were conducted using R software (version 4.4.2) (R Core Team, R Foundation for Statistical Computing, Vienna, Austria), with the processes described in the flowchart in
Figure 3.
In this study, LiDAR data were used as a reference for comparing plant height and resistance to soil penetration estimates derived from RGB imagery. The use of LiDAR as a reference is supported by its extensive validation and established reliability in scientific literature for accurately characterizing vegetation structure, particularly canopy height estimation. Thus, the methodological approach focused on a comparative analysis between remote sensing sensors, emphasizing the consistency and relative performance of the generated estimates. The statistical analyses performed, including the Correlation Coefficient (R), Coefficient of Determination (R2), and Root Mean Square Error (RMSE), yielded robust and coherent results consistent with the expected behavior between the evaluated sensors, supporting the validity of the adopted approach.
3. Results
The descriptive statistics of the coffee plant height variable obtained by the two study sensors (RGB and LiDAR) are described in
Table 2, with proximity observed between the obtained values.
The Digital Surface Model (DSM) and Digital Terrain Model (DTM) generated from the RGB and LiDAR sensors are presented in
Figure 4. A visual comparison of these products reveals slight differences in the representation of both terrain and vegetation surfaces, highlighting the distinct acquisition and processing characteristics of each sensor.
The CHMs for the two study sensors are presented in
Figure 5. The sampling points were based on the definition presented by Bento et al. [
19], considering different height zones of the plants, a fact also observed in the CHMs generated in this study. High-plant zones are highlighted with height values ranging from 2.50 to 3.50 m, and low-plant zones with values ranging from 1.50 to 2.50 m in height. The cross-sectional profile with plant height information obtained by the comparative CHM for the two study sensors is presented in
Figure 6, also showing a close resemblance between the obtained results.
Figure 7, in turn, presents the scatterplot of the height data for the two study sensors, together with the regression equation, a Coefficient of Determination (R2) of 0.72, a Correlation Coefficient (R) of 0.85, and a Root Mean Square Error (RMSE) of 0.44 m. Overall, the model provided satisfactory estimates for the analyzed metrics, indicating good correspondence between the sensors: a large proportion of the height variability measured by one sensor is explained by the other.
Based on the exploratory data analysis (
Table 2), close agreement was observed between the coffee plant height data obtained from the two sensors under study. The mean comparison for the plant height variable, shown as a boxplot in Figure 8, confirms that no statistically significant differences were detected by the Tukey test at a 5% probability level.
The spatial distribution of soil penetration resistance (MPa), estimated indirectly from plant height, is presented in
Figure 9 for the two study sensors (
Figure 9A) RGB and (
Figure 9B) LiDAR. The values of the statistical metrics for accuracy assessment between the two study sensors were 0.87 for R2, 0.83 for R, and 0.14 MPa for RMSE (
Figure 9), with no significant statistical differences determined by the Tukey test at a 5% probability level (
Figure 10).
4. Discussion
This study provided an assessment of measuring the plant height variable obtained by an RGB sensor with processing via SfM compared to measurement by a LiDAR sensor. The CHMs of coffee plants presented in
Figure 5, obtained based on the DSM and DTM presented in
Figure 4, as well as the cross-sectional profile described in
Figure 6, depict similar distribution patterns and indicate good representativeness of the sampled coffee plantation with consistent levels of accuracy for both study sensors. It is important to note that the quality of the Canopy Height Model (CHM) is directly influenced by the accuracy of both the DSM and DTM, underscoring the relevance of this comparative analysis between sensors. In this study, RGB-derived data tend to show reduced accuracy in depicting terrain in densely vegetated areas, primarily due to the limitations of the Structure-from-Motion (SfM) reconstruction method, which relies on ground visibility and may be affected by canopy occlusion. In contrast, LiDAR data exhibits a greater ability to penetrate vegetation cover, resulting in more accurate and detailed terrain models. Despite these variations, the spatial distribution revealed by the models is globally similar, and the observed differences did not compromise the overall accuracy of the derived products, such as the canopy height models in the evaluated area.
The results demonstrate that the data acquired by the RGB and LiDAR sensors are fully capable of recreating accurate 3D models and, from them, deriving coffee plant heights. Mapping plant height in this study enabled observation of the spatial variability in the study area, identifying low-plant zones associated with soil compaction and high-plant zones not affected by compaction, as described by Bento et al. [
19].
In terms of evaluation, the accuracy metrics for the study sensors show good agreement (R2 = 0.72, R = 0.85, and RMSE = 0.44 m). It is worth noting that LiDAR scanning and RGB photogrammetry can be performed simultaneously from the same RPA, allowing both approaches to be integrated when capturing and generating datasets for reconstructing the structure of agricultural areas. Nevertheless, plant height estimation with an RGB sensor can also reach a high level of precision while offering time and cost advantages over estimation with a LiDAR sensor.
It is important to note, however, that the principles of estimating plant height with an RGB sensor via SfM and with a LiDAR sensor are distinct. The SfM algorithm relies on multiple viewing angles to reconstruct the dense point cloud, while the LiDAR sensor depends on a single directional scan to construct the dense point cloud; thus, the latter method exhibits greater canopy penetration [
24]. In this study, however, despite the different reconstruction methods, the point cloud generated by the SfM algorithm compared to the point cloud generated by the LiDAR sensor proved adequate for accurately representing the agricultural surface.
It is worth noting that a lower fidelity of SfM point clouds from RGB sensors is typically observed because this sensor does not generate complete 3D datasets like the LiDAR sensor, which can store more information per geographic location [
25]. Therefore, the accuracy of the RGB product is strongly related to the density of the point cloud generated by point matching with the SfM algorithm, in some cases requiring a greater number of images to improve the quality of the point cloud and achieve high-quality data processing [
26].
Throughout the crop cycle, canopy overlap and closure may occur, particularly in cultivars with high vegetative vigor. This densification significantly reduces ground visibility in images acquired by optical sensors, negatively affecting the accuracy of the Digital Elevation Model (DEM) generated using Structure-from-Motion (SfM) techniques, which rely on the identification of ground points for three-dimensional reconstruction [
27]. However, this factor did not influence the results of the present study, as evidenced by
Figure 6, which shows the similarity between the transverse profiles obtained by the two sensors used in generating the canopy height model.
RGB sensors, in turn, offer important advantages such as high spatial resolution, relatively low cost, and ease of operation, and they are widely used in crop phenotyping studies and in monitoring morphological characteristics of vegetation. Even under partially closed canopies, RGB imagery can be successfully employed to extract structural information from the upper parts of plants and to perform complementary analyses, such as soil cover indices and vegetative vigor, through photointerpretation and surface-level 3D reconstruction [
28]. Therefore, RGB sensors remain a useful and feasible tool, particularly when used in conjunction with correction or calibration methodologies, or in the absence of active sensors such as LiDAR.
Furthermore, environmental factors and structural characteristics of the canopy can significantly influence and limit the performance of sensors based on RGB imagery and Structure-from-Motion (SfM) reconstruction, particularly when compared to LiDAR. Variations in light intensity, the presence of shadows, and adverse weather conditions, such as wind during image acquisition, can negatively impact the quality of the three-dimensional reconstruction generated by SfM, leading to distortions in plant height estimations [
29]. As highlighted earlier, in areas with dense canopy cover, overlapping foliage can hinder the generation of accurate surface models from RGB data, as the method strongly depends on surface visibility for reliable stereoscopic reconstruction [
30]. In contrast, LiDAR tends to be more robust under such conditions, as it can partially penetrate the vegetation canopy and provide more accurate information on vertical vegetation structure, even under low light or in closed-canopy environments [
31]. Therefore, sensor selection should consider both the specific characteristics of the study area and the environmental conditions at the time of data acquisition to ensure the accuracy and consistency of the resulting estimates. Some studies mention sources of SfM error in crop canopies, such as Fawcett et al. [
32], Slade et al. [
30], and Magar et al. [
33].
Upon analysis of the generated results, a slight underestimation of the plant height values estimated by the RGB sensor compared to those estimated by the LiDAR sensor was observed (
Table 2), resulting in an increase in RMSE but not affecting the values of R and R2 (
Figure 7). This is likely due to poor image geometry, which leads to terrain occlusion by the plant canopy from most viewing angles, a methodological limitation of the reconstruction algorithm itself. In addition, the spatial resolution of the images, variations in illumination, and leaf movement caused by wind during acquisition may introduce additional errors into the reconstruction, contributing to the slight underestimation observed. However, this underestimation was not sufficient to produce statistically significant differences in coffee plant height estimation between the two sensors (
Figure 8). Similar conclusions have been reported in studies by Xie et al. [
34], Malambo et al. [
11], Ziliani et al. [
13], Bareth et al. [
15], and Holman et al. [
12].
Based on the soil penetration resistance estimate described in
Figure 9, the values ranged from 1.28 to 10.23 MPa for the RGB sensor and from 1.28 to 10.10 MPa for the LiDAR sensor, with noticeable proximity between the values estimated by the two study sensors. The lower penetration resistance values estimated correspond to the high-plant zone, i.e., plants less subject to soil compaction, while the higher values correspond to the low-plant zone, i.e., plants more subject to soil compaction. As reported in the literature, for coffee cultivation, a perennial crop not subjected to annual soil tillage, the tolerated values of penetration resistance are up to 4 MPa, mainly due to pore continuity and aggregate stability [
35]. Additionally, the performance metrics show good agreement between the soil penetration resistance estimates from the two study sensors (R2 = 0.87, R = 0.83, and RMSE = 0.14 MPa), with no statistically significant differences at a 5% probability level (
Figure 10).
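As a simple illustration of how such a map can be read against the approximately 4 MPa limit cited above, the sketch below flags raster cells whose estimated resistance exceeds the threshold; the values are hypothetical placeholders for the Equation (2) output.

```python
import numpy as np

# Hypothetical penetration-resistance raster (MPa), standing in for the Equation (2) output.
spr = np.array([[1.4, 3.2, 9.8],
                [2.1, 4.6, np.nan]])

valid = ~np.isnan(spr)
compacted = spr[valid] > 4.0   # cells above the ~4 MPa limit reported for coffee
print(f"{compacted.mean():.0%} of valid cells exceed 4 MPa")
```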
Indirectly estimating soil penetration resistance based on plant height data is essential for understanding how such effects can influence plant development in the field. Compacted soils typically exhibit low productivity, reduced water infiltration, decreased soil aeration, increased root penetration resistance, lower plant survival rates, and, consequently, economic losses in agricultural crop profitability [
36]. Ramos et al. [
37] demonstrated that high levels of soil compaction (above 80%) adversely affect the development of coffee plants by limiting growth and altering physiological, morphological, and root anatomical characteristics. Similarly, Szatanik-Kloc et al. [
38] reported that elevated soil compaction significantly impairs plant development. Shaheb et al. [
39] identified increased soil penetration resistance and the resulting rise in soil bulk density, leading to reduced porosity and impaired hydraulic properties, as potential causes of growth limitation in compacted soils. Costa and Coutinho [
40] also observed reductions in key growth parameters, including plant height, biomass, shoot and root density, and root length, as a consequence of soil compaction. These findings underscore the critical impact of soil compaction on coffee plant development and height. However, despite these insights, the indirect estimation of such variables, particularly using plant height, remains scarcely addressed in the literature, thus emphasizing the relevance and contribution of the present study.
As observed in this study, the effectiveness of remote sensing technologies in acquiring detailed three-dimensional measurements of canopy structure allows for the indirect measurement of coffee crop characteristics, such as height and soil penetration resistance. However, the adoption of LiDAR sensor technology is still limited and costly, especially for surveys conducted over medium and large areas. In contrast, RGB sensors are a low-cost alternative capable of providing similar information for capturing three-dimensional structure and describing plant morphological parameters, especially in small areas [
41], highlighting the advantage of using RGB sensors for measurement purposes.
In the literature, several scientific studies have explored the use of RGB and LiDAR sensors for estimating crop height in agricultural systems. Magar et al. [
33] compared these sensors for soybean plant height estimation and found that LiDAR performed better (R2 = 0.83) than RGB cameras (R2 = 0.53) during the pod development and seed filling stages. However, RGB proved more reliable at physiological maturity, when LiDAR faced difficulties in accurately capturing plant height. Wu et al. [
42] assessed the accuracy of RGB sensors and LiDAR data in estimating the canopy height of mango and avocado trees. Their results showed that RGB data provided tree height measurements comparable to ground-based observations (R2 = 0.89 for mango; R2 = 0.81 for avocado), while LiDAR also achieved reasonable accuracy (R2 = 0.67 for mango; R2 = 0.63 for avocado). Malambo et al. [
11] compared plant height estimates derived from RGB imagery processed using Structure-from-Motion (SfM) techniques to those obtained from Terrestrial Laser Scanning (TLS), reporting R2 values ranging from 0.60 to 0.94 depending on the crop growth stage. These results demonstrate the potential of RPAS equipped with RGB cameras as an effective and low-cost alternative to LiDAR for multitemporal monitoring of plant height in agricultural research. Together, these studies offer valuable insights into the capabilities and limitations of RGB and LiDAR sensors for evaluating agricultural variables, supporting informed sensor selection for specific applications in crop monitoring.
Therefore, the results demonstrate that the methodology for indirectly measuring coffee plant characteristics can serve as an effective monitoring tool for the development of precision agriculture management strategies. It is noteworthy that the prediction of plant height can be employed to provide additional information that complements other study variables, with the goal of enhancing productivity and profitability in agricultural systems.