Article

Missing Plant Detection in Vineyards Using UAV Angled RGB Imagery Acquired in Dormant Period

by
Salvatore Filippo Di Gennaro
1,†,
Gian Luca Vannini
1,2,†,
Andrea Berton
3,
Riccardo Dainelli
1,*,
Piero Toscano
1 and
Alessandro Matese
1
1
Institute of BioEconomy, National Research Council (CNR-IBE), Via Caproni 8, 50145 Florence, Italy
2
Institute of Information Science and Technologies, National Research Council (CNR-ISTI), Via G. Moruzzi 1, 56127 Pisa, Italy
3
Institute of Geosciences and Earth Resources, National Research Council (CNR-IGG), Via G. Moruzzi 1, 56124 Pisa, Italy
*
Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Drones 2023, 7(6), 349; https://doi.org/10.3390/drones7060349
Submission received: 4 April 2023 / Revised: 16 May 2023 / Accepted: 24 May 2023 / Published: 26 May 2023
(This article belongs to the Special Issue Advances of UAV in Precision Agriculture)

Abstract

Since 2010, more and more farmers have been using remote sensing data from unmanned aerial vehicles, which offer high spatial–temporal resolution, to determine the status of their crops and how their fields change. Imaging sensors, such as multispectral and RGB cameras, are the most widely used tools in vineyards to characterize the vegetative development of the canopy and detect the presence of missing vines along the rows. In this study, the authors propose different approaches to identify and locate each vine within a commercial vineyard using angled RGB images acquired during winter in the dormant period (without canopy leaves), thus minimizing any disturbance to the agronomic practices commonly conducted in the vegetative period. Using a combination of photogrammetric techniques and spatial analysis tools, a workflow was developed to extract each post and vine trunk from a dense point cloud and then assess the number and position of missing vines with high precision. In order to correctly identify the vines and missing vines, the performance of four methods was evaluated, and the best performing one achieved 95.10% precision and 92.72% overall accuracy. The results confirm that the methodology developed represents an effective support in the decision-making processes for the correct management of missing vines, which is essential for preserving a vineyard’s productive capacity and, more importantly, for ensuring the farmer’s economic return.

1. Introduction

Technological and methodological advances in agriculture have provided farmers with effective solutions to optimize the management of spatial variability in vineyards, overcoming the traditional homogeneous approach. This precision viticulture (PV) scenario aims at site-specific practices that allow vignerons to determine the best methods to meet the needs of different plants within the same vineyard. The baseline of this concept is knowledge of the vineyard’s status through an accurate characterization of field heterogeneity, which can be achieved using different types of sensing technologies. Among these, remote sensing platforms, such as satellites, aircraft and unmanned aerial vehicles (UAV), are widely used to quickly generate an accurate picture of plant vegetative conditions within the vineyard. Each method has different pros and cons in terms of a series of factors, such as pixel ground resolution, flexibility and timing of acquisition, and area coverage [1]. Considering the discontinuous structure of the canopy cover in the vineyard, due to the alternating presence of rows and inter-rows, remote sensing applications in precision viticulture require a filtering step in the data analysis workflow to remove the noise generated by bare soil and/or inter-row vegetation [2]. In this context, low-altitude UAV surveys represent the best monitoring solution because they provide the centimetric ground resolution required to filter the inter-row space and extract pure canopy pixels. A wide range of sensors can also be carried, including RGB, multispectral, hyperspectral and thermal imaging cameras, as well as LIDAR sensors [3,4,5,6]. The analysis of the canopy’s properties through the calculation of a large number of indices derived from different types of sensors [7] provides information strongly related to agronomic parameters, which play a key role in vineyard management. These parameters include vegetative development factors, such as canopy volume, height and width [8,9]; LAI and biomass [10,11,12,13,14,15,16]; water stress [17,18,19,20,21]; disease investigation [22,23,24]; weed detection [25,26]; and yield and quality estimation [27,28,29]. Alongside these important descriptors of vine health status, a further factor of extreme importance in evaluating vineyard performance is the number of missing vines. This condition occurs when a plant dies and must be replaced to preserve the productive potential of the vineyard. Vine death can be associated with both biotic factors, such as fungal diseases, bacteria, viruses, phytoplasmas, insects or nematodes, and abiotic factors, such as nutritional deficiencies, water stress, frost, high temperatures, mechanical damage or toxicity due to chemical substances [12]. In addition to safeguarding the production potential, the rapid identification of dead vines in the vineyard allows for timely intervention to remove a plant affected by a pathogen and prevent it from transmitting the infection to adjacent plants. The replacement operation is part of the normal maintenance to be carried out over the years of a vineyard’s life to preserve its efficiency. This obviously represents a burden, both for the direct costs of the plant material and its installation, and for the need for greater attention, care and differentiated management of the replanted vines in the year or two following the replacement.
Knowing the number of missing plants allows the grower to promptly order the grafted vines needed to replace the dead plants from a nursery, thus avoiding delays due to a lack of availability of the specific scion–rootstock combinations for each vineyard. A traditional approach to missing vine detection is based on ground visual observation by the farmer, with the aim of counting the number of dead vines along each row. This solution is extremely time consuming, requiring the farmer to walk the entire acreage of the vineyard, frequently under difficult walking conditions due to slopes and bare or muddy soil. Although a quantification of the total number of missing plants per vineyard is sufficient to place an accurate order of new plants, providing real support to the replacement phase also requires recording their position in the vineyard, which is not possible with the traditional method based on visual counting. In fact, mapping the missing plants would allow the farmer to quickly reach the points of interest without having to cross all the rows with a tractor loaded with new plants, thus not only saving time and money in terms of man hours and machine operating costs, but also limiting soil compaction and environmental impact [30]. In recent years, there has been growing interest in the adoption of vineyard monitoring solutions based on RGB cameras, boosted by the wide availability on the market of low-cost and turnkey UAV models with integrated high-resolution sensors (15 MP or even higher), such as the multirotor products of the market leader DJI (SZ DJI Technology Co., Ltd., Shenzhen, China). Using these models, with an autonomy of 20 to 30 min, it is possible to monitor large vineyards (up to 4 ha) with a single battery, performing surveys at a 30 m flight altitude and delivering sub-centimeter ground resolution imagery [31]. Combining such resolution with flight plans with high degrees of overlap (70–85% on both sides) allows the generation of an optimal dataset for the application of structure from motion (SfM) algorithms and the reconstruction of the canopy and even the vine trunk. RGB images processed with photogrammetry software, such as Agisoft Metashape Professional (Agisoft LLC, St. Petersburg, Russia), DJI Terra (SZ DJI Technology Co., Ltd., Shenzhen, China) or Pix4D (Pix4D S.A., Prilly, Switzerland), allow one to produce different types of digital models, such as two-dimensional (2D) orthomosaics, digital surface models (DSM) and three-dimensional (3D) point clouds [32]. A comparable level of accuracy in reconstructing plant spatial architecture can also be achieved using LIDAR sensors; however, while they are superior in terms of monitoring and processing speed for the generation of dense clouds, and produce lighter datasets, they are very sensitive to noise caused by canopy and soil light reflection and are significantly more expensive than RGB cameras [33,34]. Several studies in viticulture have applied the photogrammetric approach to UAV-captured images to identify vines and missing vines and to estimate geometric and biophysical traits. Su et al. [35] developed an automated method for the identification of biotic/abiotic stressed vines and missing vines, validated by ground observation in a 1.5 ha vineyard.
Their methodology starts by building the DSM from the RGB images, then extracts the canopy boundaries, binarizes and rotates the image, and finally retrieves information to classify the vines along the rows using computer vision algorithms based on the measurement of the canopy cover. A different approach was suggested by Primicerio et al. [36], in which individual vine plants and missing plants were extrapolated from an orthomosaic of a study site (<1 ha) using a binary multivariate logistic regression model combined with the vegetation segmentation method proposed by Comba et al. [37]. Matese and Di Gennaro [17] described a method applied in a 0.2 ha experimental vineyard to identify the missing plants by means of a semi-automatic procedure involving the filtering of the DSM, the supervised generation of a grid of 1.00 m × 0.60 m polygons representing individual vine positions along the rows, the measurement of the pixel number within each polygon, and, finally, classification into five potential missing vine classes based on quantiles. De Castro et al. [8] tested a procedure that applied an object-based image analysis (OBIA) algorithm to the DSM of three small plots (at least 0.5 ha). Assuming constant spacing between the plants, the procedure allowed an unsupervised classification of the gaps and the estimation of the position and size (projected area, height and volume) of the plants along the rows. Pádua et al. [38] used data from various sensors (RGB, multispectral and infrared) in six different plots (the largest of which measured 0.6 ha) to perform a multitemporal characterization at the single vine level. Focusing on missing vine detection, a dataset obtained with a DJI Phantom 4 equipped with a 12 MP RGB camera was processed by applying a method previously developed by the same authors [9], in which the vine rows are first extracted from the binarized vegetation mask of each site, obtained using the CSM and the G% index; the vine positions are then estimated by splitting each row line into equally spaced points based on the vine spacing. Each point was dilated with a line-shaped morphological structuring element with the same orientation as the rows, forming a grid of clusters delimiting the area to which the vegetation of each grapevine is confined; the absence of canopy pixels within a cluster was then taken to indicate a missing vine. Jurado et al. [39] suggested a method based on the recognition of key geometric parameters to detect and locate individual vine trunks, missing plants and vineyard posts from a 3D point cloud generated over a 0.3 ha vineyard on sloping terrain with irregular vine spacing. Starting from the previously reported method for row center line detection [9], the authors elaborated the dense cloud by creating a 0.60 m buffer around the lines for spatial selection of the canopy and exclusion of inter-row points. The workflow proposed a cutting plane adjustment to correct the terrain’s slope for the accurate classification of ground, trunk and canopy points, taking into account both point height and color. Recently, Di Gennaro and Matese [12] used a multirotor equipped with a 20 MP RGB camera in a study on a 2.60 ha vineyard with extreme and heterogeneous conditions, characterized by a younger part planted in 1999 with regular spacing, and an older area planted in 1973 with an irregular structure of different plant ages, training systems (single and bilateral cordon) and spacing along the rows due to the large number of replacements over the years.
The authors implemented an unsupervised biomass estimation and missing plant detection procedure using a 2.5D-surface and a 3D-alphashape method. Starting from the CHM, a uniform polygon grid based on the vine spacing was generated to isolate blocks of three plants along the rows. For the 2.5D method, a series of functions based on Otsu’s thresholds were applied in Matlab (Mathworks, Natick, MA, USA) to binarize the image and extract the canopy’s geometric features (average height and thickness); finally, the metric length of gaps along the row, identified by a thickness < 0.10 m, was converted into a number of missing vines. As regards the 3D method, the volume of the canopy was reconstructed from the point cloud using the alphaShape function in Matlab, which allows the reconstruction of an object’s shape from a set of sparse points by tuning the ’tightness’ of the shape around the points, with the α parameter of the function set to 0.5. For each grid polygon, the number of missing plants was estimated by dividing the total volume within the polygon by the average single-vine volume in the field, computed as the average volume of the grid polygons without missing plants divided by three (the number of vines per polygon). A recent work by Hajjar et al. [40] on 10 vine plots (the largest of which was 3.4 ha) set up a two-stage approach to identify and characterize living and missing plants in goblet-trained vineyards, arranged in a chessboard scheme (no row structure) with 2.5 m × 2.5 m vine spacing. In the first, identification step, RGB imagery acquired at a 300 m flight altitude (0.08 m/pixel resolution) was segmented through K-means clustering and binarized; a rotation step was then performed using the Hough transform, which allows the generation of a regular grid based on the vine spacing, and the vine positions were identified by means of the peak locations in the sums of 1-valued pixels in the binary images along the rows and columns. In the second, characterization phase, starting from the grid points, a marker-controlled watershed transformation was applied in order to recognize each vine as a separate structure, even where it overlaps with adjacent ones. This body of work emphasizes the importance of the topic, making a significant contribution to both farmers and researchers through the investigation of different methods for quantifying the incidence of missing plants in the vineyard. The examination of these works made it possible to identify a number of weak points that should be addressed through additional studies in order to develop an approach that is as adaptable and flexible as possible. Some studies base their results on small parcel trials, whereas testing the methodologies described in vineyards with larger surface areas is preferable to stress the performance of the approach presented. Many works propose techniques based on the extraction of geometric parameters of the canopy along the rows, thus limiting the applicability of the method to the vegetative phase with a well-formed canopy, which is confined to 4–5 months. Moreover, considering the intense agronomic activity carried out by farmers during the vegetative period, UAV surveys commonly disturb traditional operations.
Furthermore, most of these methods are based on vine canopy features; however, many factors can alter the canopy architecture, such as different training systems that drastically modify the structure of the leaf wall, the occurrence of symptoms related to biotic or abiotic stresses (diseases, nutrient deficiencies, water deficit, etc.), or even the presence of replacements, where young vines with minimal vegetative development are planted to replace dead ones. Another critical issue is that many methods determine missing plants on regular grids generated on the basis of the vine spacing. In this case, irregularities of the layout, which are very common in old vineyards, significantly reduce the accuracy of the methodology. Given the shortcomings identified in previous research, the aim of this study is to develop a flexible methodology for missing vine detection via a UAV platform that is not affected by canopy characteristics or irregular vine spacing, and that can also be used in the dormant phenological phase (without a canopy), a period in which minimal activities are carried out in the vineyard. In detail, a commercial vineyard was monitored in winter, in the absence of leaf cover, by performing a series of low-altitude flights using a UAV platform equipped with a high-resolution RGB sensor. The image processing workflow developed here exploits the point cloud as the primary input, instead of the widely used CHM, allowing the reconstruction of the basal part of the vine trunk, which permits the grower to identify the missing plants independently of the planting layout. The performance of four alternative techniques used to discriminate geometries representing trunks from those representing the wooden posts that support the vine rows was also compared.

2. Materials and Methods

2.1. Study Site and UAV Data Acquisition

The research was conducted at the beginning of the 2022 season in a 1.3 ha Sangiovese (Vitis vinifera) vineyard located in Castelnuovo Berardenga (Italy) (43°21′20.54″ N, 11°30′25.60″ E), on sloping terrain at 370 m above sea level (Figure 1a). The vines were trained in a Guyot VSP (vertical shoot-positioned) trellis system, with 2.30 m × 0.80 m vine spacing and a NW–SE row orientation.
Remotely sensed images were acquired in a single flight campaign performed on 21 January 2022 at 12:00 p.m. GMT+2 using a multirotor DJI Matrice 300 (SZ DJI Technology Co., Ltd., Shenzhen, China) equipped with a DJI Zenmuse P1 camera, which comprises a full-frame sensor (35.9 mm × 24 mm) with a 35 mm lens able to produce 45 MP RGB images (Figure 1b). The UAV survey was conducted at 30 m above ground level and achieved a ground resolution of 0.003 m/pixel. With the aim of generating an accurate vine 3D reconstruction, the waypoint route was planned to obtain more than 80% overlap (both front and side) and to acquire oblique images of both sides of the canopy. In detail, the sensor was tilted at 55° with respect to the vertical and at 45° with respect to the direction of advancement, which was planned to follow the row direction, allowing images of the entire area of interest to be acquired. A dataset of 512 images was pre-processed using Agisoft Metashape Professional v.1.6.3 (Agisoft LLC, St. Petersburg, Russia) to generate a dense cloud, a digital elevation model and an orthomosaic of the study vineyard.
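As a rough plausibility check of the reported ground resolution (a back-of-the-envelope estimate on our part, assuming the Zenmuse P1’s nominal 8192-pixel image width, which is not stated in the text), the nadir ground sampling distance (GSD) follows from the usual pinhole relation between flight height $H$, sensor width $s_w$, focal length $f$ and image width in pixels $n_w$:

$$\mathrm{GSD} = \frac{H \cdot s_w}{f \cdot n_w} = \frac{30\,\mathrm{m} \times 35.9\,\mathrm{mm}}{35\,\mathrm{mm} \times 8192} \approx 0.0038\ \mathrm{m/pixel},$$

which is consistent in order of magnitude with the 0.003 m/pixel reported for the processed imagery.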

2.2. Image Processing Workflow

Figure 2 illustrates the workflow used in this study to identify the vines and missing vines from the UAV RGB imagery. In detail, the scheme presents, in boxes of different colors, the succession of the phases critical to the generation of the validated output. First, the blue box shows the pre-processing phase, which, from the acquired images, leads to the generation of the photogrammetric outputs, namely the orthomosaic and the point cloud; the latter is classified to create the two products used as raw datasets for the methods: a slice between 0.40 and 0.50 m and a second slice between 1.10 and 3.00 m. Subsequently, the green box describes the processing of the raw datasets in QGIS for the generation of shapefiles and the conversion of the points extracted from the dense cloud into polygons. The orange box highlights the four methods that generate the vine and missing vine identification layers. Finally, the last box at the bottom of Figure 2 is dedicated to the validation of the results obtained to determine the performance of the four methods presented.

2.2.1. Point Cloud Classification

As the first step of the processing workflow, the dense point cloud was processed with LiDAR360 v6.0 (GreenValley International, Berkeley, CA, USA) in order to classify, in sequence, the ground points and then two slices of points according to height (Figure 3a,b). Considering that the adopted flight plan (mainly the flight altitude and overlap) did not allow the reconstruction of very small objects (<0.02 m), such as catch wires and shoots, and that the trunk reaches a height of 0.80 m, the first slice (P, posts), within the height range 1.10–3.00 m, was chosen to identify the wooden posts that support the vineyard espalier structure. The second slice (PT, posts and trunks), obtained between 0.40 and 0.50 m to avoid noise due to the cordon (0.80 m) and the grass present on the soil (up to 0.25 m), was used to extract the trunks of the vine plants and thus locate each vine along the rows. Since this second slice, in addition to the points of the trunks, also contained points related to the basal part of the posts, four methods (M1–M4) were evaluated to determine the benefits of each. Both the P and PT slices were then exported as standard csv (comma-separated values) files.
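As a minimal sketch of this slicing step (not the authors’ LiDAR360 procedure; file and column names are illustrative assumptions), the same height filtering can be expressed in a few lines of Python, provided the classified cloud is exported with a per-point height above ground:

```python
import pandas as pd

# Illustrative point cloud export: x, y and height above ground ("z_agl").
cloud = pd.read_csv("dense_cloud.csv")

# P slice (1.10-3.00 m): upper part of the wooden posts only,
# since trunks never reach this height range.
p_slice = cloud[(cloud.z_agl >= 1.10) & (cloud.z_agl <= 3.00)]

# PT slice (0.40-0.50 m): vine trunks plus the basal part of the posts,
# above the grass cover (up to 0.25 m) and below the cordon (0.80 m).
pt_slice = cloud[(cloud.z_agl >= 0.40) & (cloud.z_agl <= 0.50)]

p_slice.to_csv("P.csv", index=False)
pt_slice.to_csv("PT.csv", index=False)
```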

2.2.2. Polygons and Centroids Generation

Subsequently, the two datasets were processed in QGIS (QGIS–http://www.qgis.org/ (accessed on 1 April 2023)) using the ‘heatmap’ plugin, aiming at the definition of 2D geometric shapes (polygons and centroids) by means of a horizontal section of the group of points belonging to each object. The plugin uses a ‘kernel density estimation’ function, which allowed the conversion of the csv point vector layers into density raster files, also known as concentration maps, based on the number of points in a defined portion of space. Then, by setting the radius value at 0.15 m, it was possible to identify hotspots from high concentrations of points (Figure 3c) around all the objects (posts and trunks) and define the orthogonal projection of each cluster (Figure 3d). As the next step, the heatmap rasters were binarized (Figure 3e) and converted into polygonal vectors (Figure 3f), and the centroid was then extracted for each polygon (Figure 3g). As a consequence of the smaller width of the trunk section (on average 0.05 m) with respect to the post (on average 0.12 m), the best results were obtained in the binarization step with a threshold of 35 for the P dataset (pixel concentration > 35 set to 1) and a threshold of 1 for the PT dataset (pixel concentration > 1 set to 1). Finally, the ‘create grid’ function was used to generate a series of parallel lines at a distance of 2.30 m (based on the vine spacing) over the rows, and a buffer of 0.50 m was then created to assign the row number to the polygons/centroids placed within that buffer area using the ‘merge attributes by location’ function.
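The same heatmap–binarize–centroid chain can be sketched outside QGIS with NumPy and SciPy (a hedged approximation of the plugin’s kernel density estimation; the grid cell size and thresholds below are illustrative, not values from the paper):

```python
import numpy as np
from scipy import ndimage

def object_centroids(xy, cell=0.01, radius=0.15, threshold=1.0):
    """Approximate the heatmap -> binarize -> centroid chain.
    xy: (N, 2) array of point coordinates in metres."""
    xmin, ymin = xy.min(axis=0)
    # Rasterize point counts on a regular grid (the 'heatmap' step).
    cols = ((xy[:, 0] - xmin) / cell).astype(int)
    rows = ((xy[:, 1] - ymin) / cell).astype(int)
    grid = np.zeros((rows.max() + 1, cols.max() + 1))
    np.add.at(grid, (rows, cols), 1)
    # Smooth with a kernel comparable to the 0.15 m radius used in the paper.
    density = ndimage.gaussian_filter(grid, sigma=radius / cell)
    # Binarize; the threshold plays the role of the 35 (P) / 1 (PT) cutoffs.
    mask = density > threshold
    # Label connected clusters (the 'polygons') and extract their centroids.
    labels, n = ndimage.label(mask)
    centers = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    return [(xmin + cx * cell, ymin + cy * cell) for cy, cx in centers]
```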
Figure 3. Detail (a) and full view (b) of the classification of the two slices using LiDAR360 software, according to the height of the point cloud, related to the upper part of the posts from 1.10 to 3.00 m (green) and to the trunks plus the lower part of the posts from 0.40 to 0.50 m (magenta). Processing workflow (QGIS software) for the generation of the post centroids from the point vector layer: (c) point P dataset (vector); (d) heatmap creation (raster); (e) reclassification into binary values 0–1 (raster); (f) raster to polygon conversion (vector); (g) centroid extraction (vector).

2.2.3. Post and Trunks Extraction

The identification of the posts was obtained by assigning a unique progressive numbering system to the centroids generated from the P dataset in the previous step. As regards missing vine detection, the workflow suggested in this research consists of two steps: first, vine detection was performed using different approaches for identifying trunks; second, the number and position of the missing vines were assessed starting from the detected vine layers. In order to achieve an accurate identification of the trunks from the PT dataset, it was necessary to develop an effective approach to discriminate the trunks from the basal section of the posts. To achieve this objective, four different methods (M1–M4) were developed and tested (Figure 4); a code sketch of the four rules follows the list below.
M1: A spatial intersection selection between PT polygons and P polygons allowed us to eliminate the overlapping polygons of the two vector layers, which were assumed to be posts without any trunk. From the remaining polygons on the PT layer that were assessed as trunks, centroids were extracted to verify the ground truth (Figure 4a).
M2: Considering that a greater size of the PT polygons with respect to the P polygons indicated the presence of a trunk close to the post, a spatial inclusion selection between the centroids extracted from the PT polygons and the P polygons permitted us to eliminate the points that fell within the P polygons, identified as the basal part of a post in the PT layer. The resulting centroids of the PT layer were used to verify the ground truth (Figure 4b).
M3: First, the centroids were extracted from the PT and P polygons, and then circles with a 0.12 m diameter were created from the post centroids with the aim of defining the real size of each post. Finally, by means of a spatial inclusion selection between the PT centroids and the post circles, the PT points that fell within the circles were eliminated, in line with M2. The resulting PT centroids were used to validate the ground truth (Figure 4c).
M4: As the first step, the P polygons were subtracted from the PT polygons using the ’difference’ function. Next, centroids were generated from the cut PT polygons with an area greater than 0.0025 m² (derived from the average trunk diameter of 0.05 m), which was the threshold applied to identify the presence of a trunk close to a post. The resulting PT centroids were used to validate the ground truth (Figure 4d).
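The four rules reduce to standard vector operations; a hedged GeoPandas sketch follows (the paper applied these steps as QGIS tools, so the file names and exact predicates here are our assumptions):

```python
import geopandas as gpd

# Illustrative layer names; in the paper these are QGIS shapefile layers.
pt = gpd.read_file("PT_polygons.shp")  # polygons from the 0.40-0.50 m slice
p = gpd.read_file("P_polygons.shp")    # polygons from the 1.10-3.00 m slice
posts = p.unary_union                  # all post geometries merged

# M1: discard PT polygons intersecting a post polygon; the rest are trunks.
m1 = pt[~pt.intersects(posts)].centroid

# M2: discard PT centroids falling inside a post polygon.
pt_pts = pt.centroid
m2 = pt_pts[~pt_pts.within(posts)]

# M3: as M2, but posts are modeled as 0.12 m diameter circles around the
# post centroids (buffer radius 0.06 m), matching their real size.
circles = p.centroid.buffer(0.06).unary_union
m3 = pt_pts[~pt_pts.within(circles)]

# M4: subtract post polygons from PT polygons, keep residual pieces larger
# than the 0.0025 m^2 trunk-size threshold, and take their centroids.
residual = pt.geometry.difference(posts)
m4 = residual[residual.area > 0.0025].centroid
```

Note that all four variants rely on geometry alone, with no dependence on pixel values, which is what makes the workflow insensitive to radiometric conditions.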

2.2.4. Posts and Trunks Validation

In order to evaluate the accuracy of the four methods in trunk detection, a ground truth point layer (4755 trunks in total) was generated by editing the M4 points (the best performing method) through a visual check on the orthomosaic. Considering the time of flight and the vineyard row orientation, the strong presence of post and trunk shadows on the ground greatly aided the visual verification; in the very few cases of uncertainty, a field survey for ground observation was carried out. A similar approach was applied to generate the ground truth point layer of each post within the vineyard from the P dataset.
The performance of the four approaches was evaluated by means of a spatial join of attributes defined by the ’join attributes by nearest’ function (with the maximum distance set at 0.20 m), which allowed us to compare the position of the trunk points defined by each method with the ground truth points: points overlapping or placed within a radius of 0.20 m of a ground truth point were counted as correct identifications, points identified by the methods without any match in the ground truth layer as incorrect classifications, and trunks not identified by the methods as undetected points.
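The same matching can be reproduced with GeoPandas (a sketch under assumed file names; `sjoin_nearest` with `max_distance` mirrors the 0.20 m nearest-neighbour join):

```python
import geopandas as gpd

# Assumed layer names for illustration.
method_pts = gpd.read_file("m4_trunks.shp")           # trunks found by a method
truth_pts = gpd.read_file("ground_truth_trunks.shp")  # visually validated trunks

# Nearest-neighbour join, discarding pairs farther apart than 0.20 m.
matched = gpd.sjoin_nearest(method_pts, truth_pts,
                            max_distance=0.20, distance_col="dist")

tp = matched.index.nunique()                            # method points with a match
fp = len(method_pts) - tp                               # incorrect classifications
fn = len(truth_pts) - matched["index_right"].nunique()  # undetected trunks
print(f"TP={tp}, FP={fp}, FN={fn}")
```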

2.2.5. Missing Vines Detection

Once the vines were detected by applying the four methods, each layer was processed with the aim of estimating the number of missing vines along the rows. First, the missing vines ground truth layer was generated starting from the previously built vines ground truth layer. Using the ‘points to path’ function, polylines were created for each row, joining the individual trunks together. Then, considering that the 0.80 m vine spacing is not always exact (it can vary between 0.70 and 0.90 m), a segment length of 1.40 m (double the minimum distance between vines) was chosen as the threshold below which no missing vines could occur. Segments with a length greater than or equal to 1.40 m were instead divided into classes of missing vine numbers, creating a series of points proportional to their length: 1.40–2.00 m with 1 missing vine, 2.00–3.00 m with 2 missing vines, 3.00–4.00 m with 3 missing vines, 4.00–5.00 m with 4 missing vines, and so on (Figure 5). Finally, the missing vines ground truth layer (1169 missing vines in total) was visually validated, as was performed for the other layers.
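The class boundaries above translate into a simple rule; the following sketch (our formalization, with half-open boundary handling assumed at exactly 2.00, 3.00, … m) maps a trunk-to-trunk segment length to a missing vine count:

```python
import math

def missing_vines(gap_length_m):
    """Map a trunk-to-trunk gap length (m) to a missing vine count
    following the class boundaries reported in the paper."""
    if gap_length_m < 1.40:   # below double the minimum vine spacing: no gap
        return 0
    if gap_length_m < 2.00:   # 1.40-2.00 m -> one missing vine
        return 1
    # 2.00-3.00 m -> 2, 3.00-4.00 m -> 3, one more vine per extra metre.
    return 2 + int(math.floor(gap_length_m - 2.00))

assert [missing_vines(x) for x in (1.0, 1.6, 2.5, 3.5, 4.5)] == [0, 1, 2, 3, 4]
```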
The performance of the methods was then evaluated following the same procedure used for trunk detection, by matching the missing vines layers obtained from each method to the ground truth by means of the ’join attributes by nearest’ function.

2.2.6. Statistical Evaluation

In line with previous studies by Pádua et al. [38], Jurado et al. [39] and Hajjar et al. [40], the evaluation of the performance of the methods in terms of the accuracy of object detection (posts, trunks and missing vines) was conducted based on correct identifications (true positives, TP), incorrect identifications (false positives, FP) and missed detections (false negatives, FN). Unlike many other studies, the four methods described did not use a fixed regular grid, and the detection of vines and missing vines was performed in separate steps, which led us to consider only three classes for evaluation (TP, FP, FN), excluding the true negative (TN) condition. The accuracy of the four methods was quantified by calculating a series of statistical indicators, namely precision, recall, F1 score and overall accuracy, computed as in Equations (1)–(4), respectively.
$$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (1)$$

$$\mathrm{Recall} = \frac{TP}{TP + FN} \qquad (2)$$

$$F1\ \mathrm{score} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \qquad (3)$$

$$\mathrm{OA} = \frac{TP}{TP + FP + FN} \qquad (4)$$
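For reference, Equations (1)–(4) amount to the following few lines (a straightforward transcription on our part, not code from the paper):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, F1 score and overall accuracy per
    Equations (1)-(4); TN is excluded by design."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    oa = tp / (tp + fp + fn)
    return precision, recall, f1, oa
```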
Subsequently, a global evaluation of the four methods was performed, taking into account both vines and missing vines together, by calculating a modified version of the previous statistical indicators, as in the following Equations (5)–(8).
$$\mathrm{Global\ Precision} = \frac{TP_{trunks} + TP_{missing\ plants}}{(TP + FP)_{trunks} + (TP + FP)_{missing\ plants}} \qquad (5)$$

$$\mathrm{Global\ Recall} = \frac{TP_{trunks} + TP_{missing\ plants}}{(TP + FN)_{trunks} + (TP + FN)_{missing\ plants}} \qquad (6)$$

$$\mathrm{Global\ F1\ score} = \frac{2 \cdot \mathrm{Global\ Precision} \cdot \mathrm{Global\ Recall}}{\mathrm{Global\ Precision} + \mathrm{Global\ Recall}} \qquad (7)$$

$$\mathrm{Global\ OA} = \frac{TP_{trunks} + TP_{missing\ plants}}{(TP + FP + FN)_{trunks} + (TP + FP + FN)_{missing\ plants}} \qquad (8)$$
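Note that Equations (5)–(8) simply pool the trunk and missing plant confusion counts before computing the same ratios, so the global indicators can reuse the function above (our observation):

```python
def global_metrics(tp_t, fp_t, fn_t, tp_m, fp_m, fn_m):
    """Global indicators of Equations (5)-(8): pool the trunk (t) and
    missing plant (m) counts, then apply Equations (1)-(4)."""
    return detection_metrics(tp_t + tp_m, fp_t + fp_m, fn_t + fn_m)
```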
Finally, the numbers of vines and missing vines were estimated for each row to evaluate the performance of the best method (M4), computing the relative error as the percentage of undetected objects (vines or missing vines) with respect to the total number of vines along each row.
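Under our reading of this definition (the paper gives no explicit formula), the per-row relative error is:

$$RE_{row} = \frac{N_{undetected,\ row}}{N_{vines,\ row}} \times 100\ [\%]$$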

3. Results

This section is divided by subheadings focused on the detection performance for posts, vines and missing vines.

3.1. Posts and Vines Detection

The results of the approaches described for the identification of posts and trunks are summarized in Table 1. Considering a study area of approximately 1.3 ha, the methodology adopted to locate the posts resulted in the correct identification of 797 posts out of a total of 806 posts (ground truth) with the precision, recall and F1 percentages corresponding to 99.50%, 98.88% and 99.19%, respectively, while the OA was 98.40%. Nine posts were not identified (1.12%) and only four objects were wrongly identified as FP.
Table 1 presents the results of the accurate localization of the vines along the rows provided by the four methods with respect to the ground truth (a total of 4755 vines). The best precision was obtained by M1 (99.21%) and M2 (97.42%), but both methods performed worse in terms of recall (90.07% and 93.84%, respectively); in M3, the lower precision value related to the number of FP (316) resulted in a low OA value (90.95%), which was slightly higher than in M1 (89.43%). The best percentages of recall, the F1 score and OA were obtained in M4, even if, as in M3, the precision did not record the best result (95.61%).

3.2. Missing Vines Detection

Taking into account the evaluation of the performance provided by the four methods for missing vine detection with respect to ground truth values (1169 missing plants), the results are shown in Table 2. The best recall values were obtained in M1 (97.26%) and M2 (96.15%), but in terms of accuracy, the two methods recorded low values (73.31% and 77.84%, respectively) compared to M3 (82.06%) and M4 (88.10%). The best results in terms of the F1 score and OA were obtained in M3 (90.14% and 82.06%) and M4 (93.67% and 88.10%).

3.3. Overall Evaluation of the Four Methods

The global assessment of the results obtained from the four methods considering both vines and missing vines estimation (5924 objects in total as ground truth) is shown in Table 3. In general, the values obtained tend to improve as the geometric form used to identify the posts changes from M1 to M4. Overall, M2 and M3 have a similar trend. The best results for all the parameters were obtained for M4.
The findings of an overall investigation of the total vines and missing vines detection at the single row level using M4, which proved to be the most accurate method, are presented in Table 4. The results show an overall average relative error of 3.40% for vines and 1.85% for missing vines with respect to the total number of vines.

4. Discussion

4.1. Posts and Vines Detection

The analysis of the point cloud section between 1.10 and 3.00 m (P dataset) provided high performance in post detection, with only four posts incorrectly classified (false positives) out of a total of 806 objects, due to rare cases of groupings of highly vigorous shoots. The low number of false negatives (9 posts) was mainly due to the smaller size of the top end of those posts. While in new vineyards the positioning of the posts is regular and easily identifiable by visual analysis of an orthomosaic, in older vineyards this is not equally simple due to the imprecise replacement of posts over time. This occurs as a result of mechanical damage caused by the passage of machines in the vineyard, which makes it necessary to replace posts to avoid the collapse of the unsupported row. With regard to this problem, the described method has considerable potential for managing the replacement plan, quickly identifying damaged or missing posts with respect to the regular planting grid and providing precise indications to the farmer on where to send an excavator to remove the broken post and install a new one.
Regarding vine trunk identification, four different approaches were devised to manage the PT dataset, relating to the dense cloud slice extracted between 0.40 and 0.50 m above ground level. The discrimination methods were essentially based on the geometric relationships between posts and trunks and, as expected, the results show an improvement in detection accuracy for the true positive and false negative classes. The methods provided high performance with minimal errors, mainly related to the small size of young vines (average diameter of 0.02 m) planted the previous year, which were not adequately reconstructed by the point cloud; secondly, in a few cases, the geometric reconstruction failed as a consequence of the covering effect of grass growing around the trunk or of drooping shoots. Considering the false positive class, the best performance was obtained with M1, with 34 trunks identified as ‘incorrect detections’, since this method assumed that there were no trunks close to the posts. False positive cases were mainly due to vigorous groups of drooping shoots; the presence of grass and brushwood along the row; points of posts that were detected in the PT dataset but not in the point cloud slice at 1.10–3.00 m, and were therefore considered trunks; and rare noise in the point cloud derived from the photogrammetric reconstruction process. Although the approach was substantially similar in M2 and M3, the different shape and geometric size of the circles used in M3, assimilated to the actual size of the posts, reduced the selection area of the centroids, improving the number of correctly identified trunks (150 more than M2), but also increasing the number of incorrect detections (198 more than M2). Among all the methods tested, M4, which is based on the difference between polygons derived from the P and PT datasets, was the most accurate in vine trunk detection, achieving the best overall statistical results.

4.2. Missing Vines Detection

The detection of missing vines was carried out using the results of the four methods in terms of the trunks identified. Consequently, M4 was the best, given that the more accurate the identification of the trunks, the more accurate the location of the missing vines. The results reported in Table 2 show high numbers of false positives for M1 (414), which gradually decrease to their lowest value with M4 (83). This is directly correlated with the greater number of undetected vine trunks reported in Table 1 (472 and 90 for M1 and M4, respectively); in fact, during the processing phase, the algorithm incorrectly identified missing vines by constructing more segments longer than 1.40 m, which also covered areas with undetected vines. The high number of missing vines accurately recognized by M1 and M2 is accompanied, however, by high numbers of false positives, which seriously reduce the accuracy values compared to M4. The good performance of the described methodology is also evident at the overall level, observing the very small difference between the total number of missing vines monitored on the ground and those estimated by M4 (Table 4), limited to only 17 units (1169 ground truth vs. 1186 for M4). From a quantitative point of view, the results obtained with M4 translate into a precise and timely purchase of rooted cuttings from nurseries for the management of replacements, while from the point of view of spatial accuracy, they make it possible to generate a precise map of the missing plants, reducing the time needed to find the points of interest and replace the missing vines. In addition, the use of this method quickly supports the farmer in the evaluation of the real production level of the vineyard, which can be considerably lower than the potential derived from the theoretical number of plants based on the vine spacing.

4.3. Evaluation of Results vs. Other Studies in Literature

The topic of vine and missing vine detection from UAVs has been extensively investigated, and many works in the literature have described methodologies generally applied in row-based vineyards during the full vegetative phase. Primicerio et al. [36] and Matese and Di Gennaro [17] used approaches based on 2D orthomosaics, providing an overall accuracy in line with this paper: the first study reached an overall accuracy of 93.76% (number estimated from the A1 model in Table 3 supplied by the authors), while the second obtained an accuracy of 80.00%, referring specifically to missing plants. A limitation of these studies is related to the extraction of canopy pixels using masks obtained from color thresholds, which are easily affected by environmental conditions that can change considerably within the same day (cloud cover, time of acquisition). An accurate radiometric correction is therefore mandatory in such cases, a delicate process that demands great care both in the field and during data analysis.
With regard to approaches based on 2.5D data, where the DSM or CHM is used to extract the canopy information, the literature offers a larger number of works. De Castro et al. [8], in the largest investigated plot, found an average accuracy value of 95.50% for vine detection along the rows and 97.84% for missing plants. Pádua et al. [38] recorded an average overall accuracy (vines and missing vines) of 97.49% in various plots. Another study on missing vine detection, proposed by Di Gennaro and Matese [12], achieved an average accuracy of 84.62%. Hajjar et al. [40] achieved a very high overall average accuracy of 97.89%; however, that study was conducted in bush-trained vineyards with a chessboard structure, where the plants were well separated with respect to row-based architectures and could therefore be discriminated from each other more easily. Unfortunately, the work carried out by Su et al. [35], referred to in the introduction, reported no survey accuracy data useful for comparison. In general, the M4 approach demonstrated an overall accuracy a few percentage points lower than the other works reported but, as previously mentioned, the methodology described has the advantage of being usable in a period of time and in vegetative conditions in which the other methods cannot be applied.
Finally, as far as 3D approaches are concerned, the main works based on point cloud processing are those of Di Gennaro and Matese [12], with an average accuracy in missing vine detection of 91.41%, and Jurado et al. [39], achieving an overall accuracy of 83.90% for all the data. Among all the listed works, the research of Jurado et al. [39], which proposes a direct analysis of the point cloud to identify both trunks and posts, turns out to be one of the most innovative and is similar to the analysis described in this paper. The authors achieved excellent results, recording a global OA of 83.90%, validated in an experimental parcel of about five rows for a total of 221 vines and 69 missing vines observed on the ground. Compared to that study, this work presents an improvement both in terms of the accuracy reached by each method analyzed (Table 3) and by validating the methodology on a much larger sample, represented by 43 rows for a total of 4755 vines and 1169 missing vines monitored, reflecting not an experimental size but a real production dimension exceeding one hectare. To summarize, with respect to previous works, the unique aspect of this research lies in the fact that an overall accuracy of 92.72% was achieved using images acquired in winter during the dormancy period, working with plants without canopy leaves, which makes both the 3D reconstruction of the plants and the data analysis groundbreaking. Therefore, the methods described can be applied in a wide window of time without any overlap with vineyard management operations, which are mainly performed between spring and summer. Moreover, the methods work well on irregular vine spacing systems because they do not require a predefined grid or a segmentation of the rows to identify individual plants.
Considering the extensive literature analyzed and the experience of the authors, the choice of the flight altitude (30 m) and overlap (about 85% on both sides) is considered the best compromise between ground spatial resolution and operating speed to guarantee a correct photogrammetric reconstruction and provide the product quality needed as input for the methods tested in this work. Achieving a better ground resolution by reducing the flight altitude and increasing the overlap in the flight plan would require a slower forward speed and closer lateral transects. This approach would result in a significant increase in the dataset size, the image processing time and, most importantly, battery consumption, making it challenging to apply the method operationally on farms with tens or hundreds of hectares to monitor. Generally, the success and applicability of precision agriculture techniques rely on the speed at which technological innovation can provide operational support for monitoring sites over large areas, not limited to the small surfaces commonly found in high-throughput field phenotyping applications but extending to real productive contexts.
Considering potential improvements, the main issue responsible for a few errors in vine and missing vine identification is related to structural factors of the vineyard, such as the covering effect of grass growth around the trunks. Even different flight plans or higher accuracy sensors may not be able to overcome these aspects and improve the proposed methodology.

5. Conclusions

The present research demonstrates how to exploit the potential of UAV high-resolution photogrammetry in a vineyard, even in the dormant period without canopy leaves, when management operations in the vineyard are practically absent. The image processing workflow developed in this work allowed the detection of the posts of the canopy support structure, the vine trunks placed along the rows and the location of each missing vine, quickly and accurately. Among the four approaches tested, M4 was found to be the best for the identification of both trunks and missing vines, reaching approximately 93% overall accuracy. Considering that the methodology described does not require pixel values, and therefore no DN radiometric calibration, for the identification of trunks and posts, it could feasibly be used with LIDAR dense cloud data as well, allowing fast surveys and lightweight data analysis with respect to a heavy RGB dataset and long photogrammetric image processing. Moreover, each geometric polygon generated, representing a plant or a missing plant, could also be used by the farmer to collect historical information about each vine, such as the cultivar, rootstock, disease symptoms and treatments, as well as yield and quality data, which could easily be added from a mobile device directly in the field. Finally, the polygon shapefile could also be used to extract UAV remote sensing data acquired during the vegetative period, such as multispectral information, to generate a vegetation index map at the single plant level.

Author Contributions

Conceptualization, S.F.D.G. and G.L.V.; methodology, G.L.V., S.F.D.G. and A.B.; formal analysis, G.L.V., A.M. and S.F.D.G.; data curation, G.L.V., P.T. and R.D.; writing—review and editing, G.L.V., S.F.D.G., P.T. and R.D.; supervision, A.M. and S.F.D.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the E-Crops Project—Technology for Sustainable Digital Agriculture—ARS01_01136, within the National Operational Programme on Research and Innovation 2014–2020 (Italy).

Data Availability Statement

The data presented in this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors gratefully acknowledge Ellis Topini, Alessandro Chellini and Marco Nardini (Fèlsina S.p.A. Società Agricola) for having hosted and supported the experimental activities.

Conflicts of Interest

The authors declare no conflict of interest. The funder had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Candiago, S.; Remondino, F.; De Giglio, M.; Dubbini, M.; Gattelli, M. Evaluating Multispectral Images and Vegetation Indices for Precision Farming Applications from UAV Images. Remote Sens. 2015, 7, 4026–4047. [Google Scholar] [CrossRef]
  2. Pádua, L.; Matese, A.; Di Gennaro, S.F.; Morais, R.; Peres, E.; Sousa, J.J. Vineyard Classification Using OBIA on UAV-Based RGB and Multispectral Data: A Case Study in Different Wine Regions. Comput. Electron. Agric. 2022, 196, 106905. [Google Scholar] [CrossRef]
  3. Sassu, A.; Gambella, F.; Ghiani, L.; Mercenaro, L.; Caria, M.; Pazzona, A.L. Advances in Unmanned Aerial System Remote Sensing for Precision Viticulture. Sensors 2021, 21, 956. [Google Scholar] [CrossRef] [PubMed]
  4. Andújar, D.; Moreno, H.; Bengochea-Guevara, J.M.; de Castro, A.; Ribeiro, A. Aerial Imagery or On-Ground Detection? An Economic Analysis for Vineyard Crops. Comput. Electron. Agric. 2019, 157, 351–358. [Google Scholar] [CrossRef]
  5. Torres-Sánchez, J.; Mesas-Carrascosa, F.J.; Santesteban, L.G.; Jiménez-Brenes, F.M.; Oneka, O.; Villa-Llop, A.; Loidi, M.; López-Granados, F. Grape Cluster Detection Using UAV Photogrammetric Point Clouds as a Low-Cost Tool for Yield Forecasting in Vineyards. Sensors 2021, 21, 3083. [Google Scholar] [CrossRef] [PubMed]
  6. Mesas-Carrascosa, F.J.; De Castro, A.I.; Torres-Sánchez, J.; Triviño-Tarradas, P.; Jiménez-Brenes, F.M.; García-Ferrer, A.; López-Granados, F. Classification of 3D Point Clouds Using Color Vegetation Indices for Precision Viticulture and Digitizing Applications. Remote Sens. 2020, 12, 317. [Google Scholar] [CrossRef]
  7. Matese, A.; Di Gennaro, S.F. Beyond the Traditional NDVI Index as a Key Factor to Mainstream the Use of UAV in Precision Viticulture. Sci. Rep. 2021, 11, 2721. [Google Scholar] [CrossRef]
  8. De Castro, A.I.; Jimenez-Brenes, F.M.; Torres-Sanchez, J.; Pena, J.M.; Borra-Serrano, I.; Lopez-Granados, F. 3-D Characterization of Vineyards Using a Novel UAV Imagery-Based OBIA for Precision Viticulture Applications. Remote Sens. 2018, 10, 584. [Google Scholar] [CrossRef]
  9. Pádua, L.; Marques, P.; Hruška, J.; Adão, T.; Bessa, J.; Sousa, A.; Peres, E.; Morais, R.; Sousa, J.J. Vineyard Properties Extraction Combining UAS-Based RGB Imagery with Elevation Data. Int. J. Remote Sens. 2018, 39, 5377–5401. [Google Scholar] [CrossRef]
  10. Towers, P.C.; Poblete-echeverría, C. Effect of the Illumination Angle on NDVI Data Composed of Mixed Surface Values Obtained over Vertical-Shoot-Positioned Vineyards. Remote Sens. 2021, 13, 855. [Google Scholar] [CrossRef]
  11. Comba, L.; Biglia, A.; Ricauda Aimonino, D.; Tortia, C.; Mania, E.; Guidoni, S.; Gay, P. Leaf Area Index Evaluation in Vineyards Using 3D Point Clouds from UAV Imagery. Precis. Agric. 2020, 21, 881–896. [Google Scholar] [CrossRef]
  12. Di Gennaro, S.F.; Matese, A. Evaluation of Novel Precision Viticulture Tool for Canopy Biomass Estimation and Missing Plant Detection Based on 2.5D and 3D Approaches Using RGB Images Acquired by UAV Platform. Plant Methods 2020, 16, 91. [Google Scholar] [CrossRef] [PubMed]
  13. Martín-Vélez, V.; Van Leeuwen, C.H.A.; Sánchez, M.I.; Hortas, F.; Shamoun-Baranes, J.; Thaxter, C.B.; Lens, L.; Camphuysen, C.J.; Green, A.J. Spatial Patterns of Weed Dispersal by Wintering Gulls within and beyond an Agricultural Landscape. J. Ecol. 2021, 109, 1947–1958. [Google Scholar] [CrossRef]
  14. Campos, J.; García-Ruíz, F.; Gil, E. Assessment of Vineyard Canopy Characteristics from Vigour Maps Obtained Using UAV and Satellite Imagery. Sensors 2021, 21, 2363. [Google Scholar] [CrossRef]
  15. Kalisperakis, I.; Stentoumis, C.; Grammatikopoulos, L.; Karantzalos, K. Leaf Area Index Estimation in Vineyards from UAV Hyperspectral Data, 2D Image Mosaics and 3D Canopy Surface Models. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-1-W4, 299–303. [Google Scholar] [CrossRef]
  16. Mathews, A.J.; Jensen, J.L.R. Visualizing and Quantifying Vineyard Canopy LAI Using an Unmanned Aerial Vehicle (UAV) Collected High Density Structure from Motion Point Cloud. Remote Sens. 2013, 5, 2164–2183. [Google Scholar] [CrossRef]
  17. Matese, A.; Di Gennaro, S.F. Practical Applications of a Multisensor UAV Platform Based on Multispectral, Thermal and RGB High Resolution Images in Precision. Agriculture 2018, 8, 116. [Google Scholar] [CrossRef]
  18. Gracia-Romero, A.; Vergara-Díaz, O.; Thierfelder, C.; Cairns, J.E.; Kefauver, S.C.; Araus, J.L. Phenotyping Conservation Agriculture Management Effects on Ground and Aerial Remote Sensing Assessments of Maize Hybrids Performance in Zimbabwe. Remote Sens. 2018, 10, 349. [Google Scholar] [CrossRef]
  19. López-García, P.; Intrigliolo, D.S.; Moreno, M.A.; Martínez-Moreno, A.; Ortega, J.F.; Pérez-álvarez, E.P.; Ballesteros, R. Assessment of Vineyard Water Status by Multispectral and RGB Imagery Obtained from an Unmanned Aerial Vehicle. Am. J. Enol. Vitic. 2021, 72, 285–297. [Google Scholar] [CrossRef]
  20. Sepúlveda-Reyes, D.; Ingram, B.; Bardeen, M.; Zúñiga, M.; Ortega-Farías, S.; Poblete-Echeverría, C. Selecting Canopy Zones and Thresholding Approaches to Assess Grapevine Water Status by Using Aerial and Ground-Based Thermal Imaging. Remote Sens. 2016, 8, 822. [Google Scholar] [CrossRef]
  21. Bellvert, J.; Zarco-Tejada, P.J.; Girona, J.; Fereres, E. Mapping Crop Water Stress Index in a ‘Pinot-Noir’ Vineyard: Comparing Ground Measurements with Thermal Remote Sensing Imagery from an Unmanned Aerial Vehicle. Precis. Agric. 2014, 15, 361–376. [Google Scholar] [CrossRef]
  22. Albetis, J.; Jacquin, A.; Goulard, M.; Poilvé, H.; Rousseau, J.; Clenet, H.; Dedieu, G.; Duthoit, S. On the Potentiality of UAV Multispectral Imagery to Detect Flavescence Dorée and Grapevine Trunk Diseases. Remote Sens. 2018, 11, 23. [Google Scholar] [CrossRef]
  23. Bendel, N.; Kicherer, A.; Backhaus, A.; Klück, H.C.; Seiffert, U.; Fischer, M.; Voegele, R.T.; Töpfer, R. Evaluating the Suitability of Hyper- and Multispectral Imaging to Detect Foliar Symptoms of the Grapevine Trunk Disease Esca in Vineyards. Plant Methods 2020, 16, 142. [Google Scholar] [CrossRef] [PubMed]
  24. Vélez, S.; Ariza-Sentís, M.; Valente, J. Mapping the Spatial Variability of Botrytis Bunch Rot Risk in Vineyards Using UAV Multispectral Imagery. Eur. J. Agron. 2023, 142, 126691. [Google Scholar] [CrossRef]
  25. De Castro, A.I.; Peña, J.M.; Torres-Sánchez, J.; Jiménez-Brenes, F.; López-Granados, F. Mapping Cynodon Dactylon in Vineyards Using UAV Images for Site-Specific Weed Control. Adv. Anim. Biosci. 2017, 8, 267–271. [Google Scholar] [CrossRef]
  26. Jiménez-Brenes, F.M.; López-Granados, F.; Torres-Sánchez, J.; Peña, J.M.; Ramírez, P.; Castillejo-González, I.L.; de Castro, A.I. Automatic UAV-Based Detection of Cynodon Dactylon for Site-Specific Vineyard Management. PLoS ONE 2019, 14, e0218132. [Google Scholar] [CrossRef]
  27. Ballesteros, R.; Intrigliolo, D.S.; Ortega, J.F.; Ramírez-Cuesta, J.M.; Buesa, I.; Moreno, M.A. Vineyard Yield Estimation by Combining Remote Sensing, Computer Vision and Artificial Neural Network Techniques. Precis. Agric. 2020, 21, 1242–1262. [Google Scholar] [CrossRef]
  28. Di Gennaro, S.F.; Toscano, P.; Cinat, P.; Berton, A.; Matese, A. A Low-Cost and Unsupervised Image Recognition Methodology for Yield Estimation in a Vineyard. Front. Plant Sci. 2019, 10, 559. [Google Scholar] [CrossRef]
  29. López-García, P.; Ortega, J.F.; Pérez-Álvarez, E.P.; Moreno, M.A.; Ramírez, J.M.; Intrigliolo, D.S.; Ballesteros, R. Yield Estimations in a Vineyard Based on High-Resolution Spatial Imagery Acquired by a UAV. Biosyst. Eng. 2022, 224, 227–245. [Google Scholar] [CrossRef]
  30. Hamza, M.A.; Anderson, W.K. Soil Compaction in Cropping Systems: A Review of the Nature, Causes and Possible Solutions. Soil Tillage Res. 2005, 82, 121–145. [Google Scholar] [CrossRef]
  31. Pádua, L.; Marques, P.; Hruška, J.; Adão, T.; Peres, E.; Morais, R.; Sousa, J.J. Multi-Temporal Vineyard Monitoring through UAV-Based RGB Imagery. Remote Sens. 2018, 10, 1907. [Google Scholar] [CrossRef]
  32. Di Gennaro, S.F.; Toscano, P.; Gatti, M.; Poni, S.; Berton, A.; Matese, A. Spectral Comparison of UAV-Based Hyper and Multispectral Cameras for Precision Viticulture. Remote Sens. 2022, 14, 449. [Google Scholar] [CrossRef]
  33. Mielcarek, M.; Kamińska, A.; Stereńczak, K. Digital Aerial Photogrammetry (DAP) and Airborne Laser Scanning (ALS) as Sources of Information about Tree Height: Comparisons of the Accuracy of Remote Sensing Methods for Tree Height Estimation. Remote Sens. 2020, 12, 1808. [Google Scholar] [CrossRef]
  34. Lombardi, E.; Rodríguez-Puerta, F.; Santini, F.; Chambel, M.R.; Climent, J.; de Dios, V.R.; Voltas, J. UAV-LiDAR and RGB Imagery Reveal Large Intraspecific Variation in Tree-Level Morphometric Traits across Different Pine Species Evaluated in Common Gardens. Remote Sens. 2022, 14, 5904. [Google Scholar] [CrossRef]
  35. Su, B.F.; Xue, J.R.; Xie, C.Y.; Fang, Y.L.; Song, Y.Y.; Fuentes, S. Digital Surface Model Applied to Unmanned Aerial Vehicle Based Photogrammetry to Assess Potential Biotic or Abiotic Effects on Grapevine Canopies. Int. J. Agric. Biol. Eng. 2016, 9, 119–130. [Google Scholar] [CrossRef]
  36. Primicerio, J.; Caruso, G.; Comba, L.; Crisci, A.; Gay, P.; Guidoni, S.; Genesio, L.; Aimonino, D.R.; Vaccari, F.P. Individual Plant Definition and Missing Plant Characterization in Vineyards from High-Resolution UAV Imagery. Eur. J. Remote Sens. 2017, 50, 179–186. [Google Scholar] [CrossRef]
  37. Comba, L.; Gay, P.; Primicerio, J.; Ricauda Aimonino, D. Vineyard Detection from Unmanned Aerial Systems Images. Comput. Electron. Agric. 2015, 114, 78–87. [Google Scholar] [CrossRef]
  38. Pádua, L.; Adão, T.; Sousa, A.; Peres, E.; Sousa, J.J. Individual Grapevine Analysis in a Multi-Temporal Context Using UAV-Based Multi-Sensor Imagery. Remote Sens. 2020, 12, 139. [Google Scholar] [CrossRef]
  39. Jurado, J.M.; Pádua, L.; Feito, F.R.; Sousa, J.J. Automatic Grapevine Trunk Detection on UAV-Based Point Cloud. Remote Sens. 2020, 12, 3043. [Google Scholar] [CrossRef]
  40. Hajjar, C.; Ghattas, G.; Sarkis, M.K.; Chamoun, Y.G. Vine Identification and Characterization in Goblet-Trained Vineyards Using Remotely Sensed Images. Remote Sens. 2021, 13, 2992. [Google Scholar] [CrossRef]
Figure 1. Study vineyard (a) monitored in the dormant period and UAV platform (b) equipped with a DJI Zenmuse P1 RGB camera.
Figure 2. Scheme of the processing workflow used to identify vines and missing vines within the vineyard. The boxes show, in sequence, the functional image-processing steps: extraction of dense-cloud regions classified by height, generation of polygon shapefiles from the point clouds, application of the four methods and, finally, validation of the results.
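To make the first boxes of the workflow concrete, the following minimal sketch (our illustration, not the authors' code; the file name, the 0.3–1.3 m height slice and the clustering parameters are assumptions) isolates the trunk/post layer from a height-normalized dense cloud and groups it into candidate plant positions:

```python
# Slice a height-normalized dense point cloud to the trunk/post layer, then
# cluster the remaining points into candidate per-plant groups.
import numpy as np
from sklearn.cluster import DBSCAN

# x, y, z columns exported from the photogrammetric dense cloud
# (z = height above ground after DTM normalization); file name hypothetical
pts = np.loadtxt("dense_cloud_normalized.xyz")

# keep the vertical slice where trunks and posts live (assumed 0.3-1.3 m)
trunk_layer = pts[(pts[:, 2] > 0.3) & (pts[:, 2] < 1.3)]

# group points into candidate trunk/post objects by planimetric proximity
labels = DBSCAN(eps=0.10, min_samples=20).fit_predict(trunk_layer[:, :2])

# one centroid per cluster = one candidate trunk/post position
centroids = np.array([trunk_layer[labels == k, :2].mean(axis=0)
                      for k in set(labels) if k != -1])  # -1 = DBSCAN noise
print(f"{len(centroids)} candidate trunk/post positions")
```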
Figure 4. Graphical representation and workflow scheme of each method: (a) M1, (b) M2, (c) M3, (d) M4. Input data are shown in blue rectangles, the sequence of QGIS functions used in green hexagons and, finally, the output datasets representing the results of each method in red rectangles.
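The QGIS functions in the figure are applied per method; an equivalent chain can also be scripted from the QGIS Python console through the processing framework. A hedged sketch of one plausible buffer, merge and centroid chain follows; the layer names and the 0.20 m merge radius are hypothetical and not taken from the paper:

```python
# Run from the QGIS Python console: merge fragmented trunk polygons belonging
# to the same plant and reduce each merged blob to a single candidate point.
import processing

# 1. buffer the polygons so fragments of one plant touch, dissolving them
#    into a single multipart geometry
buffered = processing.run("native:buffer", {
    "INPUT": "trunk_polygons.shp",
    "DISTANCE": 0.20, "SEGMENTS": 8,        # metres; assumed merge radius
    "END_CAP_STYLE": 0, "JOIN_STYLE": 0, "MITER_LIMIT": 2,
    "DISSOLVE": True,
    "OUTPUT": "memory:",
})["OUTPUT"]

# 2. split the dissolved geometry back into one feature per merged blob
singles = processing.run("native:multiparttosingleparts", {
    "INPUT": buffered, "OUTPUT": "memory:",
})["OUTPUT"]

# 3. collapse each blob to one candidate vine/post point
processing.run("native:centroids", {
    "INPUT": singles, "ALL_PARTS": False,
    "OUTPUT": "vine_candidate_points.shp",
})
```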
Figure 5. (a) Polylines between vines, color-classified by length: 0.00–1.40 m green, 1.40–2.00 m light blue, 2.00–3.00 m blue, 3.00–4.00 m orange, 4.00–5.00 m red, 5.00–6.00 m pink. (b) Construction of the missing-vine points layer for segments ≥ 1.40 m.
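The logic behind panel (b), turning inter-vine segments of at least 1.40 m into estimated missing-vine points, can be sketched as follows. Only the 1.40 m threshold comes from the caption; the 1.00 m nominal plant spacing and all names are assumptions:

```python
# Fill inter-vine gaps >= 1.40 m with estimated missing-vine positions.
import numpy as np

GAP_THRESHOLD = 1.40   # m, from the caption: segments this long imply a gap
PLANT_SPACING = 1.00   # m, assumed nominal in-row spacing

def missing_vine_points(row_points):
    """row_points: (n, 2) array of trunk positions ordered along the row."""
    missing = []
    for a, b in zip(row_points[:-1], row_points[1:]):
        gap = np.linalg.norm(b - a)
        if gap >= GAP_THRESHOLD:
            # plants that would fit in the gap, at least one per long segment
            n = max(1, int(round(gap / PLANT_SPACING)) - 1)
            missing += [a + (b - a) * i / (n + 1) for i in range(1, n + 1)]
    return np.array(missing)

row = np.array([[0.0, 0.0], [1.0, 0.0], [3.1, 0.0], [4.1, 0.0]])
print(missing_vine_points(row))  # one estimated missing vine at ~(2.05, 0.0)
```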
Table 1. Results of the evaluation of the proposed methods for identification of posts and vines with respect to ground truth.
|  | Method | True Positive (Correct Detection) | False Positive (Incorrect Detection) | False Negative (Not Detected) | Precision (%) | Recall (%) | F1 Score (%) | OA (%) |
|---|---|---|---|---|---|---|---|---|
| Posts |  | 797 | 4 | 9 | 99.50 | 98.88 | 99.19 | 98.40 |
| Trunks | M1 | 4283 | 34 | 472 | 99.21 | 90.07 | 94.42 | 89.43 |
|  | M2 | 4462 | 118 | 293 | 97.42 | 93.84 | 95.60 | 91.57 |
|  | M3 | 4612 | 316 | 143 | 93.59 | 96.99 | 95.26 | 90.95 |
|  | M4 | 4665 | 214 | 90 | 95.61 | 98.11 | 96.84 | 93.88 |
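For readers recalculating these scores, the indicators follow the standard detection definitions; since true negatives are undefined when counting plants, overall accuracy (OA) is computed over detections only, a reading consistent with every row of Tables 1 and 2:

$$
\mathrm{Precision}=\frac{TP}{TP+FP},\qquad
\mathrm{Recall}=\frac{TP}{TP+FN},\qquad
F_{1}=\frac{2\,\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}},\qquad
\mathrm{OA}=\frac{TP}{TP+FP+FN}
$$

For the posts row, for example, Precision = 797/(797 + 4) = 99.50% and OA = 797/(797 + 4 + 9) = 98.40%, matching the table.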
Table 2. Results of the evaluation of the proposed methods for missing vines identification with respect to ground truth.
|  | Method | True Positive (Correct Detection) | False Positive (Incorrect Detection) | False Negative (Not Detected) | Precision (%) | Recall (%) | F1 Score (%) | OA (%) |
|---|---|---|---|---|---|---|---|---|
| Missing vines | M1 | 1137 | 414 | 32 | 73.31 | 97.26 | 83.60 | 71.83 |
|  | M2 | 1124 | 275 | 45 | 80.34 | 96.15 | 87.54 | 77.84 |
|  | M3 | 1061 | 124 | 108 | 89.54 | 90.76 | 90.14 | 82.06 |
|  | M4 | 1103 | 83 | 66 | 93.00 | 94.35 | 93.67 | 88.10 |
Table 3. Overall performance of the four methods, considering vine and missing-vine detection together.
|  | Method | Global Precision (%) | Global Recall (%) | Global F1 Score (%) | Global OA (%) |
|---|---|---|---|---|---|
| Vines and missing vines | M1 | 92.37 | 91.49 | 91.93 | 85.06 |
|  | M2 | 93.43 | 94.29 | 93.86 | 88.43 |
|  | M3 | 92.80 | 95.76 | 94.26 | 89.14 |
|  | M4 | 95.10 | 97.37 | 96.22 | 92.72 |
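These global indicators can be reproduced by pooling the TP/FP/FN counts of Tables 1 and 2 before applying the formulas above. The sketch below is our reconstruction, verified against the published values, not code from the paper; it shows the calculation for M4.

```python
# Reproduce the Table 3 "global" scores by pooling counts from Tables 1 and 2.
def metrics(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    oa = tp / (tp + fp + fn)  # no true negatives in a plant-counting task
    return precision, recall, f1, oa

# M4: trunk counts (Table 1) pooled with missing-vine counts (Table 2)
tp, fp, fn = 4665 + 1103, 214 + 83, 90 + 66
print([f"{100 * m:.2f}%" for m in metrics(tp, fp, fn)])
# -> ['95.10%', '97.37%', '96.22%', '92.72%'], matching the M4 row above
```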
Table 4. Detailed evaluation of M4’s performance in assessing vines and missing vines at the single-row level with respect to ground truth observations.
| Id Row | Vines: Ground Truth | Vines: M4 | Vines: Delta | Missing Vines: Ground Truth | Missing Vines: M4 | Missing Vines: Delta | Relative Error: Vines | Relative Error: Missing Vines |
|---|---|---|---|---|---|---|---|---|
| 1 | 22 | 23 | −1 | 2 | 1 | 1 | 4.55% | 4.17% |
| 2 | 26 | 26 | 0 | 5 | 7 | −2 | 0.00% | 6.45% |
| 3 | 32 | 30 | 2 | 5 | 7 | −2 | 6.25% | 5.41% |
| 4 | 37 | 38 | −1 | 6 | 6 | 0 | 2.70% | 0.00% |
| 5 | 42 | 41 | 1 | 7 | 8 | −1 | 2.38% | 2.04% |
| 6 | 42 | 41 | 1 | 12 | 16 | −4 | 2.38% | 7.41% |
| 7 | 51 | 51 | 0 | 8 | 8 | 0 | 0.00% | 0.00% |
| 8 | 107 | 105 | 2 | 20 | 22 | −2 | 1.87% | 1.57% |
| 9 | 109 | 111 | −2 | 23 | 22 | 1 | 1.83% | 0.76% |
| 10 | 102 | 101 | 1 | 32 | 35 | −3 | 0.98% | 2.24% |
| 11 | 112 | 112 | 0 | 29 | 32 | −3 | 0.00% | 2.13% |
| 12 | 122 | 122 | 0 | 24 | 25 | −1 | 0.00% | 0.68% |
| 13 | 130 | 128 | 2 | 22 | 25 | −3 | 1.54% | 1.97% |
| 14 | 139 | 136 | 3 | 18 | 22 | −4 | 2.16% | 2.55% |
| 15 | 140 | 145 | −5 | 18 | 20 | −2 | 3.57% | 1.27% |
| 16 | 132 | 131 | 1 | 25 | 28 | −3 | 0.76% | 1.91% |
| 17 | 141 | 142 | −1 | 23 | 22 | 1 | 0.71% | 0.61% |
| 18 | 152 | 150 | 2 | 18 | 21 | −3 | 1.32% | 1.76% |
| 19 | 153 | 153 | 0 | 14 | 15 | −1 | 0.00% | 0.60% |
| 20 | 155 | 160 | −5 | 19 | 19 | 0 | 3.23% | 0.00% |
| 21 | 166 | 170 | −4 | 9 | 10 | −1 | 2.41% | 0.57% |
| 22 | 156 | 159 | −3 | 20 | 20 | 0 | 1.92% | 0.00% |
| 23 | 156 | 159 | −3 | 25 | 27 | −2 | 1.92% | 1.10% |
| 24 | 172 | 171 | 1 | 11 | 16 | −5 | 0.58% | 2.73% |
| 25 | 132 | 138 | −6 | 49 | 44 | 5 | 4.55% | 2.76% |
| 26 | 135 | 139 | −4 | 43 | 45 | −2 | 2.96% | 1.12% |
| 27 | 112 | 121 | −9 | 61 | 56 | 5 | 8.04% | 2.89% |
| 28 | 137 | 140 | −3 | 35 | 41 | −6 | 2.19% | 3.49% |
| 29 | 121 | 131 | −10 | 49 | 44 | 5 | 8.26% | 2.94% |
| 30 | 141 | 148 | −7 | 26 | 25 | 1 | 4.96% | 0.60% |
| 31 | 122 | 128 | −6 | 39 | 43 | −4 | 4.92% | 2.48% |
| 32 | 114 | 121 | −7 | 47 | 45 | 2 | 6.14% | 1.24% |
| 33 | 109 | 115 | −6 | 47 | 43 | 4 | 5.50% | 2.56% |
| 34 | 105 | 109 | −4 | 48 | 47 | 1 | 3.81% | 0.65% |
| 35 | 108 | 108 | 0 | 44 | 45 | −1 | 0.00% | 0.66% |
| 36 | 115 | 121 | −6 | 34 | 35 | −1 | 5.22% | 0.67% |
| 37 | 101 | 110 | −9 | 47 | 46 | 1 | 8.91% | 0.68% |
| 38 | 92 | 96 | −4 | 51 | 48 | 3 | 4.35% | 2.10% |
| 39 | 115 | 119 | −4 | 26 | 25 | 1 | 3.48% | 0.71% |
| 40 | 101 | 110 | −9 | 35 | 30 | 5 | 8.91% | 3.68% |
| 41 | 102 | 114 | −12 | 30 | 28 | 2 | 11.76% | 1.52% |
| 42 | 94 | 97 | −3 | 35 | 34 | 1 | 3.19% | 0.78% |
| 43 | 103 | 109 | −6 | 28 | 28 | 0 | 5.83% | 0.00% |
| Total | 4755 | 4879 | −124 | 1169 | 1186 | −17 | 3.40% | 1.85% |
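The two error columns appear to be computed per row as the vine-count error relative to the true number of vines, and the missing-vine error relative to the total number of plant positions (vines plus missing vines); the Total line then averages the per-row percentages (3.40% and 1.85%) rather than recomputing them from the summed counts. A minimal sketch of this reading, with hypothetical names, verified against the tabulated rows:

```python
# Relative errors of Table 4 under our reading of the columns.
def row_errors(gt_vines, m4_vines, gt_missing, m4_missing):
    positions = gt_vines + gt_missing            # plant positions in the row
    re_vines = abs(gt_vines - m4_vines) / gt_vines
    re_missing = abs(gt_missing - m4_missing) / positions
    return re_vines, re_missing

# row 1: 22 vines (23 detected), 2 missing vines (1 detected)
print([f"{100 * e:.2f}%" for e in row_errors(22, 23, 2, 1)])
# -> ['4.55%', '4.17%'], matching the first row of the table
```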
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
