Article

Automatic Filtering and Classification of Low-Density Airborne Laser Scanner Clouds in Shrubland Environments

1 Institute of Methodologies for Environmental Analysis-National Research Council of Italy (IMAA-CNR), 85050 Tito Scalo, PZ, Italy
2 GEOCART S.p.A., Viale del Basento, 120, 85100 Potenza, PZ, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(20), 5127; https://doi.org/10.3390/rs14205127
Submission received: 18 July 2022 / Revised: 10 September 2022 / Accepted: 10 October 2022 / Published: 13 October 2022

Abstract

The monitoring of shrublands plays a fundamental role, from an ecological and climatic point of view, in biodiversity conservation, carbon stock estimates, and climate-change impact assessments. Laser scanning systems have proven to have a high capability in mapping non-herbaceous vegetation by classifying high-density point clouds. On the other hand, the classification of low-density airborne laser scanner (ALS) clouds is largely affected by confusion with rock spikes and boulders having similar heights and shapes. To identify rocks and improve the accuracy of vegetation classes, we implemented an effective and time-saving procedure based on the integration of geometric features with laser intensity segmented by K-means clustering (GIK procedure). The classification accuracy was evaluated, taking into account the data unevenness (small size of the rock class vs. the vegetation and terrain classes), by estimating the Balanced Accuracy (BA range 89.15–90.37); a comparison with a standard geometry-based procedure showed an increase in accuracy of about 27%. The classical overall accuracy is generally very high for all the classifications: the average is 92.7% for the geometry-based procedure and 94.9% for GIK. At class level, the precision (user's accuracy) for vegetation classes is very high (on average, 92.6% for shrubs and 99% for bushes), with a relative increase for shrubs of up to 20% (>10% when rocks occupy more than 8% of the scene). Less pronounced differences were found for bushes (maximum 4.13%). The precision of the rock class is quite acceptable (about 64%), compared to the complete absence of detection by the geometric procedure. We also evaluated how point cloud density affects the proposed procedure and found that the increase in shrub precision is preserved even for ALS clouds with very low point density (<1.5 pts/m²). The simplicity of the approach also makes it implementable in an operational context by non-experts in LiDAR data classification, and it is suitable for the great wealth of large-scale acquisitions carried out in the past using monowavelength NIR laser scanners with a small-footprint configuration.

Graphical Abstract

1. Introduction

Airborne laser scanning (ALS) systems, also known as aerial light detection and ranging scanners (hereafter LiDAR), have undergone an extraordinary development in the last 20 years [1,2,3]. Their capabilities in 3D mapping, based on active remote sensing technology, have enabled LiDAR applications in diverse contexts, such as basic digital elevation model extraction and morphological analysis [4]; erosion risk estimation [5]; floods [6,7] and landslides [8]; archeology [9,10] and urban applications [11]; and volume estimation [12,13] and tree species identification [1] in forestry. In these applications, three-dimensional point clouds acquired by LiDAR are preprocessed (by denoising, attenuation correction, geo-rectification, and filtering) and then analyzed through classification procedures to characterize the observed objects.
Point cloud classifications traditionally focus on features related to height and shape (e.g., maximum and minimum height and difference between heights) to label each point according to target features, such as ground, vegetation, road, building, or shadows. A large body of literature reports point cloud classifications based on different supervised approaches, such as maximum likelihood (ML) [14], support vector machines (SVM) [15,16], random forests (RF) [17], Markov random fields (MRF) [18], and conditional random fields (CRF) [19]. More recently, deep learning methods, such as convolutional neural networks (CNN), have been widely implemented [18]. To obtain more accurate land cover information, many papers analyzed the combined use of additional parameters. For example, Shaker et al. [20] combined normal heights, intensity texture, surface slopes, and Principal Component Analysis (PCA) with the original LiDAR data to study the impact of these additional layers on the classification accuracy. Antonarakis et al. [21] used airborne LiDAR intensity and elevation data to identify and classify different forest types and forest ages. Garcia et al. [22] used height and intensity data from a discrete-return LiDAR system to estimate the carbon content in the vegetation biomass fractions of a Mediterranean forest. Farid et al. [23] performed a supervised classification of altitude and intensity images using the local maximum algorithm to differentiate age classes of cottonwood trees in a riparian area using small-footprint airborne LiDAR data. Mesas et al. [24] combined LiDAR intensity data with aerial camera data to discriminate agricultural land uses. Hellesen and Matikainen [25] combined a color-infrared orthoimage (NIR–red–green) and elevation data for the classification of shrubs and trees. In general, results showed that the integration of spectral information (aerial or satellite) largely improves classification performance. Studies based solely on LiDAR data showed that a combination of geometric and radiometric features yields better results than classifications using structural features alone or intensity features alone [1,26].
If vegetation is considered as a single class or at most is divided into high (trees and bushes) and low (herbaceous) vegetation, the accuracy of ALS-based land cover classifications is generally very high (>85%) at the scene level [27]. When trying to separate shrubs and bushes from rock spikes and round boulders in the point cloud, the accuracy of the class is significantly reduced [28], as such classes are hard to distinguish [29].
The monitoring of shrublands plays a fundamental role in climatic and ecological studies. Shrubs are small- to medium-sized perennial woody plants differing from trees in their stem system (multiple stems vs. the few dominant stems of trees) and height (generally below 6–8 m). Shrub vegetation is highly adapted to different environments with nutrient-poor soil conditions; thus, it is greatly relevant for the conservation of biodiversity [30,31,32] and ecosystem services [33,34]. Recent climate changes and the abandonment of cultivated areas, which favor shrub encroachment processes, threaten herbaceous vegetation [35,36,37]. The assessment of shrub extent and characteristics supports the evaluation of climate impact [32,38,39] and the correct quantification of vegetation carbon stock estimates [40,41].
In steep, variable terrain, the presence of shrubs can even compromise ground filtering [3]. In flat rocky areas, Karila et al. [42] obtained a low class accuracy where low vegetation alternates with rocks. The accuracy of such classes further decreases for late spring acquisitions (due to the phenological peak) [42] and in large-area mapping (due to point density and class heterogeneity) [43], even when combined with optical multispectral data (such as Landsat data) [44]. Moreover, Fernández-Guisuraga et al. [45] found that LiDAR metrics derived from the height distribution of the returns tend to underestimate mean shrub height and shrub canopy cover in Atlantic and Mediterranean environments. Similar results were also found in steppe environments, where the separation of low vegetation and ground is particularly critical [46].
Most LiDAR systems use a single-wavelength scanner, generally in the green, near-infrared, or shortwave infrared. More recently, the introduction of multispectral airborne laser scanner (mALS) systems that simultaneously acquire 3D point clouds in different channels (e.g., Titan by Teledyne Optech, the first commercial mALS released in 2014, which acquires at the green 532 nm, NIR 1064 nm, and SWIR 1550 nm channels) has largely improved object classification, particularly for tree-species identification in rural [47,48] and urban environments [28]. Among the different types of available monowavelength LiDARs, the full-waveform (FW) system provides the best accuracy in vegetation characterization [49]. Waveform features generally outperform discrete-return LiDAR and also single-photon LiDAR (SPL), despite its high point density [1,50,51]. Very detailed results in mapping low vegetation were obtained by integrating FW-LiDAR with a synchronized hyperspectral acquisition [52], but such an integration can be quite expensive for large areas and periodic mapping. Many misclassifications can be avoided with multi-season acquisitions [51], as changes in vegetation shape and intensity allow for the correct separation of bushes and shrubs from rocky areas. Again, repeated surveys over the same sites show cost drawbacks for aerial mapping over large areas.
Currently, an automatic separation between low vegetation and rock classes is feasible in high-density point clouds by exploiting different shape-based features and mALS, or by integrating different data sources [53,54]. In low-density point clouds, an accurate separation can be reached only by implementing time-consuming manual filtering/segmentation [55,56,57]. Therefore, the development of a cost-effective procedure capable of separating low non-herbaceous vegetation from outcropping rocks with good precision using single-channel ALS is still an open challenge [30,46,58]. Such a procedure would be of particular relevance for monitoring shrubland environments and for fully exploiting the great wealth of large-scale acquisitions carried out in the past using monowavelength laser scanners.
In this study, we evaluated the classification performance of single-date and low-density ALS point clouds in separating shrubs and rocky areas by integrating geometric features and laser intensity via automatic routines. Raw ALS intensity data were used as an additional information source to integrate automatic filtering and object classification based on geometric parameters, so as to propose a procedure that is easily implementable by non-experts in LiDAR data classification.

2. Materials and Methods

2.1. Data

LiDAR data were collected by Geocart SpA using a RIEGL LMS-Q560 full-waveform scanner onboard a helicopter. Its laser wavelength is 1550 nm, with repetition rates up to 100 kHz. The near-infrared (NIR) is, in general, a suitable range for analyzing the spectral separability of objects because different targets reflect different amounts of energy. At 1550 nm, there is a notable difference in reflected energy between vegetation types, plant health conditions, and the water content of the leaves; whatever the condition of the vegetation, it is well separable from rocks (see Figure S1).
In ALS systems, the intensity represents the amount of energy reflected (backscattered) by objects. It is defined as the ratio between the light reflected by the target (backscattered) and that emitted by the laser pulse [59]. ALS intensity values are generally given as digital numbers converted from the amount of power backscattered from objects (for RIEGL LMS-Q560 reference values, see [60]). In this study, intensity information is recorded at 2 bytes per point and stored as a 16-bit integer. The laser data are archived in the LAS 1.1 format (American Society for Photogrammetry and Remote Sensing (ASPRS) LiDAR data exchange format). From the LAS files, height and intensity information are used for the implementation of the proposed procedure.
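Purely as an illustration (the authors processed the data in TerraScan, not in Python), the sketch below shows how the height and 16-bit intensity attributes of a LAS file can be accessed with the open-source laspy library; the file name is hypothetical.

```python
# Illustrative sketch (not the TerraScan workflow used in the paper): read the
# coordinates and 16-bit intensity attribute from a LAS file with laspy.
import laspy
import numpy as np

las = laspy.read("area1.las")                             # hypothetical file name

xyz = np.column_stack((las.x, las.y, las.z))              # geolocated coordinates (m)
intensity = np.asarray(las.intensity, dtype=np.uint16)    # stored as 16-bit integers

print(f"{len(xyz)} points, intensity range {intensity.min()}-{intensity.max()}")
```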
The ALS data were acquired over a hilly area (about 50 ha) located in the southeastern part of the Sardinia region, one of the largest Italian islands, with a typical rural Mediterranean environment: a mixture of maquis with shrubs, rocks, and stones (see Figure 1). Within the acquired surface, we identified four areas with different dimensions and morphological characteristics: Area 1 includes a small hill with large spikes of rock and a very steep slope; Area 2 is characterized by a channel and small scattered spikes of rock; Area 3 has a gentle slope and a few spikes of various sizes; Area 4 shows a mound with medium-sized and scattered spikes of rock.
Details on ALS acquisition for each area are shown in Table 1.
For this acquisition, the Geocart ALS system (http://www.geocartspa.it/english/index.php/technologies/sensors-and-instruments.html, last accessed on 8 July 2022) was also supplemented by a digital camera, a 39 MP Digicam H39, which allows orthophotos and ALS data to be collected at the same time. Both orthophotos and ALS data are therefore characterized by the same flight parameters. The acquired orthophotos (composed of RGB bands) have a spatial resolution of 0.20 m and were used to generate the reference classification for the validation procedure (see Section 2.3).

2.2. Methods

To improve the classification of shrubs and rocky areas in an operative procedure, we propose the integration of the standard ALS classification based on geometric features, with intensity data segmentation via standard automatic routines. In particular, we integrated geometric features with laser intensity segmented by K-means clustering (GIK procedure).
The proposed procedure is based on height and intensity LiDAR data, without including waveform-derived features. Thus, it can also be easily implemented for discrete-return systems and by non-experts in LiDAR classification. The performance of the proposed procedure was evaluated by comparing the results with those obtained by the standard operational geometric filtering and classification. A manually labeled, detailed point cloud was adopted as the reference classification.
ALS points were processed using the commercial software Terrasolid v.20.001, which includes TerraScan, TerraPhoto, and TerraModel [61]. It is written in the MicroStation Development Language and runs within Bentley MicroStation. Although many open-source tools for ALS processing are available (e.g., LAStools, FUSION, Whitebox, and CloudCompare), they are mainly used and implemented by academic communities because of limitations in manageable file size or user interface and the reduced possibility of integrating layers from different sources (e.g., orthophotos and satellite images) [62]. Among commercial software, TerraScan still represents the most complete and widely used software in operational environments for laser point processing [63,64].
ALS points were classified automatically by using the routine based on geometric parameters (Section 2.2.1) and then refined with the routine based on intensity values (Section 2.2.2). Intensity values were obtained by analyzing the raster intensity data at a wavelength of 1550 nm (Section 2.2.2). The accuracy was evaluated by comparison with a reference classification and the estimation of indices from confusion matrices (Section 2.3). Finally, the differences in accuracy were evaluated as a function of the point cloud density (Section 2.4). The workflow of our analyses is synthesized in Figure 2.

2.2.1. Filtering and Standard Geometric Classification

The data filtering process starts with the identification of anomalous points (i.e., low, isolated, and aerial points), which are assigned to the category of unclassified cloud points. Subsequently, the ground surface is defined using the ground classification routine [61], based on the method presented by Axelsson [65]. The process starts with a sparse triangulated irregular network (TIN) of ground points derived from neighborhood minima, which is then iteratively densified according to geometric threshold values (i.e., iteration angle, iteration distance, terrain angle, and max building size) that prescribe possible deviations from the average topographic surface and build a triangulated model. After the identification of the ground, points located below the ground surface are further filtered out.
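As an illustration of the progressive densification idea only (not the TerraScan implementation, whose internals are more sophisticated), the strongly simplified sketch below seeds a TIN with per-cell minima and iteratively accepts points whose vertical distance and angle to the containing facet fall below thresholds; all parameter values are hypothetical.

```python
# Strongly simplified sketch of Axelsson-style progressive TIN densification.
import numpy as np
from scipy.spatial import Delaunay

def progressive_tin_ground(xyz, cell=20.0, max_dist=1.0, max_angle_deg=6.0, max_iter=10):
    """Return a boolean mask of points accepted as ground (illustrative only)."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    # Seed the TIN with the lowest return of each coarse grid cell ("max building size").
    ix = ((x - x.min()) // cell).astype(int)
    iy = ((y - y.min()) // cell).astype(int)
    lowest = {}
    for i, key in enumerate(zip(ix, iy)):
        if key not in lowest or z[i] < z[lowest[key]]:
            lowest[key] = i
    ground = np.zeros(len(xyz), dtype=bool)
    ground[list(lowest.values())] = True

    # Iterative densification: accept candidates lying close to (and at a small
    # angle from) the TIN facet that contains them.
    for _ in range(max_iter):
        g_idx = np.where(ground)[0]
        tin = Delaunay(xyz[g_idx][:, :2])
        candidates = np.where(~ground)[0]
        facets = tin.find_simplex(xyz[candidates][:, :2])
        added = 0
        for p, f in zip(candidates, facets):
            if f == -1:                                   # outside the current TIN
                continue
            a, b, c = xyz[g_idx[tin.simplices[f]]]
            n = np.cross(b - a, c - a)                    # facet normal
            if abs(n[2]) < 1e-9:
                continue
            z_facet = a[2] - (n[0] * (x[p] - a[0]) + n[1] * (y[p] - a[1])) / n[2]
            dist = z[p] - z_facet                         # vertical offset from the facet
            slant = np.linalg.norm(xyz[p] - np.vstack((a, b, c)), axis=1).min()
            angle = np.degrees(np.arcsin(min(1.0, abs(dist) / max(slant, 1e-9))))
            if 0.0 <= dist <= max_dist and angle <= max_angle_deg:
                ground[p] = True
                added += 1
        if added == 0:                                    # the TIN has stabilized
            break
    return ground
```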
In a standard operational flow, if cars, walls, buildings, or similar features are not present above the ground (as in this study), the points not labeled as unclassified are generally considered as vegetation and are classified according to their heights. From a botanical point of view, the classification of shrub vegetation by height considers as low shrubs (also called dwarf shrubs) the vegetation that is less than 2 m high, such as rosemary (Salvia rosmarinus), myrtle (Myrtus communis), and gorse (Spartium junceum), and as tall shrubs the vegetation higher than 2 m, including strawberry tree (Arbutus unedo), lentisk (Pistacia lentiscus), juniper (Juniperus communis), and evergreen oaks such as holm (Quercus ilex) and cork oak (Quercus suber), which are generally around 6 to 8 m tall [66,67]. Thus, for our study areas, the following height-above-ground thresholds were applied to the vegetation:
- low (<0.25 m), for herbaceous vegetation;
- medium (0.25 m to 2 m), for low shrubs;
- high (>2 m), for tall shrubs.
Such height filtering and classification steps represent the standard geometric classification procedure. Once the shrub classes are identified via these standard routines, they are refined by the GIK classification to separate rocks from vegetation.
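A minimal sketch of this height-threshold step is given below, under the assumption that the height above ground has already been computed (e.g., point elevation minus the TIN elevation at the same planimetric position); the class codes are arbitrary and simply chosen to mirror the LAS low/medium/high vegetation codes.

```python
# Minimal sketch of the standard geometric classification step: label non-ground
# points by height above ground using the thresholds listed above.
import numpy as np

LOW_VEG, LOW_SHRUB, TALL_SHRUB = 3, 4, 5   # hypothetical class codes (LAS-like)

def classify_by_height(height_above_ground):
    """Return one class code per non-ground point."""
    h = np.asarray(height_above_ground)
    labels = np.full(h.shape, LOW_VEG, dtype=np.uint8)   # < 0.25 m: herbaceous
    labels[(h >= 0.25) & (h <= 2.0)] = LOW_SHRUB         # 0.25-2 m: low shrubs
    labels[h > 2.0] = TALL_SHRUB                         # > 2 m: tall shrubs
    return labels
```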

2.2.2. Intensity Segmentation and GIK Classification

The TerraScan intensity routine classifies points within a given range of intensity values. In order to identify the intensity values of the different features within the study areas and to extract ranges corresponding to vegetation and rock, a raster image of intensity is required.
In this study, a raster intensity image with a spatial resolution of 0.5 m was produced; such a resolution ensures that more than 90% of the pixels contain a single laser point (i.e., fewer than 10% of the pixels contain multiple points), and such a pixel size also allows a good preservation of the details of the elements in the investigated landscape. To avoid the contribution of pixels with no LiDAR data in the clustering process, they are flagged as NoData (−9999). Since rock has higher intensity values than leaves, the maximum intensity value was selected as the rasterization rule in order to separate rock spikes from vegetation.
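A minimal numpy sketch of this rasterization step (an assumption about implementation, not the routine actually used): points are binned into 0.5 m pixels, the per-pixel maximum intensity is kept, and empty pixels receive the NoData value.

```python
# Sketch of the 0.5 m maximum-intensity rasterization described above.
import numpy as np

def max_intensity_raster(xyz, intensity, pixel=0.5, nodata=-9999):
    """Rasterize the per-pixel maximum intensity; empty pixels keep NoData."""
    col = ((xyz[:, 0] - xyz[:, 0].min()) // pixel).astype(int)
    row = ((xyz[:, 1].max() - xyz[:, 1]) // pixel).astype(int)   # north-up orientation
    raster = np.full((row.max() + 1, col.max() + 1), nodata, dtype=float)
    np.maximum.at(raster, (row, col), intensity.astype(float))   # keep the max per pixel
    return raster
```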
To automatically identify the intensity ranges of the point cloud, the intensity image was segmented by applying K-means unsupervised classification. K-means clustering can be well adapted to LiDAR data processing [68,69,70]. The algorithm initializes K classes around centroids (or class means) evenly distributed in the data space; then, pixels are iteratively assigned to a cluster according to their similarity, and new centroids are calculated by using a minimum distance technique. Each iteration therefore recalculates the class means and reclassifies the pixels with respect to the new centroids. All pixels are assigned to the nearest centroids until a standard deviation or distance threshold is met (i.e., the centroids have stabilized and there is no change in their values). Alternatively, the clustering terminates if the defined number of iterations has been reached, but, in this case, some pixels may remain unclassified. Consequently, it is important to verify the convergence of the algorithm and whether the number of iterations needs to be increased.
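As a compact illustration of the clustering steps just described (evenly spread initial centroids, iterative nearest-centroid assignment, and a stopping test on centroid movement or iteration count), a minimal one-dimensional K-means could look as follows; it is a sketch, not the routine actually used.

```python
# Minimal 1-D K-means sketch mirroring the steps described above.
import numpy as np

def kmeans_1d(values, k=4, max_iter=100, tol=1e-3):
    """Cluster the valid intensity values and return labels and centroids."""
    v = np.asarray(values, dtype=float)
    centroids = np.linspace(v.min(), v.max(), k)          # initial class means
    for _ in range(max_iter):
        labels = np.argmin(np.abs(v[:, None] - centroids[None, :]), axis=1)
        new_centroids = np.array([v[labels == j].mean() if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        moved = np.max(np.abs(new_centroids - centroids))
        centroids = new_centroids
        if moved < tol:                                    # centroids have stabilized
            break
    labels = np.argmin(np.abs(v[:, None] - centroids[None, :]), axis=1)
    return labels, centroids
```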
Once the intensity clusters are identified, the corresponding ranges of intensity values are assigned to vegetation (low intensity) and rock (high intensity). Such ranges are used in the TerraScan intensity routine to refine the vegetation classes and obtain the final classification based on geometric and intensity features (GIK).
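Under the same assumptions as the previous sketches, the snippet below illustrates how cluster labels of the valid raster pixels could be turned into per-cluster intensity ranges (taking the highest-mean cluster as rock) and how shrub-labelled points falling within the rock range would be flagged for reclassification; in the actual workflow this refinement is performed by the TerraScan intensity routine.

```python
# Sketch: derive class intensity ranges from the clustered raster and refine
# the shrub points accordingly (function names are hypothetical).
import numpy as np

def cluster_ranges(raster, labels, nodata=-9999):
    """labels: one cluster label per valid (non-NoData) pixel, e.g., from kmeans_1d."""
    vals = raster[raster != nodata]
    ranges = {k: (vals[labels == k].min(), vals[labels == k].max())
              for k in np.unique(labels)}
    rock_cluster = max(ranges, key=lambda k: vals[labels == k].mean())  # highest mean = rock
    return ranges, rock_cluster

def rock_mask(point_intensity, shrub_mask, rock_range):
    """Flag shrub-classified points whose intensity falls in the rock range."""
    lo, hi = rock_range
    return shrub_mask & (point_intensity >= lo) & (point_intensity <= hi)
```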

2.3. Validation Procedure

To quantify the performance of the proposed method, a reference point cloud was generated using manual labeling. This processing, performed by a trained expert, allowed us to distinguish rocks, stones, and masonry from vegetation. The objects in the scene were identified in two steps: (i) a visual inspection of the corresponding orthophoto; and (ii) the assignment of orthophoto RGB attributes to the ALS points. In addition, for ambiguous cases and shadowed targets in the aerial photos, the manual classification was also supported by multitemporal Google imagery. Manual labeling was implemented on the vegetation classes (low and tall shrub) previously obtained by the standard geometric procedure (see Figure 2).
To evaluate the degree of correctness of the proposed method, the reference and automatically processed point clouds were compared, obtaining the corresponding confusion matrix (also known as error matrix). From the matrix, with reference data in the columns and predicted classes in the rows, we estimated the following accuracy metrics [71,72,73]: overall accuracy (OA), producer's accuracy (PA), user's accuracy (UA), and balanced accuracy (BA).
\[ \mathrm{OA} = \frac{1}{N}\sum_{i=1}^{r} n_{ii} \]
\[ \mathrm{PA} = \frac{n_{ii}}{n_{i,\mathrm{col}}} \]
\[ \mathrm{UA} = \frac{n_{ii}}{n_{i,\mathrm{row}}} \]
\[ \mathrm{BA} = \frac{1}{C_{\mathrm{ref}}}\sum_{i=1}^{r}\frac{1}{2}\left(\frac{n_{ii}}{n_{i,\mathrm{row}}} + \frac{\sum_{j=1}^{r} n_{jj} - n_{ii}}{\sum_{j=1}^{r} n_{j,\mathrm{row}} - n_{i,\mathrm{row}}}\right) \]
where n_ii is the number of points correctly classified in a class; N is the total number of observations in the confusion matrix (the number of points in our case); r is the number of rows; n_i,col and n_i,row are the column and row totals, respectively; and C_ref is the number of classes in the reference classification. To simplify the reading of the formulas, the scheme of the general matrix is reported in the Supplementary Material (Figure S2).
The overall accuracy (OA) represents the most widely adopted metric for evaluating thematic maps in remote sensing applications. It incorporates the major diagonal (true positives) and gives the percentage of pixels correctly allocated. Producer's (PA) and user's (UA) accuracies are class metrics and account for the omission and commission errors, respectively. PA (also referred to as the true positive rate, recall, or sensitivity) shows what percentage of a category of the reference map is correctly classified and thus provides a measure of the points omitted from their reference class (omission error OE (%) = 100 − PA (%)). Similarly, from UA (also referred to as the positive predictive value or precision), commission errors can be estimated (CE (%) = 100 − UA (%)); they represent the percentage of points that do not truly belong to the reference class but are committed to the predicted class from other reference classes.
The balanced accuracy (BA) overcomes the drawbacks of the overall accuracy in the presence of unbalanced distributions [74] (i.e., when the class sizes largely differ), as in our study, where the rock class is much smaller than the vegetation and terrain classes (see Figure 1). On an imbalanced dataset, the overall accuracy gives a misleading idea of the generalization performance of classifiers, leading to false conclusions about the significance with which an algorithm has performed better than chance [73]. With BA, the performance on smaller classes is not shaded by the proportional influence of the large classes. The overall and balanced accuracy coincide for a fully balanced dataset (evenness in class distribution). A simplified scheme of the balanced accuracy is shown in the Supplementary Material (Figure S3).
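A small sketch of these metrics computed from a confusion matrix laid out as in the text (reference classes in columns, predicted classes in rows); the BA term follows the multiclass formula reported above, and the example matrix at the end is purely hypothetical.

```python
# Sketch of OA, PA, UA, and multiclass BA from a confusion matrix
# (reference classes in columns, predicted classes in rows).
import numpy as np

def accuracy_metrics(cm):
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    diag = np.diag(cm)
    row_tot = cm.sum(axis=1)            # predicted-class totals (assumed non-zero)
    col_tot = cm.sum(axis=0)            # reference-class totals (assumed non-zero)
    oa = diag.sum() / n
    pa = diag / col_tot                 # producer's accuracy (recall), per class
    ua = diag / row_tot                 # user's accuracy (precision), per class
    c_ref = np.count_nonzero(col_tot)   # classes present in the reference
    ba = np.sum(0.5 * (diag / row_tot + (diag.sum() - diag) / (n - row_tot))) / c_ref
    return oa, pa, ua, ba

# Hypothetical 3-class example (terrain, low shrub, rock):
cm = [[950, 20, 5],
      [30, 400, 25],
      [2, 15, 40]]
print(accuracy_metrics(cm))
```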
The quantitative measures of classification performance were estimated in each of the four selected areas (yellow frames shown in Figure 1).

2.4. Evaluation of the Effect of Point Density on the Procedure Performance

As our procedure is primarily devoted to improving rock and shrub classifications in ALS acquisitions with low point density, we also evaluated how the use of segmented K-means intensity is affected by the sampling of laser points. Therefore, we evaluated how the accuracy of these classes changes as a function of the point cloud density.
For such a purpose, starting from the original LiDAR clouds, we performed sub-sampling of point density and re-elaborated the confusion matrices for each new point cloud. Then, the accuracy metrics listed in the previous section were recalculated for both the standard geometric procedure and the method based on the integration of segmented intensity.
Such an evaluation was implemented in the open-source software CloudCompare (www.cloudcompare.org, last accessed on 7 July 2022) by using the space-based subsampling routine (range of minimum point distance: 0.5–1.5 m).
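For illustration only, a rough approximation of such space-based subsampling (not CloudCompare's implementation) can be obtained by greedily keeping points that are at least the minimum distance away from any previously kept point, using a spatial hash to limit the neighbor search:

```python
# Rough approximation of space-based subsampling with a minimum point distance.
import numpy as np

def space_subsample(xyz, d_min=1.0):
    """Greedily keep a subset of points whose pairwise 3D distance is >= d_min."""
    keys = np.floor(xyz / d_min).astype(int)     # spatial hash with cell size d_min
    grid = {}                                    # cell -> indices of kept points
    kept = []
    for i, key in enumerate(map(tuple, keys)):
        neighbours = []
        for dx in (-1, 0, 1):                    # check the 27 surrounding cells
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    neighbours.extend(grid.get((key[0] + dx, key[1] + dy, key[2] + dz), ()))
        if all(np.linalg.norm(xyz[i] - xyz[j]) >= d_min for j in neighbours):
            kept.append(i)
            grid.setdefault(key, []).append(i)
    return np.asarray(kept)
```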

3. Results

The point cloud classifications, overlaid on the corresponding orthophotos, are shown in Figure 3, where each row corresponds to a study area and the columns report: (A) the corresponding orthophoto tile; (B) the classification output of the standard procedure based on geometric features (Geometric); (C) the classification obtained by integrating the standard geometric procedure with the laser intensity segmented by K-means (GIK); and (D) the classification resulting from the standard procedure based on geometric characteristics and the manual separation of rocky areas from vegetation (Reference). In the orthophotos, the typical Mediterranean shrubland (maquis) with patchy vegetation alternating with rock outcrops is clearly identifiable, as are the different morphologies that characterize the four areas.
The classification output of the standard procedure based on geometric features (column B in Figure 3) does not include the rock class, so only the high and medium vegetation classes are identified, as the algorithm is designed to do (see Section 2.2.1). Instead, the combination with the intensity information allowed for the separation of rock spikes from the low and tall shrubs of the Mediterranean maquis (column C in Figure 3).
The vegetation of shrublands, such as the maquis of the study areas, is characterized by a height and shape very similar to those of rock spikes and boulders. Therefore, once the rocky elements have been excluded from the ground classification, the geometry-based procedure includes them in the vegetation classes (see an example in Figure 4). In such cases, the differences in intensity values between rocky elements and vegetation allow the K-means segmentation approach to separate these classes (Figure 4e), providing a classification very similar to that of the manual rock separation (Figure 4f).
The differences in overall accuracy between the two classification procedures are quite slight, whereas the balanced accuracy shows substantial differences (Table 2). In particular, OA values are very high for all the classifications (92.7 and 94.9 on average for the geometric and K-means classifications, respectively), even though the rock class is completely missed by the geometry-based procedure. OA is appreciably improved by GIK only in Area 4 (ΔOA 4.51). Improvements in BA in favor of the K-means approach are on average about 27%, with the highest value for Area 1 (30.83%).
Notable differences are also evident at the class level, where the increase in class accuracy (UA) for low shrubs ranges from 3.5 to 14.3. Less pronounced differences were found for tall shrubs (maximum 4.13). The rocky areas reached a medium-high accuracy (average 63.9%) in the classification that integrates the laser-intensity information, whereas this class is left undetected by the standard procedure based on geometric features. The producer's accuracy is very high for tall shrub vegetation (98.41%–100%) as well as for low shrubs (91.76%–96.60%) in both classifications. For the intensity-based classification, the rock class sensitivity is quite satisfactory (average PA of 60.57%), taking into account that this class is completely undetected by the geometric procedure.
The accuracy level of the rock class is linked to commission errors of points principally belonging to the low shrub class (see the error matrices in the Supplementary Material). Very few points are committed from the tall shrub class (about 4%). Such commission errors have no definite pattern, as they are scattered across the scene. From a physical point of view, they mainly refer to areas in which the shrub vegetation is slightly thinned out and the underlying rocks emerge; therefore, the laser intensity threshold excludes them from the vegetation classes (see, e.g., Figure 5a). About 0.2% of these false rock identifications actually belong to the shrub canopy, showing anomalously high intensity values that can be linked to some uncorrected effects of bidirectional scattering. The omission errors for the rock class are on average 30% and in this case also mainly concern the low shrub class. The classification of Area 2 greatly differs from this value, reaching more than 60%. In this area, there are rock outcrops completely covered by vegetation, such as moss or short dense grass (see, e.g., Figure 5c). In this situation, the intensity values are driven by the presence of herbs or moss, and, therefore, no intensity threshold can limit these errors.
The increase in overall accuracy is strictly related to the extent of the rock class within the scene: the larger the rocky area, the higher the improvement in classification accuracy obtained by accounting for the intensity values (Figure 6a). Similarly, the user's accuracy of the low shrub class increases more in scenes with large patterns of rocks (Figure 6b), reaching a relative improvement of more than 20% (in Area 4, the UA increase is 14.3% with respect to the 70.74% of the geometry-based classification). In particular, the precision (UA) increment for low shrubs is higher than 10% when rocks cover more than 8% of the scene.
Taking into account that the improvement in low shrub classification accuracy is greater than 10% for Areas 1 and 4, we evaluated whether the GIK performance varies as a function of the density of the point cloud by subsampling the laser points of these two areas. The accuracy metrics recalculated for each resampled cloud show that the overall accuracy and class precision increase in both the geometry- and intensity-based classifications as the cloud density is reduced. As for the original clouds, the difference between the two approaches is particularly evident for the shrub class (which contains higher commission errors from the rock class). In detail, the improvement provided by intensity slightly decreases down to a point density of about 1.5 pts/m²; then, it increases as the cloud density is further reduced. Figure 7 shows the improvement (K-means vs. Geometric) in precision for shrubs, normalized by the rock class dimension of the given (subsampled) point cloud, in order to eliminate the effects of class dimension on the accuracy (i.e., to minimize the effect shown in Figure 6). The behavior roughly follows a second-order polynomial, with a minimum between 1.5 and 2.0 pts/m², to which an absolute improvement higher than 10% always corresponds (see the values of the point labels in Figure 7a,b). The polynomial fit is better for the single areas (R² = 0.51 for Area 1 and R² = 0.96 for Area 4) than for all the resampled clouds together (R² = 0.40), but the behavior is essentially the same.
Thus, such results highlight the relevance of integrating intensity values to identify rock and improve shrub classification accuracy, especially in an ALS cloud with low point density.

4. Discussion

The results obtained in this study confirm the relevance of combining laser intensity and elevation data for class discrimination not only in forest [21,22] and agricultural [23,24] environments but also in natural scrubland ecosystems. The increase in class accuracy for low shrubs is particularly relevant (relative increase up to 20%), and the obtained precision (UA) values are similar to or higher than those obtained with more complex classifiers (e.g., deep learning classifiers [26] and random forests [75,76]) and ALS systems (such as multispectral LiDAR [75]) on highly dense clouds. The processing time of the fairly simple K-means algorithm is much shorter than that required by machine learning methods, which in any case retain the classification confusion between small vegetation and bedrock [76], and it certainly saves time compared to manual refining [55,56].
The difference in accuracy for the scene-level metrics (overall vs. balanced accuracy, as shown in Table 2) highlights the relevance of adopting a suitable metric for unbalanced classification scenarios. The uneven class distribution within the scene also produced very high values of overall accuracy (OA) even when a class is completely missed (such as the undetected rocks that are included in the shrub classes by the Geometric procedure; see Figure 3 and Table S1). Since data representing the real world are generally imbalanced [77], such a behavior can lead to the wrong selection of the most suitable classifier and can also invalidate analyses based on an inaccurate classification [78]. This is particularly relevant when the work is focused on a specific class, such as the analysis of shrub alteration as an impact of climatic changes or the estimation of its carbon sequestration [30,32,40]. Unfortunately, overall accuracy is still widely adopted by the remote sensing community jointly with other metrics suffering from similar problems, such as the F-score and Cohen's K coefficient [79,80]. Moreover, K has proven not to be appropriate for map accuracy assessment or comparison because a very accurate classification can be associated with a wide range of K values [81]. The implemented extension of the BA formula from the binary to the multiclass case can represent a suitable and feasible tool for the remote sensing community to evaluate and compare the performance of different classifications.
Even if very low (0.2% of commissions on rock, corresponding to about 0.012% of the scenes), the presence of high intensity values on the vegetation canopy can probably be linked to some uncorrected effects of the bidirectional scattering distribution function (BSDF). In a multiclass detection, to accurately account for the scattering properties of different targets, Sun et al. [82] suggest the use of multiple BSDF models instead of a single standard model. Conversely, in a detailed study, Roth et al. [83] showed that the Lambertian transmittance assumption can represent a solid approximation for forest ecosystems. For non-transmitting leaves, NIR-based LiDAR systems produced reasonable results with a small-footprint configuration, whereas significant errors were observed with large footprints (5 m). For transmitting leaves, significant errors can be incurred if Lambertian leaf assumptions are made for LiDAR systems operating at visible wavelengths. In our case, the NIR acquisition has a footprint substantially smaller than 5 m, and the sclerophyllous shrubs of the maquis generally have coriaceous leaves that can be assumed to be non-transmitting. Thus, a similar study characterizing the BSDF impact in shrublands would be fundamental to confirm the validity of the Lambertian assumption in this type of environment.
Although the proposed approach greatly improves the discrimination between shrubs and rock, the analysis of commission and omission errors highlights a physical limitation of the intensity feature (see Figure 4b). In the classification of point clouds with low density, where shape-based classifications are not applicable, the separation of vegetation classes from rock points covered by moss or short dense grass (undetectable by intensity values) still represents an open question, and a different approach needs to be developed.

5. Conclusions

The characteristic vegetation of shrublands, such as the Mediterranean maquis, has a height and shape very similar to those of rock spikes and outcrops. Consequently, they are not discernible by adopting a standard procedure based on geometric height features in low-density LiDAR point clouds. In such cases, laser intensity information is confirmed as a precious feature for class separation in scrubland environments as well, and the results obtained by the K-means segmentation (GIK classification) support our hypothesis of an effective procedure to integrate geometric and intensity information. One of the main characteristics of the proposed procedure is its simplicity and time savings, which make the approach implementable in operational contexts by non-experts in LiDAR data classification.
From a methodological point of view, the analyses of classification performance corroborate the relevance of adopting suitable accuracy metrics in the presence of uneven class distribution (i.e., differences in class size) within the scene [78,79,80]. The multiclass extension of balanced accuracy (BA) formula from binary classification can represent a suitable and feasible tool for the remote sensing community to evaluate classification precision and compare different classification models.
From an operational point of view, the analysis of classification errors (particularly, commissions in the rock class) suggests that the Lambertian assumption has minimal impact for sclerophyllous shrubs, at least for the adopted configuration (NIR-based full-waveform LiDAR acquisition with a small footprint), similarly to what was obtained in a forest environment for non-transmitting leaves [83]. Further detailed studies are required to evaluate such a hypothesis in other configurations.
The improvement in low-shrub classification accuracy is particularly relevant (>10%) when rock occupies more than 8% of the scene, and it is also preserved for ALS clouds with low point density (<1.5 pts/m²). The possibility of separating the rock class is also relevant for Digital Terrain Model (DTM) generation, since ground filtering generally excludes rock spikes and outcrops, being essentially based on smoothness, slope thresholds, and elevation differences [3]. The obtainable improvements in shrub-class accuracy (particularly for low shrubs) can better support ecological studies for biodiversity conservation and carbon stock estimation [32,40,84]. Such a result is of great relevance for fully exploiting the great wealth of large-scale acquisitions carried out in the past by using monowavelength laser scanners, at most coupled with orthophoto acquisitions.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs14205127/s1. Figure S1: Spectral position of the RIEGL LMS-Q560 channel; Figure S2: Base matrix for accuracy metrics; Figure S3: Simplified scheme of Balanced Accuracy; Table S1: Error matrices. References [71,72,73,85] are cited in the supplementary materials.

Author Contributions

Conceptualization, T.S. and R.C.; methodology, T.S. and R.C.; formal analysis, T.S. and R.C.; data curation, T.S., R.C. and A.G.; writing—original draft preparation, R.C.; writing—review and editing, T.S., R.C., A.G., M.L., V.I. and C.S.; project administration, T.S.; funding acquisition, A.G. and T.S. All authors have read and agreed to the published version of the manuscript.

Funding

Primary implementation of this research was funded by the project “Research agreement for innovation, technology and know-how transfer on the use of advanced techniques for the classification of hyperspectral and laser data in thematic map production”, financed under the Measures 2.1.B ASSE III “Productive Competitiveness” of the Basilicata Regional Operational Programme (ROP) 2007–2013; the research extension and finalization was supported by the 2014–2020 Rural Development Programme of Basilicata Region Measure 16.1, within the project InnForestGo (CUP C36C18000420006).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

We would like to thank Biago Lacovara of Geocart SpA for their contribution to the airborne laser scanner data acquisition and calibration.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Michałowska, M.; Rapiński, J. A Review of Tree Species Classification Based on Airborne LiDAR Data and Applied Classifiers. Remote Sens. 2021, 13, 353.
2. Lohani, B.; Ghosh, S. Airborne LiDAR Technology: A Review of Data Collection and Processing Systems. Proc. Natl. Acad. Sci. India Sect. Phys. Sci. 2017, 87, 567–579.
3. Meng, X.; Currit, N.; Zhao, K. Ground Filtering Algorithms for Airborne LiDAR Data: A Review of Critical Issues. Remote Sens. 2010, 2, 833–860.
4. Hofierka, J.; Gallay, M.; Bandura, P.; Šašak, J. Identification of Karst Sinkholes in a Forested Karst Landscape Using Airborne Laser Scanning Data and Water Flow Analysis. Geomorphology 2018, 308, 265–277.
5. Zelaya Wziątek, D.; Terefenko, P.; Kurylczyk, A. Multi-Temporal Cliff Erosion Analysis Using Airborne Laser Scanning Surveys. Remote Sens. 2019, 11, 2666.
6. Costabile, P.; Costanzo, C.; De Lorenzo, G.; De Santis, R.; Penna, N.; Macchione, F. Terrestrial and Airborne Laser Scanning and 2-D Modelling for 3-D Flood Hazard Maps in Urban Areas: New Opportunities and Perspectives. Environ. Model. Softw. 2021, 135, 104889.
7. Samela, C.; Persiano, S.; Bagli, S.; Luzzi, V.; Mazzoli, P.; Humer, G.; Reithofer, A.; Essenfelder, A.; Amadio, M.; Mysiak, J.; et al. Safer_RAIN: A DEM-Based Hierarchical Filling-&-Spilling Algorithm for Pluvial Flood Hazard Assessment and Mapping across Large Urban Areas. Water 2020, 12, 1514.
8. Mezaal, M.R.; Pradhan, B.; Rizeei, H.M. Improving Landslide Detection from Airborne Laser Scanning Data Using Optimized Dempster–Shafer. Remote Sens. 2018, 10, 1029.
9. Lasaponara, R.; Coluzzi, R.; Gizzi, F.T.; Masini, N. On the LiDAR Contribution for the Archaeological and Geomorphological Study of a Deserted Medieval Village in Southern Italy. J. Geophys. Eng. 2010, 7, 155–163.
10. Coluzzi, R.; Lanorte, A.; Lasaponara, R. On the LiDAR Contribution for Landscape Archaeology and Palaeoenvironmental Studies: The Case Study of Bosco Dell'Incoronata (Southern Italy). Adv. Geosci. 2010, 24, 125–132.
11. Megahed, Y.; Shaker, A.; Yan, W.Y. Fusion of Airborne LiDAR Point Clouds and Aerial Images for Heterogeneous Land-Use Urban Mapping. Remote Sens. 2021, 13, 814.
12. Coops, N.C.; Tompalski, P.; Goodbody, T.R.H.; Queinnec, M.; Luther, J.E.; Bolton, D.K.; White, J.C.; Wulder, M.A.; Van Lier, O.R.; Hermosilla, T. Modelling Lidar-Derived Estimates of Forest Attributes over Space and Time: A Review of Approaches and Future Trends. Remote Sens. Environ. 2021, 260, 112477.
13. McRoberts, R.E.; Tomppo, E.O. Remote Sensing Support for National Forest Inventories. Remote Sens. Environ. 2007, 110, 412–419.
14. Bartels, M.; Hong, W. Maximum Likelihood Classification of LIDAR Data Incorporating Multiple Co-Registered Bands. In Proceedings of the 4th International Workshop on Pattern Recognition in Remote Sensing in Conjunction with the 18th International Conference on Pattern Recognition 2006, Hong Kong, 20–24 August 2006; pp. 17–20.
15. Lodha, S.K.; Kreps, E.J.; Helmbold, D.P.; Fitzpatrick, D. Aerial LiDAR Data Classification Using Support Vector Machines (SVM). In Proceedings of the 3rd International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT 2006), Chapel Hill, NC, USA, 14 June 2006.
16. Ghamisi, P.; Höfle, B. LiDAR Data Classification Using Extinction Profiles and a Composite Kernel Support Vector Machine. IEEE Geosci. Remote Sens. Lett. 2017, 14, 659–663.
17. Ni, H.; Lin, X.; Zhang, J. Classification of ALS Point Cloud with Improved Point Cloud Segmentation and Random Forests. Remote Sens. 2017, 9, 288.
18. Zhu, J.; Sui, L.; Zang, Y.; Zheng, H.; Jiang, W.; Zhong, M.; Ma, F. Classification of Airborne Laser Scanning Point Cloud Using Point-Based Convolutional Neural Network. ISPRS Int. J. Geo-Inf. 2021, 10, 444.
19. Vosselman, G.; Coenen, M.; Rottensteiner, F. Contextual Segment-Based Classification of Airborne Laser Scanner Data. ISPRS J. Photogramm. Remote Sens. 2017, 128, 354–371.
20. Shaker, A.; El-Ashmawy, N. Land Cover Information Extraction Using LiDAR Data. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 25 August–1 September 2012; Volume XXXIX-B7, pp. 167–172.
21. Antonarakis, A.S.; Richards, K.S.; Brasington, J. Object-Based Land Cover Classification Using Airborne LiDAR. Remote Sens. Environ. 2008, 112, 2988–2998.
22. García, M.; Riaño, D.; Chuvieco, E.; Danson, F.M. Estimating Biomass Carbon Stocks for a Mediterranean Forest in Central Spain Using LiDAR Height and Intensity Data. Remote Sens. Environ. 2010, 114, 816–830.
23. Farid, A.; Goodrich, D.C.; Sorooshian, S. Using Airborne Lidar to Discern Age Classes of Cottonwood Trees in a Riparian Area. West. J. Appl. For. 2006, 21, 149–158.
24. Mesas-Carrascosa, F.J.; Castillejo-González, I.L.; De la Orden, M.S.; Porras, A.G.-F. Combining LiDAR Intensity with Aerial Camera Data to Discriminate Agricultural Land Uses. Comput. Electron. Agric. 2012, 84, 36–46.
25. Hellesen, T.; Matikainen, L. An Object-Based Approach for Mapping Shrub and Tree Cover on Grassland Habitats by Use of LiDAR and CIR Orthoimages. Remote Sens. 2013, 5, 558–583.
26. Balado, J.; Arias, P.; Díaz-Vilariño, L.; González-deSantos, L.M. Automatic CORINE Land Cover Classification from Airborne LIDAR Data. Procedia Comput. Sci. 2018, 126, 186–194.
27. Yan, W.Y.; Shaker, A.; El-Ashmawy, N. Urban Land Cover Classification Using Airborne LiDAR Data: A Review. Remote Sens. Environ. 2015, 158, 295–310.
28. Cetin, Z.; Yastikli, N. The Use of Machine Learning Algorithms in Urban Tree Species Classification. ISPRS Int. J. Geo-Inf. 2022, 11, 226.
29. Waldhauser, C.; Hochreiter, R.; Otepka, J.; Pfeifer, N.; Ghuffar, S.; Korzeniowska, K.; Wagner, G. Automated Classification of Airborne Laser Scanning Point Clouds. In Proceedings of the Solving Computationally Expensive Engineering Problems; Koziel, S., Leifsson, L., Yang, X.-S., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 269–292.
30. Bispo, P.D.C.; Rodríguez-Veiga, P.; Zimbres, B.; Do Couto de Miranda, S.; Henrique Giusti Cezare, C.; Fleming, S.; Baldacchino, F.; Louis, V.; Rains, D.; Garcia, M.; et al. Woody Aboveground Biomass Mapping of the Brazilian Savanna with a Multi-Sensor and Machine Learning Approach. Remote Sens. 2020, 12, 2685.
31. Imbrenda, V.; Lanfredi, M.; Coluzzi, R.; Simoniello, T. A Smart Procedure for Assessing the Health Status of Terrestrial Habitats in Protected Areas: The Case of the Natura 2000 Ecological Network in Basilicata (Southern Italy). Remote Sens. 2022, 14, 2699.
32. Morandi, P.S.; Marimon, B.S.; Marimon-Junior, B.H.; Ratter, J.A.; Feldpausch, T.R.; Colli, G.R.; Munhoz, C.B.R.; Da Silva Júnior, M.C.; De Souza Lima, E.; Haidar, R.F.; et al. Tree Diversity and Above-Ground Biomass in the South America Cerrado Biome and Their Conservation Implications. Biodivers. Conserv. 2020, 29, 1519–1536.
33. Carone, M.T.; Simoniello, T.; Manfreda, S.; Caricato, G. Watershed Influence on Fluvial Ecosystems: An Integrated Methodology for River Water Quality Management. Environ. Monit. Assess. 2008, 152, 327.
34. Wasser, L.; Chasmer, L.; Day, R.; Taylor, A. Quantifying Land Use Effects on Forested Riparian Buffer Vegetation Structure Using LiDAR Data. Ecosphere 2015, 6, art10.
35. Leipe, S.C.; Carey, S.K. Rapid Shrub Expansion in a Subarctic Mountain Basin Revealed by Repeat Airborne LiDAR. Environ. Res. Commun. 2021, 3, 071001.
36. Quaranta, G.; Salvia, R.; Salvati, L.; Paola, V.D.; Coluzzi, R.; Imbrenda, V.; Simoniello, T. Long-Term Impacts of Grazing Management on Land Degradation in a Rural Community of Southern Italy: Depopulation Matters. Land Degrad. Dev. 2020, 31, 2379–2394.
37. Simoniello, T.; Coluzzi, R.; Imbrenda, V.; Lanfredi, M. Land Cover Changes and Forest Landscape Evolution (1985–2009) in a Typical Mediterranean Agroforestry System (High Agri Valley). Nat. Hazards Earth Syst. Sci. 2015, 15, 1201–1214.
38. Coluzzi, R.; D'Emilio, M.; Imbrenda, V.; Giorgio, G.A.; Lanfredi, M.; Macchiato, M.; Ragosta, M.; Simoniello, T.; Telesca, V. Investigating Climate Variability and Long-Term Vegetation Activity across Heterogeneous Basilicata Agroecosystems. Geomat. Nat. Hazards Risk 2019, 10, 168–180.
39. Simoniello, T.; Lanfredi, M.; Liberti, M.; Coppola, R.; Macchiato, M. Estimation of Vegetation Cover Resilience from Satellite Time Series. Hydrol. Earth Syst. Sci. 2008, 12, 1053–1064.
40. Peng, D.; Wang, Y.; Xian, G.; Huete, A.R.; Huang, W.; Shen, M.; Wang, F.; Yu, L.; Liu, L.; Xie, Q.; et al. Investigation of Land Surface Phenology Detections in Shrublands Using Multiple Scale Satellite Data. Remote Sens. Environ. 2021, 252, 112133.
41. Beier, C.; Emmett, B.A.; Tietema, A.; Schmidt, I.K.; Peñuelas, J.; Láng, E.K.; Duce, P.; De Angelis, P.; Gorissen, A.; Estiarte, M.; et al. Carbon and Nitrogen Balances for Six Shrublands across Europe. Glob. Biogeochem. Cycles 2009, 23.
42. Karila, K.; Matikainen, L.; Litkey, P.; Hyyppä, J.; Puttonen, E. The Effect of Seasonal Variation on Automated Land Cover Mapping from Multispectral Airborne Laser Scanning Data. Int. J. Remote Sens. 2019, 40, 3289–3307.
43. Alonzo, M.; Dial, R.J.; Schulz, B.K.; Andersen, H.-E.; Lewis-Clark, E.; Cook, B.D.; Morton, D.C. Mapping Tall Shrub Biomass in Alaska at Landscape Scale Using Structure-from-Motion Photogrammetry and Lidar. Remote Sens. Environ. 2020, 245, 111841.
44. Rittenhouse, C.D.; Berlin, E.H.; Mikle, N.; Qiu, S.; Riordan, D.; Zhu, Z. An Object-Based Approach to Map Young Forest and Shrubland Vegetation Based on Multi-Source Remote Sensing Data. Remote Sens. 2022, 14, 1091.
45. Fernández-Guisuraga, J.M.; Calvo, L.; Fernandes, P.M.; Suárez-Seoane, S. Short-Term Recovery of the Aboveground Carbon Stock in Iberian Shrublands at the Extremes of an Environmental Gradient and as a Function of Burn Severity. Forests 2022, 13, 145.
46. Moudrý, V.; Klápště, P.; Fogl, M.; Gdulová, K.; Barták, V.; Urban, R. Assessment of LiDAR Ground Filtering Algorithms for Determining Ground Surface of Non-Natural Terrain Overgrown with Forest and Steppe Vegetation. Measurement 2020, 150, 107047.
47. Imangholiloo, M.; Saarinen, N.; Holopainen, M.; Yu, X.; Hyyppä, J.; Vastaranta, M. Using Leaf-Off and Leaf-On Multispectral Airborne Laser Scanning Data to Characterize Seedling Stands. Remote Sens. 2020, 12, 3328.
48. Torabzadeh, H.; Leiterer, R.; Hueni, A.; Schaepman, M.E.; Morsdorf, F. Tree Species Classification in a Temperate Mixed Forest Using a Combination of Imaging Spectroscopy and Airborne Laser Scanning. Agric. For. Meteorol. 2019, 279, 107744.
49. Koenig, K.; Höfle, B. Full-Waveform Airborne Laser Scanning in Vegetation Studies—A Review of Point Cloud and Waveform Features for Tree Species Classification. Forests 2016, 7, 198.
50. Luo, S.; Wang, C.; Xi, X.; Nie, S.; Fan, X.; Chen, H.; Ma, D.; Liu, J.; Zou, J.; Lin, Y.; et al. Estimating Forest Aboveground Biomass Using Small-Footprint Full-Waveform Airborne LiDAR Data. Int. J. Appl. Earth Obs. Geoinf. 2019, 83, 101922.
51. Prieur, J.-F.; St-Onge, B.; Fournier, R.A.; Woods, M.E.; Rana, P.; Kneeshaw, D. A Comparison of Three Airborne Laser Scanner Types for Species Identification of Individual Trees. Sensors 2022, 22, 35.
52. Frati, G.; Launeau, P.; Robin, M.; Giraud, M.; Juigner, M.; Debaine, F.; Michon, C. Coastal Sand Dunes Monitoring by Low Vegetation Cover Classification and Digital Elevation Model Improvement Using Synchronized Hyperspectral and Full-Waveform LiDAR Remote Sensing. Remote Sens. 2021, 13, 29.
53. Bulluck, L.; Lin, B.; Schold, E. Fine Resolution Imagery and LIDAR-Derived Canopy Heights Accurately Classify Land Cover with a Focus on Shrub/Sapling Cover in a Mountainous Landscape. Remote Sens. 2022, 14, 1364.
54. Carbonell-Rivera, J.P.; Torralba, J.; Estornell, J.; Ruiz, L.Á.; Crespo-Peremarch, P. Classification of Mediterranean Shrub Species from UAV Point Clouds. Remote Sens. 2022, 14, 199.
55. Doneus, M.; Mandlburger, G.; Doneus, N. Archaeological Ground Point Filtering of Airborne Laser Scan Derived Point-Clouds in a Difficult Mediterranean Environment. J. Comput. Appl. Archaeol. 2020, 3, 92–108.
56. Dorji, Y.; Annighöfer, P.; Ammer, C.; Seidel, D. Response of Beech (Fagus sylvatica L.) Trees to Competition—New Insights from Using Fractal Analysis. Remote Sens. 2019, 11, 2656.
57. Štular, B.; Lozić, E. Comparison of Filters for Archaeology-Specific Ground Extraction from Airborne LiDAR Point Clouds. Remote Sens. 2020, 12, 3025.
58. Wang, K.; Wang, T.; Liu, X. A Review: Individual Tree Species Classification Using Integrated Airborne LiDAR and Optical Imagery with a Focus on the Urban Environment. Forests 2019, 10, 1.
59. Song, J.-H.; Han, S.-H.; Yu, K.; Kim, Y.-I. Assessing the Possibility of Land-Cover Classification Using Lidar Intensity Data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 259–262.
60. RIEGL. Laser Measurement Systems. Airborne Laser Scanner for Full-Waveform Analysis. RIEGL LMS-Q560. 2010. Available online: http://www.riegl.com/uploads/tx_pxpriegldownloads/10_DataSheet_Q560_20-09-2010_01.pdf (accessed on 8 July 2022).
61. Soininen, A. TerraScan for Microstation, Users Guide; Terrasolid Ltd.: Helsinki, Finland, 2005.
62. Roussel, J.-R.; Auty, D.; Coops, N.C.; Tompalski, P.; Goodbody, T.R.H.; Meador, A.S.; Bourdon, J.-F.; De Boissieu, F.; Achim, A. LidR: An R Package for Analysis of Airborne Laser Scanning (ALS) Data. Remote Sens. Environ. 2020, 251, 112061.
63. Badenko, V.; Zotov, D.; Muromtseva, N.; Volkova, Y.; Chernov, P. Comparison of Software for Airborne Laser Scanning Data Processing in Smart City Applications. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 9–13.
64. Zhang, X.; Bao, Y.; Wang, D.; Xin, X.; Ding, L.; Xu, D.; Hou, L.; Shen, J. Using UAV LiDAR to Extract Vegetation Parameters of Inner Mongolian Grassland. Remote Sens. 2021, 13, 656.
65. Axelsson, P. DEM Generation from Laser Scanner Data Using Adaptive TIN Models. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2000, 33, 110–117.
66. Walker, D.A.; Acevedo, W.; Everett, K.R.; Gaydos, L.; Brown, J.; Webber, P.J. CRREL Report; Cold Regions Research and Engineering Laboratory: Hanover, NH, USA, 1982; p. 80. ISSN 0501-5782.
67. Camarda, I. Biotopi di Sardegna: Guida a Dodici Aree di Rilevante Interesse Botanico; Carlo Delfino Editore: Sassari, Italy, 1988.
68. Alvites, C.; Santopuoli, G.; Maesano, M.; Chirici, G.; Moresi, F.V.; Tognetti, R.; Marchetti, M.; Lasserre, B. Unsupervised Algorithms to Detect Single Trees in a Mixed-Species and Multilayered Mediterranean Forest Using LiDAR Data. Can. J. For. Res. 2021, 51, 1766–1780.
69. Morsdorf, F.; Meier, E.; Kötz, B.; Itten, K.I.; Dobbertin, M.; Allgöwer, B. LIDAR-Based Geometric Reconstruction of Boreal Type Forest Stands at Single Tree Level for Forest and Wildland Fire Management. Remote Sens. Environ. 2004, 92, 353–362.
  70. Chehata, N.; David, N.; Bretar, F. LIDAR Data Classification Using Hierarchical K-Means Clustering. In Proceedings of the ISPRS Congress Beijing, Beijing, China, 3–11 July 2008; Voloume XXXVII, pp. 325–330. [Google Scholar]
  71. Congalton, R.G. A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data. Remote Sens. Environ. 1991, 37, 35–46. [Google Scholar] [CrossRef]
  72. Liu, C.; Frazier, P.; Kumar, L. Comparative Assessment of the Measures of Thematic Classification Accuracy. Remote Sens. Environ. 2007, 107, 606–616. [Google Scholar] [CrossRef]
  73. Brodersen, K.H.; Ong, C.S.; Stephan, K.E.; Buhmann, J.M. The Balanced Accuracy and Its Posterior Distribution. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 3121–3124. [Google Scholar]
  74. Grandini, M.; Bagli, E.; Visani, G. Metrics for Multi-Class Classification: An Overview. arXiv 2020, arXiv:2008.05756. Available online: https://arxiv.org/abs/2008.05756 (accessed on 8 July 2022).
  75. Matikainen, L.; Karila, K.; Hyyppä, J.; Litkey, P.; Puttonen, E.; Ahokas, E. Object-Based Analysis of Multispectral Airborne Laser Scanner Data for Land Cover Classification and Map Updating. ISPRS J. Photogramm. Remote Sens. 2017, 128, 298–313. [Google Scholar] [CrossRef]
  76. Weidner, L.; Walton, G.; Kromer, R. Classification Methods for Point Clouds in Rock Slope Monitoring: A Novel Machine Learning Approach and Comparative Analysis. Eng. Geol. 2019, 263, 105326. [Google Scholar] [CrossRef]
  77. Chawla, N.V. Data Mining for Imbalanced Datasets: An Overview. In Data Mining and Knowledge Discovery Handbook; Maimon, O., Rokach, L., Eds.; Springer: Boston, MA, USA, 2010; pp. 875–886. ISBN 978-0-387-09823-4. [Google Scholar]
  78. Carrillo, H.; Brodersen, K.H.; Castellanos, J.A. Probabilistic Performance Evaluation for Multiclass Classification Using the Posterior Balanced Accuracy. In ROBOT2013: First Iberian Robotics Conference; Springer: Berlin/Heidelberg, Germany, 2014; pp. 347–361. [Google Scholar]
  79. Stehman, S.V.; Foody, G.M. Key Issues in Rigorous Accuracy Assessment of Land Cover Products. Remote Sens. Environ. 2019, 231, 111199. [Google Scholar] [CrossRef]
  80. Luque, A.; Carrasco, A.; Martín, A.; De las Heras, A. The Impact of Class Imbalance in Classification Performance Metrics Based on the Binary Confusion Matrix. Pattern Recognit. 2019, 91, 216–231. [Google Scholar] [CrossRef]
  81. Foody, G.M. Explaining the Unsuitability of the Kappa Coefficient in the Assessment and Comparison of the Accuracy of Thematic Maps Obtained by Image Classification. Remote Sens. Environ. 2020, 239, 111630. [Google Scholar] [CrossRef]
  82. Sun, J.; Zhou, X.; Fan, Z.; Wang, Q. Investigation of Light Scattering Properties Based on the Modified Li-Liang BRDF Model. Infrared Phys. Technol. 2022, 120, 103992. [Google Scholar] [CrossRef]
  83. Roth, B.D.; Goodenough, A.A.; Brown, S.D.; Van Aardt, J.A.; Saunders, M.G.; Krause, K. Simulations of Leaf BSDF Effects on Lidar Waveforms. Remote Sens. 2020, 12, 2909. [Google Scholar] [CrossRef]
  84. Cunliffe, A.M.; J Assmann, J.; N Daskalova, G.; Kerby, J.T.; Myers-Smith, I.H. Aboveground Biomass Corresponds Strongly with Drone-Derived Canopy Height but Weakly with Greenness (NDVI) in a Shrub Tundra Landscape. Environ. Res. Lett. 2020, 15, 125004. [Google Scholar] [CrossRef]
  85. Kokaly, R.F.; Clark, R.N.; Swayze, G.A.; Livo, K.E.; Hoefen, T.M.; Pearson, N.C.; Wise, R.A.; Benzel, W.M.; Lowers, H.A.; Driscoll, R.L.; et al. USGS Spectral Library Version 7; Data Series; U.S. Geological Survey: Reston, VA, USA, 2017; Volume 1035, p. 68. [Google Scholar]
Figure 1. (a) Location of the study area (yellow dot); (b) orthophoto acquired with the 39 MP Digicam H39; yellow frames mark the four selected areas where the quantitative performance measures were computed.
Figure 2. Flowchart of the implemented classification and testing procedures.
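For orientation, the sketch below illustrates the geometry-only baseline summarized in the flowchart: non-ground returns are split into low and tall shrubs according to their height above ground. This is a minimal illustration, not the authors' implementation; the class codes and height thresholds are placeholders, and ground filtering with height normalization is assumed to have been performed upstream.

```python
# Minimal sketch of a geometry-only (height-based) baseline classification.
# Class codes and thresholds are illustrative placeholders, not the authors' values.
import numpy as np

GROUND, LOW_SHRUB, TALL_SHRUB = 2, 3, 4  # illustrative class codes

def classify_geometric(height_above_ground, low_max=0.25, tall_min=1.0):
    """Assign classes from normalized heights produced by an upstream ground filter."""
    hag = np.asarray(height_above_ground, dtype=float)
    labels = np.full(hag.shape, GROUND, dtype=np.uint8)
    labels[(hag > low_max) & (hag < tall_min)] = LOW_SHRUB   # low shrubs
    labels[hag >= tall_min] = TALL_SHRUB                     # tall shrubs
    return labels
```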
Figure 3. Orthophotos (A) and classifications of the four study areas obtained by (B) the standard procedure based on geometric features (Geometric), (C) the combination of geometric features with laser intensity segmented through the K-means algorithm (GIK), and (D) the reference classification obtained by manually identifying rocky areas (Reference). High and medium vegetation correspond to tall and low shrubs, respectively; the ground class (coincident for all the classifications) is switched off.
Figure 4. Example of a rock configuration similar to shrub vegetation: (a) position of the point cloud slice (blue points) including a large rock outcrop highlighted by red arrows; (b) slice of points colored with the corresponding RGB of the orthophoto; (c) laser intensity; (d) slice classified by the standard procedure based on geometric features; (e) slice classified by GIK (combination of geometric features with laser intensity segmented by the K-means algorithm); (f) reference classification of the slice obtained by manually identifying rocky areas.
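As a complement to the figure, the following sketch shows how the intensity-based refinement of the GIK procedure could look in code: points labeled as vegetation by the geometric rules are clustered on their raw intensity with K-means (k = 2), and one cluster is relabeled as rock. This is a sketch under stated assumptions rather than the authors' implementation; in particular, taking the lower-intensity cluster as rock is only illustrative and would in practice be verified against the cluster means or a few reference samples.

```python
# Sketch of the intensity refinement step: K-means (k = 2) on the intensity of
# non-ground points, with one cluster relabeled as rock. Assumptions: class codes
# are placeholders and the rock cluster is chosen here as the lower-intensity one.
import numpy as np
from sklearn.cluster import KMeans

ROCK = 6  # illustrative class code

def refine_with_intensity(labels, intensity, veg_classes=(3, 4), rock_class=ROCK):
    labels = labels.copy()
    veg_mask = np.isin(labels, veg_classes)
    km = KMeans(n_clusters=2, n_init=10, random_state=0)
    clusters = km.fit_predict(np.asarray(intensity, dtype=float)[veg_mask].reshape(-1, 1))
    # Illustrative assumption: the cluster with the lower mean intensity is rock.
    rock_cluster = int(np.argmin(km.cluster_centers_.ravel()))
    veg_idx = np.flatnonzero(veg_mask)
    labels[veg_idx[clusters == rock_cluster]] = rock_class
    return labels
```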
Figure 5. Examples of errors in the identification of rock class by the GIK approach (combined procedure based on geometric and intensity features): (a) commission errors (blue points in red circles) due to chinks in the shrub canopy; (b) canopy chinks in the corresponding orthophoto; (c) omission errors (green points in yellow circles) due to the presence of short vegetation on the rock.
Figure 6. Improvement (Δ) of classification accuracy as a function of rocky area presence in the scene (point cloud percentage): (a) overall accuracy (OA); (b) precision (user’s accuracy) for the low shrub class (Δ = GIK − Geometric).
Figure 7. Improvement of user’s accuracy for the low shrub class as a function of point density for Area 1 (a), Area 4 (b), and both Areas 1 and 4 (c). The difference between geometric and GIK accuracy (Δ) is normalized by the low-shrub presence in the scene (point cloud percentage of the given subsampled cloud). Labels report the absolute (i.e., not normalized) differences of class accuracy (Δ = GIK − Geometric).
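The density experiment behind this figure can be mimicked by thinning the cloud to a target density and recomputing the precision gap, as sketched below. The routine names and the purely random subsampling are assumptions for illustration (other decimation strategies, e.g., homogenized thinning, are equally possible); the normalization follows the caption, dividing the precision difference by the low-shrub fraction of the subsampled cloud.

```python
# Sketch (assumed approach, not the authors' code) of the point-density test:
# random thinning to a target density and a presence-normalized precision gap.
import numpy as np

def thin_to_density(n_points, area_m2, target_density, seed=None):
    """Indices of a random subsample matching a target density in pts/m2."""
    rng = np.random.default_rng(seed)
    keep = min(n_points, int(round(target_density * area_m2)))
    return rng.choice(n_points, size=keep, replace=False)

def normalized_precision_gap(precision_gik, precision_geo, low_shrub_fraction):
    """Low-shrub precision gain per unit of low-shrub presence in the cloud."""
    return (precision_gik - precision_geo) / low_shrub_fraction
```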
Table 1. Parameters of ALS full-waveform acquisitions with RIEGL LMS-Q560. Intensity and height values are expressed as minimum–maximum values.
| Parameter | Area 1 | Area 2 | Area 3 | Area 4 |
|---|---|---|---|---|
| First Return | 146,667 | 95,589 | 391,115 | 23,501 |
| Second Return | 19,034 | 17,891 | 47,447 | 3123 |
| Last | 146,646 | 95,581 | 390,959 | 23,477 |
| Single | 127,613 | 77,689 | 343,525 | 20,353 |
| First of Many | 19,054 | 17,900 | 47,590 | 3148 |
| Second of Many | 19,034 | 17,891 | 47,447 | 3123 |
| Third of Many | 1869 | 1803 | 5231 | 401 |
| Last of Many | 19,033 | 17,892 | 47,434 | 3124 |
| Point Count | 167,634 | 115,331 | 444,213 | 27,043 |
| Mean Point Density (pts/m2) | 2.44 | 2.44 | 3.09 | 2.56 |
| Scan Angle | 60–84 | 65–108 | 61–120 | 93–102 |
| Intensity | 10–291 | 10–222 | 10–29 | 10–209 |
| Height (m) | 523–678 | 460–581 | 587–745 | 541–611 |
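For readers who want to build this kind of summary for their own tiles, a possible sketch using the laspy library is given below. The file name is a placeholder, the bounding-box area is only a rough proxy for the surveyed footprint (so the density estimate is approximate), and projected metric coordinates are assumed.

```python
# Hedged sketch: per-tile return statistics and approximate mean point density
# computed from a LAS/LAZ file. "area1.las" is a placeholder path.
import numpy as np
import laspy  # assumed available (pip install laspy)

las = laspy.read("area1.las")
rn = np.asarray(las.return_number)
nr = np.asarray(las.number_of_returns)

stats = {
    "point_count":   rn.size,
    "first_return":  int(np.sum(rn == 1)),
    "second_return": int(np.sum(rn == 2)),
    "third_of_many": int(np.sum(rn == 3)),
    "last":          int(np.sum(rn == nr)),
    "single":        int(np.sum(nr == 1)),
    "first_of_many": int(np.sum((rn == 1) & (nr > 1))),
    "last_of_many":  int(np.sum((rn == nr) & (nr > 1))),
}

x, y = np.asarray(las.x), np.asarray(las.y)
bbox_area_m2 = np.ptp(x) * np.ptp(y)  # rough footprint from the bounding box
stats["mean_density_pts_m2"] = round(rn.size / bbox_area_m2, 2)
print(stats)
```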
Table 2. Performance of ALS cloud classification obtained with a standard procedure based on geometric features (GEO) and by combining laser intensity with the K-means segmentation (GIK).
| Area | Samples | Method | Overall Accuracy | Balanced Accuracy | Precision (User’s Accuracy): Low Shrubs | Precision: Tall Shrubs | Precision: Rock | Sensitivity (Producer’s Accuracy): Low Shrubs | Sensitivity: Tall Shrubs | Sensitivity: Rock |
|---|---|---|---|---|---|---|---|---|---|---|
| Area 1 | 77,218 | GEO | 91.81 | 59.55 | 80.16 | 98.48 | na | 100.00 | 100.00 | na |
| Area 1 | 77,218 | GIK | 94.86 | 90.37 | 92.37 | 98.96 | 70.27 | 92.93 | 99.72 | 64.65 |
| Area 2 | 59,483 | GEO | 97.62 | 63.42 | 90.71 | 99.54 | na | 100.00 | 100.00 | na |
| Area 2 | 59,483 | GIK | 97.81 | 89.74 | 94.27 | 99.54 | 55.9 | 96.60 | 99.99 | 36.23 |
| Area 3 | 200,240 | GEO | 92.59 | 63.42 | 91.87 | 93.76 | na | 100.00 | 100.00 | na |
| Area 3 | 200,240 | GIK | 93.51 | 87.98 | 98.75 | 97.93 | 54.16 | 92.06 | 98.41 | 81.07 |
| Area 4 | 10,628 | GEO | 88.88 | 61.88 | 70.74 | 99.82 | na | 100.00 | 100.00 | na |
| Area 4 | 10,628 | GIK | 93.39 | 89.15 | 85.03 | 99.82 | 75.37 | 91.76 | 100.00 | 60.32 |
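The measures reported above can be recomputed from a per-point confusion matrix, as in the minimal sketch below (not the authors' code). It assumes the common convention of rows as reference classes and columns as predicted classes; balanced accuracy is taken here as the mean of the per-class producer's accuracies, which is one standard multi-class definition, so a variant formulation (e.g., averaging sensitivity and specificity) may give slightly different values. Classes that a method never predicts, such as rock for GEO, yield an undefined (na) precision.

```python
# Sketch: overall accuracy, balanced accuracy, and per-class precision/sensitivity
# from a confusion matrix C[i, j] = points of reference class i predicted as class j.
import numpy as np

def classification_metrics(C):
    C = np.asarray(C, dtype=float)
    overall = np.trace(C) / C.sum()
    with np.errstate(invalid="ignore", divide="ignore"):
        sensitivity = np.diag(C) / C.sum(axis=1)  # producer's accuracy per class
        precision = np.diag(C) / C.sum(axis=0)    # user's accuracy (nan if never predicted)
    balanced = np.nanmean(sensitivity)            # mean per-class recall
    return overall, balanced, precision, sensitivity

# Tiny made-up example: classes ordered as ground, low shrub, tall shrub, rock
C = np.array([[950,  20,   0,  30],
              [ 15, 430,  10,  45],
              [  0,   8, 390,   2],
              [ 12,  40,   0,  48]])
oa, ba, prec, sens = classification_metrics(C)
```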
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
